\section{Introduction}

There is an old joke in astrophysics that with one source you have a discovery, and with two you have a population. With a population of sources it becomes possible to constrain astrophysical models. Until recently, studies in milli-Hertz gravitational wave science have either focused on making predictions about the source populations, or have looked at detection and parameter estimation for individual sources. These types of studies have featured heavily in the science assessment of alternative space-based gravitational wave mission concepts, where metrics such as detection numbers and histograms of the parameter resolution capabilities for fiducial population models were used to rate science performance (see {\it e.g.} Ref.~\cite{2009CQGra..26i4014S}). These are certainly useful metrics, but they only tell part of the story. A more powerful and informative measure of the science capabilities is the ability to discriminate between alternative population models. Inferring the underlying population model, and the attendant astrophysical processes responsible for the observed source distribution, from the time series of a gravitational wave detector is the central science challenge for a future space mission. It folds together the difficult tasks of identifying and disentangling the multiple overlapping signals that are in the data, inferring the individual source parameters, and reconstructing the true population distributions from incomplete and imperfect information.

The past few years have seen the first studies of the astrophysical model selection problem in the context of space-based gravitational wave astronomy. Gair and collaborators~\cite{Gair:2010yu,Gair:2010bx,Sesana:2010wy,AmaroSeoane:2012je} have looked at how extreme mass ratio inspiral (EMRI) formation scenarios and massive black hole binary assembly scenarios can be constrained by GW observations using Bayesian model selection with a Poisson likelihood function. Plowman and collaborators~\cite{Plowman:2009rp, Plowman:2010fc} have performed similar studies of black hole population models using a frequentist approach based on error kernels and the Kolmogorov-Smirnov test. Related work on astrophysical model selection for ground based detectors can be found in Refs.~\cite{Mandel:2009pc, O'Shaughnessy:2012zc}.

We develop a simple yet comprehensive Hierarchical Bayesian modeling approach that uses the full multi-dimensional and highly correlated parameter uncertainties of a collection of signals to constrain the joint parameter distributions of the underlying astrophysical models. The method is general and can be applied to any number of astrophysical model selection problems~\cite{2009ApJ...704..629M, 2012arXiv1206.3540S, 2012arXiv1208.3036L}. A remarkable feature of the Hierarchical Bayesian method is that in its purest form it is completely free of selection effects such as Malmquist bias. By ``purest form'' we mean where the signal model extends over the entire source population, including those with vanishingly small signal-to-noise ratio~\cite{Messenger:2012jy}. In practice it is unclear how to include arbitrarily weak sources in the analysis, and in any case the computational cost would be prohibitive, so we are forced to make some kind of selection cuts on the signals, and this will introduce a bias if left uncorrected~\cite{Schutz:2011tw}.
To illustrate the Hierarchical Bayesian approach and to investigate where bias can arise, we look at the problem of determining the population model for white dwarf binaries in the Milky Way. Future space based missions are expected to detect thousands to tens of thousands of white dwarf binaries~\cite{AmaroSeoane:2012je, Crowder:2006eu, Nissanke:2012eh, Timpano:2005gm, Littenberg:2011zg}. Here we focus on determining the spatial distribution and the chirp mass distribution, but in future work we plan to extend our study to include a wider class of population characteristics such as those described in Ref.~\cite{Nissanke:2012eh}. Determining the galaxy shape using gravitational wave observations of white dwarf binaries will provide an independent measure of the shape of the galaxy that complements electromagnetic observations. Additionally, the white dwarf binaries that are not detectable form a very bright stochastic foreground. Accurately modeling the confusion foreground level is crucial for the detection of extragalactic stochastic gravitational wave signals~\cite{2010PhRvD..82b2002A}.

The paper is organized as follows: The Hierarchical Bayesian approach is described in \S~\ref{HB}, and is illustrated using a simple toy model in \S~\ref{toy1}. A more realistic toy model is developed in \S~\ref{toy2} to explore mis-modeling biases that can occur when using Gaussian approximations to the likelihood function. In \S~\ref{galaxy} the method is applied to simulated observations of galactic white dwarf binaries, and in \S~\ref{approx} the possibility of using the Fisher Information Matrix approximation to the likelihood is explored. Concluding thoughts follow in \S~\ref{concl}.

\section{Hierarchical Bayesian Modeling}\label{HB}

Hierarchical Bayesian modeling has been around since at least the 1950s~\cite{Good:1965,1972_LindleySmith,Morris:1992,MacKay94}, but it is only now becoming widely known and used. The term ``hierarchical" arises because the analysis has two levels. At the higher level is the space of models being considered, and at the lower level are the parameters of the models themselves. Hierarchical Bayes provides a method to simultaneously perform model selection and parameter estimation. In this work we will consider models of fixed dimension that can be parameterized by smooth functions of one or more hyper-parameters. The joint posterior distribution for the model parameters $\vec{\lambda}$ and the hyper-parameters $\vec{\alpha}$ given data $s$ follows from Bayes' theorem: \begin{equation}\label{formal} p(\vec{\lambda}, \vec{\alpha} \vert s) = \frac{ p(s \vert \vec{\lambda}, \vec{\alpha}) p(\vec{\lambda} \vert \vec{\alpha}) p(\vec{\alpha})}{ p(s)} \, , \end{equation} where $p(s \vert \vec{\lambda}, \vec{\alpha})$ is the likelihood, $p(\vec{\lambda} \vert \vec{\alpha})$ is the prior on the model parameters for a model described by hyper-parameters $\vec{\alpha}$, $p(\vec{\alpha})$ is the hyper-prior, and $p(s)$ is a normalizing factor \begin{equation} p(s) = \int p(s, \vec{\alpha}) d\vec{\alpha}= \int p(s \vert \vec{\lambda}, \vec{\alpha}) p(\vec{\lambda} \vert \vec{\alpha}) p(\vec{\alpha}) d\vec{\lambda} d\vec{\alpha} \, . \end{equation} The quantity $p(s, \vec{\alpha})$ can be interpreted as the ``density of evidence'' for a model with hyper-parameters $\vec{\alpha}$. The integral marginalizing over the hyper-parameters is often only tractable numerically, and this can be computationally expensive.
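As a concrete illustration of this marginalization, the following minimal sketch (Python with NumPy; the single-observation Gaussian model and flat hyper-prior are illustrative assumptions on our part, not a model used later in the paper) evaluates the density of evidence $p(s,\alpha)$ on a grid and normalizes it to obtain the hyper-posterior:

\begin{verbatim}
import numpy as np

s = 1.3                  # a single observation
beta = 0.5               # known measurement error
alphas = np.linspace(0.01, 5.0, 500)  # grid over the hyper-parameter

# With lambda ~ N(0, alpha^2) marginalized analytically,
# s | alpha ~ N(0, alpha^2 + beta^2); flat hyper-prior assumed.
var = alphas**2 + beta**2
p_s_alpha = np.exp(-s**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

p_s = np.trapz(p_s_alpha, alphas)   # normalizing factor p(s)
p_alpha_given_s = p_s_alpha / p_s   # hyper-posterior p(alpha | s)
\end{verbatim}

In realistic problems the inner integral over $\vec{\lambda}$ is rarely available in closed form, which is what makes the marginalization expensive.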
Empirical Bayes is a collection of methods that seek to estimate the hyper-parameters in various ways from the data~\cite{Casella:1985, Carlin:2000}. Markov chain Monte Carlo (MCMC) techniques allow us to implement Hierarchical Bayesian modeling without approximation by producing samples from the joint posterior distribution, which simultaneously informs us about the model parameters $\vec{\lambda}$ and the hyper-parameters $\vec{\alpha}$. This approach helps reduce systematic errors due to mis-modeling, as the data helps select the appropriate model. An example of this is the use of hyper-parameters in the instrument noise model, such that the noise spectral density is treated as an unknown to be determined from the data~\cite{Cornish:2007if, Littenberg:2010gf, 2010PhRvD..82b2002A}. Hierarchical Bayesian modeling can be extended to discrete and even disjoint model spaces using the Reversible Jump Markov Chain Monte Carlo (RJMCMC)~\cite{green_highly_2003} algorithm. Each discrete model can be assigned its own set of continuous hyper-parameters.

\section{Toy Model I}\label{toy1}

As a simple illustration of hierarchical Bayesian modeling, consider some population of $N$ signals, each described by a single parameter $x_i$ that is drawn from a zero-mean normal distribution with standard deviation $\alpha_0$. The measured values of these parameters are affected by instrument noise that is drawn from a normal distribution with standard deviation $\beta$. The maximum likelihood value for the parameters is then $\bar{x}_i = \alpha_0 \delta_1 + \beta \delta_2$, where the $\delta$'s are i.i.d. unit normal deviates. Now suppose that we employ a population model where the parameters are distributed according to a normal distribution with standard deviation $\alpha$. Each choice of $\alpha$ corresponds to a particular model with posterior distribution \begin{equation}\label{eq:toy1post} p(\{x_i\}\vert s, \alpha) = \frac{1}{p(s,\alpha)} \prod_{i=1}^N \frac{1}{2\pi \alpha\beta} e^{-(\bar{ x}_i - x_i)^2/2\beta^2} e^{-x_i^2/2\alpha^2} \, , \end{equation} and model evidence \begin{equation}\label{mla} p(s,\alpha) = \frac{1}{(\sqrt{2\pi}\sqrt{\alpha^2+\beta^2})^N}\prod_i e^{-{\bar x}_i^2 /2 (\alpha^2+\beta^2)} \, . \end{equation} To arrive at a Hierarchical Bayesian model we elevate $\alpha$ to a hyper-parameter and introduce a hyper-prior $p(\alpha)$, which yields the joint posterior distribution \begin{equation}\label{eq:toy1post2} p(\{x_i\}, \alpha \vert s) = \frac{p(\{x_i\}\vert s, \alpha) p(\alpha)}{p(s)} \, . \end{equation} Rather than selecting a single ``best fit'' model, Hierarchical Bayesian methods reveal the range of models that are consistent with the data. In the more familiar, non-hierarchical approach we would maximize the model evidence (\ref{mla}) to find the model that best describes the data, which is here given by \begin{equation} \alpha_{\rm ME}^2 = \frac{1}{N}\sum_{i=1}^N {\bar x}_i^2 - \beta^2 \, . \end{equation} Since ${\rm Var}({\bar x}_i)=\alpha_0^2+\beta^2$, we have \begin{equation}\label{estx} \alpha_{\rm ME}^2 = \alpha_0^2 \pm {\cal O}(\sqrt{2}(\alpha_0^2+\beta^2)/\sqrt{N}) \, . \end{equation} The error estimate comes from the sample variance of the variance estimate. In the limit that the experimental errors $\beta$ are small compared to the width of the prior $\alpha_0$, the error in $\alpha$ scales uniformly as $1/\sqrt{N}$. The scaling is more complicated when we have a collection of observations with a range of measurement errors.
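The estimator above is easy to check numerically. The following sketch (Python with NumPy; the chain length and proposal scale are arbitrary choices on our part) simulates the $\bar{x}_i$, evaluates $\alpha_{\rm ME}$, and draws posterior samples of $\alpha$ with a simple Metropolis walk using the marginal evidence (\ref{mla}) and a flat hyper-prior:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, alpha0, beta = 1000, 2.0, 0.4
xbar = alpha0 * rng.standard_normal(N) + beta * rng.standard_normal(N)

# Maximum-evidence estimate: alpha_ME^2 = mean(xbar^2) - beta^2
alpha_me = np.sqrt(np.mean(xbar**2) - beta**2)

def log_evidence(alpha):
    # log p(s, alpha) up to an additive constant
    var = alpha**2 + beta**2
    return -0.5 * N * np.log(var) - 0.5 * np.sum(xbar**2) / var

chain, alpha, lp = [], 1.0, log_evidence(1.0)
for _ in range(20000):
    prop = alpha + 0.05 * rng.standard_normal()
    if prop > 0:                      # flat hyper-prior on alpha > 0
        lp_prop = log_evidence(prop)
        if np.log(rng.random()) < lp_prop - lp:
            alpha, lp = prop, lp_prop
    chain.append(alpha)

print(alpha_me, np.mean(chain[5000:]), np.std(chain[5000:]))
\end{verbatim}

The posterior width recovered this way is consistent with the error estimate (\ref{estx}).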
Suppose that the measurement errors are large compared to the width of the prior, and that we have $N_1$ observations with standard error $\beta_1$, $N_2$ observations with standard error $\beta_2$, {\it etc.}; then the error in the estimate for $\alpha$ is \begin{equation} \Delta \alpha^2 = \left(\sum_i \frac{N_i}{\beta_i^4}\right)^{-1/2} \, . \end{equation} Recalling that $1/\beta_i$ scales with the signal-to-noise ratio of the observation, we see that a few high SNR observations constrain $\alpha$ far more effectively than a large number of low SNR observations.

The above calculation shows that the maximum evidence criterion provides an unbiased estimator for the model parameter $\alpha_0$, but only if the measurement noise is consistently included in both the likelihood and the simulation of the $\bar{x}_i$. Using the likelihood from (\ref{eq:toy1post}) but failing to include the noise in the simulations leads to the biased estimate $\alpha^2_{\rm ME} = \alpha_0^2 - \beta^2$. Conversely, including noise in the simulation and failing to account for it in the likelihood leads to the biased estimate $\alpha^2_{\rm ME} = \alpha_0^2 + \beta^2$. These same conclusions apply to the Hierarchical Bayesian approach, as we shall see shortly.

\subsection{Numerical Simulation}

The joint posterior distribution (\ref{eq:toy1post2}) can be explored using MCMC techniques. To do this we produced simulated data with $N= 1000$, $\alpha_0 = 2$ and $\beta=0.4$, and adopted a flat hyper-prior for $\alpha$. The posterior distribution function for $\alpha$, marginalized over the $x_i$, is shown in Figure~\ref{fig:toy1_alpha}. The distribution includes the injected value, and has a spread consistent with the error estimate of (\ref{estx}). The Maximum-a-Posteriori (MAP) estimate for $\alpha$ has been displaced from the injected value of $\alpha_0 = 2$ by the simulated noise.

\begin{figure}[htbp] \centering \includegraphics[width=3.0in,angle=0] {Toy_Alpha.eps} \caption{The marginalized posterior distribution function for $\alpha$. The injected value is indicated by the vertical black line.} \label{fig:toy1_alpha} \end{figure}

To test that there is no bias in the recovery of the model hyper-parameter $\alpha$, we produced 30 different realizations of the data and computed the average MAP value. Figure~\ref{fig:toy1_MAPs} shows the MAP value for each of these realizations and the corresponding average. We see that as we average over multiple realizations the MAP estimate of $\alpha$ does indeed converge to the injected value. The blue line in Fig.~\ref{fig:toy1_MAPs} shows a biased recovery for $\alpha$ when noise is not included in the data. We instead recover $\alpha = \sqrt{\alpha^2_0 - \beta^2} \approx 1.96$.

\begin{center} \begin{figure}[htbp] \centering \includegraphics[width=3.0in] {Toy_ModeMAP.eps} \caption{MAP values for 30 different simulations of the toy model. The red curve includes noise in the simulated signal and converges to $\alpha_0$ as expected. The blue curve does not include noise in the simulation and converges to $\sqrt{\alpha_0^2 - \beta^2}$.} \label{fig:toy1_MAPs} \end{figure} \end{center}

\section{Toy Model II}\label{toy2}

The Hierarchical Bayesian approach produces unbiased estimates for the model parameters if the signal and the noise (and hence the likelihood) are correctly modeled. However, in some situations the cost of computing the likelihood can be prohibitive, and it becomes desirable to use approximations to the likelihood, such as the Fisher Information Matrix.
For example, to investigate how the design of a detector influences its ability to discriminate between different astrophysical models, it is necessary to Monte Carlo the analysis over many realizations of the source population for many different instrument designs, which can be very costly using the full likelihood. To explore these issues we introduce a new toy model that more closely resembles the likelihood functions encountered in gravitational wave data analysis.

Consider a waveform $h_0$ that represents a single data point ({\it e.g.} the amplitude of a wavelet or a Fourier component), which can be parameterized in terms of the distance to the source $d_0$. The instrument noise $n$ is assumed to be Gaussian with variance $\beta^2$. Here we will treat the noise level $\beta$ as a hyper-parameter to be determined from the observations. Adopting a fiducial noise level $\beta_0$ allows us to define a reference signal-to-noise ratio ${\rm SNR}^2_0 = h_0^2/\beta_0^2$. The likelihood of observing data $s= h_0+n$ for a source at distance $d$ with noise level $\beta$ is then \begin{equation}\label{liketoy} p(s\vert d, \beta) = \frac{1}{\sqrt{2\pi}\beta}e^{-(s-h)^2/(2\beta^2)} \end{equation} where $h = (d_0/d) h_0$. The likelihood is normally distributed in the inverse distance $1/d$, with a maximum that depends on the particular noise realization $n$: \begin{equation} \frac{1}{d_{\rm ML}} = \frac{1+n/(\beta_0 {\rm SNR}_0)}{d_0} \, . \end{equation} Now suppose that the distances follow a one-sided normal distribution $p(d \geq 0) = \frac{2}{\sqrt{2\pi}\alpha_0}\exp(-d^2/2\alpha_0^2)$, and that we adopt a corresponding model for the distance distribution with hyper-parameter $\alpha$ and a flat hyper-prior. We simulate the data from $N=1000$ sources with $\alpha_0=2$ and $\beta = 0.05$. The values of $\alpha_0$ and $\beta$ were chosen to give a fiducial ${\rm SNR}=5$ for $d = 2\alpha_0$. In the first of our simulations the value of $\beta$ was assumed to be known and we computed the MAP estimates of $\alpha$ for 30 different simulated data sets. As shown in Figure~\ref{fig:toy2_MAPs}, the average MAP estimate for $\alpha$ converges to the injected value.

\begin{figure}[htbp] \centering \includegraphics[width=3.0in,angle=0] {DToy_FishFull.eps} \caption{MAP values for 30 different realizations of toy model II. Using the full likelihood (blue) the MAP values converge to the injected value, but with the Fisher Matrix approximation to the likelihood (red) there is a bias.} \label{fig:toy2_MAPs} \end{figure}

In contrast to the first toy model, where only the combination $\alpha^2+\beta^2$ is constrained by the data, in this more realistic toy model both the noise level $\beta$ and the model hyper-parameter $\alpha$ are separately constrained. Figure~\ref{fig:BetaAlpha} shows the marginalized PDFs for both $\beta$ and $\alpha$. Tests using multiple realizations of the data show that the MAP values of $\alpha$ and $\beta$ are unbiased estimators of the injected parameter values.

\begin{figure}[htbp] \centering \includegraphics[width=3.0in,angle=0] {DToy_NoiseFit.eps} \caption{PDFs for the prior hyper-parameter $\alpha$ and the noise level $\beta$ for toy model II. Both are individually constrained in this model.
The injected values are shown by the black lines.} \label{fig:BetaAlpha} \end{figure}

\subsection{Approximating the Likelihood}

For stationary and Gaussian instrument noise the log likelihood for a signal described by parameters $\vec{\lambda}$ is given by \begin{equation}\label{logL} L(\vec{\lambda}) = -\frac{1}{2}(s-h(\vec{\lambda}) \vert s-h(\vec{\lambda})) \end{equation} where $(a\vert b)$ denotes the standard noise-weighted inner product, and we have suppressed terms that depend on the noise hyper-parameters. We can expand the waveform $h(\vec{\lambda})$ about the injected source parameters $\vec{\lambda}_0$: \begin{equation} h(\vec{\lambda}) = h(\vec{\lambda}_0) + \Delta \lambda^i h_{,i} + \frac{1}{2}\Delta \lambda^i \Delta \lambda^j h_{,ij} + \mathcal{O}(\Delta \lambda^3) \end{equation} where $\Delta \vec{\lambda} = \vec{\lambda}-\vec{\lambda}_0$, and it is understood that the derivatives are evaluated at $\vec{\lambda}_0$. Expanding the log likelihood we find: \begin{eqnarray}\label{lex} L(\Delta \vec{\lambda}) =&-&\frac{1}{2} (n\vert n) +\Delta \lambda^i (n \vert h_{,i})\nonumber \\ &-&\frac{1}{2} \Delta \lambda^i \Delta \lambda^j (h_{,i}\vert h_{,j})+ {\cal O}(\Delta \lambda^3)\, . \end{eqnarray} The maximum likelihood solution is found from $\partial L/\partial\Delta \lambda^i = 0$, which yields $\Delta \lambda_{\rm ML}^i = (n \vert h_{,j}) \Gamma^{ij}$, where $\Gamma^{ij}$ is the inverse of the Fisher Information Matrix $\Gamma_{ij}=(h_{,i}\vert h_{,j})$. Using this solution to eliminate $(n \vert h_{,i})$ from (\ref{lex}) yields the quadratic, Fisher Information Matrix approximation to the likelihood: \begin{equation}\label{fish} L(\vec{\lambda}) = {\rm const.} -\frac{1}{2} (\lambda^i - \lambda^i_{\rm ML})(\lambda^j- \lambda^j_{\rm ML}) \Gamma_{ij} \; . \end{equation} This form of the likelihood can be used in simulations by drawing the $\Delta \lambda_{\rm ML}^i$ from a multi-variate normal distribution with covariance matrix $\Gamma^{ij}$. In our toy model $\Gamma_{dd} = {\rm SNR}^2_0 \beta_0^2/(\beta^2 d_0^2)$, and $L(d)=-{\rm SNR}_0^2 \beta_0^2 (d-d_{\rm ML})^2/(2 \beta^2 d_0^2)$. The approximate likelihood follows a normal distribution in $d$, while the full likelihood follows a normal distribution in $1/d$. For signals with large SNR this makes little difference, but at low SNR the difference becomes significant and results in a bias in the recovery of the model hyper-parameters, as shown in Figure~\ref{fig:toy2_MAPs}. In this instance there is a simple remedy: using $u = 1/d$ in place of $d$ in the quadratic approximation to the likelihood exactly reproduces the full likelihood in this simple toy model. However, it is not always so easy to correct the deficiencies of the quadratic, Fisher Information Matrix approximation to the likelihood.

\section{White Dwarf Binaries in the Milky Way}\label{galaxy}

To illustrate how the Hierarchical Bayesian approach can be applied to an astrophysically relevant problem, we investigate how population models for the distribution of white dwarf binaries in the Milky Way galaxy can be constrained by data from a space based gravitational wave detector. Several studies have looked at parameter estimation for individual white dwarf binaries in the Milky Way~\cite{Cornish:2003vj, Vecchio:2004ec, babak:WDs}. We extend these studies to consider how the individual observations can be combined to infer the spatial and mass distributions of white dwarf binaries in the Galaxy.
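As an aside, the $d$ versus $1/d$ behavior described in the previous subsection is easy to reproduce numerically. The following minimal sketch (Python with NumPy; the parameter values are illustrative assumptions) compares the two likelihoods for a single low-SNR source:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d0, h0, beta = 4.0, 1.0, 0.2      # SNR_0 = h0/beta = 5
n = beta * rng.standard_normal()
s = h0 + n

d = np.linspace(0.5, 20.0, 2000)
L_full = np.exp(-(s - (d0 / d) * h0)**2 / (2 * beta**2))

d_ml = d0 / (1 + n / h0)          # maximum of the full likelihood
sigma_d = beta * d0 / h0          # 1/sqrt(Gamma_dd) at the injection
L_fisher = np.exp(-(d - d_ml)**2 / (2 * sigma_d**2))
\end{verbatim}

The full likelihood is Gaussian in $1/d$ and hence skewed toward large $d$, while the Fisher version is symmetric in $d$; it is this mismatch that biases the recovered hyper-parameters at low SNR.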
We use the Laser Interferometer Space Antenna (LISA)~\cite{LISAwhitepaper} as our reference mission. We focus this analysis on short-period galactic binaries, with gravitational wave frequencies above 4 mHz. Our conclusions would be little changed if we considered the recently proposed eLISA~\cite{AmaroSeoane:2012je} mission instead, as both are able to detect roughly the same number of galactic binaries in the frequency bands considered here. The 4 mHz lower limit is chosen to simplify the analysis in two ways. Firstly, it avoids the signal overlap and source confusion problems that become significant at lower frequencies~\cite{Crowder:2006eu}, and secondly, it circumvents the issue of sample completeness and Malmquist selection bias, since LISA's coverage of the galaxy is complete at high frequencies. This claim is substantiated in Figure~\ref{fig:CumFreq}, which shows the cumulative percentage of binaries detected as a function of frequency for a 4 year LISA mission. A given frequency bin represents the percentage of binaries of that frequency and higher that are detected. All binaries above $\sim4$ mHz are detectable by LISA, of which there are $\sim 5000$. It would be possible to extend our analysis to include all detectable white dwarf binaries if we properly accounted for the undetectable sources. One way to do this is to convolve the astrophysical model priors with a function that accounts for the selection effects~\cite{Schutz:2011tw}, so that we are working with the predicted observed distribution rather than the theoretical distribution. Another approach is to marginalize over the un-detectable signals~\cite{Messenger:2012jy}. The high frequency signals are not only the simplest to analyze, but they also tend to have the highest signal-to-noise ratios, the best sky localization, and the best mass and distance determination due to their more pronounced evolution in frequency. When simulating the population of detectable sources we will assume that binaries of all frequencies above 4 mHz are homogeneously distributed throughout the galaxy and share the same chirp mass distribution. In reality the population is likely to be more heterogeneous, and more complicated population models will have to be used.

\begin{figure}[htbp] \centering \includegraphics[width=3.0in,angle=0] {fHist.eps} \caption{The percentage of sources which are detectable as a function of frequency. Virtually 100\% of the white dwarf binaries in the Milky Way above 4 mHz would be detected by LISA.} \label{fig:CumFreq} \end{figure}

\subsection{Likelihood}

The likelihood for a single source is given by: \begin{equation}\label{eq:FullLikelihood} p(s \vert \vec{\lambda}) = C e^{-\left(s - h(\vec{\lambda}) | s - h(\vec{\lambda})\right)/2} \, . \end{equation} Here $p(s \vert \vec{\lambda})$ is the likelihood that the residual $s - h(\vec{\lambda})$ is drawn from Gaussian noise, where $s$ is the data, and $h(\vec{\lambda})$ is the signal produced in the detector by a source described by parameters $\vec{\lambda}$. The simulated data $s=h(\vec{\lambda}_0)+n$ includes a waveform $h(\vec{\lambda}_0)$ and a realization of the LISA instrument noise $n$. The normalization constant $C$ depends on the instrument noise levels, but is independent of the waveform parameters.
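For reference, a minimal sketch of how the log of (\ref{eq:FullLikelihood}) can be evaluated with a discretized frequency-domain noise-weighted inner product (Python with NumPy; the discrete convention shown is a standard one and an assumption on our part, not an excerpt from our analysis code):

\begin{verbatim}
import numpy as np

def inner(a, b, Sn, df):
    # (a|b) = 4 Re sum_k a_k conj(b_k) / Sn_k * df
    return 4.0 * np.real(np.sum(a * np.conj(b) / Sn)) * df

def log_likelihood(s, h, Sn, df):
    # log p(s|lambda) up to the parameter-independent constant log C
    r = s - h
    return -0.5 * inner(r, r, Sn, df)
\end{verbatim}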
The waveform for a white dwarf binary is well approximated by: \begin{eqnarray} h_+(t) &=& \frac{1}{d}\frac{4(G\mathcal{M})^{5/3} (\pi f)^{2/3}}{c^4}\left(\frac{1+\cos^2\iota}{2}\right)\cos(\Omega t) \nonumber \\ h_{\times}(t) &=& \frac{1}{d}\frac{4(G\mathcal{M})^{5/3} (\pi f)^{2/3}}{c^4}\cos{\iota}\sin(\Omega t) \end{eqnarray} where $\Omega=2\pi f$. We have eight parameters that describe a white dwarf binary signal: the frequency $f$, the distance to the source $d$, the chirp mass $\mathcal{M}$, the inclination angle $\iota$, a polarization angle $\psi$, a phase angle $\varphi_0$, and sky location parameters $\theta$ and $\phi$. To leading order, the frequency evolves as: \begin{equation}\label{fdot} \dot{f} = \frac{96\pi}{5} (\pi \mathcal{M})^{5/3} f^{11/3} \, . \end{equation} Sources with $\dot{f} \, T^2 \, {\rm SNR} \sim 1$, where $T$ is the observation time, provide useful measurements of the chirp mass $\mathcal{M}$ and the distance $d$~\cite{Schutz:1986gp, Takahashi:2002ky}. The strong $f$ dependence in (\ref{fdot}) is the reason why high frequency binaries are the best candidates for placing strong constraints on the distance and chirp mass.

\subsection{Model for the Galaxy}

We adopt a bulge plus disk model for the galaxy shape~\cite{Nelemans:2003ha, Nelemans:2000es, Nelemans:2001nr, Nelemans:2001hp}. Choosing the $x$--$y$ plane as the plane of the galaxy, the density of stars in the galaxy is given by: \begin{equation} \rho(x, y, z) = \rho_0\left(A e^{-r^2/R_b^2} + (1-A)e^{-u/R_d} \text{sech}^2{(z/Z_d)}\right) \, . \end{equation} Here, $r^2 = x^2+y^2+z^2$, $u^2=x^2+y^2$, $R_b$ is the characteristic radius of the bulge, and $R_d$ and $Z_d$ are the characteristic radius and height of the disk, respectively. The quantity $\rho_0$ is a reference density of stars, and the coefficient $A$, which ranges between 0 and 1, weights the number of stars in the bulge versus the number in the disk. We produced synthetic galaxies using the catalog of binaries provided by Gijs Nelemans for the Mock LISA Data Challenges (MLDC)~\cite{Arnaud:2007jy}. With appropriate normalization, the spatial density $\rho$ becomes our prior distribution for the spatial distribution of galactic binaries. The parameters of the density distribution $A$, $R_b$, $R_d$ and $Z_d$ become hyper-parameters in the Hierarchical Bayesian analysis. Each set of values for the four parameters corresponds to a distinct model for the shape of the galaxy. For our simulations, we chose a galaxy with $A=0.25$, $R_b=500$~pc, $R_d=2500$~pc, and $Z_d=200$~pc.

\subsection{Chirp Mass Prior}

Our ability to measure the hyper-parameters of the spatial distribution depends on how well we measure the sky location and distance for each binary. For many sources, the distance is poorly determined because it is highly correlated with the chirp mass. However, there are enough binaries with sufficiently high frequency, chirp mass and/or SNR to provide tight constraints on the chirp mass distribution. The empirically determined chirp mass distribution then functions as a prior for the lower SNR, less massive, or lower frequency sources, and improves their distance determination. Figure~\ref{fig:chirp} shows the chirp mass distribution for binaries in our simulated galaxy.
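A synthetic population of the kind used below can be drawn from the bulge-plus-disk density by simple rejection sampling, as in the following sketch (Python with NumPy; the bounding box is an illustrative assumption, and the overall factor $\rho_0$ drops out of the sampling):

\begin{verbatim}
import numpy as np

A, Rb, Rd, Zd = 0.25, 500.0, 2500.0, 200.0   # fiducial values, in pc

def rho(x, y, z):
    # bulge + disk density, up to the overall factor rho_0
    r2 = x*x + y*y + z*z
    u = np.sqrt(x*x + y*y)
    return A * np.exp(-r2 / Rb**2) \
         + (1 - A) * np.exp(-u / Rd) / np.cosh(z / Zd)**2

rng = np.random.default_rng(2)
rho_max = rho(0.0, 0.0, 0.0)   # the density peaks at the origin

def sample(n, box=(20000.0, 20000.0, 2000.0)):
    pts = []
    while len(pts) < n:
        x, y, z = (rng.uniform(-b, b) for b in box)
        if rng.random() * rho_max < rho(x, y, z):
            pts.append((x, y, z))
    return np.array(pts)
\end{verbatim}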
We use the chirp mass distribution of Figure~\ref{fig:chirp} to construct a hyper-prior on the chirp mass, approximated by the following distribution: \begin{equation}\label{ChirpDist} p({\mathcal{M}}) = \frac{C}{\left(\frac{\mathcal{M}}{\mathcal{M}_0}\right)^{-a} + \frac{a}{b}\left(\frac{\mathcal{M}}{\mathcal{M}_0}\right)^b} \end{equation} where $\mathcal{M}_0$, $a$ and $b$ are hyper-parameters in our model. $C$ is the normalization constant, which can be calculated analytically and is given by: \begin{equation}\label{ChirpNorm} C = \frac{(a+b)\sin{\left(\frac{\pi(b-1)}{a+b}\right)}}{\pi \mathcal{M}_0}\left(\frac{a}{b}\right)^{\frac{a+1}{a+b}} \, . \end{equation} $\mathcal{M}_0$ is the mode of the distribution. The hyper-parameters $a$ and $b$ determine the width of the distribution, which can be seen by calculating the full width at half maximum (FWHM): \begin{equation}\label{ChirpFWHM} \text{FWHM} \simeq \mathcal{M}_0 \left(\left[2(b/a+1)\right]^{1/b} - \left[2(a/b+1)\right]^{-1/a}\right) \, . \end{equation} We further assume that the orbital evolution is due only to the emission of gravitational waves, and is thus adequately described by (\ref{fdot}). In principle one would want to be more careful and consider tidal effects and mass transfer~\cite{Stroeer:2009uy} as possible contributions to $\dot{f}$. However, it is expected that the high frequency sources we are focusing on will be mostly detached white dwarf binaries, where tidal or mass transfer effects are unlikely to be significant~\cite{Willems:2009xk}.

\begin{figure}[htbp] \centering \includegraphics[width=3.0in,angle=0] {Mc_Distribution.eps} \caption{The chirp mass distribution of the 5000 binaries used in our simulations is shown in red. The green distribution shows the MAP values of the recovered chirp mass for each binary, and the blue shows the model (\ref{ChirpDist}) using the MAP values for the chirp mass prior hyper-parameters. The brightest binaries accurately capture the chirp mass distribution, which serves as a useful prior for sources whose chirp masses are not so well determined.} \label{fig:chirp} \end{figure}

\section{Results}

We are able to efficiently calculate the full likelihood for each source (Eq.~\ref{eq:FullLikelihood}) using the fast waveform generator developed by Cornish and Littenberg~\cite{Cornish:2007if}. The following results are all derived from simulations using the full likelihood. Using the same MCMC approach from our toy models, we sample the posterior and obtain PDFs for the source and model parameters simultaneously. We check for convergence by starting the chains at different locations in the prior volume and find that, regardless of starting location, the chains converge to the same PDFs. Our procedure successfully recovers the correct chirp mass distribution, as shown in Figure~\ref{fig:chirp}, and is able to meaningfully constrain the parameters of the galaxy distribution and chirp mass distribution models, with PDFs shown in Figure~\ref{fig:hyperparameters} and Figure~\ref{fig:chirpparameters}, respectively.

\begin{figure}[htbp] \centering \includegraphics[width=3.4in,angle=0] {Galaxy_Hypers.eps} \caption{PDFs for the four galaxy model hyper-parameters. The red is for a simulation using 100 binaries, the green 1000 binaries, and the blue 5000 binaries.
The black lines show the true values of the distribution from which the binaries were drawn.} \label{fig:hyperparameters} \end{figure}

\begin{figure}[htbp] \centering \includegraphics[width=3.4in,angle=0] {Chirp_Hypers.eps} \caption{PDFs for the three chirp mass model hyper-parameters and the FWHM of the distribution. The red is for a simulation using 100 binaries, the green 1000 binaries, and the blue 5000 binaries.} \label{fig:chirpparameters} \end{figure}

We ran simulations with 100, 1000, and 5000 binaries to show how the constraints on the galaxy hyper-parameters improve as we include more sources (for comparison, eLISA is expected to detect between 3500 and 4100 white dwarf binaries during a 2-year mission lifetime~\cite{AmaroSeoane:2012je}). The chains ran for 1 million, 500k, and 100k iterations, respectively. Even for a relatively modest number of detections we begin to get meaningful measurements of the population model of white dwarf binary systems. The more binaries we use in our analysis, the tighter our constraints on the hyper-parameters.

\begin{table}[ht] \centering \begin{tabular}{c | c c | c c | c c} \hline\hline \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{100} & \multicolumn{2}{c|}{1000} & \multicolumn{2}{c}{5000} \\ \multicolumn{1}{c|}{Parameter} & \multicolumn{1}{ c }{MAP} & \multicolumn{1}{ c|}{$\sigma$} & \multicolumn{1}{ c }{MAP} & \multicolumn{1}{ c|}{$\sigma$} & \multicolumn{1}{ c }{MAP} & \multicolumn{1}{ c}{$\sigma$} \\ \hline $A$ & 0.262 & 0.047 & 0.226 & 0.0157 & 0.249 & 0.0074 \\ $R_b$ (pc) & 440 & 58.9 & 490 & 17.1 & 480 & 8.38 \\ $R_d$ (pc) & 2465 & 237.5 & 2584 & 70.2 & 2461 & 32.4 \\ $Z_d$ (pc) & 193 & 20.8 & 201 & 7.02 & 195 & 3.25 \\ [1ex] \hline $\mathcal{M}_0$ & 0.226 & 0.0063 & 0.208 & 0.0018 & 0.205 & 0.00088 \\ FWHM & 0.07 & 0.0094 & 0.071& 0.0026 & 0.076 & 0.0014 \\ [1ex] \hline \end{tabular} \caption{MAP values and standard deviations for the galaxy and chirp mass hyper-parameters when using 100, 1000 and 5000 galactic binaries in the analysis. The simulated values were $A=0.25$, $R_b=500$~pc, $R_d=2500$~pc, and $Z_d=200$~pc.} \label{table1} \end{table}

Table~\ref{table1} lists the recovered MAP values and the standard deviation of the marginalized posterior distribution function for each hyper-parameter. Gravitational wave observations would be very competitive with existing electromagnetic observations in constraining the shape of the galaxy~\cite{McMillan:2009yr, Juric:2005zr}. Making direct comparisons between our results and those in the literature is complicated, as the actual values of the bulge and disk radii are very model dependent. For example, Juric et al.\ use a model where the galaxy is comprised of both a thin and a thick disk. With GW data in hand, this comparison could easily be made by substituting their density profile for the one used here. What matters for this proof-of-principle study is how well the parameters can be constrained. In the models of Juric et al., the constraints on the disk radii are around 20\%. We find similar accuracy when using a pessimistic population of 100 systems. Adopting a source catalog that is more consistent with theoretical predictions, we find constraints on the disk parameters as low as 1.5\% -- a substantial improvement over the state-of-the-art.

\subsection{Approximating the Likelihood}\label{approx}

While we happen to have a very efficient method for computing the full likelihood for galactic binaries, this is not always the case.
For other signal types the full likelihood can be very expensive to compute, posing problems if we wish to do extensive studies of many astrophysical models or detector configurations. For such exploratory studies it is preferable to use the Fisher Information Matrix approximation to the likelihood of (\ref{fish}). However, as we saw with the toy model in \S\ref{toy2}, this can lead to biases in the recovered parameters. The Fisher matrix $\Gamma_{ij}$ is not a coordinate invariant quantity, and we can at least partially correct the bias by reparameterizing our likelihood. Just as in \S\ref{toy2}, instead of using the distance $d$ as a variable, we can instead use $1/d$, which provides a much better approximation to the full likelihood. We test these short-cuts by redoing the analysis of the galactic population using the Fisher matrix approximation to the likelihood (both with $d$ and $1/d$ as parameters) and comparing to the results from the previous analysis using the full likelihood. Figure~\ref{fig:CompareL} shows PDFs for the galaxy hyper-parameters using the three different methods for computing $p(s\vert\vec{\lambda})$ with the full sample of 5000 binaries.

\begin{figure}[htbp] \centering \includegraphics[width=3.4in,angle=0] {FullGaussDist5k.eps} \caption{PDFs from a simulation using 5000 binaries for the four galaxy model hyper-parameters for the full likelihood (red), a Fisher approximation in $d$ (green), and a Fisher approximation in $1/d$ (blue).} \label{fig:CompareL} \end{figure}

We find that the approximation using $1/d$ matches the full likelihood better than the likelihood parameterized with $d$; however, there are additional discrepancies due to non-quadratic terms in the sky location $\{\theta,\phi\}$ that we have not accounted for. The dependence of the waveform on $\{\theta,\phi\}$ is more complicated than its dependence on the distance, and is not so easily corrected by a simple reparameterization. The approximation could be improved by carrying the expansion of the likelihood beyond second order; however, this is computationally expensive and can be numerically unstable.

\begin{figure}[htbp] \centering \includegraphics[width=3.4in,angle=0] {Bias5k.eps} \caption{MAP values and corresponding averages from simulations using 5000 binaries for the four galaxy model hyper-parameters for the full likelihood (red), a Fisher Matrix approximation parameterized with $d$ (green), and a Fisher Matrix approximation using $1/d$ (blue).} \label{fig:CompareMAPs} \end{figure}

If we analyze several realizations of the galaxy using the three different likelihood functions and average the results, we find the biases are persistent for the approximate methods. Figure~\ref{fig:CompareMAPs} shows the MAP values and the average of the MAP values for 10 realizations of our fiducial galaxy model. The biases in the recovered disk radius and disk height are particularly pronounced when using the Fisher Matrix approximation to the likelihood parameterized with $d$.

\section{Conclusion}\label{concl}

We have demonstrated a general Hierarchical Bayesian method capable of constraining the model parameters for a population of sources. In the particular case of white dwarf binaries in the Milky Way, we can constrain the spatial distribution of the galaxy to levels better than current electromagnetic observations using the anticipated number of systems detectable by space-based gravitational wave detectors.
Even if the currently held event rates for white dwarf binaries turn out to be optimistic by more than an order of magnitude, the constraints possible with a gravitational wave detector would still be comparable to current estimates of the Milky Way's shape. When the data from a space-borne detector has been collected, the resolvable white dwarf binaries will be regressed from the data, leaving behind a confusion-limited foreground which will contribute significantly to the overall power in the data around $\sim 1$ mHz. Measuring the overall shape of the galaxy as demonstrated here will provide additional means to characterize the level of the confusion noise. As we will show in an upcoming paper, we can then use the detailed understanding of the foreground signal to detect a stochastic gravitational wave background at levels well below the confusion noise.

Analyzing simulated data with the full likelihood is computationally taxing and, when performing a large suite of such studies, could prove to be prohibitive. To mitigate the cost of such analyses, we tested a much faster approach (approximately 50 times faster), using the Fisher matrix approximation to the likelihood. We find the results are significantly less biased by the Fisher approximation when using $1/d$ as the parameter that encodes the distance to the source. This simple adjustment gives adequately reliable results in significantly less time than the brute-force calculation, and will provide an additional, useful metric to gauge the relative merits of proposed space-based gravitational wave missions.

\section{Acknowledgments}

NJC and MA were supported by NASA grant NNX07AJ61G. TBL was supported by NASA Grant 08-ATFP08-0126.
\section{Introduction}

Face recognition plays an important role in the ever-growing domains of computer vision and artificial intelligence (AI). With the adoption of deep machine learning techniques, facial recognition performance has reached extraordinary levels: a 0.1\% rank-one miss rate on a gallery of 12 million individuals \cite{grother2019face}. An emerging problem in this domain is the vulnerability to biases, which results in unfair decisions. Due to the data-dependent nature of most contemporary machine learning techniques, existing biases in data can bias the underlying algorithms, and in some cases these biases may be amplified. A biased AI-based decision may lead to unfair treatment, such as in hiring scenarios \cite{cohen2019efficient}. These growing concerns encourage the development of a ``fair'' AI system, which is critical for the future of AI-based decision-making. The creation of a ``fair'' system is a multi-stage development process and depends on understanding what bias is and how the mitigation of bias can lead to fairness. In \cite{ntoutsi2020bias}, bias is defined as the ``inclination or prejudice of a decision made by an AI system which is for or against one person or group, especially in a way considered to be unfair.'' In \cite{mehrabi2021survey}, fairness is defined as ``the absence of any prejudice or favoritism toward an individual or group based on their inherent or acquired characteristics.'' The two most common biases, gender and racial bias, can manifest in a dataset due to the nature of data collection, resulting in over- or under-representation of the different demographic groups. A possible solution is to sub-sample over-represented groups while synthetically augmenting under-represented groups. In this paper, we evaluate the facial recognition performance and fairness metrics for visual and thermal images. We contrast these results with their synthetic masked counterparts to illustrate how the same fairness analysis can be used to assess the fairness of both real and synthetic images.

\section{Method}

We propose a process to evaluate the performance and fairness of a facial recognition system that can be applied to both real and synthetic images as well as visual and thermal modalities. For the experimental dataset, we choose to use the SpeakingFaces \cite{abdrakhmanova2021speakingfaces} and Thermal-Mask \cite{queiroz2021thermal} datasets because of their visual and thermal modalities. We adopt demographic parity and equalized odds to assess fairness, while using precision, recall, and F1-score for performance evaluation. The proposed facial recognition process uses simple 2-Block and 3-Block convolutional neural networks.

\subsection{Dataset}

SpeakingFaces Dataset \cite{abdrakhmanova2021speakingfaces}: a large-scale multimodal dataset that combines thermal, visual, and audio data streams. It includes data from 142 subjects, with a gender balance of 68 female and 74 male participants, with ages ranging from 20 to 65 years and an average of 31 years. With approximately 4.6 million images collected in both the visible and thermal spectra, each of the 142 subjects has nine different head positions, with 900 frames per position acquired over 2 trials. Fig.~\ref{fig:speaking} shows the thermal and visual images of 2 different subjects.
\begin{figure}[!hbt] \hspace{-2.5mm} \begin{tabular}{cccc} \includegraphics[width=.1\textwidth]{fig/1_0_visual.png} & \includegraphics[width=.1\textwidth]{fig/1_0_thermal.png} & \includegraphics[width=.1\textwidth]{fig/16_8386_visual.png} & \includegraphics[width=.1\textwidth]{fig/16_8386_thermal.png} \\ (a) & (b) & (c) & (d) \\ \end{tabular} \caption{Example images from the SpeakingFaces Dataset: (a) subject 1 visual, (b) subject 1 thermal, (c) subject 16 visual, (d) subject 16 thermal.} \label{fig:speaking} \end{figure}

Thermal-Mask dataset \cite{queiroz2021thermal}: a synthetic mask dataset created using the SpeakingFaces Dataset. This dataset consists of 80 subjects with a total of 84,920 synthetic masked visual and thermal images. The images in this dataset are cropped and aligned with their SpeakingFaces counterparts and have a pixel resolution of 256$\times$256. Fig.~\ref{fig:thermal} shows the thermal and visual mask images of 2 different subjects.

\begin{figure}[!hbt] \hspace{-2.5mm} \begin{tabular}{cccc} \includegraphics[width=.1\textwidth]{fig/1_0_mask_visual.png} & \includegraphics[width=.1\textwidth]{fig/1_0_mask_thermal.png} & \includegraphics[width=.1\textwidth]{fig/16_8386_mask_visual.png} & \includegraphics[width=.1\textwidth]{fig/16_8386_mask_thermal.png} \\ (a) & (b) & (c) & (d) \\ \end{tabular} \caption{Example images from the Thermal-Mask Dataset: (a) subject 1 visual, (b) subject 1 thermal, (c) subject 16 visual, (d) subject 16 thermal.} \label{fig:thermal} \end{figure}

\subsection{Performance Metrics}

In this paper, the performance of the machine learning models is measured in terms of Precision, Recall, and F1-Score (F1), defined in Eqs.~\ref{eq:prec}--\ref{eq:f1}: \begin{equation}\label{eq:prec} \text{Precision} = \frac{TP}{TP+FP} \end{equation} \begin{equation}\label{eq:rec} \text{Recall} = \frac{TP}{TP+FN} \end{equation} \begin{equation}\label{eq:f1} \text{F1} = \frac{2\cdot\text{Precision}\cdot\text{Recall}}{\text{Precision}+\text{Recall}}=\frac{2TP}{2TP+FP+FN} \end{equation} where $TP$ (True Positives) represents correct recognition of the genuine user, $TN$ (True Negatives) represents the correct recognition of imposters, $FP$ (False Positives) represents incorrect recognition of imposters as the genuine user, and $FN$ (False Negatives) represents the incorrect recognition of the genuine user as an imposter.

\subsection{Fairness Metrics}

In this paper, we evaluate the fairness of the facial recognition system using the demographic parity difference \cite{[Dwork-2012]} and equalized odds difference \cite{hardt2016equality}. Demographic parity, also known as statistical parity, states that the positive rate should be similar for each protected group. A system satisfies statistical parity if its prediction is statistically independent of the demographic group. This can be represented as: \begin{equation} Pr(\hat{y}|D=a) = Pr(\hat{y}|D=b) = \dots = Pr(\hat{y}|D=z) \end{equation} where $\hat{y}$ represents the predictor, $D$ is the demographic group (e.g. gender, ethnicity, etc.), and $a,b,\dots,z\in D$ are the classes (e.g. male and female) in demographic group $D$. The demographic parity difference (DPD) is defined as the difference in positive rate between the largest and smallest demographic classes. A DPD of 0 indicates that all demographic groups have the same positive rate. \begin{equation}\label{eq:dpd} DPD = Pr(\hat{y}|D=l)-Pr(\hat{y}|D=s) \end{equation} where $l, s \in D$ represent the largest and smallest class in the demographic group $D$, respectively.
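The demographic parity difference of Eq.~\ref{eq:dpd} is straightforward to compute from a set of predictions. The sketch below (Python with NumPy; an illustration on our part, not an excerpt from the experimental code) follows the largest-versus-smallest-class convention above:

\begin{verbatim}
import numpy as np

def demographic_parity_difference(y_pos, groups):
    # y_pos:  boolean array, True where the prediction is positive
    # groups: demographic label for each prediction
    labels, counts = np.unique(groups, return_counts=True)
    rate = {g: np.mean(y_pos[groups == g]) for g in labels}
    l = labels[np.argmax(counts)]   # largest class
    s = labels[np.argmin(counts)]   # smallest class
    return rate[l] - rate[s]
\end{verbatim}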
Equalized odds states that the true positive rate and false positive rate across each protected group should be similar. A system satisfies equalized odds if its prediction is conditionally independent of the protected group, given the true outcome. This can be represented as: \begin{equation} Pr(\hat{y}|y,D=a) = \dots = Pr(\hat{y}|y,D=z) \end{equation} where $\hat{y}$ represents the predictor, $y$ represents a conditionally positive outcome, and $a,b,\dots,z\in D$ are the classes in protected group $D$. The equalized odds difference (EOD) is defined as the larger of the two: the true positive rate difference (TPD) and the false positive rate difference (FPD). The TPD is defined as the difference in true positive rate between the largest and smallest demographic class. Similarly, the FPD is the difference in false positive rate between the largest and smallest demographic class. An EOD of 0 indicates that all demographic groups have the same true positive, true negative, false positive, and false negative rates. \begin{equation} TPD = Pr(\hat{y}|y,D=l) - Pr(\hat{y}|y,D=s) \end{equation} \begin{equation} FPD = Pr(\hat{y}|y',D=l) - Pr(\hat{y}|y',D=s) \end{equation} \begin{equation}\label{eq:eod} EOD = \max(TPD,FPD) \end{equation} where $y'$ represents a conditionally negative outcome and $l, s \in D$ represent the largest and smallest class in the demographic group $D$, respectively.

\subsection{Convolutional Neural Network}

In this paper, we choose to use two simple deep convolutional neural networks to evaluate the overall facial recognition performance and fairness metrics. Table~\ref{tab:cnn} shows the CNN architectures used in this paper. Both CNNs were trained using the Adam optimizer with default parameters: learning rate $\alpha=0.001$, $\beta_1=0.9$, and $\beta_2=0.999$. For each subject, 10\% of the images were used for training and the remaining 90\% were used for testing. The networks were trained with a batch size of 32 for a total of 10 epochs. This extreme partition of training/testing sets, accompanied by the low epoch count, was chosen to evaluate the role of imbalanced data on facial recognition, specifically assessing fairness in the dataset. The purpose of the experiment is not to maximize facial recognition performance but to observe the change in fairness between real and synthetic images.

\begin{table}[!htb] \centering \caption{2-Block and 3-Block CNN Architecture}\label{tab:cnn} \begin{tabular}{c|c} 2-block CNN & 3-block CNN \\ \hline Input 256x256x3 & Input 256x256x3\\ \hline 64 Conv2D 3x3 & 64 Conv2D 3x3 \\ 64 Conv2D 3x3 & 64 Conv2D 3x3 \\ Max Pooling & Max Pooling \\ Batch Normalization & Batch Normalization \\ \hline 128 Conv2D 3x3 & 128 Conv2D 3x3 \\ 128 Conv2D 3x3 & 128 Conv2D 3x3 \\ Max Pooling & Max Pooling \\ Batch Normalization & Batch Normalization \\ \hline & 256 Conv2D 3x3\\ & 256 Conv2D 3x3\\ & Max Pooling\\ & Batch Normalization\\ \hline \multicolumn{2}{c}{Global Average Pooling} \\ \multicolumn{2}{c}{Fully-Connected} \\ \multicolumn{2}{c}{Softmax Classification} \\ \end{tabular} \end{table}

\section{Experimental Results}

The experimental study involves the use of the two simple 2-block and 3-block convolutional neural networks to perform facial recognition. The performance (precision, recall, and F1-score) and fairness are assessed using the test set, which consists of 90\% (38,222 images) of the images.

\subsection{Performance Across Groups}

In this paper, we evaluate the performance and fairness for 3 different demographic groups: gender, ethnicity, and age.
For gender, we separate the dataset into binary male/female classes based on the gender labels provided in the dataset. For ethnicity, we divide into 3 categories: A, B, and C, based on the ethnicity labels provided for each subject. Lastly, for age, we split the dataset into 4 groups: $<$25, 25 to 30, 31 to 35, and $>$35, based on the reported age of each subject. The probability distributions for each demographic group are as follows: \begin{footnotesize} \begin{center} \begin{tabular}{cc} \begin{tabular}{cc} \multicolumn{2}{c}{Gender} \\ Male & Female \\ \hline 0.5625 & 0.4375 \\ \end{tabular} & \begin{tabular}{ccc} \multicolumn{3}{c}{Ethnicity} \\ A & B & C \\ \hline 0.750 & 0.0375 & 0.2125 \\ \end{tabular} \\ \multicolumn{2}{c}{\begin{tabular}{cccc} \multicolumn{4}{c}{Age} \\ $<$25 & 25-30 & 31-35 & $>$35 \\ \hline 0.3875 & 0.3250 & 0.1250 & 0.1625 \\ \end{tabular} } \end{tabular} \end{center} \end{footnotesize}

We can see from the prior probability distribution for each demographic group that the dataset is not balanced; that is, the male-to-female and A-to-B-to-C ratios are not uniform. When an imbalanced dataset is used for training a CNN, it can lead to a biased network. An example of the performance of a biased network is shown in Table~\ref{tab:bias}. The 2-block CNN is used as the recognition model, with the SpeakingFaces (un-mask) and Thermal-Mask (mask) datasets used for evaluation. The rows represent the performance based on the different demographic groups such as gender, age, and ethnicity. For this experiment, we show the performance in terms of precision, recall, and F1-score measured for visual or thermal images as well as real (un-mask) or synthetic (masked) images. For example, the second row in Table~\ref{tab:bias} shows the performance for subjects with an age of 32. This group shows high performance for thermal images regardless of the real or synthetic nature. An interesting disparity is shown when comparing the real and synthetic performance of the visual images using recall and F1-scores. The masked recall rate for the visual images is 30.45\% while the un-masked recall rate is 99.79\%.
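For concreteness, the 2-block architecture of Table~\ref{tab:cnn} can be written in a few lines of tf.keras (a sketch assuming TensorFlow 2.x; the ReLU activations and `same' padding are our assumptions, as the table does not specify them):

\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, models

def two_block_cnn(num_subjects=80):
    m = models.Sequential([
        layers.Input(shape=(256, 256, 3)),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.BatchNormalization(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.BatchNormalization(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_subjects, activation="softmax"),
    ])
    m.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
    return m
\end{verbatim}

The 3-block variant simply appends a third Conv--Conv--Pool--BatchNorm stage with 256 filters before the global pooling layer.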
\begin{table*}[!htb] \centering \begin{footnotesize} \caption{2-Block CNN Facial Recognition Performance in terms of Precision, Recall, and F1-Score.}\label{tab:bias} \begin{tabular}{l|rr|rr|rr||rr|rr|rr} & \multicolumn{6}{c||}{Visual} & \multicolumn{6}{c}{Thermal} \\ \cline{2-13} & \multicolumn{2}{c|}{Precision} & \multicolumn{2}{c|}{Recall} & \multicolumn{2}{c||}{F1-Score} & \multicolumn{2}{c|}{Precision} & \multicolumn{2}{c|}{Recall} & \multicolumn{2}{c}{F1-Score}\\ & Mask & Un-Mask & Mask & Un-Mask & Mask & Un-Mask & Mask & Un-Mask & Mask & Un-Mask & Mask & Un-Mask\\ \hline \hline baseline & 70.94 & 88.10 & 53.93 & 83.01 & 50.68 & 82.85 & 69.96 & 85.82 & 68.92 & 85.66 & 68.92 & 85.66 \\ \hline age:32 & 100.00 & 97.98 & 30.45 & 99.79 & 46.69 & 98.88 & 100.00 & 100.00 & 91.96 & 97.01 & 91.96 & 97.01 \\ age:37 & 77.67 & 100.00 & 59.52 & 80.96 & 58.61 & 89.32 & 77.48 & 97.34 & 84.06 & 93.73 & 84.06 & 93.73 \\ age:29 & 87.87 & 95.33 & 58.19 & 88.73 & 64.08 & 91.11 & 76.74 & 88.16 & 74.04 & 89.65 & 74.04 & 89.65 \\ age:27 & 57.05 & 70.56 & 55.44 & 79.66 & 28.09 & 68.75 & 76.57 & 89.62 & 65.72 & 86.71 & 65.72 & 86.71 \\ age:24 & 84.53 & 83.31 & 76.61 & 89.03 & 76.68 & 85.41 & 82.30 & 93.28 & 79.37 & 94.19 & 79.37 & 94.19 \\ age:25 & 72.97 & 84.13 & 44.20 & 85.60 & 46.53 & 80.91 & 72.09 & 93.76 & 68.41 & 92.58 & 68.41 & 92.58 \\ age:21 & 78.41 & 92.48 & 65.50 & 73.72 & 60.75 & 79.84 & 50.76 & 76.53 & 56.18 & 80.58 & 56.18 & 80.58 \\ age:22 & 46.41 & 88.56 & 38.81 & 78.13 & 38.99 & 82.76 & 74.76 & 87.02 & 79.66 & 84.49 & 79.66 & 84.49 \\ age:23 & 65.48 & 82.46 & 49.04 & 86.64 & 38.67 & 81.52 & 57.60 & 77.60 & 64.53 & 83.47 & 64.53 & 83.47 \\ age:33 & 93.17 & 98.55 & 65.95 & 75.41 & 73.91 & 85.28 & 67.39 & 99.79 & 55.06 & 92.02 & 55.06 & 92.02 \\ age:35 & 54.96 & 68.65 & 91.15 & 99.59 & 68.58 & 81.28 & 87.04 & 96.91 & 93.07 & 96.62 & 93.07 & 96.62 \\ age:30 & 80.27 & 87.98 & 56.84 & 92.90 & 54.56 & 90.02 & 67.53 & 70.72 & 74.99 & 79.71 & 74.99 & 79.71 \\ age:57 & 77.94 & 100.00 & 66.87 & 98.56 & 71.98 & 99.27 & 100.00 & 100.00 & 67.59 & 98.28 & 67.59 & 98.28 \\ age:36 & 61.58 & 87.67 & 77.20 & 97.05 & 65.17 & 91.95 & 66.35 & 65.99 & 68.41 & 65.79 & 68.41 & 65.79 \\ age:28 & 66.14 & 83.04 & 48.08 & 81.28 & 53.66 & 80.84 & 51.30 & 66.74 & 64.89 & 79.25 & 64.89 & 79.25 \\ age:26 & 64.22 & 99.52 & 52.61 & 77.85 & 57.78 & 83.17 & 91.50 & 84.64 & 92.48 & 91.00 & 92.48 & 91.00 \\ age:41 & 34.28 & 89.76 & 26.03 & 86.52 & 29.59 & 88.11 & 67.59 & 91.67 & 60.16 & 93.43 & 60.16 & 93.43 \\ age:20 & 67.43 & 87.48 & 46.34 & 72.14 & 46.40 & 73.64 & 72.92 & 91.23 & 59.64 & 78.30 & 59.64 & 78.30 \\ age:45 & 96.70 & 97.92 & 20.09 & 75.11 & 33.27 & 85.01 & 85.84 & 97.95 & 39.19 & 95.76 & 39.19 & 95.76 \\ age:34 & 70.07 & 91.43 & 32.92 & 75.62 & 38.36 & 81.12 & 69.10 & 91.79 & 64.66 & 75.01 & 64.66 & 75.01 \\ age:31 & 95.67 & 100.00 & 37.41 & 87.47 & 51.99 & 93.21 & 67.85 & 97.06 & 78.75 & 96.54 & 78.75 & 96.54 \\ age:40 & 90.23 & 93.48 & 47.53 & 94.44 & 62.26 & 93.96 & 94.44 & 98.56 & 86.36 & 99.27 & 86.36 & 99.27 \\ age:39 & 91.47 & 93.00 & 94.91 & 100.00 & 93.16 & 96.38 & 96.02 & 96.90 & 97.86 & 96.48 & 97.86 & 96.48 \\ age:46 & 35.85 & 100.00 & 95.82 & 81.37 & 52.17 & 89.73 & 78.33 & 75.67 & 80.31 & 74.81 & 80.31 & 74.81 \\ \hline gender:Male & 70.31 & 87.38 & 55.70 & 82.97 & 50.95 & 83.27 & 68.77 & 88.01 & 66.26 & 88.22 & 66.26 & 88.22 \\ gender:Female & 71.76 & 89.03 & 51.66 & 83.07 & 50.34 & 82.32 & 71.50 & 83.02 & 72.35 & 82.36 & 72.35 & 82.36 \\ \hline ethnicity:A & 70.97 & 86.10 & 50.66 & 81.96 & 47.32 & 
81.16 & 66.53 & 84.34 & 66.09 & 84.48 & 66.09 & 84.48 \\ ethnicity:B & 75.95 & 86.97 & 66.87 & 84.77 & 57.17 & 83.93 & 89.71 & 97.05 & 92.36 & 97.85 & 92.36 & 97.85 \\ ethnicity:C & 69.96 & 95.33 & 63.18 & 86.42 & 61.41 & 88.64 & 78.59 & 89.08 & 74.80 & 87.65 & 74.80 & 87.65 \\ \end{tabular} \end{footnotesize} \end{table*}

\subsection{Fairness Across Real and Synthetic Images}

Table~\ref{tab:performance_fair} shows the performance and fairness metrics for the two CNNs. Given the same hyperparameters used for training both CNNs, the 3-Block CNN greatly outperforms the 2-Block CNN. The 3-Block CNN achieves near-perfect recognition performance on the thermal images, with a slight decrease for the visual images. The demographic parity difference (DPD) and equalized odds difference (EOD) are calculated based on Eq.~\ref{eq:dpd} and Eq.~\ref{eq:eod}, respectively. The rows in the table represent the data used for the experiments and the columns represent the different performance/fairness metrics used. We can see that as the recognition performance approaches 100\%, the DPD approaches 5, 2.5, and 3.75 for age, gender, and ethnicity, respectively. The limiting value is the quotient of the number of classes in the demographic group and the number of subjects to be recognized, expressed as a percentage. For example, for gender the limiting value is calculated as $\frac{2\ \mathrm{classes}}{80\ \mathrm{subjects}}\times 100 = 2.50$. As we decrease the performance of recognition, either by reducing the model learning capacity or increasing noise in an image, we see a decrease in DPD and EOD. The last row of the table simulates the performance of random guessing, which would be equivalent to $1/80 = 1.25\%$ if the number of samples per subject were the same; however, since there is an unbalanced number of images per subject, the random performance is approximately 1.32\%.

\begin{table*}[!htb] \centering \begin{footnotesize} \caption{Facial Recognition Performance and Fairness Evaluation}\label{tab:performance_fair} \begin{tabular}{l|rrr|rrr|rrr} & \multicolumn{3}{c|}{Performance} & \multicolumn{3}{c|}{DPD} & \multicolumn{3}{c}{EOD} \\ & \multicolumn{1}{c}{Precision} & \multicolumn{1}{c}{Recall} & \multicolumn{1}{c|}{F1-Score} & \multicolumn{1}{c}{Age} & \multicolumn{1}{c}{Gender} & \multicolumn{1}{c|}{Ethnicity} & \multicolumn{1}{c}{Age} & \multicolumn{1}{c}{Gender} & \multicolumn{1}{c}{Ethnicity} \\ \hline \hline \multicolumn{10}{c}{2-Block CNN}\\ Mask-Visual & 70.94 & 53.93 & 50.68 & 3.33 & 2.04 & 3.26 & 53.93 & 53.93 & 53.93 \\ Normal-Visual & 88.10 & 83.01 & 82.85 & 4.45 & 2.33 & 3.61 & 83.01 & 83.01 & 83.01 \\ Mask-Thermal & 82.20 & 69.96 & 68.92 & 3.97 & 2.21 & 3.38 & 69.96 & 69.96 & 69.96 \\ Normal-Thermal & 89.57 & 85.82 & 85.66 & 4.56 & 2.43 & 3.58 & 85.82 & 85.82 & 85.82 \\ \hline \multicolumn{10}{c}{3-Block CNN}\\ Mask-Visual & 99.58 & 99.54 & 99.54 & 4.98 & 2.50 & 3.75 & 99.54 & 99.54 & 99.54 \\ Normal-Visual & 99.89 & 99.87 & 99.88 & 4.99 & 2.50 & 3.75 & 99.87 & 99.87 & 99.87 \\ Mask-Thermal & 99.99 & 99.99 & 99.99 & 5.00 & 2.50 & 3.75 & 99.99 & 99.99 & 99.99 \\ Normal-Thermal & 99.99 & 99.99 & 99.99 & 5.00 & 2.50 & 3.75 & 99.99 & 99.99 & 99.99 \\ \hline Random-Guess & 1.32 & 1.33 & 1.32 & 1.59 & 0.10 & 0.29 & 1.76 & 1.33 & 1.34 \\ \end{tabular} \end{footnotesize} \end{table*}

Fig.~\ref{fig:tsne-thermal} shows the t-SNE visualization of the features extracted using the CNNs, encoded into a 2D map. Each point in the t-SNE plots represents an embedded image. Multiple points form a cluster that represents a demographic group.
The combination of different demographic groups forms the characteristics of a subject, which can be used for identification. For example, the bottom-left cluster ($x\approx-30$, $y\approx-90$) indicates an age of 35, Male gender, and Black ethnicity. Combined, these characteristics indicate that the person is subject 23. This process can be applied to both real and synthetic images to identify individuals using the cluster and demographic data. \begin{figure*}[!ht] \begin{center} \begin{tabular}{cc} \includegraphics[width=0.4\textwidth]{fig/thermal_mask_age.pdf} & \includegraphics[width=0.4\textwidth]{fig/thermal_normal_age.pdf} \\ (a) & (b) \\ \includegraphics[width=0.4\textwidth]{fig/thermal_mask_gender.pdf} & \includegraphics[width=0.4\textwidth]{fig/thermal_normal_gender.pdf} \\ (c) & (d) \\ \includegraphics[width=0.4\textwidth]{fig/thermal_mask_ethnicity.pdf} & \includegraphics[width=0.4\textwidth]{fig/thermal_normal_ethnicity.pdf} \\ (e) & (f) \\ \end{tabular} \end{center} \caption{The t-SNE visualization of the features extracted using the CNNs: (a) age-based thermal mask plot, (b) age-based thermal un-mask plot, (c) gender-based thermal mask plot, (d) gender-based thermal un-mask plot, (e) ethnicity-based thermal mask plot, (f) ethnicity-based thermal un-mask plot.} \label{fig:tsne-thermal} \end{figure*} \section{Conclusions} Our study addresses the problem of bias and how it can impact fairness in a biometric system, specifically a facial recognition system. Bias can come from a variety of sources; in this paper, we explore how an imbalanced dataset can contain dangerously biased cohorts in the form of demographic groups such as gender, ethnicity, and age. These biases can lead the machine learning algorithm to make unfair decisions. In this paper, we show how the same process of evaluating fairness on real images can be replicated on synthetic images. The evaluation shows that fairness correlates more with the performance of the system than with whether or not the images are synthetic. As the performance increases, the demographic parity difference also increases, in proportion to the number of classes in the demographic group. Given a simple 3-Block CNN with a precision and recall rate of 99.99\%, the DPD for age, gender, and ethnicity is reported as 5, 2.5, and 3.75, respectively. A future application is to build a combined real and synthetic dataset, where synthetic images are used to augment classes with few samples to create an overall more balanced dataset. \section*{Acknowledgment} This research was partially supported by the Natural Sciences and Engineering Research Council Canada (NSERC SPG grant ``Biometric-Enabled Identity Management for Safe and Secure Cities''). This work was partially supported by Natural Sciences and Engineering Research Council of Canada through Discovery Grant ``Biometric Intelligent Interfaces''. {\small \bibliographystyle{IEEEtran}
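As a complement to the fairness evaluation above, the following is a minimal computational sketch of a demographic-parity-style difference, computed as the spread of per-group recognition rates in percent. This sketch is ours: the function and array names are hypothetical, plain NumPy is assumed, and Eq. \ref{eq:dpd} remains the authoritative definition used in the tables.

\begin{verbatim}
# Minimal sketch (ours): a DPD-style metric computed as the spread of
# per-group recognition rates, in percent.
import numpy as np

def dpd_percent(correct, groups):
    # correct: 1 if an image was recognized correctly, else 0
    # groups:  demographic group label for each image
    rates = [100.0 * correct[groups == g].mean()
             for g in np.unique(groups)]
    return max(rates) - min(rates)

correct = np.array([1, 0, 1, 1, 0, 1])
groups = np.array(["Male", "Male", "Female", "Female", "Male", "Female"])
print(dpd_percent(correct, groups))  # spread between the gender groups
\end{verbatim}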
\section*{Introduction} The \emph{Ricci flow} is a geometric evolution equation for the metric tensor on a general Riemannian manifold. The normalized Ricci flow has the property that all its fixed points are Einstein metrics. In his celebrated paper \cite{Ham1}, Hamilton showed that on 3-manifolds, positive Ricci curvature of the initial metric implies that the normalized Ricci flow exists for all time and converges to a Riemannian metric of constant positive curvature. Similar results were later obtained for other curvature conditions and in other dimensions by several authors. The study of the Ricci flow on surfaces is far simpler than its higher dimensional counterparts, and hence one can obtain much more detailed and comprehensive results. On surfaces, the Ricci flow solutions remain within a conformal class and coincide with those of the \emph{Yamabe flow}. In \cite{Ham2} Hamilton proved that for a compact oriented Riemannian surface $(M,g)$, if $M$ is not diffeomorphic to the 2-sphere $\mathbb{S}^2$, then any metric $g$ converges to a constant curvature metric under the Ricci flow, and if $M$ is diffeomorphic to $\mathbb{S}^2$, then any metric $g$ with positive Gaussian curvature on $\mathbb{S}^2$ converges to a metric of constant curvature under the flow. Later, Chow \cite{Chow2} removed the positive Gaussian curvature assumption in Hamilton's theorem, proving that for any metric on $\mathbb{S}^2$ evolving under Hamilton's Ricci flow the Gaussian curvature becomes positive in finite time, and concluded that under the flow any metric $g$ on a compact Riemannian surface converges to a metric of constant curvature. Thus for compact surfaces, Ricci flow provides a new proof of the \emph{uniformization theorem}. Much is also known in the complete case, and there are many interesting subtleties in setting up this flow in the incomplete cases. Moreover, surface Ricci flow has begun to make an impact on applied fields and to tackle fundamental engineering problems. In Finsler geometry, a natural generalization of Riemannian geometry, the problem of constructing the Finslerian Ricci flow raises a number of new conceptual and fundamental issues regarding the compatibility of geometrical and physical objects and their optimal configurations. A fundamental step in the study of any system of evolutionary partial differential equations is to show the short-time existence and uniqueness of solutions. Recently, the evolution of a family of Finsler metrics along the Finsler Ricci flow has been studied by the first named author in several joint works, where it is shown that such flows exist for a short time and converge to a limit metric; for instance, see \cite{YB2}. In the present work, we study the Ricci flow on closed Finsler surfaces and prove the short-time existence and uniqueness of solutions for the Ricci flow. Since the Ricci flow system of equations is only weakly parabolic, its short-time existence and uniqueness do not follow from the standard theory of parabolic equations. Following the procedure described by D. DeTurck in the Riemannian setting \cite{DeT}, we introduce the Finslerian Ricci-DeTurck flow on Finsler surfaces in Eq. (\ref{22}) and prove existence and uniqueness of short-time solutions. More precisely, we prove: \begin{thm}\label{main8} Let $M$ be a compact Finsler surface.
Given any initial Finsler structure $F_{0}$, there exists a real number $T>0$ and a smooth one-parameter family of Finsler structures $\tilde{F}(t)$, $t\in[0,T)$, such that $\tilde{F}(t)$ is the unique solution to the Finslerian Ricci-DeTurck flow with $\tilde{F}(0)=F_{0}$. \end{thm} Next, a solution to the original Ricci flow is found by pulling back the solution to the Ricci-DeTurck flow via appropriate diffeomorphisms. This leads to \begin{thm} \label{main14} Let $M$ be a compact Finsler surface. Given any initial Finsler structure $F_{0}$, there exists a real number $T>0$ and a smooth one-parameter family of Finsler structures $F(t)$, $t\in[0,T)$, such that $F(t)$ is the unique solution to the Finslerian Ricci flow with $F(0)=F_{0}$. \end{thm} \section{Preliminaries and notations} \subsection{Chern connection; A global approach} Let $M$ be a real smooth surface; denote by $TM$ its bundle of tangent vectors, by $\pi :TM_{0}\longrightarrow M$ the fiber bundle of non-zero tangent vectors and by $\pi^*TM\longrightarrow TM_0$ the pullback tangent bundle. Let $F$ be a Finsler structure on $TM_{0}$ and $g$ the related Finslerian metric. A \emph{Finsler manifold} is denoted here by the pair $(M,F)$. Any point of $TM_0$ is denoted by $z=(x,y)$, where $x=\pi z\in M$ and $y\in T_{x}M$. Let us denote by $TTM_0$ the tangent bundle of $TM_0$ and by $\rho$ the canonical linear mapping $\rho:TTM_0\longrightarrow \pi^*TM$, where $\rho=\pi_*$. For all $z\in TM_0$, $V_zTM$ is the set of all vertical vectors at $z$, that is, the set of vectors which are tangent to the fiber through $z$. Consider the decomposition $TTM_0=HTM\oplus VTM$, which permits us to uniquely represent a vector field $\hat{X}\in {\cal X}(TM_0)$ as the sum of its horizontal and vertical parts, namely $\hat{X}=H\hat{X}+V\hat{X}$. The corresponding basis is denoted here by $\{\frac{\delta}{\delta {x^i}},\frac{\partial}{\partial y^{i}}\}$, where $\frac{\delta}{\delta {x^i}}:=\frac{\partial}{\partial x^{i}}-N_{i}^{j}\frac{\partial}{\partial y^{j}}$, $N^{j}_{i}=\frac{1}{2}\frac{\partial G^j}{\partial y^i}$ and $G^i$ are the spray coefficients defined by $G^{i}=\frac{1}{4}g^{ih}(\frac{\partial^{2}F^{2}}{\partial y^{h}\partial x^{j}}y^{j}-\frac{\partial F^{2}}{\partial x^{h}})$. We denote the \emph{formal Christoffel symbols} by $\gamma^{i}_{jk}=\frac{1}{2}g^{ih}(\partial_{j}g_{hk}+\partial_{k}g_{jh}-\partial_{h}g_{jk})$, where $\partial_{k}=\frac{\partial}{\partial x^k}$. The dual bases are denoted by $\{dx^{i},\delta y^{i}\}$, where $\delta y^{i}:=dy^{i}+N_{j}^{i}dx^{j}$. Let us denote a global representation of the Chern connection by $\nabla:{\cal X}(TM_0)\times\Gamma(\pi^{*}TM)\longrightarrow\Gamma(\pi^{*}TM)$. Consider the linear mapping $\mu:TTM_0\longrightarrow \pi^*TM$ defined by $\mu(\hat{X})=\nabla_{\hat{X}}{\bf y}$, where $\hat{X}\in TTM_0$ and ${\bf y}=y^i\frac{\partial}{\partial x^i}$ is the canonical section of $\pi^*TM$. The connection 1-forms of the Chern connection in these bases are given by $\omega^{i}_{j}=\Gamma^{i}_{jk}dx^{k}$, where $\Gamma^{i}_{jk}=\frac{1}{2}g^{ih}(\delta_{j}g_{hk}+\delta_{k}g_{jh}-\delta_{h}g_{jk})$ and $\delta_{k}=\frac{\delta}{\delta x^{k}}$. In the sequel, all the vector fields on $TM_0$ are decorated with a hat and denoted by $\hat{X}$, $\hat{Y}$, $\hat{Z}$, and the corresponding sections of $\pi^*TM$ by $X=\rho(\hat{X})$, $Y=\rho(\hat{Y})$ and $Z=\rho(\hat{Z})$, respectively, unless otherwise specified.
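For orientation, we recall the standard Riemannian special case, added here as an illustration: if $F$ is Riemannian, that is $F^{2}(x,y)=a_{ij}(x)y^{i}y^{j}$, then the metric tensor is $g_{ij}=a_{ij}(x)$ and the spray coefficients reduce to
\begin{equation*}
G^{i}=\frac{1}{2}\gamma^{i}_{jk}(x)\,y^{j}y^{k}.
\end{equation*}
Since $g$ no longer depends on $y$, one has $\delta_{k}g_{ij}=\partial_{k}g_{ij}$ and hence $\Gamma^{i}_{jk}=\gamma^{i}_{jk}(x)$; the Chern connection thus reduces to the Levi-Civita connection of $a_{ij}$.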
The torsion freeness and almost metric compatibility of the Chern connection are given by \begin{align} &\nabla_{\hat{X}}Y-\nabla_{\hat{Y}}X=\rho[\hat{X},\hat{Y}],\label{tori}\\ &(\nabla_{\hat{Z}}g)(X,Y)=2C(\mu(\hat{Z}),X,Y),\label{gcomp} \end{align} respectively, where $C$ is the Cartan tensor with the components $C_{ijk}=\frac{1}{2}\frac{\partial g_{ij}}{\partial y^{k}}.$ In local coordinates on $TM$, the \emph{Chern horizontal} and \emph{vertical covariant derivatives} of an arbitrary $(1,2)$ tensor field $S$ on $\pi^{*}TM$ with components $S^{i}_{jk}(x,y)$ are given by \begin{align*} &\nabla_{l}S^{i}_{jk}:= \delta_{l}S^{i}_{jk}-S^{i}_{s k}\Gamma^{s}_{jl}-S^{i}_{js}\Gamma^{s}_{kl}+S^{s}_{jk}\Gamma^{i}_{s l}, \\ &\dot{\nabla}_{l}S^{i}_{jk}:=\dot{\partial}_{l}S^{i}_{jk}, \end{align*} where $\nabla_{l}:=\nabla_{\frac{\delta}{\delta x^l}}$ and $\dot{\nabla}_{l}:=\nabla_{\frac{\partial}{\partial y^l}}$. Horizontal metric compatibility of the Chern connection is given in local coordinates by $\nabla_{l}g_{jk}=0$; see \cite[p.\ 45]{BCS}. The local \emph{Chern $hh$-curvature} tensor is given by \begin{equation} \label{77} R^{\,\,i}_{j\,\,kl}=\delta_{k}\Gamma^{i}_{\,jl}-\delta_{l}\Gamma^{i}_{\,jk}+ \Gamma^{i}_{\,hk}\Gamma^{h}_{\,jl}-\Gamma^{i} _{\,hl}\Gamma^{h}_{\,jk}, \end{equation} see \cite[p.\ 52]{BCS}. The \emph{reduced $hh$-curvature} tensor is a connection-free tensor field which is also referred to as the \emph{Riemann curvature} by some authors. In local coordinates on $TM$, the components of the reduced $hh$-curvature tensor are given by $R^{i}_{\,\,k}:=\frac{1}{F^2}y^{j}R^{\,\,i}_{j\,\,km}y^{m}$, which are expressed entirely in terms of the $x$ and $y$ derivatives of the spray coefficients $G^{i}$ as follows: \begin{equation} \label{18} R^{i}_{\,\,k}:=\frac{1}{F^2}(2\frac{\partial G^{i}}{\partial x^{k}}-\frac{\partial^{2}G^{i}}{\partial x^{j}\partial y^{k}}y^{j}+2G^{j}\frac{\partial^{2}G^{i}}{\partial y^{j}\partial y^{k}}-\frac{\partial G^{i}}{\partial y^{j}}\frac{\partial G^{j}}{\partial y^{k}}), \end{equation} see \cite[p.\ 66]{BCS}. \subsection{Lie derivatives of Finsler metrics} The Lie derivative of an arbitrary Finslerian $(0,2)$ tensor field ${\cal T}={\cal T}_{jk}(x,y)dx^{j}\otimes dx^{k}$ on $\otimes^{2}\pi^{*}TM$ with respect to an arbitrary vector field $\hat V$ on $TM_0$ is given by \begin{eqnarray} (\mathcal{L}_{\hat{V}}{\cal T})(X,Y)=\hat{V}({\cal T}(X,Y))-{\cal T}(\rho[\hat V,\hat X],Y)-{\cal T}(X,\rho[\hat V,\hat Y]),\nonumber \end{eqnarray} where $\rho(\hat{X})=X$, $\rho(\hat{Y})=Y$ and $\hat{X},\hat{Y}\in T_{z}TM_{0}$; see \cite{JB}.
The Lie derivative of the Finsler metric $g$ with respect to an arbitrary vector field $\hat V$ on $TM_0$ is given by \begin{eqnarray} (\mathcal{L}_{\hat{V}}{g})(X,Y)=\hat{V}({g}(X,Y))-{g}(\rho[\hat V,\hat X],Y)-{g}(X,\rho[\hat V,\hat Y]).\nonumber \end{eqnarray} By means of the torsion freeness of the Chern connection given by (\ref{tori}), the Lie derivative of the Finsler metric $g$ can be rewritten as \begin{align} (\mathcal{L}_{\hat{V}}{g})(X,Y)&=\hat{V}(g(X,Y))-g(\nabla_{\hat{V}}X-\nabla_{\hat{X}}V,Y)-g(X,\nabla_{\hat{V}}Y-\nabla_{\hat{Y}}V)\nonumber\\ &=\hat{V}(g(X,Y))-g(\nabla_{\hat{V}}X,Y)+g(\nabla_{\hat{X}}V,Y)\nonumber\\ &\quad-g(X,\nabla_{\hat{V}}Y)+g(X,\nabla_{\hat{Y}}V).\label{Liederiv} \end{align} By the almost $g$-compatibility of the Chern connection given by (\ref{gcomp}), we have \begin{equation*} 2C(\mu(\hat{V}),X,Y)=(\nabla_{\hat{V}}g)(X,Y)=\hat{V}(g(X,Y))-g(\nabla_{\hat{V}}X,Y)-g(X,\nabla_{\hat{V}}Y). \end{equation*} Therefore, \begin{equation}\label{campatibility} \hat{V}(g(X,Y))=2C(\mu(\hat{V}),X,Y)+g(\nabla_{\hat{V}}X,Y)+g(X,\nabla_{\hat{V}}Y). \end{equation} Plugging (\ref{campatibility}) into (\ref{Liederiv}), we obtain \begin{equation}\label{finalliederiv} (\mathcal{L}_{\hat{V}}{g})(X,Y)=2C(\mu(\hat{V}),X,Y)+g(\nabla_{\hat{X}}V,Y)+g(X,\nabla_{\hat{Y}}V). \end{equation} Replacing $X$ and $Y$ by the canonical section ${\bf y}=y^{i}\frac{\partial}{\partial x^{i}}$ in (\ref{finalliederiv}), we obtain \begin{equation*} (\mathcal{L}_{\hat{V}}{g})({\bf y},{\bf y})=2C(\mu(\hat{V}),{\bf y},{\bf y})+g(\nabla_{\hat{{\bf y}}}V,{\bf y})+g({\bf y},\nabla_{\hat{{\bf y}}}V), \end{equation*} where $\hat{{\bf y}}=y^{i}\frac{\delta}{\delta x^{i}}$. Using $C(\mu(\hat{V}),{\bf y},{\bf y})=0$ (see \cite[p.\ 23]{BCS}) and the symmetry of $g$, which gives $g(\nabla_{\hat{{\bf y}}}V,{\bf y})=g({\bf y},\nabla_{\hat{{\bf y}}}V)$, one arrives at \begin{equation}\label{global} (\mathcal{L}_{\hat{V}}{g})({\bf y},{\bf y})=2g({\bf y},\nabla_{\hat{{\bf y}}}V), \end{equation} where $V=v^i\frac{\partial}{\partial x^i}$ is a section of $\pi^{*}TM$. In local coordinates, (\ref{global}) can be written as \begin{equation*} y^iy^j\mathcal{L}_{\hat{V}}g_{ij}=2y^iy^jg_{ik}\nabla_{j}v^{k}. \end{equation*} Using $\nabla_{j}g_{ik}=0$, we obtain \begin{equation}\label{FINAL} y^iy^j\mathcal{L}_{\hat{V}}g_{ij}=2y^iy^j\nabla_{j}v_{i}, \end{equation} where $v_{i}=g_{ik}v^{k}$. \subsection{The Berwald frame and a geometrical setup on $SM$} Let $(M,F)$ be a Finsler surface and $SM$ the quotient of $TM_0$ under the following equivalence relation: $(x,y)\sim(x,\tilde{y})$ if and only if $y, \tilde{y}$ are positive multiples of each other. In other words, $SM$ is the bundle of all directions or rays, called the (projective) \emph{sphere bundle}. The local coordinates $(x^1,x^2)$ on $M$ induce the global coordinates $(y^1,y^2)$ on each fiber $T_{x}M$ through the expansion $y=y^{i}\frac{\partial}{\partial x^i}$. Therefore $(x^i;y^i)$ is a coordinate system on $SM$, where the coordinates $y^i$ are regarded as homogeneous coordinates in the projective space. Using the canonical projection $p:SM\longrightarrow M$, one can pull the tangent bundle $TM$ back to $p^{*}TM$, a vector bundle of fiber dimension 2 over the 3-manifold $SM$. The vector bundle $p^{*}TM$ has a global section $l:=\frac{y^i}{F(y)}\frac{\partial}{\partial x^i}$ and a natural Riemannian metric which we here denote by $g:=g_{ij}(x,y)dx^{i}\otimes dx^{j}$.
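As an elementary check, which we add for the reader's convenience, $l$ is a unit section of $p^{*}TM$: Euler's theorem applied to the $2$-homogeneous function $F^{2}$ gives $g_{ij}y^{i}y^{j}=F^{2}$, and hence
\begin{equation*}
g(l,l)=g_{ij}\,\frac{y^{i}}{F}\,\frac{y^{j}}{F}=\frac{F^{2}}{F^{2}}=1.
\end{equation*}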
One can complete $l$ into a positively oriented $g$-orthonormal frame $\{e_1,e_2\}$ for $p^{*}TM$, with $e_{2}:=l$, by setting \begin{align*} &e_{1}=\frac{F_{y^2}}{\sqrt{g}}\frac{\partial}{\partial x^1}-\frac{F_{y^1}}{\sqrt{g}}\frac{\partial}{\partial x^2},\\ &e_{2}=\frac{y^1}{F}\frac{\partial}{\partial x^1}+\frac{y^2}{F}\frac{\partial}{\partial x^2}, \end{align*} where $\sqrt{g}:=\sqrt{\det(g_{ij})}$ and $F_{y^i}$ abbreviates the partial derivative $\frac{\partial F}{\partial y^i}$. In the 2-dimensional case, $\{e_1,e_2\}$ is a globally defined $g$-orthonormal frame field for $p^{*}TM$ called a \emph{Berwald frame}. The natural dual of $l$ is the Hilbert form defined by $\omega:=F_{y^i}dx^i$, which is a global section of $p^{*}T^{*}M$. The coframe corresponding to $\{e_1,e_2\}$ is defined here by $\{\omega^1,\omega^2\}$, where \begin{align}\label{dualbase} &\omega^1=\frac{\sqrt{g}}{F}(y^2dx^1-y^1dx^2)=v^{1}_{1}dx^1+v^{1}_{2}dx^2,\nonumber\\ &\omega^2=F_{y^1}dx^1+F_{y^2}dx^2=v^{2}_{1}dx^1+v^{2}_{2}dx^2. \end{align} The sphere bundle $SM\subset TM$ is a 3-dimensional Riemannian manifold equipped with the induced \emph{Sasaki metric} \begin{equation*} \omega^1\otimes\omega^1+\omega^2\otimes\omega^2+ \omega^3\otimes\omega^3, \end{equation*} where \begin{equation}\label{dualbase1} \omega^{3}:=\frac{\sqrt{g}}{F}(y^2\frac{\delta y^1}{F}-y^1\frac{\delta y^2}{F})=v^{1}_{1}\frac{\delta y^1}{F}+v^{1}_{2}\frac{\delta y^2}{F}. \end{equation} The collection $\{\omega^1,\omega^2,\omega^3\}$ is a globally defined orthonormal frame for $T^{*}(SM)$. Its natural dual frame is given by $\{\hat{e}_1,\hat{e}_2,\hat{e}_3\}$, where \begin{align} &\hat{e}_{1}=\frac{F_{y^2}}{\sqrt{g}}\frac{\delta}{\delta x^1}-\frac{F_{y^1}}{\sqrt{g}}\frac{\delta}{\delta x^2}=u^{1}_{1}\frac{\delta}{\delta x^1}+u^{2}_{1}\frac{\delta}{\delta x^2},\label{base1}\\ &\hat{e}_{2}=\frac{y^1}{F}\frac{\delta}{\delta x^1}+\frac{y^2}{F}\frac{\delta}{\delta x^2}=u^{1}_{2}\frac{\delta}{\delta x^1}+u^{2}_{2}\frac{\delta}{\delta x^2},\label{base2}\\ &\hat{e}_{3}=\frac{F_{y^2}}{\sqrt{g}}F\frac{\partial}{\partial y^1}-\frac{F_{y^1}}{\sqrt{g}}F\frac{\partial}{\partial y^2}=Fu^{1}_{1}\frac{\partial}{\partial y^1}+Fu^{2}_{1}\frac{\partial}{\partial y^2}.\label{base3} \end{align} These three vector fields on $SM$ form a global orthonormal frame for $T(SM)$. The first two are horizontal while the third one is vertical. The objects $\omega^1,\omega^2,\omega^3$ and $\hat{e}_1,\hat{e}_2,\hat{e}_3$ are defined in terms of objects that live on the slit tangent bundle $TM_{0}$, but they are invariant under positive rescaling in $y$. Therefore they give bona fide objects on the sphere bundle $SM$; see \cite[p.\ 92-94]{BCS}. \subsection{The integrability condition for Finsler metrics} Here, we first recall that not every Riemannian metric on the fibers of the pulled-back bundle comes from a Finsler structure; that is, not every symmetric $(0,2)$-tensor $g_{ij}(x,y)$ arises from a Finsler structure $F(x,y)$. In order for $g_{ij}(x,y)$ to be the components of a Finsler structure, the essential integrability criterion is the total symmetry of $(g_{ij})_{y^k}$ in all three indices $i, j, k$. In fact, $g_{ij}(x,y)$ arises from a Finsler structure $F(x,y)$ if and only if $(g_{ij})_{y^k}$ is totally symmetric in its three indices; see \cite[p. 56]{Bao}. Symmetry of $(g_{ij})_{y^k}$ in all three indices $i, j, k$ is known in the literature as the \emph{integrability condition}.
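To illustrate why this condition is necessary (a remark we add here), if $g_{ij}$ does come from a Finsler structure, i.e. $g_{ij}=\frac{1}{2}[F^{2}]_{y^{i}y^{j}}$, then
\begin{equation*}
(g_{ij})_{y^{k}}=\frac{1}{2}\,\frac{\partial^{3}F^{2}}{\partial y^{i}\partial y^{j}\partial y^{k}},
\end{equation*}
which is automatically totally symmetric in $i,j,k$ by the equality of mixed partial derivatives; the integrability condition asserts that this necessary symmetry is also sufficient.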
Moreover, we have to make sure the integrability criterion is satisfied at every step along the Ricci flow. To this end we consider a general evolution equation given by \begin{equation}\label{AR} \frac{\partial}{\partial t}g(t)=\omega(t),\quad g(0):=g_{0}, \end{equation} where $\omega(t):=\omega(t, x, y)$ is a family of symmetric $(0,2)$-tensors on $\pi^{*}TM$, zero-homogeneous with respect to $y$. The following lemma establishes the integrability condition; see also \cite[p.\ 749]{YB2}. \begin{lemma}\label{RE2} Let $g(t)$ be a solution to the evolution equation (\ref{AR}). There is a family of Finsler structures $F(t)$ on $TM$ such that \begin{equation}\label{Eq;IntCond} g_{ij}(t)=\frac{1}{2}\frac{\partial^2 F^2(t)}{\partial y^i\partial y^j}. \end{equation} \end{lemma} \begin{proof} Let $M$ be a compact differentiable manifold, $F(t)$ a smooth one-parameter family of Finsler structures on $TM_0$ and $g(t)$ the half vertical Hessian of $F^2(t)$, which defines a scalar product on $\pi^{*}TM$ for every $t$. Let $g(t)$ be a solution to the evolution equation (\ref{AR}). We have \begin{equation}\label{ARR} g(t)=g(0)+\int_{0}^{t}\omega(\tau)d\tau. \end{equation} We show that the metric $g(t)$ satisfies the integrability condition, or equivalently that there is a Finsler structure $F(t)$ on $TM_0$ satisfying \eqref{Eq;IntCond}. For this purpose, we contract (\ref{ARR}) with $y^{i}y^{j}$: \begin{equation*} y^iy^jg_{ij}(t)=y^iy^jg_{ij}(0)+\int_{0}^{t}y^iy^j\omega_{ij}(\tau)d\tau. \end{equation*} By means of the initial condition $y^iy^jg_{ij}(0)= F^2(0)$, we get \begin{equation}\label{EQ1} y^iy^jg_{ij}(t)=F^2(0)+\int_{0}^{t}y^iy^j\omega_{ij}(\tau)d\tau. \end{equation} By the positive definiteness of $g_{ij}$, we may put $F = (y^iy^jg_{ij})^{\frac{1}{2}}$. Taking two vertical derivatives of (\ref{EQ1}) yields \begin{equation}\label{EQ2} \frac{1}{2}\frac{\partial^2 F^2}{\partial y^k\partial y^l}=g_{kl}(0)+\frac{1}{2}\int_{0}^{t}\frac{\partial^2}{\partial y^k\partial y^l}(y^iy^j\omega_{ij}(\tau))d\tau. \end{equation} On the other hand, by a straightforward calculation we have \begin{equation}\label{EQ3} \frac{1}{2}\frac{\partial^2}{\partial y^k\partial y^l}(y^iy^j\omega_{ij}(\tau))=\frac{1}{2}\frac{\partial^2\omega_{ij}(\tau)}{\partial y^k\partial y^l}y^iy^j+\Big(\frac{\partial\omega_{ik}(\tau)}{\partial y^l}+\frac{\partial\omega_{il}(\tau)}{\partial y^k}\Big)y^i+\omega_{kl}(\tau), \end{equation} for all $\tau\in[0,t)$. Using (\ref{AR}) we obtain \begin{equation*} \frac{1}{2}\frac{\partial^2\omega_{ij}(\tau)}{\partial y^k\partial y^l}y^iy^j=0,\quad\frac{\partial\omega_{ik}(\tau)}{\partial y^l}y^i=0,\quad\frac{\partial\omega_{il}(\tau)}{\partial y^k}y^i=0. \end{equation*} Therefore, (\ref{EQ3}) reduces to \begin{equation}\label{EQ4} \frac{1}{2}\frac{\partial^2}{\partial y^k\partial y^l}(y^iy^j\omega_{ij}(\tau))=\omega_{kl}(\tau), \end{equation} for all $\tau\in[0,t)$. Finally, replacing (\ref{EQ4}) in (\ref{EQ2}) we get \begin{equation*} \frac{1}{2}\frac{\partial^2 F^2}{\partial y^k\partial y^l}=g_{kl}(0)+\int_{0}^{t}\omega_{kl}(\tau)d\tau=g_{kl}(t). \end{equation*} Therefore, every $g_{ij}(t)$ on the fibers of the pulled-back bundle arises from a Finsler structure. This completes the proof. \end{proof} \section{Semi-linear strictly parabolic equations on $SM$} Recall that a \emph{quasi-linear} system is a system of partial differential equations in which the highest-order derivatives occur only linearly, with coefficients that may depend on derivatives of lower order.
It is called \emph{semi-linear} if it is quasi-linear and the coefficients of the principal order terms depend only on the independent variables, not on the solution; see \cite[p.\ 45]{Rog}. Let $M$ be a 2-dimensional manifold and $u:M\longrightarrow \mathbb{R}$ a smooth function on $M$. A \emph{semi-linear strictly parabolic} equation is a PDE of the form \begin{eqnarray*} \frac{\partial u}{\partial t}=a^{ij}(x,t)\frac{\partial^2 u}{\partial x^i\partial x^j}+h(x,t,u,\frac{\partial u}{\partial x^i}),\qquad i,j=1,2, \end{eqnarray*} where $a^{ij}$ and $h$ are smooth functions on $M$ and for some constant $\lambda>0$ we have the parabolicity assumption \begin{eqnarray*} a^{ij}\xi_{i}\xi_{j}\geq \lambda\parallel \xi\parallel^{2},\quad 0\neq\xi\in{\cal X}(M), \end{eqnarray*} that is, all eigenvalues of $A=(a^{ij})_{2\times2}$ are positive or, equivalently, $A$ is positive definite. \begin{defn}\label{semipar} Let $M$ be a surface and $\phi:SM\longrightarrow \mathbb{R}$ a smooth function on the sphere bundle $SM$. Consider the following semi-linear strictly parabolic equation on $SM$: \begin{equation*} \frac{\partial \phi}{\partial t}=G^{_{AB}}(x,y,t)\hat{e}_{_A}\hat{e}_{_B}\phi+h(x,y,t,\phi,\hat{e}_{_A}\phi),\qquad A,B=1,2,3 , \end{equation*} where $\{\hat{e}_{_A}\}$ is a local frame for the tangent bundle $T(SM)$, acting as first-order derivative operators on $SM$. Here, $G^{_{AB}}$ and $h$ are smooth functions on $SM$ and $G=(G^{_{AB}})$ is positive definite. \end{defn} More precisely, a semi-linear strictly parabolic equation on $SM$ can be written in the form \begin{equation}\label{po} \frac{\partial \phi}{\partial t}=p^{ab}(x,y,t)\hat{e}_{a}\hat{e}_{b}\phi+q(x,y,t)\hat{e}_{3} \hat{e}_{3}\phi+m^{a}(x,y,t)\hat{e}_{a}\hat{e}_{3}\phi+\textrm{lower order terms}, \end{equation} where $a,b=1,2$, and the matrix \begin{displaymath} G=\left(\begin{array}{c|c} P\,\,\, & \frac{1}{2}M \\ \hline\,\, \frac{1}{2}M^{t}\,\,\ & Q \end{array}\right)_{3\times3}, \end{displaymath} is positive definite, where $P=(p^{ab})_{2\times2},Q=(q)_{1\times1}, M=(m^{a})_{2\times1}$. \begin{lem}\label{mm} Let $(M,F)$ be a Finsler surface and $\phi:TM\longrightarrow \mathbb{R}$ a zero-homogeneous smooth function on the tangent bundle $TM$. The semi-linear differential equation \begin{equation}\label{lili} \frac{\partial \phi}{\partial t}=g^{ij}\frac{\delta^2\phi}{\delta x^i\delta x^j}+F^2g^{ij}\frac{\partial^2 \phi}{\partial y^i\partial y^j}+\textrm{lower order terms},\qquad i,j=1,2, \end{equation} is a strictly parabolic equation on $SM$. \end{lem} \begin{proof} Let us denote again by $\phi$ its restriction to $SM$. According to (\ref{base1}) and (\ref{base2}), replacing $\hat{e}_{a}=u^i_a\frac{\delta}{\delta x^i}$, we obtain \begin{align*} &\hat{e}_{a}\phi=u^i_a\frac{\delta \phi}{\delta x^i},\\ &\hat{e}_{b}\hat{e}_{a}\phi=u^{i}_{a}u^{j}_{b}\frac{\delta^2 \phi}{\delta x^i\delta x^j}+\hat{e}_b(u^{i}_{a})(\frac{\delta \phi}{\delta x^i}),\qquad a,b=1,2. \end{align*} Multiplying both sides by $g^{ab}$ leads to \begin{align*} g^{ab}\hat{e}_{b}\hat{e}_{a}\phi&=g^{ab}u^{i}_{a}u^{j}_{b}\frac {\delta^2 \phi}{\delta x^i\delta x^j}+g^{ab}\hat{e}_b(u^{i}_{a}) (\frac{\delta \phi}{\delta x^i})\\ &=g^{ij}\frac{\delta^2 \phi}{\delta x^i\delta x^j}+g^{ab}\hat{e}_b(u^{i}_{a})(\frac{\delta \phi}{\delta x^i}), \end{align*} where $g_{ab}=g_{ij}u^i_au^j_b$.
According to (\ref{dualbase}), by using the notations $\omega^{c}:=v^{c}_{i}dx^{i}$ and $B^c:=v^c_ig^{ab}\hat{e}_{b}(u^i_a)$, one can rewrite the expression $g^{ij}\frac{\delta^2 \phi}{\delta x^i\delta x^j}$ on $SM$ with respect to $\hat{e}_{a}$ as follows: \begin{equation*} g^{ab}\hat{e}_{b}\hat{e}_{a}\phi-B^c\hat{e}_c\phi=g^{ij}\frac{\delta^2 \phi}{\delta x^i\delta x^j}, \qquad c=1,2. \end{equation*} Next, (\ref{base3}) yields \begin{align*} \hat{e}_{3}\phi&=Fu^{i}_{1}\frac{\partial \phi}{\partial y^i},\\ \hat{e}_{3}\hat{e}_{3}\phi&=\hat{e}_{3}(Fu^{i}_{1}\frac{\partial \phi}{\partial y^i}) \nonumber\\&=F^2u^j_1 u^i_1\frac{\partial^2 \phi}{\partial y^j\partial y^i}+F(\hat{e}_{3} u^i_{1})(\frac{\partial \phi}{\partial y^i})+Fu^{j}_{1}(\frac{\partial F}{\partial y^j})u^i_1\frac{\partial \phi}{\partial y^i}. \end{align*} Using the fact that $u^{j}_{1}\frac{\partial F}{\partial y^j}=0$, see \cite[p.\ 161]{HAZ}, we have \begin{eqnarray*} \hat{e}_{3}\hat{e}_{3}\phi=F^2u^j_1 u^i_1\frac{\partial^2 \phi}{\partial y^j\partial y^i}+F(\hat{e}_{3} u^i_{1})(\frac{\partial \phi}{\partial y^i}). \end{eqnarray*} Multiplying both sides by $g^{11}$, taking into account that $g^{11}u^{i}_{1}u^{j}_{1}=g^{ij}-\frac{y^iy^j}{F^2}$ and that the zero-homogeneity of $\phi$ gives $y^iy^j\frac{\partial^2\phi}{\partial y^i\partial y^j}=0$, we get \begin{eqnarray*} g^{11}\hat{e}_{3}\hat{e}_{3}\phi=F^2g^{ij}\frac{\partial^2 \phi}{\partial y^j\partial y^i}+Fg^{11}(\hat{e}_{3}u^i_{1})\frac{\partial\phi}{\partial y^i}, \end{eqnarray*} where $g_{11}=g_{ij}u^i_1 u^j_1$. According to (\ref{dualbase1}), we have $\omega^{3}=v^{1}_{i}\frac{\delta y^i}{F}$. Denoting $D^1:=v^1_iFg^{11}\hat{e}_{3}u^i_{1}$, one can rewrite the expression $F^2g^{ij}\frac{\partial^2 \phi}{\partial y^j\partial y^i}$ on $SM$ with respect to $\hat{e}_{3}$ as follows: \begin{equation*} g^{11}\hat{e}_{3}\hat{e}_{3}\phi-D^1\hat{e}_{3}\phi=F^2g^{ij}\frac{\partial^2 \phi}{\partial y^j\partial y^i}. \end{equation*} Thus the principal order terms $g^{ij}\frac{\delta^{2}\phi}{\delta x^{i}\delta x^{j}}$ and $F^2g^{ij}\frac{\partial^{2}\phi}{\partial y^{i}\partial y^{j}}$ convert to $g^{ab}\hat{e}_{b}\hat{e}_{a}\phi-B^c\hat{e}_c\phi$ and $g^{11}\hat{e}_{3}\hat{e}_{3}\phi-D^1\hat{e}_{3}\phi$ on $SM$. On the other hand, the order of the lower order terms in (\ref{lili}) does not change after rewriting them in terms of the basis $\{\hat{e}_1,\hat{e}_{2},\hat{e}_3\}$ on $SM$. Therefore, (\ref{lili}) on $SM$ is written as \begin{equation}\label{bibi} \frac{\partial \phi}{\partial t}=g^{ab}\hat{e}_{b}\hat{e}_{a}\phi+g^{11} \hat{e}_{3}\hat{e}_{3}\phi-B^c\hat{e}_c\phi-D^1\hat{e}_{3}\phi+\textrm{lower order terms}, \end{equation} where $a,b,c=1,2$. Using the fact that $g$ is positive definite, the coefficient matrix \begin{displaymath} G=\left(\begin{array}{c|c} g^{ab} & 0 \\ \hline 0 & g^{11} \end{array}\right)_{3\times3}, \end{displaymath} of the principal order terms of (\ref{bibi}) is positive definite on $SM$. Therefore, by virtue of (\ref{po}), the differential equation (\ref{bibi}) is a semi-linear strictly parabolic equation on $SM$. \end{proof} \section{A vector field on $SM$} Let $(M,F)$ and $(N,\bar{F})$ be two Finsler surfaces with the corresponding metric tensors $g$ and $h$, respectively. Let $(x^i,y^i)$ and $(\bar{x}^i,\bar{y}^i)$ be local coordinate systems on $TM$ and $TN$, respectively. Let $c$ be a geodesic on $M$. The natural lift of $c$ to $TM$, namely, $$\tilde{c}:t\in I\longrightarrow\tilde{c}(t)=(x^{i}(t),(\frac{dx^i}{dt})(t))\in TM,$$ is a horizontal curve. That is, its tangent vector field $\dot{\tilde{c}}(t)=\frac{dx^i}{dt}\frac{\delta}{\delta x^i}$ is horizontal.
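To see this (a short verification we include; the precise form of the spray term depends on the normalization chosen for $N^{i}_{j}$), note that $y^{i}=\frac{dx^{i}}{dt}$ along $\tilde{c}$, so that
\begin{equation*}
\dot{\tilde{c}}=\frac{dx^{i}}{dt}\frac{\partial}{\partial x^{i}}+\frac{d^{2}x^{i}}{dt^{2}}\frac{\partial}{\partial y^{i}}
=\frac{dx^{i}}{dt}\frac{\delta}{\delta x^{i}}+\Big(\frac{d^{2}x^{i}}{dt^{2}}+N^{i}_{j}\frac{dx^{j}}{dt}\Big)\frac{\partial}{\partial y^{i}}.
\end{equation*}
By Euler's theorem applied to the $2$-homogeneous spray coefficients, $N^{i}_{j}\frac{dx^{j}}{dt}$ reproduces the spray term of the geodesic equation, so the vertical part vanishes precisely because $c$ is a geodesic.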
Consider a diffeomorphism \begin{align*} \varphi &:TM\longrightarrow TN,\\ &(x^i,y^i)\mapsto\varphi(x^i,y^i)=(\varphi^{\alpha}(x^i,y^i))=(\varphi^j(x^i,y^i),\varphi^{2+j}(x^i,y^i)), \end{align*} such that $\bar{c}(t):=(\varphi\circ\tilde{c})(t)$ is a horizontal curve, where $i,j=1,2,$ and $\alpha =1,...,4$. Throughout this section, we assume $\varphi$ takes horizontal curves to horizontal curves. Let us denote by $\Gamma^{i}_{\,jk}$ and $\bar{\Gamma}^{i}_{\,jk}$ the coefficients of horizontal covariant derivatives of the Chern connection on $(M,F)$ and $(N,\bar{F})$, respectively. Then we have \begin{align}\label{12} \bar{\nabla}_{\dot{\bar{c}}}\dot{\bar{c}}&=\bar{\nabla}_{\dot{\bar{c}}} \frac{d\bar{x}^j}{dt}\frac{\delta}{\delta \bar{x}^j}=\frac{d^{2}\bar{x}^{j}}{dt^{2}}\frac{\delta}{\delta \bar{x}^j}+\frac{d\bar{x}^j}{dt}\bar{\nabla}_{\dot{\bar{c}}}\frac{\delta}{\delta \bar{x}^j} =(\frac{d^{2}\bar{x}^{i}}{dt^{2}}+\frac{d\bar{x}^j}{dt}\frac{d\bar{x}^k}{dt}\bar{\Gamma}^{i}_{\,jk})\frac{\delta}{\delta \bar{x}^{i}}. \end{align} On the other hand, \begin{align*} \frac{d\bar{x}^i}{dt}=\frac{\delta\varphi^i}{\delta x^p}\frac{dx^p}{dt}, \quad \frac{d^{2}\bar{x}^i}{dt^2}=\frac{\delta^2\varphi^i}{\delta x^p\delta x^q}\frac{dx^p}{dt}\frac{dx^q}{dt}+\frac{\delta\varphi^i}{\delta x^p}\frac{d^2x^p}{dt^2}. \end{align*} Replacing the last equations in (\ref{12}) leads to \begin{equation}\label{12+1+1} \bar{\nabla}_{\dot{\bar{c}}}\dot{\bar{c}}=(\frac{\delta^2\varphi^i}{\delta x^p\delta x^q}\frac{dx^p}{dt}\frac{dx^q}{dt}+\frac{\delta\varphi^i}{\delta x^h}\frac{d^2x^h}{dt^2}+\bar{\Gamma}^{i}_{\,jk}\frac{\delta\varphi^j}{\delta x^p}\frac{\delta\varphi^k}{\delta x^q}\frac{dx^p}{dt}\frac{dx^q}{dt})\frac{\delta}{\delta \bar{x}^{i}}, \end{equation} where all the indices run over the range $1,2$. The geodesic $c$ on $M$ satisfies \begin{equation}\label{12+1+1+1} \frac{d^2x^h}{dt^2}+\Gamma^{h}_{\,pq}\frac{dx^p}{dt}\frac{dx^q}{dt}=0. \end{equation} Substituting $ \frac{d^2x^h}{dt^2}$ from the last equation into (\ref{12+1+1}) leads to \begin{equation*} \bar{\nabla}_{\dot{\bar{c}}}\dot{\bar{c}}=\frac{dx^p}{dt}\frac{dx^q}{dt}(\frac{\delta^2\varphi^i}{\delta x^p\delta x^q}-\frac{\delta\varphi^i}{\delta x^h}\Gamma^{h}_{\,pq}+\bar{\Gamma}^{i}_{\,jk}\frac{\delta\varphi^j}{\delta x^p}\frac{\delta\varphi^k}{\delta x^q})\frac{\delta}{\delta \bar{x}^{i}}. \end{equation*} Next, let \begin{eqnarray*} A^{i}_{pq}:=\frac{\delta^2\varphi^i}{\delta x^p\delta x^q}-\frac{\delta\varphi^i}{\delta x^h}\Gamma^{h}_{\,pq}+\bar{\Gamma}^{i}_{\,jk}\frac{\delta\varphi^j}{\delta x^p}\frac{\delta\varphi^k}{\delta x^q}+F^2\frac{\partial^2\varphi^i}{\partial y^p\partial y^q}, \end{eqnarray*} where the last term is added so that, after contraction with $g^{pq}$, the resulting operator has the strictly parabolic principal part of Lemma \ref{mm}. Contracting $A^i_{pq}$ with $g^{pq}$ leads to the following operators: \begin{equation} \label{15} (\Phi_{g,h}\varphi)^{i}:=g^{pq}(\frac{\delta^2\varphi^i}{\delta x^p\delta x^q}+F^2\frac{\partial^2\varphi^i}{\partial y^p\partial y^q}-\frac{\delta\varphi^i}{\delta x^h}\Gamma^{h}_{\,pq}+\bar{\Gamma}^{i}_{\,jk}\frac{\delta\varphi^j}{\delta x^p}\frac{\delta\varphi^k}{\delta x^q}), \end{equation} where $(\Phi_{g,h}\varphi)^{i}=g^{pq}A^{i}_{pq}$ and $i=1,2$. For the remaining indices we consider the following operator: \begin{equation} \label{15+1+1} (\Phi_{g,h}\varphi)^{2+i}:=g^{pq}(\frac{\delta^2\varphi^{2+i}}{\delta x^p\delta x^q}+F^2\frac{\partial^2\varphi^{2+i}}{\partial y^p\partial y^q}+\frac{\partial\varphi^{2+i}}{\partial y^k}\frac{\delta N^k_q}{\delta x^p}), \end{equation} where $i=1,2$.
Summarizing the above definitions, we have \begin{equation*} (\Phi_{g,h}\varphi)^{\alpha}=\left\{ \begin{array}{l} (\Phi_{g,h}\varphi)^{i}\qquad \alpha=i\cr (\Phi_{g,h}\varphi)^{2+i}\qquad \alpha=2+i, \end{array} \right. \end{equation*} where $i=1,2.$ Next, we show that the operator $(\Phi_{g,h}\varphi)^{\alpha}$ is invariant under all diffeomorphisms on $TM$. \begin{lem} \label{main2} Let $(M,F)$ and $(N,\bar{F})$ be two Finsler surfaces with the corresponding metric tensors $g$ and $h$, respectively. If $\psi$ is a diffeomorphism from $TM$ to itself, then it leaves invariant the operator $(\Phi_{g,h}\varphi)^\alpha$, that is, \begin{eqnarray*} (\Phi_{\psi^{*}(g),h}\psi^{*}\varphi)^\alpha\mid_{(\tilde{x},\tilde{y})}=(\Phi_{g,h}\varphi)^\alpha\mid_{(x,y)},\qquad \alpha=1,\ldots,4, \end{eqnarray*} where $\tilde{x}^i=\psi^{*}x^i$ and $\tilde{y}^i=\psi^{*}y^i$. \end{lem} \begin{proof} Let $(x^i,y^i)$ and $(\bar{x}^i,\bar{y}^i)$ be the two local coordinate systems on $TM$ and $TN$, respectively, and $\tilde{x}^i=\psi^{*}x^i$, $\tilde{y}^i=\psi^{*}y^i$. For $\alpha=i$, we have \begin{align*} (\Phi_{g,h}\varphi)^i\mid_{(x,y)}&=g^{pq}(x,y)\Big(\frac{\delta^{2}\varphi^{i}}{\delta x^{p} \delta x^{q}}(x,y)+F^2(x,y)\frac{\partial^2\varphi^i}{\partial y^p\partial y^q}(x,y)\\ &\quad-\frac{\delta \varphi^{i}}{\delta x^{k}}(x,y)\Gamma^{k}_{pq}(x,y)+\bar{\Gamma}^{i}_{jk}(\bar{x},\bar{y})\frac{\delta \varphi^{j}}{\delta x^{p}}(x,y)\frac{\delta \varphi^{k}}{\delta x^{q}}(x,y)\Big)\\ &=g^{pq}(\psi(\tilde{x},\tilde{y}))\Big(\frac{\delta^{2}\varphi^{i}}{\delta x^{p} \delta x^{q}}(\psi(\tilde{x},\tilde{y}))+F^2(\psi(\tilde{x},\tilde{y}))\frac{\partial^2\varphi^i}{\partial y^p\partial y^q}(\psi(\tilde{x},\tilde{y})) \\ &\quad-\frac{\delta \varphi^{i}}{\delta x^{k}}(\psi(\tilde{x},\tilde{y}))\Gamma^{k}_{pq}(\psi(\tilde{x},\tilde{y}))\\ &\quad+\bar{\Gamma}^{i}_{jk}(\bar{x},\bar{y})\frac{\delta \varphi^{j}}{\delta x^{p}}(\psi(\tilde{x},\tilde{y}))\frac{\delta \varphi^{k}}{\delta x^{q}}(\psi(\tilde{x},\tilde{y}))\Big)\\ &=(\psi^{*} g)^{pq}(\tilde{x},\tilde{y})\Big(\frac{\delta^{2}(\psi^{*}\varphi)^{i}}{\delta \tilde{x}^{p} \delta \tilde{x}^{q}}(\tilde{x},\tilde{y})+(\psi^{*}F^2)(\tilde{x},\tilde{y})\frac{\partial^{2}(\psi^{*}\varphi)^{i}}{\partial \tilde{y}^{p} \partial \tilde{y}^{q}}(\tilde{x},\tilde{y})\\ &\quad-\frac{\delta (\psi^{*}\varphi )^{i}}{\delta \tilde{x}^{k}}(\tilde{x},\tilde{y})\Gamma(\psi^{*}g)^{k}_{pq}(\tilde{x},\tilde{y})\\ &\quad+\bar{\Gamma}^{i}_{jk}(\bar{x},\bar{y})\frac{\delta (\psi^{*}\varphi)^{j}}{\delta \tilde{x}^{p}}(\tilde{x},\tilde{y})\frac{\delta (\psi^{*}\varphi)^{k}}{\delta \tilde{x}^{q}}(\tilde{x},\tilde{y})\Big) \\ &=(\Phi_{\psi^{*}(g),h}\psi^{*}\varphi)^i\mid_{(\tilde{x},\tilde{y})}. \end{align*} Similarly, for $\alpha=2+i$, one can show that \begin{equation*} (\Phi_{g,h}\varphi)^{2+i}\mid_{(x,y)}=(\Phi_{\psi^{*}(g),h}\psi^{*}\varphi)^{2+i}\mid_{(\tilde{x},\tilde{y})}, \end{equation*} where $i=1,2.$ This completes the proof. \end{proof} \begin{rem} \label{main3+1} Let $(M,F)$ and $(N,\bar{F})$ be two Finsler surfaces with the corresponding metric tensors $g$ and $h$, respectively. Let $\varphi:TM\longrightarrow TN$, $\varphi(x^i,y^i)=(\varphi^\alpha(x^i,y^i))$, $\alpha=1,\ldots,4$, be a diffeomorphism taking horizontal curves to horizontal curves. Given $\varphi_{0}:TM \longrightarrow TN$, we consider the following evolution equation \begin{equation} \label{1666} \frac{\partial}{\partial t}\varphi^\alpha=(\Phi_{g,h}\varphi)^\alpha,\hspace{0.6cm} \varphi_{(0)}=\varphi_{0}.
\end{equation} By restricting the $\varphi^\alpha$'s to $SM$ and using Lemma \ref{mm}, one can see that (\ref{1666}) is a strictly parabolic system. Hence, (\ref{1666}) has a unique short-time solution. \end{rem} \begin{cor} \label{main3} Let $(M,\tilde{F})$ and $(N,\bar{F})$ be two Finsler surfaces with corresponding metric tensors $\tilde{g}$ and $h$, respectively. Let $N=M$ and let $\varphi$ be the identity map $\varphi=Id:TM\longrightarrow TM$, $\varphi(x^i,y^i)=(x^i,y^i)$; then we have \begin{equation*} (\Phi_{\tilde{g},h}Id)^{\alpha}=(\Phi_{\tilde{g},h}Id)^{i}=\tilde{g}^{pq}(-\tilde{\Gamma}^{i}_{pq}+\bar{\Gamma}^{i}_{pq}),\quad \alpha=i, \end{equation*} where $i=1,2$ and $\tilde{\Gamma}^{i}_{pq}, \bar{\Gamma}^{i}_{pq}$ are the coefficients of horizontal covariant derivatives of the Chern connection with respect to the metrics $\tilde{g}$ and $h$, respectively. \end{cor} Let $\xi$ be a vector field on $TM$ with the components \begin{align}\label{def;vect} \xi:=(\Phi_{\tilde{g},h}Id)^{i}\frac{\partial}{\partial x^i}=\tilde{g}^{pq}(-\tilde{\Gamma}^{i}_{pq}+\bar{\Gamma}^{i}_{pq})\frac{\partial}{\partial x^i}. \end{align} Using the fact that the difference of two connections is a tensor, $\xi$ is a globally well-defined vector field. It can easily be verified that the components of $\xi$ are homogeneous of degree zero in $y$; thus $\xi$ can be considered as a vector field on $SM$. \section{Ricci-DeTurck flow and its existence and uniqueness of solution} There are several well-known definitions of the Ricci tensor in Finsler geometry. For instance, H. Akbar-Zadeh has considered two Ricci tensors on Finsler manifolds in his works. One is defined by $ Ric_{ij}:=[\frac{1}{2}F^{2}\mathcal{R}ic]_{y^{i}y^{j}}$, where $\mathcal{R}ic$ is the \emph{Ricci scalar} defined by $\mathcal{R}ic:=g^{ik}R_{ik}=R^{i}_{\,\,i}$ and $R^{i}_{\,\,k}$ are given by (\ref{18}); see \cite[p.\ 192]{BCS}. Another Ricci tensor is defined by $Rc_{ij}:=\frac{1}{2} (\textsf{R}_{ij}+\textsf{R}_{ji})$, where $ \textsf{R}_{ij}$ is the trace of the $hh$-curvature defined by $ \textsf{R}_{ij}=R^{\,\,l}_{i\,\,lj}$. The difference between these two Ricci tensors is the additional term $\frac{1}{2}y^k\frac{\partial \textsf{R}_{jk}}{\partial y^i}$ appearing in the first definition. More precisely, we have $Ric_{ij}-Rc_{ij}=\frac{1}{2}y^k\frac{\partial \textsf{R}_{jk}}{\partial y^i}$. Based on the first definition of the Ricci tensor, D. Bao has considered the following Ricci flow in Finsler geometry: \begin{equation}\label{20002} \frac{\partial}{\partial t}g_{jk}(t)=-2Ric_{jk},\hspace{0.6cm}g_{(t=0)}=g_{0}, \end{equation} where $g_{jk}(t)$ is a family of Finslerian metrics defined on $\pi^{*}TM\times[0,T)$. Contracting (\ref{20002}) with $y^{j}y^{k}$, via Euler's theorem, leads to $\frac{\partial}{\partial t}F^{2}=-2F^{2}\mathcal{R}ic$. That is, \begin{equation} \label{20} \frac{\partial}{\partial t}\log F(t)=-\mathcal{R}ic,\hspace{0.6cm}F(t=0):=F_{0}, \end{equation} where $F_{0}$ is the initial Finsler structure; see \cite{Bao}. Here and everywhere in the present work we consider Akbar-Zadeh's first definition of the Ricci tensor and the related Ricci flow (\ref{20}). One of the advantages of the Ricci quantity $Ric_{ij}$ used in the present work is its independence of the choice of the Cartan, Berwald or Chern connections. \begin{defn} \label{Main4} Let $M$ be a compact surface with a fixed background Finsler structure $\bar{F}$ and related Finsler metric $h$.
Assume that for all $t\in [0,T)$, $\tilde{F}(t)$ is a one-parameter family of Finsler structures on $TM$ and $\tilde{g}(t)$ is the metric tensor related to $\tilde{F}(t)$. We say that $\tilde{F}(t)$ is a solution to the Finslerian Ricci-DeTurck flow if \begin{equation} \label{22} \frac{\partial}{\partial t}\tilde{F}^{2}(t)=-2\tilde{F}^{2}(t)\mathcal{R}ic(\tilde{g}(t))-\mathcal{L}_{\xi}\tilde{F}^{2}(t), \end{equation} where $\mathcal{L}_{\xi}$ is the Lie derivative with respect to the vector field $\xi=(\Phi_{\tilde{g}(t),h}Id)^{i}\frac{\partial}{\partial x^i}$ on $SM$, as mentioned earlier. \end{defn} The following theorem shows that the Ricci-DeTurck flow (\ref{22}) is well defined and has a unique solution on a short time interval.\\ {\bf Proof of Theorem \ref{main8}.} Let $M$ be a compact surface with a fixed background Finsler structure $\bar{F}$ and the related Finsler metric $h$. Here, all the indices run over the range $1,2$. The Ricci-DeTurck flow (\ref{22}) can be written in the following form \begin{equation}\label{28} y^{p}y^{q}\frac{\partial}{\partial t}\tilde{g}_{pq}(t)=-2\tilde{F}^{2}(t)\mathcal{R}ic(\tilde{g}(t))-\mathcal{L}_{\xi}(y^{p}y^{q}\tilde{g}_{pq}(t)), \end{equation} where $\tilde{g}(t)$ is the metric tensor related to $\tilde{F}(t)$. Also we have \begin{equation*} \mathcal{L}_{\xi}(y^{p}y^{q}\tilde{g}_{pq})=y^{p}y^{q}\mathcal{L} _{\xi}\tilde{g}_{pq}+2y^{p}\tilde{g}_{pq}\mathcal{L}_{\xi}y^{q}. \end{equation*} Therefore, (\ref{28}) becomes \begin{equation}\label{2800} y^{p}y^{q}\frac{\partial}{\partial t}\tilde{g}_{pq}(t)=-2\tilde{F}^{2}(t)\mathcal{R}ic(\tilde{g}(t))-y^{p}y^{q}\mathcal{L} _{\xi}\tilde{g}_{pq}-2y^{p}\tilde{g}_{pq}\mathcal{L}_{\xi}y^{q}. \end{equation} By means of the Lie derivative formula (\ref{FINAL}) along $\xi$ we have \begin{equation}\label{30} y^py^q\mathcal{L}_{\xi}\tilde{g}_{pq}=2y^py^q\nabla_{p}\xi_{q}, \end{equation} where $\nabla_{p}$ is the horizontal covariant derivative of the Chern connection. Using its horizontal metric compatibility, $\nabla_{p}\tilde{g}_{ql}=0$, the term $\nabla_{p}\xi_{q}$ becomes \begin{eqnarray*} \nabla_{p}\xi_{q}=\nabla_{p}(\tilde{g}_{ql}\xi^{l})= \tilde{g}_{ql}(\nabla_{p}\xi^{l}).
\end{eqnarray*} As mentioned earlier, if we denote the coefficients of horizontal covariant derivatives of the Chern connection with respect to the metric tensors $h$ and $\tilde{g}$ by $\Gamma(h)$ and $\Gamma(\tilde{g})$, respectively, then by definition \eqref{def;vect} of $\xi$ we have \begin{align*} \nabla_{p}\xi_{q}&=\tilde{g}_{ql}(\delta_{p}\xi^{l}+\Gamma(\tilde{g})^{l}_{pw}\xi^{w})\\ &=\tilde{g}_{ql}[\delta_{p}(\tilde{g}^{mn}(\Gamma(h)^{l}_{mn}-\Gamma(\tilde{g})^{l}_{mn}))]+\tilde{g}_{ql}\Gamma(\tilde{g})^{l}_{pw}\xi^{w}\\ &=\tilde{g}_{ql}[(\delta_{p}\tilde{g}^{mn})(\Gamma(h)^{l}_{mn}-\Gamma(\tilde{g})^{l}_{mn})+\tilde{g}^{mn}\delta_{p}(\Gamma(h)^{l}_{mn})-\tilde{g}^{mn}\delta_{p}(\Gamma(\tilde{g})^{l}_{mn})]\nonumber\\ &\quad +\tilde{g}_{ql}\Gamma(\tilde{g})^{l}_{pw}\xi^{w}\\ &=\frac{1}{2}\tilde{g}^{mn}(\delta_{p}\delta_{q}\tilde{g}_{mn} -\delta_{p}\delta_{n}\tilde{g}_{qm}-\delta_{p}\delta_{m}\tilde{g}_{qn})\nonumber\\ &\quad -\frac{1}{2}\tilde{g}_{ql}\tilde{g}^{mn}(\delta_{p}\tilde{g}^{ls}) (\delta_{n}\tilde{g}_{sm}-\delta_{s}\tilde{g}_{mn}+\delta_{m}\tilde{g}_{ns})\\ &\quad +\tilde{g}_{ql}(\delta_{p}\tilde{g}^{mn})(\Gamma(h)^{l}_{mn}-\Gamma(\tilde{g})^{l}_{mn}) +\tilde{g}_{ql}\tilde{g}^{mn}\delta_{p}(\Gamma(h)^{l}_{mn})+\tilde{g}_{ql}\Gamma(\tilde{g})^{l}_{pw}\xi^{w}. \end{align*} Using the last equation, (\ref{30}) can be written as \begin{align} \label{31} y^{p}y^{q}\mathcal{L}_{\xi}\tilde{g}_{pq}=&y^py^q\tilde{g}^{mn}(\delta_{p}\delta_{q}\tilde{g}_{mn} -\delta_{p}\delta_{n}\tilde{g}_{qm}-\delta_{p}\delta_{m}\tilde{g}_{qn})\nonumber\\ &-y^py^q\tilde{g}_{ql}\tilde{g}^{mn}(\delta_{p}\tilde{g}^{ls}) (\delta_{n}\tilde{g}_{sm}-\delta_{s}\tilde{g}_{mn}+\delta_{m}\tilde{g}_{ns})\nonumber\\ &+2y^py^q\tilde{g}_{ql}(\delta_{p}\tilde{g}^{mn})(\Gamma(h)^{l}_{mn}-\Gamma(\tilde{g})^{l}_{mn})\nonumber\\ &+2y^py^q\tilde{g}_{ql}\tilde{g}^{mn}\delta_{p}(\Gamma(h)^{l}_{mn})+2y^py^q\tilde{g}_{ql}\Gamma (\tilde{g})^{l}_{pw}\xi^{w}. \end{align} Also we have \begin{equation} \label{32} -2\tilde{F}^{2}\mathcal{R}ic(\tilde{g})=-2\tilde{F}^{2}R^{n}_{\,\,n} =-2\tilde{F}^{2}l^{q}R^{\,\,n}_{q\,\,np}l^{p}, \end{equation} where $R^{\,\,n}_{q\,\,np}$ are the components of the $hh$-curvature tensor of the Chern connection and $l^{q}=\frac{y^{q}}{\tilde{F}}$ are the components of the Liouville vector field. Replacing (\ref{77}) in (\ref{32}) and using the definition of $\Gamma(\tilde{g})$ yields \begin{align*} -2\tilde{F}^{2}\mathcal{R}ic(\tilde{g})&=-2\tilde{F}^{2}l^{q}R^{\,\,n}_{q\,\,np}l ^{p}\nonumber\\ &=-2y^{p}y^{q}(\delta_{n}\Gamma^{n}_{qp}(\tilde{g})-\delta_{p}\Gamma^{n}_{qn}(\tilde{g}) +\Gamma^{n}_{mn}(\tilde{g})\Gamma^{m}_{qp}(\tilde{g})-\Gamma^{n}_{mp}(\tilde{g})\Gamma^{m} _{qn}(\tilde{g}))\nonumber \\ &=-2y^{p}y^{q}[\delta_{n}(\frac{1}{2}\tilde{g}^{mn}(\delta_{q}\tilde{g}_{mp}+\delta_{p}\tilde{g}_{mq}-\delta_{m}\tilde{g}_{pq}))]\nonumber\\ &\quad +2y^{p}y^{q}[\delta_{p}(\frac{1}{2}\tilde{g}^{mn}(\delta_{q}\tilde{g}_{mn}+\delta_{n}\tilde{g}_{qm}-\delta_{m}\tilde{g}_{qn}))] \nonumber\\ &\quad -2y^{p}y^{q}(\Gamma^{n}_{mn}(\tilde{g})\Gamma^{m}_{qp}(\tilde{g})-\Gamma^{n}_{mp}(\tilde{g})\Gamma^ {m}_{qn}(\tilde{g})).
\end{align*} By applying the $\delta_{p}$ derivative we have \begin{align} \label{34} -2\tilde{F}^{2}\mathcal{R}ic(\tilde{g})=&y^{p}y^{q}\tilde{g}^{mn}(\delta_{p}\delta_{q}\tilde{g}_{mn}+\delta_{n}\delta_{m} \tilde{g}_{pq}-\delta_{p}\delta_{m}\tilde{g}_{qn}-\delta_{n}\delta_{q} \tilde{g}_{mp})\nonumber\\ &-y^{p}y^{q}(\delta_{n}\tilde{g}^{nm})(\delta_{q}\tilde{g}_{mp}+\delta_{p}\tilde{g}_{qm} -\delta_{m}\tilde{g}_{pq})\nonumber\\ &+y^{p}y^{q}(\delta_{p}\tilde{g}^{mn})(\delta_{q}\tilde{g}_{mn}+\delta_{n}\tilde{g}_{qm} -\delta_{m}\tilde{g}_{qn})\nonumber\\ &-2y^{p}y^{q}(\Gamma^{n}_{mn}(\tilde{g})\Gamma^{m}_{qp}(\tilde{g})-\Gamma^{n}_{mp}(\tilde{g}) \Gamma^{m}_{qn}(\tilde{g})). \end{align} Substituting (\ref{31}) and (\ref{34}) in (\ref{2800}), we obtain \begin{align} \label{36} y^{p}y^{q}\frac{\partial}{\partial t}\tilde{g}_{pq}(t)=&y^py^q\tilde{g}^{mn}\delta_{n}\delta_{m}\tilde{g}_{pq}\\ &-y^{p}y^{q}(\delta_{n}\tilde{g}^{nm})(\delta_{q}\tilde{g}_{mp}+\delta_{p}\tilde{g}_{qm} -\delta_{m}\tilde{g}_{pq})\nonumber\\ &+y^{p}y^{q}(\delta_{p}\tilde{g}^{mn})(\delta_{q}\tilde{g}_{mn}+\delta_{n}\tilde{g}_{qm} -\delta_{m}\tilde{g}_{qn})\nonumber\\ &-2y^{p}y^{q}(\Gamma^{n}_{mn}(\tilde{g})\Gamma^{m}_{qp}(\tilde{g})-\Gamma^{n}_{mp}(\tilde{g}) \Gamma^{m}_{qn}(\tilde{g}))\nonumber\\ &+y^py^q\tilde{g}_{ql}\tilde{g}^{mn}(\delta_{p}\tilde{g}^{ls}) (\delta_{n}\tilde{g}_{sm}-\delta_{s}\tilde{g}_{mn}+\delta_{m}\tilde{g}_{ns})\nonumber\\ &-2y^py^q\tilde{g}_{ql}(\delta_{p}\tilde{g}^{mn})(\Gamma(h)^{l}_{mn}-\Gamma(\tilde{g})^{l}_{mn})\nonumber\\ &-2y^py^q\tilde{g}_{ql}\tilde{g}^{mn}\delta_{p}(\Gamma(h)^{l}_{mn})-2y^py^q\tilde{g}_{ql}\Gamma (\tilde{g})^{l}_{pw}\xi^{w}\nonumber\\ &-2y^{p}\tilde{g}_{pq}\mathcal{L}_{\xi}y^{q}.\nonumber \end{align} Using Euler's theorem yields \begin{equation} \label{RE3} y^{p}y^{q}\frac{\partial^{2}\tilde{g}_{pq}}{\partial y^{n}\partial y^{m}}=\frac{\partial ^{2}}{\partial y^{n}\partial y^{m}}(y^{p}y^{q}\tilde{g}_{pq})-2\tilde{g}_{nm}=0. \end{equation} In order to get a strictly parabolic system, by virtue of (\ref{RE3}) we add the zero term $\tilde{F}^2 y^{p}y^{q}\tilde{g}^{mn}\frac{\partial^{2}\tilde{g}_{pq}}{\partial y^{n}\partial y^{m}}=0$ to the right hand side of (\ref{36}). 
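For the reader's convenience, we spell out the Euler-theorem computation behind (\ref{RE3}) (an expansion we add here): using $y^{p}\frac{\partial \tilde{g}_{pm}}{\partial y^{n}}=2y^{p}C_{pmn}=0$, the product rule gives
\begin{equation*}
\frac{\partial^{2}}{\partial y^{n}\partial y^{m}}(y^{p}y^{q}\tilde{g}_{pq})
=2\tilde{g}_{nm}+2y^{p}\frac{\partial \tilde{g}_{pm}}{\partial y^{n}}+2y^{p}\frac{\partial \tilde{g}_{pn}}{\partial y^{m}}+y^{p}y^{q}\frac{\partial^{2}\tilde{g}_{pq}}{\partial y^{n}\partial y^{m}}
=2\tilde{g}_{nm}+y^{p}y^{q}\frac{\partial^{2}\tilde{g}_{pq}}{\partial y^{n}\partial y^{m}},
\end{equation*}
while the left-hand side equals $\frac{\partial^{2}\tilde{F}^{2}}{\partial y^{n}\partial y^{m}}=2\tilde{g}_{nm}$; subtracting yields $y^{p}y^{q}\frac{\partial^{2}\tilde{g}_{pq}}{\partial y^{n}\partial y^{m}}=0$, as claimed.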
Therefore, we have \begin{align} \label{36+11} y^{p}y^{q}\Big(&\frac{\partial}{\partial t}\tilde{g}_{pq}(t)-\tilde{g}^{mn}\delta_{n}\delta_{m}\tilde{g}_{pq}-\tilde{F}^2 \tilde{g}^{mn}\frac{\partial^{2}\tilde{g}_{pq}}{\partial y^{n}\partial y^{m}}\\ &+(\delta_{n}\tilde{g}^{nm})(\delta_{q}\tilde{g}_{mp}+\delta_{p}\tilde{g}_{qm} -\delta_{m}\tilde{g}_{pq})\nonumber\\ &-(\delta_{p}\tilde{g}^{mn})(\delta_{q}\tilde{g}_{mn}+\delta_{n}\tilde{g}_{qm} -\delta_{m}\tilde{g}_{qn})\nonumber\\ &+2(\Gamma^{n}_{mn}(\tilde{g})\Gamma^{m}_{qp}(\tilde{g})-\Gamma^{n}_{mp}(\tilde{g}) \Gamma^{m}_{qn}(\tilde{g}))\nonumber\\ &-\tilde{g}_{ql}\tilde{g}^{mn}(\delta_{p}\tilde{g}^{ls}) (\delta_{n}\tilde{g}_{sm}-\delta_{s}\tilde{g}_{mn}+\delta_{m}\tilde{g}_{ns})\nonumber\\ &+2\tilde{g}_{ql}(\delta_{p}\tilde{g}^{mn})(\Gamma(h)^{l}_{mn}-\Gamma(\tilde{g})^{l}_{mn})\nonumber\\ &+2\tilde{g}_{ql}\tilde{g}^{mn}\delta_{p}(\Gamma(h)^{l}_{mn})+2\tilde{g}_{ql}\Gamma (\tilde{g})^{l}_{pw}\xi^{w}+2\frac{l_{q}}{\tilde{F}}\tilde{g}_{np}\mathcal{L}_{\xi}y^{n}\Big)=0.\nonumber \end{align} On the other hand, applying the vector field $\frac{\delta}{\delta x^{n}}$ twice to the components of the metric tensor $\tilde{g}_{pq}$ yields \begin{align*} \delta_{n}\delta_{m}\tilde{g}_{pq}=&\frac{\partial^{2}\tilde{g}_{pq}}{\partial x^{n}\partial x^{m}}-\frac{\partial N^{k}_{m}}{\partial x^{n}}\frac{\partial \tilde{g}_{pq}}{\partial y^{k}}-N^{k}_{m}\frac{\partial^{2}\tilde{g}_{pq}}{\partial x^{n}\partial y^{k}}-N^{l}_{n}\frac{\partial^{2}\tilde{g}_{pq}}{\partial y^{l}\partial x^{m}}\nonumber\\ &+N^{l}_{n}\frac{\partial N_{m}^{k}}{\partial y^{l}}\frac{\partial\tilde{g}_{pq}}{\partial y^{k}}+ N^{l}_{n}N_{m}^k\frac{\partial^2\tilde{g}_{pq}}{\partial y^k\partial y^l}.\nonumber \end{align*} Contracting the last equation with $y^{p}y^{q}$ and using (\ref{RE3}) we have \begin{equation*} y^{p}y^{q}\delta_{n}\delta_{m}\tilde{g}_{pq} =y^{p}y^{q}\frac{\partial^{2}\tilde{g}_{pq}}{\partial x^{n}\partial x^{m}}. \end{equation*} Note that, after this contraction, the term $y^py^q\tilde{g}^{mn}\delta_{n}\delta_{m}\tilde{g}_{pq}$ in (\ref{36}) contains no derivatives of $\tilde{g}$ other than $y^{p}y^{q}\frac{\partial^{2}\tilde{g}_{pq}}{\partial x^{n}\partial x^{m}}$. One can rewrite (\ref{36+11}) as follows: \begin{align} \label{36+111} y^{p}y^{q}\Big(\frac{\partial}{\partial t}\tilde{g}_{pq}(t)-\tilde{g}^{mn}\delta_{n}\delta_{m}\tilde{g}_{pq}-\tilde{F}^2 \tilde{g}^{mn}\frac{\partial^{2}\tilde{g}_{pq}}{\partial y^{n}\partial y^{m}} +\textrm{lower order terms}\Big)=0. \end{align} Recall that $M$ is a 2-dimensional Finsler surface and hence is isotropic. Thus, $R^{\,\,n}_{q\,\,np}$ can be regarded as part of a symmetric quadratic form; namely, it is symmetric with respect to the indices $p$ and $q$, see \cite[p.\ 152]{HAZ}. Therefore, by means of the symmetry of $\mathcal{L}_{\xi}(y^{p}y^{q}\tilde{g}_{pq})$ with respect to the indices $p$ and $q$, we conclude that (\ref{36+111}) is symmetric with respect to the indices $p$ and $q$. If a symmetric bilinear form vanishes on the diagonal, then by the polarization identity it vanishes identically. Therefore, from (\ref{36+111}) we have \begin{equation} \label{36+1111} \frac{\partial}{\partial t}\tilde{g}_{pq}(t)-\tilde{g}^{mn}\delta_{n}\delta_{m}\tilde{g}_{pq}-\tilde{F}^2 \tilde{g}^{mn}\frac{\partial^{2}\tilde{g}_{pq}}{\partial y^{n}\partial y^{m}} +\textrm{lower order terms}=0.
\end{equation} By restricting the metric tensor $\tilde{g}$ to $p^{*}TM$ and using Lemma \ref{mm}, we can rewrite (\ref{36+1111}) in terms of the basis $\{\hat{e}_1,\hat{e}_{2},\hat{e}_3\}$ on $SM$ as follows: \begin{equation}\label{asli} \frac{\partial}{\partial t}\tilde{g}_{pq}=\tilde{g}^{ab}\hat{e}_{b}\hat{e}_{a}\tilde{g}_{pq}+\tilde{g}^{11} \hat{e}_{3}\hat{e}_{3}\tilde{g}_{pq}-B^c\hat{e}_c\tilde{g}_{pq}-D^1\hat{e}_{3}\tilde{g}_{pq}+\textrm{lower order terms}, \end{equation} where $B^c:=v^c_i\tilde{g}^{ab}\hat{e}_{b}(u^i_a)$ and $D^1:=v^1_i\tilde{F}\tilde{g}^{11}\hat{e}_{3}u^i_{1}$, as in Lemma \ref{mm}, and the indices $a,b,c$ in (\ref{asli}) run over the range $1,2$. By assumption $M$ is compact, and hence the sphere bundle $SM$ is compact as well. Also, the metric tensor $\tilde{g}^{mn}$ remains positive definite along the Ricci flow; see \cite{YB2}, Corollary 3.7. Since the coefficients of the principal (second) order terms of (\ref{asli}) are positive definite, by Definition \ref{semipar} it is a semi-linear strictly parabolic system on $SM$. Therefore, the standard existence and uniqueness theorem for parabolic systems on compact domains implies that (\ref{asli}) has a unique short-time solution on $SM$. Equation (\ref{asli}) is a special case of the general flow (\ref{AR}) and $\tilde{g}(t)$ is a solution to it. Therefore, by means of Lemma \ref{RE2}, $\tilde{g}(t)$ satisfies the integrability condition or, equivalently, there exists a Finsler structure $\tilde{F}(t)$ on $TM$ such that $\tilde{g}_{ij}=\frac{1}{2}\frac{\partial^2 \tilde{F}^2}{\partial y^i\partial y^j}$. Hence, $\tilde{g}$ is a Finsler metric and determines a Finsler structure $\tilde{F}^{2}:=\tilde{g}_{pq}y^{p}y^{q}$ which is the unique solution to the Finslerian Ricci-DeTurck flow. This completes the proof of Theorem \ref{main8}.\hspace{\stretch{1}}$\Box$ \section{Short time solution to the Ricci flow on Finsler surfaces} In this section, we show that there is a one-to-one correspondence between the solutions to the Ricci flow and the Ricci-DeTurck flow on Finsler surfaces. Here, we recall some results which will be used in the sequel. \begin{Lemma} \label{main9} \cite[p.\ 82]{Chow1} Let $\{X_{t}:0\leq t<T\leq \infty\}$ be a continuous time-dependent family of vector fields on a compact manifold $M$. Then there exists a one-parameter family of diffeomorphisms $\{\varphi_{t}:M \longrightarrow M;\quad 0 \leq t<T \leq \infty\}$ defined on the same time interval such that \begin{equation*} \left\{ \begin{array}{l} \frac{\partial}{\partial t} \varphi_{t}(x)=X_{t}[\varphi_{t}(x)],\cr \varphi_{0}(x)=x, \end{array} \right. \end{equation*} for all $x\in M$ and $t\in[0,T)$. \end{Lemma} \begin{rem}\label{remark1} Let $M$ be a compact Finsler surface. According to Lemma \ref{main9}, there exists a unique one-parameter family of diffeomorphisms $\varphi_{t}$ on $SM$ such that \begin{equation*} \left\{ \begin{array}{l} \frac{\partial}{\partial t}\varphi_{t}(z)=\xi(\varphi_{t}(z),t),\cr \varphi_{0}=Id_{SM}, \end{array} \right. \end{equation*} where $z=(x,[y])\in SM$ and $t\in[0,T)$. \end{rem} \begin{rem}\label{remark2} Let $\tilde{g}_{pq}$ be a solution to the Ricci-DeTurck flow and $\varphi_{t}$ the one-parameter global group of diffeomorphisms associated with the vector field $\xi$. Since $\xi$ is a vector field on $SM$, the $\varphi_{t}$ are homogeneous of degree zero. Zero-homogeneity of $\tilde{g}_{pq}$ implies that $\varphi_{t}^{*}(\tilde{g}_{pq})$ is also homogeneous of degree zero.
In fact, \begin{equation*} (\varphi_{t}^{*}\tilde{g}_{pq})(x,\lambda y)=\tilde{g}_{pq}(\varphi_{t}(x,\lambda y))=\tilde{g}_{pq}(\varphi_{t}(x,y))= (\varphi_{t}^{*}\tilde{g}_{pq})(x,y). \end{equation*} Using the fact that $\tilde{g}_{pq}$ is positive definite and the $\varphi_{t}$ are diffeomorphisms, $\varphi_{t}^{*}(\tilde{g}_{pq})$ is also positive definite. Likewise, $\varphi_{t}^{*}(\tilde{g}_{pq})$ is symmetric. Indeed, \begin{equation*} (\varphi_{t}^{*}\tilde{g})(X,Y)=\tilde{g}(\varphi_{t*}(X),\varphi_{t*}(Y))=\tilde{g}(\varphi_{t*}(Y),\varphi_{t*}(X))=(\varphi_{t}^{*}\tilde{g})(Y,X). \end{equation*} Therefore, $\varphi_{t}^{*}(\tilde{g}_{pq})$ determines a Finsler structure as follows: \begin{equation*} F^{2}:=g_{pq}\tilde{y}^{p}\tilde{y}^{q}, \end{equation*} where $g_{pq}:=\varphi_{t}^{*}(\tilde{g}_{pq})$ and $\varphi_{t}^{*}y^p:=\tilde{y}^{p}$. \end{rem} \begin{lem} Let $\varphi_{t}$ be a global one-parameter group of diffeomorphisms generated by the vector field $\xi$, and let $(\gamma^{i}_{jk})_{\tilde{g}}$ and $(G^{i})_{\tilde{g}}$ be the Christoffel symbols and spray coefficients related to the Finsler metric $\tilde{g}$, respectively. Then we have \begin{align} &\varphi_{t}^{*}((\gamma_{jk}^{i})_{\tilde{g}})=(\gamma_{jk}^{i})_{\varphi_{t}^{*}(\tilde{g})},\label{Christoffel}\\ &\varphi_{t}^{*}(G^{i}_{\tilde{g}})=G^{i}_{\varphi_{t}^{*}(\tilde{g})},\label{spray} \end{align} where $(\gamma_{jk}^{i})_{\tilde{g}}=\tilde{g}^{is}\frac{1}{2}(\frac{\partial \tilde{g}_{sj}}{\partial x^k}-\frac{\partial \tilde{g}_{jk}}{\partial x^s}+\frac{\partial \tilde{g}_{ks}}{\partial x^j})$ and $G^{i}_{\tilde{g}}=\frac{1}{2}(\gamma^{i}_{jk})_{\tilde{g}}y^jy^k$. \end{lem} \begin{proof} Let us denote $\varphi_{t}^{*}x^i=\tilde{x}^i$ and $\varphi_{t}^{*}y^i=\tilde{y}^i$. By definition, we have \begin{align*} \varphi_{t}^{*}((\gamma^{i}_{jk})_{\tilde{g}})&=\varphi_{t}^{*}(\tilde{g}^{is}\frac{1}{2}(\frac{\partial \tilde{g}_{sj}}{\partial x^k}-\frac{\partial \tilde{g}_{jk}}{\partial x^s}+\frac{\partial \tilde{g}_{ks}}{\partial x^j}))\\ &=\varphi_{t}^{*}(\tilde{g}^{is})\varphi_{t}^{*}(\frac{1}{2}(\frac{\partial \tilde{g}_{sj}}{\partial x^k}-\frac{\partial \tilde{g}_{jk}}{\partial x^s}+\frac{\partial \tilde{g}_{ks}}{\partial x^j}))\\ &=\varphi_{t}^{*}(\tilde{g}^{is})\frac{1}{2}(\frac{\partial\varphi_{t}^{*}( \tilde{g}_{sj})}{\partial \tilde{x}^k}-\frac{\partial\varphi_{t}^{*}( \tilde{g}_{jk})}{\partial \tilde{x}^s}+\frac{\partial\varphi_{t}^{*}(\tilde{g}_{ks})}{\partial \tilde{x}^j})\\ &=(\gamma_{jk}^{i})_{\varphi_{t}^{*}(\tilde{g})}. \end{align*} Next, by means of (\ref{Christoffel}) we have \begin{align*} \varphi_{t}^{*}(G^{i}_{\tilde{g}})&=\varphi_{t}^{*}(\frac{1}{2}(\gamma^{i}_{jk})_{\tilde{g}}y^jy^k)=\frac{1}{2}\varphi_{t}^{*}((\gamma^{i}_{jk})_{\tilde{g}})\varphi_{t}^{*}y^j\varphi_{t}^{*}y^k\\ &=\frac{1}{2}(\gamma^{i}_{jk})_{\varphi_{t}^{*}(\tilde{g})}\tilde{y}^{j}\tilde{y}^{k}=G^{i}_{\varphi_{t}^{*}(\tilde{g})}. \end{align*} This completes the proof. \end{proof} \begin{lem}\label{lemmohem} Let $\varphi_{t}$ be a global one-parameter group of diffeomorphisms generated by the vector field $\xi$ and $\mathcal{R}ic_{\tilde{g}}$ the Ricci scalar related to the Finsler metric $\tilde{g}$. Then we have \begin{equation*} \varphi_{t}^{*}(\mathcal{R}ic_{\tilde{g}})=\mathcal{R}ic_{\varphi_{t}^{*}(\tilde{g})}.
\end{equation*} \end{lem} \begin{proof} Let us consider the \emph{reduced hh-curvature tensor} $R^{i}_{\,\,k}$ which is expressed entirely in terms of the $x$ and $y$ derivatives of the spray coefficients $G^{i}_{\tilde{g}}$: \begin{equation*} (R^{i}_{\,\,k})_{\tilde{g}}:=\frac{1}{\tilde{F}^2}(2\frac{\partial G^{i}_{\tilde{g}}}{\partial x^{k}}-\frac{\partial^{2}G^{i}_{\tilde{g}}}{\partial x^j\partial y^k}y^{j}+2G^{j}_{\tilde{g}}\frac{\partial^{2}G^{i}_{\tilde{g}}}{\partial y^j\partial y^k}-\frac{\partial G^{i}_{\tilde{g}}}{\partial y^{j}}\frac{\partial G^{j}_{\tilde{g}}}{\partial y^{k}}). \end{equation*} Therefore, we have \begin{align*} \varphi_{t}^{*}((R^{i}_{\,\,k})_{\tilde{g}})&=\varphi_{t}^{*} (\frac{1}{\tilde{F}^2}(2\frac{\partial G^{i}_{\tilde{g}}}{\partial x^{k}}-\frac{\partial^{2}G^{i}_{\tilde{g}}}{\partial x^j\partial y^k}y^{j}+2G^{j}_{\tilde{g}}\frac{\partial^{2}G^{i}_{\tilde{g}}}{\partial y^j\partial y^k}-\frac{\partial G^{i}_{\tilde{g}}}{\partial y^{j}}\frac{\partial G^{j}_{\tilde{g}}}{\partial y^{k}}))\nonumber\\ &=\varphi_{t}^{*}(\frac{1}{\tilde{F}^2})\varphi_{t}^{*}(2\frac{\partial G^{i}_{\tilde{g}}}{\partial x^{k}}-\frac{\partial^{2}G^{i}_{\tilde{g}}}{\partial x^j\partial y^k}y^{j}+2G^{j}_{\tilde{g}}\frac{\partial^{2}G^{i}_{\tilde{g}}}{\partial y^j\partial y^k}-\frac{\partial G^{i}_{\tilde{g}}}{\partial y^{j}}\frac{\partial G^{j}_{\tilde{g}}}{\partial y^{k}}). \end{align*} Thus, we get \begin{align*} \varphi_{t}^{*}((R^{i}_{\,\,k})_{\tilde{g}})=&\frac{1}{\varphi_{t}^{*}(\tilde{F}^2)}(2\frac{\partial(\varphi_{t}^{*} (G^{i}_{\tilde{g}}))}{\partial \tilde{x}^{k}}-\frac{\partial^{2}(\varphi_{t}^{*}(G^{i}_{\tilde{g}}))}{\partial \tilde{x}^j\partial \tilde{y}^k}\tilde{y}^{j}\nonumber\\ &+2\varphi_{t}^{*}(G^{j}_{\tilde{g}})\frac{\partial^{2}(\varphi_{t}^{*}(G^{i}_{\tilde{g}}))}{\partial \tilde{y}^j\partial \tilde{y}^k}-\frac{\partial(\varphi_{t}^{*}( G^{i}_{\tilde{g}}))}{\partial \tilde{y}^{j}}\frac{\partial (\varphi_{t}^{*}(G^{j}_{\tilde{g}}))}{\partial \tilde{y}^{k}}). \end{align*} Putting $i=k$ in this equation and using (\ref{spray}) implies \begin{equation*} \varphi_{t}^{*}(\mathcal{R}ic_{\tilde{g}})=\mathcal{R}ic_{\varphi_{t}^{*}(\tilde{g})}, \end{equation*} as we have claimed. \end{proof} Now we are in a position to prove the following proposition. \begin{prop} \label{main11} Fix a compact Finsler surface $(M,\bar{F})$ with the related Finsler metric tensor $h$. Let $\tilde{F}(t)$ be a family of solutions to the Ricci-DeTurck flow \begin{eqnarray} \label{46} \frac{\partial}{\partial t}\tilde{F}^{2}(t)=-2\tilde{F}^{2}(t)\mathcal{R}ic(\tilde{g}(t))-\mathcal{L}_{\xi}\tilde{F}^{2}(t), \end{eqnarray} where $\xi=(\Phi_{\tilde{g}(t),h}Id)^{i}\frac{\partial}{\partial x^i}$ and $t\in[0,T)$. Moreover, let $\varphi_{t}$ be a one-parameter family of diffeomorphisms satisfying \begin{eqnarray*} \frac{\partial}{\partial t}\varphi_{t}(z)=\xi(\varphi_{t}(z),t),\nonumber \end{eqnarray*} for $z\in SM$ and $t\in[0,T)$. Then the Finsler structures $F(t)$ form a solution to the Finslerian Ricci flow (\ref{20}), where $F(t)$ is defined by \begin{eqnarray*} F^{2}(t):=g_{pq}\tilde{y}^{p}\tilde{y}^{q}=\varphi_{t}^{*}(\tilde{F}^{2}(t)),\nonumber \end{eqnarray*} with $g_{pq}:=\varphi_{t}^{*}(\tilde{g}_{pq})$ and $\tilde{y}^{p}:=\varphi_{t}^{*}y^p$.
\end{prop} \begin{proof} In order to show that $F(t)$ forms a solution to the Finslerian Ricci flow (\ref{20}), we have to show that $\frac{\partial}{\partial t}(\log F(t))=-\mathcal{R}ic.$ Differentiating $F^{2}(t)=\varphi_{t}^{*}(\tilde{F}^{2}(t))$ with respect to the parameter $t$ leads to \begin{equation} \label{50} \frac{\partial}{\partial t}(\log F(t))=\frac{1}{2}\frac{\frac{\partial}{\partial t}(\varphi_{t}^{*}(\tilde{F}^{2}(t)))}{\varphi_{t}^{*}(\tilde{F}^{2}(t))}. \end{equation} The term $\frac{\partial}{\partial t}(\varphi_{t}^{*}\tilde{F}^{2}(t))$ becomes \begin{eqnarray} \label{51} \frac{\partial}{\partial t}(\varphi_{t}^{*}\tilde{F}^{2}(t))&=&\frac{\partial}{\partial s}(\varphi_{s+t}^{*}(\tilde{F}^{2}(s+t)))\mid_{s=0}\\ &=&\varphi_{t}^{*}(\frac{\partial}{\partial t}\tilde{F}^{2}(t))+\frac{\partial}{\partial s}(\varphi_{s+t}^{*}(\tilde{F}^{2}(t)))\mid_{s=0}\nonumber \\ &=&\varphi_{t}^{*}(\frac{\partial}{\partial t}\tilde{F}^{2}(t))+\frac{\partial}{\partial s}((\varphi_{t}^{-1}\circ\hspace{0.1cm}\varphi_{t+s})^{*}(\varphi_{t}^{*}(\tilde{F}^{2}(t))))\mid_{s=0}\nonumber\\ &=&\varphi_{t}^{*}(\frac{\partial}{\partial t}\tilde{F}^{2}(t))+\mathcal{L}_{\frac{\partial}{\partial s}(\varphi_{t}^{-1}\circ\hspace{0.1cm}\varphi_{t+s})\mid_{s=0}}\varphi_{t}^{*}(\tilde{F}^{2}(t)).\nonumber \end{eqnarray} On the other hand, we have \begin{eqnarray*} \frac{\partial}{\partial s}(\varphi_{t}^{-1}\circ\hspace{0.1cm}\varphi_{t+s})\mid_{s=0}=(\varphi_{t}^{-1})_{*}((\frac{\partial}{\partial s}\varphi_{s+t})\mid_{s=0})=(\varphi_{t}^{-1})_{*}(\xi). \end{eqnarray*} Hence, (\ref{51}) can be written as \begin{eqnarray*} \frac{\partial}{\partial t}(\varphi_{t}^{*}\tilde{F}^{2}(t))=\varphi_{t}^{*}(\frac{\partial}{\partial t}\tilde{F}^{2}(t))+\mathcal{L}_{(\varphi_{t}^{-1})_{*}(\xi)}\varphi_{t}^{*}(\tilde{F}^{2}(t)). \end{eqnarray*} Substituting the last relation into (\ref{50}) and using the assumption (\ref{46}), we get \begin{eqnarray*} \frac{\partial}{\partial t}(\log F(t))&=&\frac{1}{2}\frac{\varphi_{t}^{*}(\frac{\partial}{\partial t}\tilde{F}^{2}(t))+\mathcal{L}_{(\varphi_{t}^{-1})_{*}(\xi)} \varphi_{t}^{*}(\tilde{F}^{2}(t))}{\varphi_{t}^{*}(\tilde{F}^{2}(t))} \\ &=&\frac{1}{2}\frac{\varphi_{t}^{*}(-2\tilde{F}^{2}(t)\mathcal{R}ic(\tilde{F}(t))-\mathcal{L}_{\xi}\tilde{F}^{2}(t))+ \mathcal{L}_{(\varphi_{t}^{-1})_{*}(\xi)}\varphi_{t}^{*}(\tilde{F}^{2}(t))}{\varphi_{t}^{*}(\tilde{F}^{2}(t))}\\ &=&\frac{1}{2}\frac{\varphi_{t}^{*}(-2\tilde{F}^{2}(t)\mathcal{R}ic(\tilde{F}(t)))-\varphi_{t}^{*}(\mathcal{L}_{\xi}\tilde{F}^{2}(t)) +\mathcal{L}_{(\varphi_{t}^{-1})_{*}(\xi)}\varphi_{t}^{*}(\tilde{F}^{2}(t))}{\varphi_{t}^{*}(\tilde{F}^{2}(t))}\\ &=&\frac{1}{2}\frac{-2\varphi_{t}^{*}(\tilde{F}^{2}(t))\varphi_{t}^{*}(\mathcal{R}ic(\tilde{F}(t)))}{\varphi_{t}^{*}(\tilde{F}^{2}(t))}, \end{eqnarray*} where in the last step the two Lie derivative terms cancel, since $\varphi_{t}^{*}(\mathcal{L}_{\xi}\tilde{F}^{2}(t))=\mathcal{L}_{(\varphi_{t}^{-1})_{*}(\xi)}\varphi_{t}^{*}(\tilde{F}^{2}(t))$. By virtue of Lemma \ref{lemmohem} we have \begin{eqnarray*} \frac{\partial}{\partial t}(\log F(t)) =-\varphi_{t}^{*}(\mathcal{R}ic(\tilde{F}(t))) =-\mathcal{R}ic_{\varphi_{t}^{*}(\tilde{F}(t))} =-\mathcal{R}ic_{F(t)}. \end{eqnarray*} Therefore, the Finsler structures $F(t)$ form a solution to the Finslerian Ricci flow. Hence the proof is complete. \end{proof} In the next step we assume that there is a solution to the Finslerian Ricci flow, from which we construct a solution to the Ricci-DeTurck flow in Finsler space. \begin{prop}\label{main12} Fix a compact Finsler surface $(M,\bar{F})$ with the related Finsler metric tensor $h$.
Let $F(t)$, $t\in[0,T)$, be a family of solutions to the Ricci flow and $\varphi_{t}$ a one-parameter family of diffeomorphisms on $SM$ evolving under the following flow \begin{equation} \label{55} \frac{\partial}{\partial t}\varphi_{t}=\Phi_{g(t),h}\varphi_{t}.\nonumber \end{equation} Then the Finsler structures $\tilde{F}(t)$ defined by $F^{2}(t)=\varphi^{*}_{t}(\tilde{F}^{2}(t))$ form a solution to the following Ricci-DeTurck flow \begin{equation} \label{56} \frac{\partial}{\partial t}\tilde{F}^{2}(t)=-2\tilde{F}^{2}(t)\mathcal{R}ic(\tilde{g}(t))-\mathcal{L}_{\xi}\tilde{F}^{2}(t),\nonumber \end{equation} where $\xi=(\Phi_{\tilde{g}(t),h}Id)^{i}\frac{\partial}{\partial x^i}$ and $\tilde{g}(t)$ is the metric tensor related to $\tilde{F}(t)$. Furthermore, for all $z\in SM$ and $t\in[0,T)$ we have \begin{equation*} \frac{\partial}{\partial t}\varphi_{t}(z)=\xi(\varphi_{t}(z),t). \end{equation*} \end{prop} \begin{proof} Using Lemma \ref{main2} we have \begin{align*} \frac{\partial}{\partial t}\varphi_{t}&=\Phi_{g(t),h}\varphi_{t}=\Phi_{\varphi^{*}_{t} (\tilde{g}(t)),h}\varphi_{t}=\Phi_{\varphi^{*}_{t} (\tilde{g}(t)),h} Id \circ\varphi_{t}\\ &=\Phi_{\varphi^{*}_{t} (\tilde{g}(t)),h}\varphi_{t}^{*} Id=\Phi_{\tilde{g}(t),h}Id=\xi, \end{align*} for all $z\in SM$ and $t\in[0,T)$. Using $F^{2}(t)=\varphi^{*}_{t}(\tilde{F}^{2}(t))$ and arguing as in the proof of Proposition \ref{main11} leads to \begin{align}\label{59+1} \frac{\partial}{\partial t}(\log F(t))&= \frac{1}{2}\frac{\frac{\partial}{\partial t}(\varphi^{*}_{t}(\tilde{F}^{2}(t)))}{\varphi^{*}_{t}(\tilde{F}^{2}(t))}\nonumber\\ &=\frac{1}{2}\frac{\varphi^{*}_{t}(\frac{\partial}{\partial t}\tilde{F}^{2}(t))+\mathcal{L}_{(\varphi^{-1}_{t})_{*}(\xi)} \varphi^{*}_{t}(\tilde{F}^{2}(t))}{\varphi^{*}_{t}(\tilde{F}^{2}(t))}\nonumber\\ &=\frac{1}{2}\frac{\varphi^{*}_{t}(\frac{\partial}{\partial t}\tilde{F}^{2}(t)+\mathcal{L}_{\xi}\tilde{F}^{2}(t))}{\varphi^{*}_{t}(\tilde{F}^{2}(t))}. \end{align} By assumption, the structures $F(t)$ form a solution to the Finslerian Ricci flow (\ref{20}), that is, \begin{equation} \label{60} 0=\frac{\partial}{\partial t}(\log F(t))+\mathcal{R}ic_{F(t)}. \end{equation} Thus by means of (\ref{59+1}), (\ref{60}) and Lemma \ref{lemmohem} we have \begin{align*} 0&=\frac{\varphi^{*}_{t}(\frac{\partial}{\partial t}\tilde{F}^{2}(t)+\mathcal{L}_{\xi}\tilde{F}^{2}(t))}{\varphi^{*}_{t}(\tilde{F}^{2}(t))}+2\mathcal{R}ic_{F(t)}\nonumber\\ &=\frac{\varphi^{*}_{t}(\frac{\partial}{\partial t}\tilde{F}^{2}(t)+\mathcal{L}_{\xi}\tilde{F}^{2}(t))}{\varphi^{*}_{t}(\tilde{F}^{2}(t))} +2\mathcal{R}ic_{\varphi^{*}_{t}(\tilde{F}(t))}\nonumber\\ &=\frac{\varphi^{*}_{t}(\frac{\partial}{\partial t}\tilde{F}^{2}(t)+\mathcal{L}_{\xi}\tilde{F}^{2}(t))}{\varphi^{*}_{t}(\tilde{F}^{2}(t))} +2\varphi^{*}_{t}(\mathcal{R}ic_{\tilde{F}(t)})\nonumber\\ &=\frac{\varphi^{*}_{t}(\frac{\partial}{\partial t}\tilde{F}^{2}(t)+\mathcal{L}_{\xi}\tilde{F}^{2}(t))+2 \varphi^{*}_{t}(\tilde{F}^{2}(t))\varphi^{*}_{t}(\mathcal{R}ic_{\tilde{F}(t)})}{\varphi^{*}_{t} (\tilde{F}^{2}(t))}\nonumber\\ &=\frac{\varphi^{*}_{t}(\frac{\partial}{\partial t}\tilde{F}^{2}(t)+\mathcal{L}_{\xi}\tilde{F}^{2}(t)+2 \tilde{F}^{2}(t)\mathcal{R}ic_{\tilde{F}(t)})}{\varphi^{*}_{t} (\tilde{F}^{2}(t))}\nonumber.
\end{align*} Therefore, $ \varphi^{*}_{t}(\frac{\partial}{\partial t}\tilde{F}^{2}(t)+\mathcal{L}_{\xi}\tilde{F}^{2}(t)+2 \tilde{F}^{2}(t)\mathcal{R}ic_{\tilde{F}(t)})=0.\nonumber $ This implies \begin{equation} \frac{\partial}{\partial t}\tilde{F}^{2}(t)=-2 \tilde{F}^{2}(t)\mathcal{R}ic_{\tilde{F}(t)}-\mathcal{L}_{\xi}\tilde{F}^{2}(t).\nonumber \end{equation} Therefore, $\tilde{F}(t)$ is a solution to the Ricci-DeTurck flow, as we have claimed. \end{proof} {\bf Proof of Theorem \ref{main14}.} In order to check the existence statement, recall that by means of Theorem \ref{main8}, there exists a solution $\tilde{F}(t)$ to the Finslerian Ricci-DeTurck flow (\ref{22}) which is defined on some time interval $[0,T)$ and satisfies $\tilde{F}(0)=F_{0}$. Let $\varphi_{t}$ be the solution of the ODE \begin{eqnarray*} \frac{\partial}{\partial t}\varphi_{t}(z)=(\Phi_{\tilde{g}(t),h}Id)(\varphi_{t}(z),t)=\xi(\varphi_{t}(z),t),\nonumber \end{eqnarray*} with the initial condition $\varphi_{0}(z)=z$, for $z\in SM$ and $t\in[0,T)$. By Proposition \ref{main11}, the Finsler structures $F(t)$ defined by $F^{2}(t)=\varphi^{*}_{t}(\tilde{F}^{2}(t))$ form a solution to the Finslerian Ricci flow (\ref{20}) with $F(0)=F_{0}$. This completes the existence statement. For the uniqueness statement, assume that $F_{1}(t)$ and $F_{2}(t)$ are both solutions to the Finslerian Ricci flow defined on some time interval $[0,T)$ and satisfy $F_{1}(0)=F_{2}(0)$. We claim $F_{1}(t)=F_{2}(t)$ for all $t\in[0,T)$. In order to prove this fact, we argue by contradiction. Suppose that $F_{1}(t)\neq F_{2}(t)$ for some $t\in[0,T)$ and consider the real number $\tau=\inf\{t\in[0,T):F_{1}(t)\neq F_{2}(t)\}\in[0,T)$. By continuity, $F_{1}(\tau)=F_{2}(\tau)$. Let $\varphi^{1}_{t}$ be a solution of the flow \begin{equation} \label{67} \frac{\partial}{\partial t}\varphi^{1}_{t}=\Phi_{g_{1}(t),h}\varphi^{1}_{t},\nonumber \end{equation} with initial condition $\varphi^{1}_{\tau}=Id$ and $\varphi^{2}_{t}$ a solution of the flow \begin{equation} \frac{\partial}{\partial t}\varphi^{2}_{t}=\Phi_{g_{2}(t),h}\varphi^{2}_{t},\nonumber \end{equation} with initial condition $\varphi^{2}_{\tau}=Id$. It follows from the standard theory of parabolic differential equations that $\varphi^{1}_{t}$ and $\varphi^{2}_{t}$ are defined on some time interval $[\tau,\tau+\epsilon)$, where $\epsilon$ is a positive real number. Moreover, if we choose $\epsilon>0$ small enough, then $\varphi^{1}_{t}$ and $\varphi^{2}_{t}$ are diffeomorphisms for all $t\in[\tau,\tau+\epsilon)$. For each $t\in[\tau,\tau+\epsilon)$ we define two Finsler structures $\tilde{F}_{1}(t)$ and $\tilde{F}_{2}(t)$ by $(F_{1}(t))^{2}=(\varphi^{1}_{t})^{*}(\tilde{F}_{1}(t))^{2}$ and $(F_{2}(t))^{2}=(\varphi^{2}_{t})^{*}(\tilde{F}_{2}(t))^{2}$. It follows from Proposition \ref{main12} that $\tilde{F}_{1}(t)$ and $\tilde{F}_{2}(t)$ are solutions of the Finslerian Ricci-DeTurck flow. Since $\varphi^{1}_{\tau}=\varphi^{2}_{\tau}=Id$, we have $\tilde{F}_{1}(\tau)=F_{1}(\tau)=F_{2}(\tau)=\tilde{F}_{2}(\tau)$, and the uniqueness statement in Theorem \ref{main8} implies that $\tilde{F}_{1}(t)=\tilde{F}_{2}(t)$ for all $t\in[\tau,\tau+\epsilon)$.
For each $t\in[\tau,\tau+\epsilon)$, we define a vector field $\xi$ on $SM$ by \begin{equation} \xi=\Phi_{\tilde{g}_{1}(t),h}Id=\Phi_{\tilde{g}_{2}(t),h}Id.\nonumber \end{equation} By Proposition \ref{main12}, we have \begin{eqnarray*} \frac{\partial}{\partial t}\varphi^{1}_{t}(z)=\xi(\varphi^{1}_{t}(z),t),\nonumber \end{eqnarray*} and \begin{eqnarray*} \frac{\partial}{\partial t}\varphi^{2}_{t}(z)=\xi(\varphi^{2}_{t}(z),t),\nonumber \end{eqnarray*} for $z\in SM$ and $t\in[\tau,\tau+\epsilon)$. Since $\varphi^{1}_{\tau}=\varphi^{2}_{\tau}=Id$, it follows that $\varphi^{1}_{t}=\varphi^{2}_{t}$ for all $t\in[\tau,\tau+\epsilon)$. Putting these facts together, we conclude that \begin{equation*} (F_{1}(t))^{2}=(\varphi^{1}_{t})^{*}(\tilde{F}_{1}(t))^{2}=(\varphi^{2}_{t})^{*}(\tilde{F}_{2}(t))^{2}= (F_{2}(t))^{2}, \end{equation*} for all $t\in[\tau,\tau+\epsilon)$. Therefore, $F_{1}(t)=F_{2}(t)$ for all $t\in[\tau,\tau+\epsilon)$. This contradicts the definition of $\tau$. Thus uniqueness holds. This completes the proof of Theorem \ref{main14}.\hspace{\stretch{1}}$\Box$ \begin{ex} Let $(M,F_{0})$ be a compact Finsler surface. We are going to obtain a solution to the Ricci flow \eqref{20}. It is well known that, in dimension two, a Finsler metric is of isotropic Ricci scalar (or Einstein) if and only if it is of isotropic flag curvature. Assume that $F_{0}$ is of isotropic flag curvature; then $F_{0}$ is an Einstein metric and we have $\mathcal{R}ic_{F_{0}}=K$, where $K=K(x)$ is a scalar function on $M$. Consider a family of scalars $\tau(t)$ defined by \begin{equation*} \tau(t):=1-2Kt>0. \end{equation*} Define a smooth one-parameter family of Finsler structures on $M$ by \begin{equation*} F^{2}(t):=\tau(t)F_{0}^2. \end{equation*} Thus we have \begin{equation*} \log(F(t))=\frac{1}{2}\log(\tau(t)F_{0}^2). \end{equation*} Differentiating with respect to $t$ yields \begin{equation}\label{exp1} \frac{\partial}{\partial t}\log(F(t))=-\frac{K}{\tau(t)}=-\frac{\mathcal{R}ic_{F_{0}}}{\tau(t)}. \end{equation} On the other hand, by straightforward computations we have $\frac{1}{\tau(t)}\mathcal{R}ic_{F_{0}}=\mathcal{R}ic_{\tau(t)^{\frac{1}{2}}F_{0}}$; for more details see \cite[p.\ 926]{BY}. Substituting the last relation into (\ref{exp1}) leads to \begin{equation*} \frac{\partial}{\partial t}\log(F(t))=-\mathcal{R}ic_{\tau(t)^{\frac{1}{2}}F_{0}}=-\mathcal{R}ic_{{F(t)}}. \end{equation*} Hence, $F(t)$ is a solution to the Ricci flow equation \eqref{20}. \end{ex} \begin{ex} Let $F(t)$ be a family of Finsler structures on the sphere $\mathbb{S}^2$ defined by $F^2(t)=a_{ij}(t)y^iy^j$, where $a_{ij}(t)$ is a well-known Riemannian metric on $I\!\! R^2$, called the Rosenau metric, $$ a_{ij}(t)=\frac{8\sinh (-t)}{1+2 \cosh (-t) |x|^2+|x|^4}\delta_{ij} , \quad t\in (-\infty , 0) , x\in I\!\! R^2. $$ It is well known that $a_{ij}$ extends to a metric on $\mathbb{S}^2$. The related Finsler metric tensor of $F(t)$ is \begin{equation*} g_{ij}(t):=(\frac{1}{2}F^{2})_{y^iy^j}=a_{ij}(t). \end{equation*} By straightforward computations, the scalar curvature $R(a(t))$ of the Riemannian metric $a_{ij}(t)$ is \begin{equation*} R(a(t))=\frac{\cosh(-t)}{\sinh(-t)}-\frac{2\sinh(-t)|x|^2}{1+2\cosh(-t)|x|^2+|x|^4}. \end{equation*} The Ricci tensors of $g_{ij}$ and $a_{ij}$ coincide. Hence \begin{equation*} Ric_{ij}(g(t))=Ric_{ij}(a(t))=\frac{1}{2}R(a(t))a_{ij}(t). \end{equation*} From $F(t)=(a_{ij}(t)y^iy^j)^{\frac{1}{2}}$, we have \begin{equation*} \log(F(t))=\frac{1}{2}\log(a_{ij}(t)y^iy^j).
\end{equation*} Differentiating with respect to $t$ leads to \begin{equation}\label{exp} \frac{\partial}{\partial t}\log F(t)=\frac{1}{2}\frac{\partial}{\partial t}(a_{ij}(t))l^il^j, \end{equation} where $l^i:=y^i/F(t)$ and \begin{equation*} \frac{\partial}{\partial t}(a_{ij}(t))={\Big (}\frac{-8\cosh (-t)}{1+2 \cosh (-t) |x|^2+|x|^4}+\frac{16\sinh ^2(-t)|x|^2}{(1+2 \cosh (-t) |x|^2+|x|^4)^2}{\Big )}\delta_{ij}. \end{equation*} On the other hand, we have \begin{align*} \mathcal{R}ic(g(t))&=l^il^jRic_{ij}(g(t))=\frac{1}{2}l^il^jR(a(t))a_{ij}(t)\\ &=\frac{1}{2}l^il^j{\Big (}\frac{8\cosh (-t)}{1+2 \cosh (-t) |x|^2+|x|^4}-\frac{16\sinh ^2(-t)|x|^2}{(1+2 \cosh (-t) |x|^2+|x|^4)^2}{\Big )}\delta_{ij}. \end{align*} Comparing the last equation with (\ref{exp}), we have \begin{equation*} \frac{\partial}{\partial t}\log F(t)=-\mathcal{R}ic(g(t)). \end{equation*} Consequently, $F(t)$ forms a solution to the Finsler Ricci flow \eqref{20} on $\mathbb{S}^2$. \end{ex} {\bf Acknowledgment}\\ The authors would like to thank Prof. David Bao for his valuable comments. This work is partially supported by the Iran National Science Foundation (INSF), under grant 95002579.
\section{Brief review of ramification theory} Let $K$ be a complete discrete valuation field. We assume that $K$ is of characteristic $0$ and that the residue field $F$ is of characteristic $p>0$. We assume that $F$ is finitely generated over a perfect subfield $k$. We define a finite-dimensional $F$-vector space $\Omega_F(\log)$ by \begin{equation} \Omega_F(\log) = \left. \bigl(\Omega^1_{F/k} \oplus (F\otimes K^\times)\bigr) \right/ (d \bar a- \bar a\otimes a; a\in {\cal O}_K^\times), \label{eqOmF} \end{equation} by an abuse of notation, because $\Omega_F(\log)$ depends not only on $F$ but also on $K$. It fits into an exact sequence $0\to \Omega^1_{F/k} \to \Omega_F(\log) \to F\to 0$. For $a\in K^\times$, the image of $1\otimes a$ is denoted by $d\log a$. Let $\bar K$ be an algebraic closure of $K$. The residue field $\bar F$ of $\bar K$ is an algebraic closure of $F$. Let $G_K$ and $G_F$ be the absolute Galois groups ${\rm Gal}(\bar K/K)$ and ${\rm Gal}(\bar F/F)$. We have a canonical surjection $G_K\to G_F$. Let $(G_K^r)_{r\in {\mathbb Q},r>0}$ denote the decreasing filtration by logarithmic ramification groups. For $r>0$, we put $G_K^{r+}= \overline{\bigcup_{s>r}G_K^s}$. For a finite \'etale $K$-algebra $L$, we say that the log ramification of $L$ is bounded by $r+$ if the natural action of $G_K$ on the finite set ${\rm Hom}_K(L,\bar K)$ factors through the quotient $G_K^{\le r+}=G_K/G_K^{r+}$. Let ${\cal C}_K^{\le r+}$ denote the category of finite \'etale $K$-algebras of log ramification bounded by $r+$. We identify the category ${\cal C}_K^{\le r+}$ with the category $(G_K^{\le r+}\text{-Sets})$ of finite sets with continuous action of $G_K^{\le r+}$ by the natural anti-equivalence defined by the fiber functor attaching ${\rm Hom}_K(L,\bar K)$ to $L$. In the rest of this section, we assume that $r>0$ is an integer. Let $\Theta^{(r)}$ denote the finite-dimensional $F$-vector space ${\rm Hom}_F(\Omega_F(\log), {\mathfrak m}_K^r/ {\mathfrak m}_K^{r+1})$, regarded as a smooth algebraic group over $F$. We consider a natural action of $G_F$ on $\Theta^{(r)}_{\bar F}= \Theta^{(r)}\times_F\bar F$. Let $(G_K^{\le r+}\text{-}{\rm FEt}/\Theta^{(r)}_{\bar F})$ denote the category of finite \'etale schemes over $\Theta^{(r)}_{\bar F}$ with a continuous action of $G_K^{\le r+}= G_K/G_K^{r+}$ compatible with that of $G_F$ on $\Theta^{(r)}_{\bar F}$. We briefly recall the construction of the functor \begin{equation} X_K^{(r)}\colon {\cal C}_K^{\le r+} \longrightarrow (G_K^{\le r+}\text{-}{\rm FEt}/\Theta^{(r)}_{\bar F}) \label{eqXr} \end{equation} in \cite{AS2}, with a slight modification replacing complete local rings by schemes of finite type. \begin{lm}\label{lmP0} Let $L=\prod_jL_j$ be a finite \'etale $K$-algebra and put $S={\rm Spec}\ {\cal O}_K$ and $T={\rm Spec}\ {\cal O}_L$. Then, there exists a commutative diagram \begin{equation} \begin{CD} T@>{i'}>> Q_0@<<<E_0\\ @VVV @VVV@VVV\\ S@>i>> P_0@<<<D_0 \end{CD} \label{eqP0} \end{equation} of schemes over the ring $W(k)$ of Witt vectors satisfying the following conditions: \begin{itemize} \item[{\rm (\ref{eqP0}.1)}] The schemes $P_0,Q_0,D_0$ and $E_0$ are smooth over $W(k)$, and $D_0\subset P_0$ and $E_0\subset Q_0$ are divisors. The vertical arrows are finite and flat. The left square is cartesian. \item[{\rm (\ref{eqP0}.2)}] Let $s={\rm Spec}\ F\in S$ denote the closed point and $t_j={\rm Spec}\ F_j \in T$ denote the closed points.
Then, the maps $i$ and $i'$ induce isomorphisms $\kappa(i(s))\to F$ and $\kappa(i'(t_j))\to F_j$ of residue fields. The closed subschemes $S\times_{P_0}D_0$ and $T\times_{Q_0}E_0$ are equal to ${\rm Spec}\ F$ and to the reduced part $(T\times_S {\rm Spec}\ F)_{\rm red} = \coprod_j {\rm Spec}\ F_j$ respectively. The canonical maps $\Omega^1_{P_0/W(k)}(\log D_0) \otimes F \to \Omega_F(\log)$ and $\Omega^1_{Q_0/W(k)}(\log E_0) \otimes F_j \to \Omega_{F_j}(\log)$ are isomorphisms for every $j$. On a neighborhood of $i'(t_j)$, the pull-back $D_0\times_{P_0}Q_0$ is equal to the divisor $e_jE_0$, where $e_j$ is the ramification index of $L_j$ over $K$, for every $j$. \end{itemize} \end{lm} {\it Proof.} We take elements $a_1,\ldots,a_n \in {\cal O}_K$ such that the images $\bar a_1,\ldots, \bar a_n$ in $F$ form a transcendence basis of $F$ over $k$ and that $F$ is a finite separable extension of $k(\bar a_1,\ldots,\bar a_n)$. Let $A_0$ be the henselization of the subring $W(k)[a_1,\ldots,a_n] \subset {\cal O}_K$ at the prime ideal $(p)$ and $K_0$ be the fraction field of the completion of the henselian discrete valuation ring $A_0$. Then $K$ is a finite separable extension of $K_0$. Let $K_1\subset K$ be the maximal unramified subextension over $K_0$. Then, there exist unique finite flat normal $A_0$-subalgebras $A_1\subset A$ of ${\cal O}_K$ such that the natural maps $A\otimes_{A_0} {\cal O}_{K_0} \to {\cal O}_K$ and $A_1\otimes_{A_0} {\cal O}_{K_0} \to {\cal O}_{K_1}$ are isomorphisms. We take a prime element $\pi$ of $A$ and let $f\in A_1[t]$ be its minimal polynomial. Let $A_1\{t\}$ be the henselization of $A_1[t]$ at the maximal ideal $(p,t)$. Then, we obtain an isomorphism $A_1\{t\}/(f)\to A$. It induces an isomorphism $A_1\{t\}/(f,t)\to F$. We may assume $L$ is a finite separable extension of $K$. Similarly, there exists a unique finite flat normal $A_0$-subalgebra $B$ of ${\cal O}_L$ such that the natural map $B\otimes_{A_0} {\cal O}_{K_0} \to {\cal O}_L$ is an isomorphism. Let $F'$ be the residue field of $L$. We take elements $b_1,\ldots,b_n \in B$ such that the images $\bar b_1,\ldots, \bar b_n$ in $F'$ form a transcendence basis of $F'$ over $k$ and that $F'$ is a finite separable extension of $k(\bar b_1,\ldots,\bar b_n)$. Let $B_0$ be the henselization of the subring $W(k)[b_1,\ldots,b_n] \subset {\cal O}_L$ at the prime ideal $(p)$. Then, we obtain $L_0\subset L_1\subset L$ and $B_0\subset B_1\subset B$ as above. We take a prime element $\pi'$ of $B$ and let $g\in B_1[t']$ be its minimal polynomial. Then, we obtain an isomorphism $B_1\{t'\}/(g)\to B$. It induces an isomorphism $B_1\{t'\}/(g,t')\to F'$. Since $A_1$ is essentially smooth over $W(k)$, there exists a map $A_1\to B_1\{t'\}$ over $W(k)$ lifting the composition $A_1\to A\to B=B_1\{t'\}/(g)$. We put $\pi=u\pi^{\prime e}$ for some $u\in B^\times$ and take a lifting $\tilde u\in B_1\{t'\}^\times$. We extend the map $A_1\to B_1\{t'\}$ to a map $A_1\{t\}\to B_1\{t'\}$ by sending $t$ to $\tilde u\cdot t^{\prime e}$. Thus, we obtain a commutative diagram \begin{equation} \begin{CD} B@<<< B_1\{t'\}@>{t'\mapsto 0}>> B_1\\ @AAA @AAA @AAA\\ A@<<< A_1\{t\}@>{t\mapsto 0}>> A_1 \end{CD} \label{eqAB} \end{equation} of $W(k)$-algebras. We show that the left square gives an isomorphism $A\otimes_{A_1\{t\}} B_1\{t'\}\to B$. Since the maximal ideal of $A_1$ is generated by the image of $f$, the maximal ideal of $B_1$ is also generated by the image of $f$.
Hence, the image of $f$ in $B_1\{t'\}$ is not in the square of the maximal ideal and we have $(f)=(g)$ as ideals of $B_1\{t'\}$. Therefore the map $A\otimes_{A_1\{t\}} B_1\{t'\} =B_1\{t'\}/(f) \to B=B_1\{t'\}/(g)$ is an isomorphism. Consequently, the map $A_1\{t\} \to B_1\{t'\}$ is finite flat. Since the question is \'etale local on neighborhoods of the images of $S$ and $T$, we deduce a diagram (\ref{eqP0}) satisfying the conditions (\ref{eqP0}.1) and (\ref{eqP0}.2) from the diagram (\ref{eqAB}). \qed We define a modification $P^{(r)}$ of the scheme $P_0\times_{W(k)}S$ as follows. We take a blow-up of $P_0\times_{W(k)}S$ at $D_0\times_{W(k)}{\rm Spec}\ F$ and define a scheme $P$ over $S$ to be the complement of the union of the proper transforms of $P_0\times_{W(k)}{\rm Spec}\ F$ and $D_0\times_{W(k)}S$. The map $S\to P_0$ induces a section $S\to P$. We regard $S_r={\rm Spec}\ {\cal O}_K/ {\mathfrak m}^r_K$ as a closed subscheme of $P$ by the composition $S_r\to S\to P$. We consider the blow-up of $P$ at the closed subscheme $S_r$ and define $P^{(r)}$ to be the complement of the proper transform of the closed fiber $P\times_S {\rm Spec}\ F$. The schemes $P$ and $P^{(r)}$ are smooth over $S$. More concretely, the schemes $P$ and $P^{(r)}$ are described as follows. Assume $P_0={\rm Spec}\ A_0$ is affine and the divisor $D_0$ is defined by $t\in A_0$. The image $\pi \in {\cal O}_K$ of $t$ by the map $A_0\to {\cal O}_K$ corresponding to $S\to P_0$ is a uniformizer of $K$. Then, we have $P={\rm Spec}\ A$ for $A=A_0\otimes_{W(k)} {\cal O}_K[U^{\pm 1}] /(Ut-\pi)$. Let $I$ be the kernel of the surjection $A\to {\cal O}_K$ induced by $A_0\to {\cal O}_K$ and $U\mapsto 1$. Then, we have $P^{(r)}={\rm Spec}\ A^{(r)}$ for $A^{(r)}= A[I/\pi^r] \subset A[1/\pi]$. The closed fiber $P^{(r)}_F= P^{(r)}\times_S{\rm Spec}\ F$ is canonically identified with the affine space $\Theta^{(r)}$ as follows. The canonical map $\Omega^1_{P_0/S} \otimes_{{\cal O}_{P_0}} {\cal O}_P \to \Omega^1_{P/S}$ is uniquely extended to an isomorphism $\Omega^1_{P_0/S}(\log D_0) \otimes_{{\cal O}_{P_0}} {\cal O}_P \to \Omega^1_{P/S}$. Let ${\cal I} \subset {\cal O}_P$ denote the ideal sheaf defining the closed subscheme $S\subset P$. Then, the closed fiber $P^{(r)}_F$ is canonically identified with the $F$-vector space ${\rm Hom}_F ({\cal I}/{\cal I}^2 \otimes F, {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K)$ regarded as an affine space. By the isomorphisms $\Omega^1_{P_0/S}(\log D_0) \otimes_{{\cal O}_{P_0}} {\cal O}_P \to \Omega^1_{P/S},\ {\cal I}/{\cal I}^2 \to \Omega^1_{P/S} \otimes_{{\cal O}_P} {\cal O}_S$ and $\Omega^1_{P_0/S}(\log D_0) \otimes F \to \Omega_F(\log)$, we obtain a canonical isomorphism \begin{equation} P^{(r)}_F \to \Theta^{(r)}. \label{eqth} \end{equation} Let $Q^{(r)}_{\bar S}$ be the normalization of the base change $Q_0\times_{P_0} P^{(r)}_{\bar S}$ and $Q^{(r)}_{\bar F}$ be the closed fiber. Then, by the description of log ramification groups \cite[Section 5.1]{AS2}, the log ramification of $L$ is bounded by $r+$ if and only if the finite map $Q^{(r)}_{\bar F} \to \Theta^{(r)}_{\bar F}$ is \'etale. Further, it is shown in \cite[Lemma 5.10]{AS2} that, if the log ramification of $L$ is bounded by $r+$, the finite \'etale scheme $Q^{(r)}_{\bar F} \to \Theta^{(r)}_{\bar F}$ with the natural action of $G_K$ is independent of the choice of a diagram (\ref{eqP0}) and is well-defined up to unique isomorphism. 
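For orientation, we record an elementary dimension count. Since $F$ is finitely generated over the perfect field $k$, it is separably generated over $k$, so $\dim_F\Omega^1_{F/k}$ equals the transcendence degree $n$ of $F$ over $k$, and the exact sequence $0\to \Omega^1_{F/k} \to \Omega_F(\log) \to F\to 0$ gives $\dim_F\Omega_F(\log)=n+1$. Hence \begin{equation*} \Theta^{(r)}={\rm Hom}_F(\Omega_F(\log), {\mathfrak m}_K^r/{\mathfrak m}_K^{r+1})\simeq {\mathbb A}^{n+1}_F \end{equation*} is an affine space of dimension $n+1$. For instance, if $F=k(u)$, then after choosing a basis of the one-dimensional $F$-vector space ${\mathfrak m}_K^r/{\mathfrak m}_K^{r+1}$, the space $\Theta^{(r)}$ is an affine plane with coordinates dual to $du$ and $d\log \pi$, for a uniformizer $\pi$ of $K$.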
The functor $X_K^{(r)}\colon {\cal C}_K^{\le r+} \to (G_K^{\le r+}\text{-}{\rm FEt}/\Theta^{(r)}_{\bar F})$ (\ref{eqXr}) is defined by attaching $Q^{(r)}_{\bar F}$ to $L$. The composition of $X_K^{(r)}\colon {\cal C}_K^{\le r+} \to (G_K^{\le r+}\text{-}{\rm FEt}/\Theta^{(r)}_{\bar F})$ with the fiber functor $F_{\bar 0}\colon (G_K^{\le r+}\text{-}{\rm FEt}/\Theta^{(r)}_{\bar F}) \to (G_K^{\le r+}\text{-Sets})$ at the origin $0\in \Theta^{(r)}_{\bar F}$ recovers the natural anti-equivalence of categories ${\cal C}_K^{\le r+} \to (G_K^{\le r+}\text{-Sets})$. Further, it is shown in \cite[Theorem 5.12]{AS2} that, for a finite \'etale $K$-algebra $L$ of log ramification bounded by $r+$, the finite \'etale covering $X_K^{(r)}(L)\to \Theta^{(r)}_{\bar F}$ is trivialized by a universal abelian covering $\Theta^{(r) {\rm ab}}_{\bar F}$. Thus, forgetting the Galois action on $X_K^{(r)}(L)$ and taking the fiber functor at the origin $0\in \Theta^{(r)}_{\bar F}$, we obtain a group homomorphism $\pi_1(\Theta^{(r)}_{\bar F})^{\rm ab} \to G_K^{\le r+}$ defined by the functor $X_K^{(r)}$. It induces a canonical surjection \begin{equation} \pi_1(\Theta_{\bar F}^{(r)})^{\rm ab} \to {\rm Gr}^rG_K \label{eqcan1} \end{equation} \cite[(5.12.1)]{AS2}. It is compatible with the $G_K$-action defined by the actions of $G_F$ on $\Theta_{\bar F}^{(r)}$ and the conjugation action on ${\rm Gr}^rG_K$. Since the inertia group $I={\rm Ker}(G_K\to G_F)$ acts trivially on $\Theta^{(r)}_{\bar F}$, it follows that ${\rm Gr}^rG_K$ is a central subgroup of $I/G_K^{r+}$. \section{Infinitesimal deformation} Let $K$ and $K'$ be complete discrete valuation fields. We say that a morphism $f\colon K\to K'$ of fields is an extension of complete discrete valuation fields if it induces a flat local morphism ${\cal O}_K\to {\cal O}_{K'}$, also denoted by $f$ by abuse of notation. The integer $e>0$ characterized by $f({\mathfrak m}_K){\cal O}_{K'} = {\mathfrak m}^e_{K'}$ is called the ramification index of $f$. \begin{df}\label{dfinf} Let $f\colon K\to K'$ be an extension of complete discrete valuation fields of ramification index $e$. For an integer $r>0$, we call the pair $(f,\varepsilon)$ with an $f$-linear morphism $\varepsilon\colon \Omega_F(\log)\to {\mathfrak m}_{K'}^{er}/ {\mathfrak m}_{K'}^{er+1}$ an infinitesimal deformation of $f$. \end{df} We define the composition of infinitesimal deformations. Let $f\colon K\to K', g\colon K'\to K''$ be extensions of complete discrete valuation fields of ramification indices $e,e'$ and let $\varepsilon\colon \Omega_F(\log)\to {\mathfrak m}_{K'}^{er}/ {\mathfrak m}_{K'}^{er+1}$ and $\varepsilon'\colon \Omega_{F'}(\log)\to {\mathfrak m}_{K''}^{ee'r}/ {\mathfrak m}_{K''}^{ee'r+1}$ be $f$-linear and $g$-linear morphisms. Let $g_*\colon {\mathfrak m}_{K'}^{er}/ {\mathfrak m}_{K'}^{er+1}\to {\mathfrak m}_{K''}^{ee'r}/ {\mathfrak m}_{K''}^{ee'r+1}$ be the map induced by $g$ and $f_*\colon \Omega_F(\log)\to \Omega_{F'}(\log)$ be the map induced by $f$. Then, we define the composition $(g,\varepsilon')\circ (f,\varepsilon)$ to be $(g\circ f, g_*\circ \varepsilon+ \varepsilon'\circ f_*)$. Let $f\colon K\to K'$ be an extension of complete discrete valuation fields of ramification index $e$. Let $r>0$ be an integer and put $r'=er$. Let $\varepsilon \colon \Omega_F(\log) \to {\mathfrak m}_{K'}^{r'}/ {\mathfrak m}_{K'}^{r'+1}$ be an $f$-linear morphism.
For an infinitesimal deformation $(f,\varepsilon)$ of $f$, we define a functor \begin{equation} f_{\varepsilon*}\colon {\cal C}_K^{\le r+}\to {\cal C}_{K'}^{\le r'+}. \label{eqf*} \end{equation} We take separable closures $K\subset \bar K, K'\subset \bar K'$ and an embedding $\bar f\colon \bar K \to \bar K'$. The residue fields $\bar F, \bar F'$ of $\bar K, \bar K'$ are algebraic closures of $F$ and $F'$ respectively. By \cite[Proposition 3.15 (3)]{AS1}, the map $f^*\colon G_{K'}\to G_K$ induces $G_{K'}^{\le r'+} \to G_K^{\le r+}$. The embedding $\bar f\colon \bar K \to \bar K'$ induces $\bar f\colon \bar F \to \bar F'$ and defines a commutative diagram \begin{equation} \begin{CD} G_{K'}^{\le r'+} @>{f^*}>> G_K^{\le r+}\\ @VVV @VVV\\ G_{F'}@>>> G_F \end{CD}\label{eqGF} \end{equation} We recalled the definition of the functor $X_K^{(r)}\colon {\cal C}_K^{\le r+} \to (G_K^{\le r+}\text{-}{\rm FEt}/\Theta^{(r)}_{\bar F})$ (\ref{eqXr}) in Section 1. The map $\varepsilon$ defines a geometric point $\bar \varepsilon\colon {\rm Spec}\ \bar F'\to \Theta^{(r)}_{\bar F}$, which is compatible with the morphism $G_{F'}\to G_F$. By the commutative diagram (\ref{eqGF}), the fiber functor $F_{\bar \varepsilon}$ defines a functor $(G_K^{\le r+}\text{-}{\rm FEt}/\Theta^{(r)}_{\bar F}) \to (G_{K'}^{\le r'+}\text{-Sets})\simeq {\cal C}_{K'}^{\le r'+}$. We define the functor $f_{\varepsilon*}$ as the composition \begin{equation} F_{\bar \varepsilon} \circ X^{(r)}_K\colon {\cal C}_K^{\le r+}\to (G_K^{\le r+}\text{-}{\rm FEt}/\Theta^{(r)}_{\bar F}) \to {\cal C}_{K'}^{\le r'+}. \label{eqfe} \end{equation} To describe a morphism $f_{\varepsilon}^*\colon G_{K'}^{\le r'+}\to G_K^{\le r+}$ corresponding to the functor $f_{\varepsilon*}$, we introduce some terminology. Let $G$ and $G'$ be groups and $C\subset G$ be a central subgroup. For morphisms of groups $\varphi\colon G'\to G$ and $\psi\colon G'\to C$, we call the morphism $\varphi_\psi\colon G'\to G$ defined by $\varphi_\psi(g) =\varphi(g)\psi(g)$ for $g\in G'$, the deformation of $\varphi\colon G'\to G$ by $\psi\colon G'\to C$. We consider the composition \begin{equation} \begin{CD} G_{K'}^{\le r'+} @>>> G_{F'}^{\rm ab} @>{\varepsilon_*}>> \pi_1(\Theta^{(r)}_{\bar F})^{\rm ab} @>{\rm(\ref{eqcan1})}>> {\rm Gr}^rG_K \subset G_K^{\le r+}= G_K/G_K^{r+}. \end{CD} \label{eqep*} \end{equation} As is remarked after (\ref{eqcan1}), the subgroup ${\rm Gr}^rG_K \subset G_K^{\le r+}$ is a central subgroup if the residue field $F$ is separably closed. \begin{lm}\label{lmhom} Assume that the residue field $F$ is separably closed. Then, the functor $f_{\varepsilon*}\colon {\cal C}_K^{\le r+}\to {\cal C}_{K'}^{\le r'+}$ is compatible with the deformation $f^*_{\varepsilon_*}\colon G_{K'}^{\le r'+}\to G_K^{\le r+}$ of $f^*\colon G_{K'}^{\le r'+}\to G_K^{\le r+}$ by $\varepsilon_*\colon G_{K'}^{\le r'+}\to {\rm Gr}^rG_K \subset G_K^{\le r+}$ {\rm (\ref{eqep*})}. \end{lm} {\it Proof.} We take a lifting ${\rm Spec}\ \bar F' \to \Theta^{(r) {\rm ab}}_{\bar F}$ of the geometric point $\bar\varepsilon\colon {\rm Spec}\ \bar F' \to \Theta^{(r)}_{\bar F}$ to a universal abelian covering and consider the bijection \begin{equation} X_K^{(r)}(L) \times_{\Theta^{(r)}_{\bar F}} {\rm Spec}\ \bar F' \to \pi_0(X_K^{(r)}(L) \times_{\Theta^{(r)}_{\bar F}} \Theta^{(r) {\rm ab}}_{\bar F}) \label{eqbij} \end{equation} of finite sets, for a finite \'etale $K$-algebra $L$ of log ramification bounded by $r+$.
By the definition of the functor $f_{\varepsilon *}$, the finite $G_{K'}^{\le r'+}$-set $f_{\varepsilon *}(L)$ is defined as $X_K^{(r)}(L) \times_{\Theta^{(r)}_{\bar F}} {\rm Spec}\ \bar F'$. The bijection (\ref{eqbij}) is compatible with the map $(f^*,\varepsilon_*) \colon G_{K'}^{\le r'+} \to G_K^{\le r+} \times \pi_1(\Theta^{(r)}_{\bar F})^{\rm ab}$. By the definition of the canonical map (\ref{eqcan1}), the action of $\pi_1(\Theta^{(r)}_{\bar F})^{\rm ab}$ on the finite set $\pi_0(X_K^{(r)}(L) \times_{\Theta^{(r)}_{\bar F}} \Theta^{(r) {\rm ab}}_{\bar F})$ is the same as that induced from the action of $G_K^{\le r+}$ by (\ref{eqcan1}). Thus the assertion follows. \qed If we choose a morphism of fiber functors, the functor $f_{\varepsilon*}\colon {\cal C}_K^{\le r+}\to {\cal C}_{K'}^{\le r'+}$ induces a morphism of groups $f_{\varepsilon}^*\colon G_{K'}^{\le r'+}\to G_K^{\le r+}$. Without choosing a morphism of fiber functors, it is still well-defined up to conjugation. Hence, for a representation $V$ of $G_K^{\le r+}$ the restriction ${\rm Res}_{f,\varepsilon}V$ is well-defined up to isomorphism as a representation of $G_{K'}^{\le r'+}$. \begin{cor}\label{corchi} Assume that the residue field $F$ is separably closed. Let $V$ be a representation of $G_K^{\le r+}$ such that the restriction to ${\rm Gr}^rG_K$ is a character $\chi$. Let $\varepsilon^*(\chi)$ denote the character of $G_{K'}^{\le r'+}$ defined as the pull-back by $\varepsilon_* \colon G_{K'}^{\le r'+} \to {\rm Gr}^rG_K$ {\rm (\ref{eqep*})}. Then, we have an isomorphism \begin{equation} {\rm Res}_{f,\varepsilon}V \to {\rm Res}_{f}V \otimes \varepsilon^*(\chi) \end{equation} of representations of $G_{K'}^{\le r'+}$. \end{cor} \section{Transitivity} Let $f\colon K\to K'$ be an extension of complete discrete valuation fields. We say that $K'$ is a smooth extension of $K$ if the ramification index is $1$ and if the residue field $F'$ of $K'$ is a finitely generated separable extension of the residue field $F$ of $K$. Let $f\colon K\to K'$ be a smooth extension of complete discrete valuation fields and let $\varepsilon \colon \Omega_F(\log) \to {\mathfrak m}^r_{K'}/ {\mathfrak m}^{r+1}_{K'}$ be an $f$-linear map. We consider the dual $${\rm Hom}_{F'}( \Omega_{F'}(\log), {\mathfrak m}^r_{K'}/ {\mathfrak m}^{r+1}_{K'}) \to {\rm Hom}_F( \Omega_F(\log), {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K) \otimes_F{F'}$$ of the map $\Omega_F(\log) \otimes_FF' \to \Omega_{F'}(\log)$ induced by $f$. We also consider the translation $+\varepsilon$ as a morphism $\Theta^{\prime(r)}_{F'}= {\rm Hom}_{F'}( \Omega_{F'}(\log), {\mathfrak m}^r_{K'}/ {\mathfrak m}^{r+1}_{K'}) \to \Theta^{\prime(r)}_{F'}$. Their composition defines a morphism of schemes $f_\varepsilon^*\colon \Theta^{\prime(r)}_{\bar F'} \to \Theta^{(r)}_{\bar F}$ compatible with $G_{F'}\to G_F$ and hence the pull-back functor $f_{\varepsilon*}\colon (G_K^{\le r+}\text{-}{\rm FEt}/\Theta^{(r)}_{\bar F}) \to (G_{K'}^{\le r+}\text{-} {\rm FEt}/\Theta^{\prime(r)}_ {\bar F'}).$ \begin{pr}\label{prsm} Assume $f\colon K\to K'$ is smooth and consider the diagram $$\begin{CD} {\cal C}_K^{\le r+} @>{f_{\varepsilon *}}>> {\cal C}_{K'}^{\le r+}\\ @V{X^{(r)}_K}VV @VV{X^{(r)}_{K'}}V\\ (G_K^{\le r+}\text{-}{\rm FEt}/\Theta^{(r)}_{\bar F}) @>{f_{\varepsilon *}}>> (G_{K'}^{\le r+}\text{-} {\rm FEt}/\Theta^{\prime(r)}_ {\bar F'}) \end{CD}$$ of functors. Then, there exists an isomorphism \begin{equation} f_{\varepsilon *}\circ X^{(r)}_K \to X^{(r)}_{K'} \circ f_{\varepsilon *} \label{eqsm} \end{equation} of functors.
\end{pr} {\it Proof.} We regard $\varepsilon\colon \Omega_F(\log) \to {\mathfrak m}_{K'}^r/ {\mathfrak m}_{K'}^{r+1}$ as an $F'$-rational point $\varepsilon\colon {\rm Spec}\ F' \to \Theta^{(r)}_F$. Let $L$ be a finite \'etale algebra over $K$ of log ramification bounded by $r+$. We take a diagram (\ref{eqP0}) over $W(k)$ as in Lemma \ref{lmP0}. Take a morphism $S'\to P^{(r)}$ over $S$, where $S'={\rm Spec}\ {\cal O}_{K'}$, lifting $\varepsilon\colon {\rm Spec}\ F' \to \Theta^{(r)} \subset P^{(r)}$ and consider the composition $S'\to P^{(r)} \to P\to P_0$. We put $T'=S'\times_{P_0}Q_0$. Since $Q^{(r)}_{\bar F} \to P^{(r)}_{\bar F}= \Theta^{(r)}_{\bar F}$ is \'etale, the base change $Q_0\times_{P_0}P^{(r)} \to P^{(r)}$ is \'etale on the complement of the closed fiber in a neighborhood of the closed fiber. Hence, the $K'$-algebra $L'= \Gamma(T'\times_{S'}{\rm Spec}\ K', {\cal O})$ is \'etale. The fiber product $T'\times_{Q_0}E_0$ is isomorphic to $S'\times_{P_0}E_0 = (S'\times_{P_0}D_0) \times_{D_0}E_0 = F'\times_F (F\times_{D_0}E_0) = F'\times_F (T\times_{Q_0}E_0)_{\rm red}$ and is reduced since $F'$ is separable over $F$. Hence, $T'$ is the spectrum of the integer ring ${\cal O}_{L'}$. By the assumption that $K'$ is smooth over $K$, there exists a commutative diagram $$\begin{CD} S'@>>> P'_0@<<< D'_0\\ @VVV @VVV@VVV\\ S@>>> P_0@<<<D_0 \end{CD}$$ of schemes over $W(k)$ satisfying the following conditions: The vertical arrow $P'_0\to P_0$ is smooth, the right square is cartesian, $S'\times_{P'_0}D'_0= {\rm Spec}\ F'$ and $\Omega^1_{P'_0/W(k)}(\log D'_0) \otimes F'\to \Omega_{F'}(\log)$ is an isomorphism. We consider the diagram \begin{equation} \begin{CD} T'@>>> Q'_0 @<<< E'_0\\ @VVV @VVV@VVV\\ S'@>>> P'_0@<<<D'_0 \end{CD} \label{eqP'0} \end{equation} where the right square is the base change of that of (\ref{eqP0}) by $P'_0\to P_0$ and the left square is cartesian. Since $T'\times_{Q'_0}E'_0 = T'\times_{Q_0}E_0 = F'\times_F (T\times_{Q_0}E_0)_{\rm red} = (T'\times_{Q_0}E_0)_{\rm red}$, the diagram (\ref{eqP'0}) satisfies the conditions corresponding to (\ref{eqP0}.1) and (\ref{eqP0}.2). We define $Q^{(r)}_{\bar S} \to P^{(r)}_{\bar S}$ and $Q^{\prime (r)}_{\bar S'} \to P^{\prime(r)}_{\bar S'}$ and we identify $P^{(r)}_{\bar S}= \Theta^{(r)}_{\bar F}$ and $P^{\prime (r)}_{\bar S'}= \Theta^{\prime (r)}_{\bar F'}$ as in (\ref{eqth}). Then, the map $P^{\prime (r)}_{\bar S'} \to P^{(r)}_{\bar S} \times_{\bar S}{\bar S'}$ induced by $P'\to P$ is smooth and hence the diagram \begin{equation} \begin{CD} Q^{\prime (r)}_{\bar S'} @>>> Q^{(r)}_{\bar S} \\ @VVV@VVV\\ P^{\prime (r)}_{\bar S'} @>>> P^{(r)}_{\bar S} \end{CD} \label{eqPP'} \end{equation} is cartesian. Since the diagram \begin{equation} \begin{CD} \Theta^{\prime (r)}_{\bar F'} @>{f_{\varepsilon*}}>> \Theta^{(r)}_{\bar F} \\ @VVV@VVV\\ P^{\prime (r)}_{\bar S'} @>>> P^{(r)}_{\bar S} \end{CD} \label{eqPT'} \end{equation} is cartesian, we obtain a cartesian diagram \begin{equation} \begin{CD} X_{K'}^{(r)}(L') @>>> X_K^{(r)}(L) \\ @VVV@VVV\\ \Theta^{\prime (r)}_{\bar F'} @>{f_{\varepsilon*}}>> \Theta^{(r)}_{\bar F} \end{CD} \label{eqXT'} \end{equation} compatible with the group homomorphism $G_{K'}^{\le r+}\to G_K^{\le r+}$. Thus by the definition of the functor $f_{\varepsilon *}$, we obtain an isomorphism $L'\to f_{\varepsilon *}(L)$. The diagram (\ref{eqXT'}) defines an isomorphism $X_{K'}^{(r)}(L') \to f_{\varepsilon*} X_K^{(r)}(L)$ in the category $(G_{K'}^{\le r+}\text{-}{\rm FEt}/ \Theta^{\prime (r)}_{\bar F'})$.
These isomorphisms are functorial in $L$ and define an isomorphism $X_{K'}^{(r)}\circ f_{\varepsilon*} \to f_{\varepsilon*} \circ X_K^{(r)}$ of functors. \qed We deduce the following transitivity. \begin{cor}\label{cortra} Let $f\colon K\to K'$ and $g\colon K' \to K''$ be extensions of complete discrete valuation fields. We assume $f$ is {\rm smooth} and let $e'$ be the ramification index of $K''$ over $K'$. Let $\varepsilon\colon \Omega_F(\log)\to {\mathfrak m}_{K'}^r/ {\mathfrak m}_{K'}^{r+1}$ and $\varepsilon'\colon \Omega_{F'}(\log)\to {\mathfrak m}_{K''}^{e'r}/ {\mathfrak m}_{K''}^{e'r+1}$ be infinitesimal deformations, and put $(g,\varepsilon')\circ (f,\varepsilon) =(g\circ f,\varepsilon'')$. Then, there exists an isomorphism of functors: \begin{equation} (g\circ f)_{\varepsilon''*} \to g_{\varepsilon'*} \circ f_{\varepsilon*}. \label{eqtra} \end{equation} \end{cor} {\it Proof.} The composition of the morphism (\ref{eqsm}) with the fiber functor $F_{\bar \varepsilon'}$ gives a morphism $$F_{\bar \varepsilon'} \circ f_{\varepsilon *}\circ X^{(r)}_K \to F_{\bar \varepsilon'} \circ X^{(r)}_{K'} \circ f_{\varepsilon *}= g_{\varepsilon' *} \circ f_{\varepsilon *}$$ of functors. By the canonical isomorphism $F_{\bar \varepsilon'} \circ f_{\varepsilon *} \to F_{\bar \varepsilon''}$, the first term $F_{\bar \varepsilon'} \circ f_{\varepsilon *}\circ X^{(r)}_K$ is identified with the functor $(g\circ f)_{\varepsilon''*}$. \qed \section{Proof of Theorem \ref{them2}} We prove Theorem \ref{them2} stated in the introduction. It is reduced to the case where $r>0$ is an integer by considering the base change by a log smooth extension, as in \cite[Lemma 1.22]{Jus}. We regard an $F$-vector space $V$ of finite dimension as a smooth additive algebraic group over $F$ and let ${\rm pr}_1, {\rm pr}_2\colon V\times V\to V$ be the projections and $-\colon V\times V\to V$ be the subtraction $(x,y)\mapsto y-x$. By Lemma \ref{lmalg} below, it suffices to prove the following. \begin{lm}\label{lmchi} Let $\chi$ be a character of ${\rm Gr}^rG_K$ and regard it also as a character of $\pi_1(\Theta^{(r)}_{\bar F})^{\rm ab}$ by the surjection {\rm (\ref{eqcan1})}. Then, we have an equality ${\rm pr}_2^*\chi = {\rm pr}_1^*\chi \cdot -^*\chi$ of characters of $\pi_1(\Theta^{(r)}_{\bar F} \times \Theta^{(r)}_{\bar F} )^{\rm ab}$. \end{lm} \begin{lm}[{\cite[Lemma 1.23]{Jus}}]\label{lmalg} Let $F$ be an algebraically closed field of characteristic $p>0$ and regard an $F$-vector space $V$ of finite dimension as a smooth additive algebraic group over $F$. Let $\pi_1(V)^{\rm alg}$ be the quotient of the abelian fundamental group $\pi_1(V)^{\rm ab}$ classifying \'etale isogenies. Then, for a character $\chi$ of $\pi_1(V)^{\rm ab}$, the following conditions are equivalent: {\rm (1)} $\chi$ factors through the quotient $\pi_1(V)^{\rm alg}$. {\rm (2)} We have an equality ${\rm pr}_2^*\chi = {\rm pr}_1^*\chi \cdot -^*\chi$ of characters of $\pi_1(V\times V)^{\rm ab}$. \end{lm} To prove Lemma \ref{lmchi}, we use the geometric construction in Section 1. We consider the smooth scheme $P^{(r)}$ over $S$ and the fiber product $P^{(r)}\times_S P^{(r)}$. The closed fibers $P^{(r)}_F$ and $P^{(r)}_F\times_F P^{(r)}_F$ are identified with $\Theta^{(r)}_F$ and $\Theta^{(r)}_F \times \Theta^{(r)}_F$. Let $\xi \in \Theta^{(r)}_F \subset P^{(r)}$ and $\eta \in \Theta^{(r)}_F \times_F \Theta^{(r)}_F \subset P^{(r)}\times_S P^{(r)}$ be the generic points.
Define complete discrete valuation fields $K'$ and $K''$ to be the fraction fields of the completions of the local rings ${\cal O}_{P^{(r)},\xi}$ and ${\cal O}_{P^{(r)}\times_S P^{(r)},\eta}$ respectively. They are smooth extensions of $K$. Let $1\colon K\to K'$ denote the canonical map and ${\rm p}_1,{\rm p}_2 \colon K'\to K''$ denote the maps induced by the two projections $P^{(r)}\times_S P^{(r)}\to P^{(r)}$. We define infinitesimal deformations $\varepsilon\colon \Omega_F(\log)\to {\mathfrak m}^r_{K'}/ {\mathfrak m}^{r+1}_{K'}$ and $\varepsilon'\colon \Omega_{F'}(\log)\to {\mathfrak m}^r_{K''}/ {\mathfrak m}^{r+1}_{K''}$ as follows. The residue field $F'$ of $K'$ is the function field of $\Theta^{(r)}_F$ and hence is the fraction field of the symmetric algebra $S^\bullet_F {\rm Hom}_F( {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K, \Omega_F(\log))$ of the dual vector space. We define $\varepsilon\colon \Omega_F(\log)\to {\mathfrak m}^r_{K'}/ {\mathfrak m}^{r+1}_{K'} =F'\otimes_F {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K$ to be the map \begin{align*} \Omega_F(\log)\to & {\rm Hom}_F( {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K, \Omega_F(\log)) \otimes_F {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K\\ &\subset S^\bullet_F ({\rm Hom}_F( {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K, \Omega_F(\log))) \otimes_F {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K \subset F'\otimes_F {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K \end{align*} where the arrow is the inverse of the isomorphism defined by the evaluation. Similarly, the residue field $F''$ of $K''$ is the fraction field of the symmetric algebra $S^\bullet_F {\rm Hom}_F( {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K, \Omega_F(\log)^{\oplus 2})$. Since $K'$ is a smooth extension of $K$, we have an exact sequence $0\to \Omega_F(\log) \otimes_FF' \to \Omega_{F'}(\log) \to \Omega_{F'/F}\to 0$. Let $\varepsilon'\colon \Omega_{F'}(\log)\to {\mathfrak m}^r_{K''}/ {\mathfrak m}^{r+1}_{K''} =F''\otimes_F {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K$ be a ${\rm p}_1$-linear map such that the restriction to $\Omega_F(\log)$ is \begin{align*} \Omega_F(\log)\to & \Omega_F(\log)^{\oplus 2} \to {\rm Hom}_F( {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K, \Omega_F(\log)^{\oplus 2}) \otimes_F {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K\\ &\subset S^\bullet_F ({\rm Hom}_F( {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K, \Omega_F(\log)^{\oplus 2})) \otimes_F {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K \subset F''\otimes_F {\mathfrak m}^r_K/ {\mathfrak m}^{r+1}_K \end{align*} where the first arrow is the map $x\mapsto (-x,x)$ and the second arrow is the inverse of the isomorphism defined by the evaluation. Finally, we define a morphism $\mu\colon K'\to K''$ over $K$. The subtraction map $\Theta_F^{(r)}\times_F \Theta_F^{(r)}\to \Theta_F^{(r)}$ is dominant and induces a morphism $F'\to F''$ over $F$. We define $\mu\colon K'\to K''$ to be a morphism over $K$ lifting the map $F'\to F''$. \begin{lm}\label{lmmu} We have \begin{align} {\rm p}_2\circ (1,\varepsilon) &= ({\rm p}_1,\varepsilon') \circ (1,\varepsilon) \label{eq2e} \\ \mu\circ (1,\varepsilon) &= ({\rm p}_1,\varepsilon') \circ 1. \label{eqmue} \end{align} \end{lm} {\it Proof.} They are equalities of deformations of the canonical map $K\to K''$. Hence, it suffices to prove the corresponding equalities of maps $\Omega_F(\log)\to {\mathfrak m}^r_{K''} /{\mathfrak m}^{r+1}_{K''} =F''\otimes {\mathfrak m}^r_K /{\mathfrak m}^{r+1}_K$. The left-hand side of (\ref{eq2e}) is induced by the map $\Omega_F(\log) \to \Omega_F(\log)^{\oplus 2}$ sending $x$ to $(0,x)$.
The right-hand side of (\ref{eq2e}) is induced by the sum of the maps $\Omega_F(\log) \to \Omega_F(\log)^{\oplus 2}$ sending $x$ to $(x,0)$ and to $(-x,x)$. Both sides of (\ref{eqmue}) are induced by the map $\Omega_F(\log) \to \Omega_F(\log)^{\oplus 2}$ sending $x$ to $(-x,x)$. \qed {\it Proof of Lemma \ref{lmchi}.} We may assume that the residue field $F$ is separably closed. Let $\chi$ be a character of ${\rm Gr}^rG_K$. Let $\xi^*(\chi)$ be the character of $G_{K'}^{\le r+}$ defined as the composition of $\chi$ with $$\begin{CD} G_{K'}^{\le r+} @>{\xi_*}>> \pi_1(\Theta^{(r)}_{\bar F})^{\rm ab} @>{(\ref{eqcan1})}>> {\rm Gr}^rG_K. \end{CD}$$ Since the canonical map $\eta_*\colon G_{K''}^{\le r+} \to \pi_1(\Theta^{(r)}_{\bar F} \times \Theta^{(r)}_{\bar F})^{\rm ab}$ is surjective, it suffices to show the equality ${\rm p}_2^*\xi^*(\chi) ={\rm p}_1^*\xi^*(\chi)\cdot \mu^*\xi^*(\chi)$ of characters of $G_{K''}^{\le r+}$. Since ${\rm Gr}^rG_K$ is a central subgroup of $G^{\le r+}_K$, there exists an irreducible representation $V$ of $G^{\le r+}_K$ such that the restriction to ${\rm Gr}^rG_K$ is the scalar multiplication by the character $\chi$. By Corollary \ref{corchi}, we have an isomorphism ${\rm Res}_{1,\varepsilon}V \to {\rm Res}^{G_K^{\le r+}} _{G_{K'}^{\le r+}} V\otimes \xi^*(\chi)$ of representations of $G_{K'}^{\le r+}$. By Corollary \ref{cortra} and by (\ref{eq2e}) and (\ref{eqmue}), it induces isomorphisms \begin{align*} {\rm Res}^{G_K^{\le r+}} _{G_{K''}^{\le r+}} V\otimes {\rm p}_2^*\xi^*(\chi) &\to {\rm Res}_{{\rm p}_1,\varepsilon'} {\rm Res}^{G_K^{\le r+}} _{G_{K'}^{\le r+}} V\otimes {\rm p}_1^*\xi^*(\chi), \\ {\rm Res}^{G_K^{\le r+}} _{G_{K''}^{\le r+}} V \otimes \mu^*\xi^*(\chi) &\to {\rm Res}_{{\rm p}_1,\varepsilon'} {\rm Res}^{G_K^{\le r+}} _{G_{K'}^{\le r+}} V \end{align*} of representations of $G_{K''}^{\le r+}$. Thus, we obtain an isomorphism $${\rm Res}^{G_K^{\le r+}} _{G_{K''}^{\le r+}} V\otimes {\rm p}_2^*\xi^*(\chi) \to {\rm Res}^{G_K^{\le r+}} _{G_{K''}^{\le r+}} V \otimes {\rm p}_1^*\xi^*(\chi) \cdot \mu^*\xi^*(\chi).$$ Since $G_{K''}^{\le r+}\to G_K^{\le r+}$ is surjective, this implies the equality ${\rm p}_2^*\xi^*(\chi) = {\rm p}_1^*\xi^*(\chi) \cdot \mu^*\xi^*(\chi)$ of characters of $G_{K''}^{\le r+}$ by Schur's lemma. Thus the assertion is proved. \qed As in \cite[Corollary 1.26]{Jus}, Theorem \ref{them2} has the following consequence. Let $V$ be an $\ell$-adic representation of $G_K$. Since $P=G_{K,\log}^{0+}$ is a pro-$p$ group, there exists a unique direct sum decomposition $V=\bigoplus_{q\ge0, q\in {\mathbb Q}} V^{(q)}$ into sub-$G_K$-modules such that the $G_{K,\log}^{r+}$-fixed part is given by $V^{G_{K,\log}^{r+}}= \bigoplus_{q\le r} V^{(q)}.$ We put ${\rm Sw}_KV= \sum_r r\cdot {\rm rank}\, V^{(r)} \in {\mathbb Q}$. \begin{cor}\label{corHA} $${\rm Sw}_KV\in {\mathbb Z}[\frac1p].$$ \end{cor} Liang Xiao claims the stronger assertion ${\rm Sw}_KV\in {\mathbb Z}$ in \cite[Theorem 3.5.11]{Xiao}.
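For instance, if $V$ is irreducible, the decomposition above has a single nonzero summand, say $V=V^{(q)}$, so that ${\rm Sw}_KV=q\cdot{\rm rank}\, V$. Corollary \ref{corHA} then asserts that the prime-to-$p$ part of the denominator of the slope $q$ divides ${\rm rank}\, V$, while the integrality ${\rm Sw}_KV\in {\mathbb Z}$ would assert that the full denominator of $q$ divides ${\rm rank}\, V$.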
\section{Introduction} Our goal is the construction of relativistic scalar field models and the analysis of their extended solutions. We use the deformation procedure, which was put forward in Refs.~\cite{blm}-\cite{blm1} and profusely applied in a diversity of contexts in Ref.~\cite{dd+}. The deformation procedure relies on a function, called the deformation function, which is responsible for the construction of the new, deformed model. Here, however, we choose the deformation function as a composed function of the scalar field under investigation. This novelty brings in interesting new families of models, together with their corresponding extended solutions. The composed deformation function greatly enlarges the capabilities of the method, also allowing the deformation of singular solutions of the starting model into regular solutions of the deformed model. Another interesting new result in this work is that all the potentials obtained through the composed deformation can be written in a factorized form, immediately allowing the identification of the absolute minima of the new potential terms. The deformation method introduced in \cite{blm} connects two distinct models of real scalar fields in two-dimensional space-time, characterized respectively by the Lagrangians \begin{subequations} \begin{eqnarray} {\cal L}=\frac12\partial_\mu\chi\partial^\mu\chi-U(\chi)\label{1a} \\ {\cal L}_d=\frac12\partial_\mu\phi\partial^\mu\phi-V(\phi)\label{1b} \end{eqnarray}\end{subequations} where $\chi$ and $\phi$ are the scalar fields and $U(\chi)$ and $V(\phi)$ are the potential terms, which specify each one of the two models. The key ingredient is an invertible function $f=f(\phi)$, the deformation function, from which we link the model of (\ref{1a}) with the ``deformed'' model of (\ref{1b}) by relating the two potentials $U(\chi)$ and $V(\phi)$ in the very specific form \begin{equation} V(\phi)=\frac{U(\chi\to f(\phi))}{(df/d\phi)^2} \end{equation} This allows us to show that if the starting model has a static solution $\chi(x)$ which obeys the first-order equation ${d\chi}/{dx}=\pm\sqrt{2U(\chi)} \label{1c}$ and the equation of motion ${d^2\chi}/{dx^2}={dU}/{d\chi}$, then the deformed model has the static solution $\phi(x)=f^{-1}(\chi(x))$, which obeys ${d\phi}/{dx}=\pm\sqrt{2V(\phi)} \label{1d}$ and ${d^2\phi}/{dx^2}={dV}/{d\phi}$. The proof was already given in Ref.~{\cite{blm}}. The effectiveness of the deformation method in the search for topological and non-topological defects in (1+1)D scalar field theory demands, first, that the static solutions of eq.(\ref{1c}) are known. The second subtle point is a shrewd choice of the deformation function, such that $V(\phi)$ has a finite set of degenerate minima ${\phi^i}$ ($i=1, \cdots,N$) and the analytical solution $\phi_S(x)=f^{-1}[\chi_S(x)]$ of eq.(\ref{1d}) complies with $\lim_{x\to -\infty}\phi_S(x)=\phi^i$ and $\lim_{x\to\infty}\phi_S(x)=\phi^j$, with $j=i+1, i, i-1$. If $j=i\pm 1$ we talk of topological defects named kinks/anti-kinks, and if $j=i$ we find, besides the classical minima, non-topological defects called lumps. \section{Composed deformation functions} \label{sec:3} In Ref.~\cite{blm1}, some of us applied the deformation method starting from the standard $\chi^4$ model and choosing \begin{equation} \chi=f(\phi)=\cos(a\arccos\phi-M\pi) \, , \end{equation} as deformation functions, where $a$ and $M$ are integer or half-integer numbers.
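Before proceeding, it may help to see the recipe above at work in the simplest setting; the particular choice $f(\phi)=\sin\phi$ below is used only as an illustration. Starting from the $\chi^4$ potential $U(\chi)=\frac12(1-\chi^2)^2$ and deforming with $f(\phi)=\sin\phi$ one gets \begin{equation*} V(\phi)=\frac{U(\sin\phi)}{\cos^2\phi}=\frac12\cos^2\phi, \end{equation*} a sine-Gordon-type model, and the kink $\chi(x)=\tanh(x)$ is mapped into $\phi(x)=f^{-1}(\tanh(x))=\arcsin(\tanh(x))$; indeed, $d\phi/dx={\rm sech}(x)=\cos\phi=\sqrt{2V(\phi)}$, as required by the first-order equation.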
In this paper we extend the family of deformation functions by allowing composed functions $f[g(\phi)]$ of the form \begin{equation} \label{df}\chi=f[g(\phi)]=\cos(a\arccos[g(\phi)]-M\pi)\, . \end{equation} Appropriate choices of the function $g(\phi)$ will provide us with new scalar field models having analytically solvable first-order differential field equations, which support new topological (kink-shape, double kink-shape) and/or non-topological (bell-shape, sugar loaf-shape) defect structures as classical static solutions of the corresponding field equations. The starting model is described by the potential \begin{equation} U(\chi)=\frac12\;(1-\chi^2)^2 \end{equation} where we are using dimensionless field and coordinates. We fix the center of the defect at the origin ($x_0=0$) to get, for $|\chi|\leq 1$, the finite energy regular kink solution $ \chi_1(x)=\pm\tanh(x) $ and, for $|\chi|\geq 1$, the infinite energy singular kink solution $ \chi_2(x)=\pm\coth(x) $. As in Ref.~\cite{blm1}, the parameter $M$ leads to two distinct families of models: for $M$ integer, the deformed potential can be written in the form \begin{equation}\label{vs1} {V}_{\sin}^a(\phi)=\frac{1}{2a^2}\frac{(1-g^2(\phi))}{(g'(\phi))^2}\sin^2\left(a\;\arccos(g(\phi))\right) \end{equation} where $g'(\phi)=dg/d\phi$. However, for $M$ half-integer we get \begin{equation}\label{vc1} {V}_{\cos}^a(\phi)=\frac{1}{2a^2}\frac{(1-g^2(\phi))}{(g'(\phi))^2}\cos^2\left(a\;\arccos(g(\phi))\right) \end{equation} Using half-integer and integer values of the parameter $a$, the number of vacua of the new model can be fixed at will. By this procedure we identify families of potentials which present static solutions of the general form \begin{equation}\label{sol1} {\phi}_S(x)=g^{-1}\left(\cos\left[{(\eta(x)+ M\;\pi})/{a}\right]\right)\end{equation} where $g^{-1}$ is the inverse function of $g(\phi)$, and $\eta(x)$ is either $\theta_1(x)$ or $\theta_2(x)$, both taking values in $[0,\pi]$ and given by \begin{eqnarray} \theta_1(x)&=&\arccos(\tanh(x))\,, \label{teta1}\\ \theta_2(x)&=&\arccos(\coth(x))\,.\label{teta2} \end{eqnarray} First, we start with \eqref{vs1} and $g(\phi)=\phi$. Here the potentials are, for $a$ odd, \begin{equation}\label{vsino} {V}_{\sin}^a(\phi)=\frac{1}{2a^2}\;\prod_{j=1}^{(a+1)/2} \left(1-\frac{\phi^2}{{Z_j^{a}}^2}\right)^2\,, \end{equation} and for $a$ even, \begin{equation}\label{vsine} {V}_{\sin}^a(\phi)=\frac{1}{2}\;\phi^2\;\prod_{j=1}^{a/2} \left(1-\frac{\phi^2}{{Z_j^{a}}^2}\right)^2\,, \end{equation} where $Z_j^{a}=\cos[(j-1)\pi/a]$. Also, for $a$ half-integer we get \begin{equation}\label{vsins} {V}_{\sin}^a(\phi)=\frac{1}{4a^2}(1-\phi)(1-\phi^2)\;\prod_{j=1}^{a-1/2} \left(1+\frac{\phi}{{\tilde{Z}_j^{a}}}\right)^2\,, \end{equation} where $\tilde{Z}_j^{a}=\cos[(2j-1)\pi/2a]$. Now, if we start with \eqref{vc1} and $g(\phi)=\phi$, the potentials are, for $a$ odd, \begin{equation}\label{vcoso} {V}_{\cos}^a(\phi)=\frac{1}{2}\;\phi^2(1-\phi^2)\prod_{j=1}^{(a-1)/2} \left(1-\frac{\phi^2}{(\tilde{Z}_j^{a})^{2}}\right)^2\,, \end{equation} and, for $a$ even, \begin{equation}\label{vcose} {V}_{\cos}^a(\phi)=\frac{1}{2a^2}\;(1-\phi^2)\;\prod_{j=1}^{a/2} \left(1-\frac{\phi^2}{(\tilde{Z}_j^{a})^{2}}\right)^2\, , \end{equation} whereas for $a$ half-integer we have ${V}_{\cos}^a(\phi)={V}_{\sin}^a(-\phi)$. Note that the choice $g(\phi)=\phi$ gives the polynomial potentials already investigated in Ref.~\cite{blm1}, but here the formula for $a$ half-integer, Eq.~\eqref{vsins}, is factorized.
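As a quick consistency check of these factorized forms (a verification added for the reader's convenience), take $a=2$ in Eq.~\eqref{vsine}: the single factor has $Z_1^{2}=\cos(0)=1$, giving $V^{2}_{\sin}(\phi)=\frac12\,\phi^2(1-\phi^2)^2$, the familiar $\phi^6$-like model. The corresponding solution \eqref{sol1} with $M=0$ and $\eta=\theta_1$ is \begin{equation*} \phi_S(x)=\cos\left[\theta_1(x)/2\right]=\sqrt{\frac{1+\tanh(x)}{2}}\,, \end{equation*} and one verifies directly that $d\phi_S/dx=\phi_S(1-\phi_S^2)=\sqrt{2V^{2}_{\sin}(\phi_S)}$, as it should.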
\section{Families of models for $g(\phi)=\phi^r$} \label{sec:4} Consider the case $g(\phi)=\phi^r$, where $r={n}/{m}$ is a positive rational number, the ratio of two nonzero natural numbers, i.e., $n,m\in{\mathbb N}^*$. From \eqref{sol1}, the static solutions of the models $V^{a}_{\sin}(\phi)$ and $V^{a}_{\cos}(\phi)$ are given by \begin{equation}\label{uroot} {\phi}_S^k(x)=s(r)\cos^{1/r}\left[({\eta(x)+M\;\pi})/{a}\right] \quad , \end{equation} where $M=k-1$ for the $sine$ family, or $M=(2k-1)/2$ for the $cosine$ family, and $k$ is a positive natural number; only a few values of $k$ produce distinct solutions, depending on $a$. The symbol $s(r)$ in (\ref{uroot}) is defined as follows: if $r={2p}/{(2q-1)}$, $p,q\in{\mathbb N}^*$, $s(r)$ amounts to taking the $\pm$ sign and the modulus before extracting the odd root in Eq.~\eqref{uroot}: \begin{equation} {\phi}_S^k(x)=\pm\left|\cos^{2q-1}\left[(\eta(x)+M\;\pi)/{a}\right]\,\right|^\frac{1}{2p} \quad . \label{uroot1} \end{equation} For any other value of $r$ the symbol is simply unity: $s(r)=1$. This is because there are two real even roots and only one real odd root. Because $r$ is positive, the deformation function $\chi=g(\phi)=\phi^r$ maps the range $|\phi|\leq 1$ onto the range $|\chi|\leq 1$, and all the zeros of the deformed potential lie between $-1\leq\phi\leq 1$. Then, to obtain finite energy static solutions of the deformed model we need to start from the regular kink solution $\eta(x)=\theta_1(x)$, as given by Eq.~\eqref{teta1}. We note that a general characteristic of the models generated by the deformation function \eqref{df} with $g(\phi)=\phi^r$ is that the number of topological and non-topological static solutions is determined only by $a$, as described below. The potentials can be written in polynomial form, for $a$ integer or half-integer, as we show below. \subsection{The $sine$ family of models for $a$ integer} Here the family of models $V^{a,r}_{\sin}$ is investigated for the specific case of $a$ being an integer. The polynomial form of ${V}^{a,1}_{\sin}$, with its zeros (and multiplicities), is known, and so performing the deformation $g(\phi)=\phi^r$ we have, for $a$ odd, \begin{equation} {V}_{\sin}^{a,r}(\phi)=\frac{1}{2a^2r^2}\;\phi^{2-2r} \, \prod_{j=1}^{(a+1)/2} \left(1-\frac{\phi^{2r}}{{Z_j^{a}}^{2}}\right)^2\,,\end{equation} and for $a$ even, \begin{equation} {V}_{\sin}^{a,r}(\phi)=\frac{1}{2r^2}\;\phi^2\, \prod_{j=1}^{a/2} \left(1-\frac{\phi^{2r}}{{Z_j^{a}}^{2}}\right)^2\,, \end{equation} where $Z_j^a=\cos\left[(j-1)\pi/a\right]$, under the restriction that the potential be real and nonsingular. Hence, if $a$ is odd we can take only $r\leq 1$. The $sine$ potential $V^{a,r}_{\sin}(\phi)$ can be written in terms of the Chebyshev polynomials of the second kind in the $\phi^r$ variable: \begin{subequations} \begin{eqnarray} V^{a,r}_{\sin}(\phi)&=&\frac1{2a^2r^2}\;\phi^{2-2r}(1-\phi^{2r})^2\;U^2_{a-1}(\phi^r)\,, \\ \nonumber \\ U_a(\theta)&=&{\sin[(a+1)\arccos\theta]}/{\sin[\arccos\theta]}\label{cheb2}\,. \end{eqnarray} \end{subequations} The explicit forms of $V^{a,r}_{\sin}(\phi)$ for ${a=1,2,3}$ are given by \begin{equation}\label{vsin1n} V^{1,r}_{\sin}(\phi)=\frac1{2r^2}\;\phi^{2-2r}\;(1-\phi^{2r})^2\,, \end{equation} \begin{equation} V^{2,r}_{\sin}(\phi)=\frac1{2r^2}\;\phi^2\;(1-\phi^{2r})^2\,, \end{equation} \begin{equation} V^{3,r}_{\sin}(\phi)=\frac{8}{9r^2}\;\phi^{2-2r}\;(1-\phi^{2r})^2\left(\frac14-\phi^{2r}\right)^2\,, \end{equation} which illustrate this new family of models.
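The factorized polynomial forms can be cross-checked against the Chebyshev representation. The short \texttt{sympy} sketch below is our own check ($a=3$ is a sample value, and $t=\phi^r$ is used as the variable); it confirms that the two expressions for ${V}^{a,r}_{\sin}$ coincide.
\begin{verbatim}
import sympy as sp

t, r = sp.symbols('t r', positive=True)
a = 3                                        # sample odd value (our choice)
# Chebyshev form of V^{a,r}_sin written in the variable t = phi^r
V_cheb = t**(2/r - 2)*(1 - t**2)**2*sp.chebyshevu(a - 1, t)**2/(2*a**2*r**2)
# explicit factorized form quoted above for a = 3
V_expl = sp.Rational(8, 9)*t**(2/r - 2)*(1 - t**2)**2 \
         *(sp.Rational(1, 4) - t**2)**2/r**2
print(sp.simplify(V_cheb - V_expl))          # -> 0
\end{verbatim}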
We see that, in the cases $r>1$ integer and $r>1/2$ half-integer, we have to take $a$ even. The defects analytically described by the formula (\ref{uroot}) are classified into three distinct types: topological kink, non-topological bell-shape lump, and topological double kink. The defect classes depend on the potentials $V^{a,r}_{\sin}(\phi)$. We noticed in Ref.~\cite{blm1} that, in the case $r=1$, there are two classes of models: for $a$ odd they are $\phi^4$-like potentials -- no zero at the origin -- and for $a$ even they are $\phi^6$-like models -- having a zero at the origin. Hereafter, the new potentials are described and compared with their predecessors in Ref.~\cite{blm1} case by case. For $a$ even, $r=n$ or $r=n/m$, $n$ integer, and $m$ odd, the potentials are non-negative and symmetric with respect to $\phi=0$ and, like the $\phi^6$-like models, have a zero at the origin; see Figures \ref{FIG1} and \ref{FIG4}. \begin{figure}[ht] \includegraphics[{height=02.2cm,width=08cm,angle=00}]{FIG1} \caption{Plots of $V^{4,1}_{\sin}(\phi)$ and $V^{4,2}_{\sin}(\phi),$ depicted with dashed (red) and solid (blue) lines, respectively.}\label{FIG1} \end{figure} The vacua and the static solutions are \begin{equation} \phi_v^j=\pm\left|\cos\left[(j-1)\pi/a\right]\,\right|^\frac{1}{r} \, , \, \, \, j=1, \, \dots \, , 1+{a}/{2}\,, \end{equation} \begin{equation} \phi_S^k(x)=\pm\Big|\cos\Big[(\arccos(\tanh(x))+(k-1)\pi)/{a}\Big]\,\Big|^\frac{1}{r},\end{equation} $k=1, \, \dots \, , a$. There are $a+1$ vacua and $a$ pairs of kink/anti-kink, both for $n$ even and odd. All the defects are topological kinks/anti-kinks, interpolating between consecutive vacua of the potential (see the numerical check below). For $a$ even and $r=n/2>1/2$ half-integer, the potentials are also non-negative but not symmetric with respect to the zero at the origin, which is a minimum of $V$. These potentials are not invariant under $\phi\to -\phi$ and all their critical points are non-negative; see Figure \ref{FIG2}. \begin{figure}[ht] \includegraphics[{height=02.2cm,width=08cm,angle=00}]{FIG2} \caption{Plots of $V^{4,1/2}_{\sin}(\phi)$ and $V^{4,3/2}_{\sin}(\phi),$ depicted with dashed (red) and solid (blue) lines, respectively.}\label{FIG2} \end{figure} The vacua and the static solutions are \begin{equation} \phi_v^j=\left(\cos^{2}\left[(j-1)\pi/a\right]\right)^{\frac{1}{n}}\, , \,\, j=1,2, \dots , 1+ {a}/{2}\,,\end{equation} \begin{equation} \phi_S^k(x)=\left(\cos^{2}\left[({\arccos(\tanh(x))+(k-1)\pi})/{a}\right]\right)^{\frac{1}{n}}\, \, \, , \end{equation} $k=1,2, \dots , a$. There are thus $a/2+1$ non-negative vacua and $a/2$ pairs of topological kink/anti-kink defects. For $a$ odd and $r={1}/{2}$, the potentials of the $sine$ family are non-negative only for $\phi\geq0$, and the zero at $\phi=0$ is not a critical point. The vacua and the static solutions are, respectively, \begin{equation} \phi_v^j=\cos^2\left[({j-1})\pi/a\right] \, , \, \, j=1,2, \dots , ({a+1})/{2}\,, \end{equation} \begin{equation} \phi_S^k(x)=\cos^2\left[({\arccos(\tanh(x))+(k-1)\pi})/{a}\right]\, \, , \end{equation} $k=1,2, \dots, a$. There are $(a+1)/2$ vacua and a total of $a$ defects -- one non-topological lump and $(a-1)/2$ pairs of topological kink/anti-kink. The solutions corresponding to $k<(a+1)/2$ are topological kinks interpolating between two consecutive local minima. For $k>(a+1)/2$ we find topological anti-kinks that connect the vacua in the opposite sense.
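The claim that each solution interpolates between consecutive vacua can also be checked numerically. The snippet below is a small sanity check of ours for the sample values $a=4$, $r=2$ (only the modulus branch is evaluated; the $\pm$ sign supplies the mirror solutions).
\begin{verbatim}
import numpy as np

a, r = 4, 2                                   # sample even a, integer r (our choice)
def phi_S(x, k):                              # static solutions for a even
    return np.abs(np.cos((np.arccos(np.tanh(x)) + (k - 1)*np.pi)/a))**(1/r)

vacua = np.abs(np.cos((np.arange(1, a//2 + 2) - 1)*np.pi/a))**(1/r)
for k in range(1, a + 1):                     # each kink joins consecutive vacua
    print(k, round(phi_S(-30.0, k), 6), '->', round(phi_S(30.0, k), 6))
print('vacua:', np.round(sorted(vacua), 6))
\end{verbatim}
The printed endpoints reproduce the list of vacua pairwise, one pair of kink/anti-kink per consecutive pair of minima.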
The remaining solution, $k=(a+1)/2$, asymptotically behaves as $\phi=\cos^2\left[(a-1)\pi/2a\right]$ at both $x=\pm\infty$ and is a non-topological lump: in the mechanical analogy, the associated non-topological kink (NTK) trajectory in the potential $V^{a,1/2}(\phi)$ starts at $x=-\infty$ from the \lq\lq maximum'' $\phi=\cos^2\left[(a-1)\pi/2a\right]$, bounces back at the \lq\lq turning'' point $\phi=0$ and \lq\lq finally'' arrives at $\phi=\cos^2\left[(a+1)\pi/2a\right]=\cos^2\left[(a-1)\pi/2a\right]$ at $x=\infty$. Finally, for $a$ odd and $r=n/m$ non-integer, $m$ odd, and $n=1,2, \dots ,m-1$, the potentials are non-negative and symmetric with respect to the origin, where $V=0$; see Figure \ref{FIG4}. \begin{figure}[ht] \includegraphics[{height=02.2cm,width=08cm,angle=00}]{FIG3} \caption{Plots of $V^{1,1/3}_{\sin}(\phi)$ and $V^{2,1/3}_{\sin}(\phi),$ depicted with dashed (red) and solid (blue) lines, respectively.}\label{FIG4} \end{figure} A very interesting novelty arises in these cases: the origin is not a vacuum of $V$, $[{dV}/{d\phi}]_{\phi=0}$ does not exist for $r>1/2$, and $[{d^2V}/{d\phi^2}]_{\phi=0}\rightarrow\infty$ for $r<1/2$. This generates a defect connecting the two minima closest to the origin, named the double kink-like solution. The vacua and the static solutions are \begin{equation} \phi_v^j=\pm\left|\cos^{m}\left[(j-1)\pi/a\right]\,\right|^\frac{1}{n} \, , \, \, \, j=1, \, \dots \, , ({a+1})/{2}\,, \end{equation} \begin{equation} \phi_S^k(x)=\pm\left|\cos^{m}\left[({\arccos(\tanh(x))+(k-1)\pi})/{a}\right]\,\right|^\frac{1}{n},\end{equation} where $k=1, \, \dots \, , a$, with $k\neq({a+1})/{2}$, for kink/anti-kink defects, and \begin{equation} \phi_S^k(x)=\pm\, {\rm sg}(x) \left|\cos^{m}\left[(\arccos(\tanh(x))+(k-1)\pi)/a\right]\right|^\frac{1}{n} \end{equation} where ${\rm sg}(x)=x/|x|$, with $k=(a+1)/2$, for the double kink/anti-kink defect. There are $a+1$ vacua and $a$ pairs of topological defects -- $(a-1)$ pairs of kinks/anti-kinks and one pair of double kink/anti-kink around the origin -- both for $n$ even and odd. For instance, let us take $n=1$: the $k=(a+1)/2$ kink and its anti-kink skip the origin and connect the minima closest to $\phi=0$, namely $\phi=\cos^{m}\left[({a-1})\pi/{2a}\right]$ and $\phi=\cos^{m}\left[({a+1})\pi/{2a}\right]$. Their character is thus topological, joining two different minima, but they have the shape of a double kink. This special form of defect was introduced in Ref.~\cite{bmm}, and it is nice to see it appear again in the current context. In Figure \ref{FIG5} we plot the double kink and double anti-kink solutions of the sine potential for $a=1$ and $r=1/3$. \begin{figure}[ht] \includegraphics[{height=02.2cm,width=08cm,angle=00}]{FIG4} \caption{Plots of kink (solid line) and anti-kink (dashed line) solutions of $V^{1,1/3}_{\sin}$ (blue) and $V^{1,1}_{\sin}$ (red). The solutions of $V^{1,1/3}_{\sin}$ have the form of a double kink/anti-kink.}\label{FIG5} \end{figure} \subsection{The $cosine$ family of models for $a$ integer} Here we investigate the family of models given by $V^{a,r}_{\cos}$ for the specific case of $a$ being an integer. As before, the polynomial form of ${V}^{a,1}_{\cos}$, with its zeros (and multiplicities), is also known.
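For the case shown in Figure~\ref{FIG5} the double kink takes a particularly simple closed form: with $a=1$, $r=1/3$ (so $n=1$, $m=3$, $M=0$) the solution reduces to $\phi_S(x)=\tanh^3(x)$. The following numerical sketch is ours (restricted to $x>0$ to avoid fractional powers of negative numbers) and verifies the first-order equation for this profile.
\begin{verbatim}
import numpy as np

x = np.linspace(0.1, 5, 200)        # positive half-line
phi = np.tanh(x)**3                 # double-kink solution for a = 1, r = 1/3
r = 1/3
V = (1/(2*r**2))*phi**(2 - 2*r)*(1 - phi**(2*r))**2   # V^{1,r}_sin
dphi_dx = 3*np.tanh(x)**2/np.cosh(x)**2
print(np.max(np.abs(dphi_dx - np.sqrt(2*V))))          # ~1e-16
\end{verbatim}
The flat plateau of $\tanh^3(x)$ around the origin is the double-kink signature: the solution lingers near $\phi=0$, which is a zero but not a minimum of the potential.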
Performing the deformation $g(\phi)=\phi^r$ we obtain the new potentials: for $a$ odd, \begin{equation} {V}_{\cos}^{a,r}(\phi)=\frac{1}{2r^2}\;\phi^2 (1-\phi^{2r}) \, \prod_{j=1}^{\frac{a-1}{2}} \left( 1-\frac{\phi^{2r}}{{Z_j^{a}}^2}\right)^2\,, \end{equation} and for $a$ even, \begin{equation} {V}_{\cos}^{a,r}(\phi)=\frac{1}{2a^2r^2}\;\phi^{2-2r} (1-\phi^{2r}) \, \prod_{j=1}^{\frac{a}{2}} \left( 1-\frac{\phi^{2r}}{{Z_j^{a}}^2}\right)^2\,, \end{equation} where $Z_j^a= \cos\left(\frac{2j-1}{2a}\pi\right)$. Again, the restriction that the potential be real and nonsingular means that for $a$ even we can take only $r\leq 1$. The $cosine$ potentials $V^{a,r}_{\cos}(\phi)$ are given by Chebyshev polynomials of the first kind in the $\phi^r$ variable: \begin{subequations} \begin{eqnarray} V^{a,r}_{\cos}(\phi)&=&\frac1{2a^2r^2}\;\phi^{2-2r}(1-\phi^{2r})\;T^2_a(\phi^r)\,,\\ T_a(\theta)&=&\cos[a\arccos\theta]\label{cheb1}\,. \end{eqnarray} \end{subequations} The explicit forms of the potentials $V^{a,r}_{\cos}(\phi)$ for ${a=1,2,3}$ are given by \begin{equation} V^{1,r}_{\cos}(\phi)=\frac1{2r^2}\;\phi^2\;(1-\phi^{2r})\,, \end{equation} \begin{equation} V^{2,r}_{\cos}(\phi)=\frac1{2r^2}\;\phi^{2-2r}(1-\phi^{2r})\left(\frac12-\phi^{2r}\right)^2\,, \end{equation} \begin{equation} V^{3,r}_{\cos}(\phi)=\frac8{9r^2}\;\phi^2\;(1-\phi^{2r})\;\left(\frac34-\phi^{2r}\right)^2\,, \end{equation} which illustrate this new family of models. We see that, in the cases $r>1$ integer and $r>1/2$ half-integer, we can take only $a$ odd. Here, the defects analytically described by the formula (\ref{uroot}) can be classified into three types: topological kink, non-topological bell-shape lump, and topological double kink. The defect classes depend on the potentials $V^{a,r}_{\cos}(\phi)$. We recall that in the case $r=1$ there are two classes of models: for $a$ odd they are inverted $\phi^4$-like models -- having a zero at the origin -- and for $a$ even they are inverted $\phi^6$-like models. Hereafter, the new potentials are described and compared with their predecessors in the third work in Ref.~\cite{blm1} case by case. For $a$ odd, $r=n$ or $r={n}/{m}$, $n$ integer, and $m$ odd, the potentials are non-negative only for $|\phi|\leq 1$, and the zeros $\phi=\pm1$ are not critical points, like the inverted $\phi^4$-like models with a zero at the origin. The vacua and the static solutions are \begin{equation} \phi_v^j=\pm\left|\cos\left[({2j-1})\pi/2a\right]\,\right|^\frac{1}{r} \, , \, \, \, j=1, \, \dots \, , ({a+1})/{2}\,, \end{equation} \begin{equation} \phi_S^k(x)=\pm\left|\cos\left[({2\,\arccos(\tanh(x))+(2k-1)\pi})/{2a}\right]\,\right|^\frac{1}{r}, \end{equation} where $k=1, \, \dots \, , a-1$ for kink/anti-kink defects, and $k=a$ for lump defects. There are $a$ vacua and $2a$ defects -- $(a-1)$ pairs of topological kink/anti-kink and two non-topological lumps -- both for $n$ even and odd. For $a$ odd and $r=n/2>1/2$ half-integer, the potentials are not symmetric and are non-negative only for $\phi\leq1$; the zero at $\phi=1$ is not a critical point, and all their critical points are non-negative. The vacua and the static solutions are given by \begin{equation} \phi_v^j=\left(\cos^{2}\left[({2j-1})\pi/2a\right]\right)^{\frac{1}{n}}\, , \,\,j=1,2, \dots , ({a+1})/{2}\,,\end{equation} \begin{equation} \phi_S^k(x)=\left(\cos^{2}\left[({2\,\arccos(\tanh(x))+(2k-1)\pi})/{2a}\right]\right)^{\frac{1}{n}}\, \, \, , \end{equation} where $k=1,2, \dots , a-1$ for kink/anti-kink defects, and $k=a$ for the lump defect.
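The analogous cross-check for the $cosine$ family uses Chebyshev polynomials of the first kind. Again this is our own verification sketch, with the sample value $a=3$ and $t=\phi^r$ as the variable.
\begin{verbatim}
import sympy as sp

t, r = sp.symbols('t r', positive=True)
a = 3                                        # sample odd value (our choice)
V_cheb = t**(2/r - 2)*(1 - t**2)*sp.chebyshevt(a, t)**2/(2*a**2*r**2)
V_expl = sp.Rational(8, 9)*t**(2/r)*(1 - t**2) \
         *(sp.Rational(3, 4) - t**2)**2/r**2
print(sp.simplify(V_cheb - V_expl))          # -> 0
\end{verbatim}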
There are $(a+1)/2$ non-negative vacua and $a$ defects -- $(a-1)/2$ pairs of topological kink/anti-kink and one non-topological lump. For $a$ even and $r={1}/{2}$, the potentials of the $cosine$ family are non-negative only for $0\leq\phi\leq1$; the zeros at $\phi=0,1$ are not critical points, and all their critical points are non-negative. The vacua and the static solutions are, respectively, \begin{equation} \phi_v^j=\cos^2\left[({2j-1})\pi/2a\right] \, , \, \, j=1,2, \dots , {a}/{2}\,, \end{equation} \begin{equation} \phi_S^k(x)=\cos^2\left[(2\,\arccos(\tanh(x))+(2\,k-1)\pi)/2a\right]\, \, , \end{equation} where $k=1,2, \dots, (a-1)$, $k\neq\,a/2$, for kink/anti-kink defects, and $k=a/2\,\,\text{or}\,\,a$ for lump defects. There are $a/2$ vacua and a total of $a$ defects -- two non-topological lumps and $(a-2)/2$ pairs of topological kink/anti-kink. Finally, for $a$ even and $r=n/m$ non-integer, $m$ odd, and $n=1,2, \dots ,m-1$, the potentials are non-negative only for $|\phi|<1$ and symmetric with respect to the origin, where $V=0$; the zeros $\phi=\pm1$ are not critical points. In this case, the origin is not a vacuum of $V$, $[{dV}/{d\phi}]_{\phi=0}$ does not exist for $r>1/2$, and $[{d^2V}/{d\phi^2}]_{\phi=0}\rightarrow\infty$ for $r<1/2$. Again, this generates defects connecting the two minima closest to the origin, in the form of a double kink and a double anti-kink. The vacua and the static solutions are \begin{equation} \phi_v^j=\pm\left|\cos^{m}\left[({2j-1})\pi/2a\right]\,\right|^\frac{1}{n} \, , \, \, \, j=1, \, \dots \, , {a}/{2}\,, \end{equation} \begin{equation} \phi_S^k(x)=\pm\left|\cos^{m}\left[({2\,\arccos(\tanh(x))+(2k-1)\pi})/{2a}\right]\,\right|^\frac{1}{n},\end{equation} where $k=1,2, \dots, (a-1)$, $k\neq\,a/2$, for kink/anti-kink defects, $k=a$ for lump defects, and \begin{equation} \phi_S^k(x)=\pm\frac {x}{|x|}\left|\cos^{m}\left[\frac{2\,\arccos(\tanh(x))+(2k-1)\pi}{2a}\right]\right|^\frac{1}{n}\end{equation} where $k={a}/{2}$, for the double kink/anti-kink defect. There are $a$ vacua and $2a$ defects -- $(a-2)$ pairs of topological kink/anti-kink, one pair of topological double kink/anti-kink around the origin, and two non-topological lumps -- both for $n$ even and odd. We can also choose $a$ half-integer. This gives two new families of models, which can be studied as before. Moreover, other possibilities for $g(\phi)$ can also be chosen; in particular, we can consider the case $g(\phi)=1/\phi^r$, where $r={n}/{m}$. As before, $r$ is a positive rational number, the ratio of two nonzero natural numbers, i.e., $n,m\in{\mathbb N}^*$. The investigation follows the same steps we just introduced, so we omit it here. \section{Superpotentials and stability} \label{sec:6} In general, when the potential is non-negative, it is possible to introduce superpotentials $W=W(\phi)$ such that $ V(\phi)=\left({dW}/{d\phi}\right)^2/2\,. $ This is the case for the $sine$ family of potentials with $a$ integer. However, in the other cases (the $cosine$ family of potentials with $a$ integer, and the $sine$ and $cosine$ families of potentials with $a$ half-integer), the potentials are not everywhere non-negative. Nevertheless, we can follow the lines of \cite{ablm}, introducing superpotentials for both topological and non-topological solutions, and computing their energies.
In particular, for $r=1$, in the case of the $sine$ and $cosine$ families, and $a$ integer or half-integer, the superpotentials can be written in terms of Chebyshev polynomials as \begin{subequations} \begin{eqnarray} W^{a,1}_{sin}(\phi)&=&\frac{1}{a^2(a^2-4)}[(a^2(1-\phi^2)-2)\;T_a(\phi)\nonumber\\ &&-2a\phi(1-\phi^2)\;U_{a-1}(\phi)]\;, \end{eqnarray} \begin{eqnarray} W^{a,1}_{cos}(\phi)&=&\frac{\sqrt{1-\phi^2}}{a^2(a^2-4)}[(a^2(1-\phi^2)-2)\;U_{a-1}(\phi)\nonumber\\ &&+2a\phi\;T_{a}(\phi)]\;, \end{eqnarray} \end{subequations} for $a\neq2$, and \begin{subequations} \begin{equation} W_{sin}^{2,1}(\phi)\, =(2\phi^2-\phi^4)/4\,, \end{equation} \begin{equation} W_{cos}^{2,1}(\phi) =\left((3\phi-2\phi^3)\sqrt{1-\phi^2}+\arcsin(\phi)\right)/8\,. \end{equation} \end{subequations} In the case of the $sine$ family, for $r\neq 2$ we have \begin{eqnarray} W^{1,r}_{sin}(\phi)&=&\frac{\phi^{2-r}}{r} \left(\frac{\phi ^{2r}}{r+2}+\frac{1}{r-2}\right)\,,\\ W^{2,r}_{sin}(\phi)&=&\frac{\phi^2}{4 r }\left(\frac{\phi ^{2 r}}{r+1}-1\right)\,. \end{eqnarray} For $r=2$, we have \begin{equation} W^{1,2}_{sin}(\phi)=(\phi^4-4\ln|\phi|)/8 \end{equation} \begin{equation} W^{3,2}_{sin}(\phi)=(2\phi^8-5\phi^4+4\ln|\phi|)/24\,. \end{equation} In the case of the $cosine$ family, for $r\neq 2$ we find \begin{eqnarray} W^{1,r}_{cos}(\phi)&=&\frac{\phi^{2}}{2r(2+r)} \Big(2\sqrt{1-\phi^{2r}}+r\nonumber\\ &&\times\,\,_2F_1\left[{1}/{2},{1}/{r},1+1/r,\phi^{2r}\right]\Big)\,,\\ W^{2,r}_{cos}(\phi)&=&\frac{\phi^{2-r}}{8r(1+r)(r-2)} \Big(2\sqrt{1-\phi^{2r}}\nonumber\\&&\times\left(1+r+(r-2)\phi^{2r}\right)+(2r-1)\phi^{r-2}\nonumber\\ &&\times\,\,\,B\left[\phi^{2r},{1}/{2}+{1}/{r},1/2\right]\Big)\,, \end{eqnarray} where $_2F_1$ is the Gaussian hypergeometric function and $B$ the incomplete Euler beta function. For $r=2$, however, we have \begin{eqnarray} W^{2,2}_{cos}(\phi)&=&\frac1{24}\sqrt{1-\phi^4}\,(5-2\phi^4)+\frac18\ln(\phi^2)\nonumber\\ &&-\frac18\ln\left(2+2\sqrt{1-\phi^4}\right)\,. \end{eqnarray} In general, the superpotentials simplify the calculations. In particular, for computing the energy associated with the corresponding static solution $\phi_k$ we can write, for the kink-like solutions connecting minima $Z_k$ and $Z_{k+1}$: $ E^{a,r,k}(\phi_k)=|W^{a,r}(Z_{k})-W^{a,r}(Z_{k+1})|\;, $ and for the lump-like solution around the minimum $Z_k$: $ E^{a,r,k}(\phi_k)=2\;|W^{a,r}(Z_{k})-W^{a,r}(\phi(x=0))|\;. $ \begin{figure}[ht] \vspace{0.2cm} \includegraphics[{height=02cm,width=08cm,angle=00}]{FIG5a} \includegraphics[{height=02cm,width=8cm,angle=00}]{FIG5b}\vspace{0.3cm} \caption{The potential well $U(x)$ (upper panel) and the translation mode (lower panel), as functions of $x$.}\label{FIG15} \end{figure} Regarding the problem of linear stability, most of the new solutions obtained in this work are either ordinary kinks interpolating between consecutive vacua or classical lumps connecting a vacuum at $x=-\infty$ with itself at $x=+\infty$. We have profusely studied the stability of these solutions in previous papers; ordinary kinks are stable and classical lumps are unstable, and there is no need to repeat here the same calculations and arguments. There is a third type of kink that arises in the new models considered in this work when there are cuspidal points in the potential. These newly found solutions resemble double kinks, and their stability properties deserve a dedicated analysis.
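The defining relation $V=(dW/d\phi)^2/2$ is easy to verify symbolically. The sketch below is our own check for the two $a=2$, $r=1$ superpotentials quoted above; the explicit potentials substituted are the corresponding members of the $sine$ and $cosine$ families.
\begin{verbatim}
import sympy as sp

phi = sp.symbols('phi', positive=True)

# sine family, a = 2, r = 1
W_sin = (2*phi**2 - phi**4)/4
V_sin = sp.Rational(1, 2)*phi**2*(1 - phi**2)**2
print(sp.simplify(sp.diff(W_sin, phi)**2/2 - V_sin))   # -> 0

# cosine family, a = 2, r = 1
W_cos = ((3*phi - 2*phi**3)*sp.sqrt(1 - phi**2) + sp.asin(phi))/8
V_cos = sp.Rational(1, 8)*(1 - phi**2)*(1 - 2*phi**2)**2
print(sp.simplify(sp.diff(W_cos, phi)**2/2 - V_cos))   # -> 0
\end{verbatim}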
Therefore, we shall only discuss the stability of double kinks, choosing as a completely generic example in this class the potential $V^{2,\frac{1}{3}}_{cos}(\phi)$, and the double kinks $ \phi_S(x)=\pm{\sinh^3({x}/{2})}/\cosh^{\frac32}(x)\,. $ The Schr\"{o}dinger operator governing the small fluctuations around these kinks is of the form $K=-{d^2}/{d x^2}+U(x)$, where the potential well is given by $ U(x)=1+{1}/{2\sinh^2 (x/2)}-{5}/{2\cosh (x)}-{35}/{4\cosh^2 (x)}\,. $ Things are clearer in Figure~\ref{FIG15}, where $U(x)$ is depicted. One notices the important limits $\lim_{x\to\pm\infty} U(x)=1$ and $\lim_{x\to 0}U(x)=+\infty$, and realizes the qualitative similarity with the Lennard-Jones potential \cite{lj} of molecular physics. The main difference is that the $U(x)$ well looks like the Lennard-Jones well (living only in the positive real half-line) plus its specular image with respect to the ordinate axis, defined as a whole on the full real line. In fact, the ground state wave function of zero energy is the translational mode $\psi_0(x)=d\phi_K / dx = 3 \sinh({x}/{2})\sinh(x)/\left(4 \cosh^{\frac52}(x)\right)$. There is no negative energy eigenfunction, because the zero of the translational mode wave function is not strictly a node; see Fig.~\ref{FIG15}. There is no change of sign in the wave function, which simply tells us that the center of the double kink, where the field reaches the value of zero, cannot be perturbed, since that would cost infinite energy. The other eigenfunctions, higher in energy, are totally reflected scattering waves with threshold at $k^2=1$. Coming either from the left or from the right, the incoming waves are sent back by the infinite wall at the origin, which is perfectly opaque. There are no kink fluctuations that cross the center of the double kink. A spectrum like this is very peculiar, but it ensures the stability of the double kink: there is no kink fluctuation with negative energy. {\bf{Acknowledgements}}. The authors would like to thank CAPES/Nanobiotec and CNPq, Brazil, the FCT project CERN/FP/116358/2010, Portugal, and the Spanish Ministerio de Educaci\'on y Ciencia, grant FIS2009-10546, for partial financial support.
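The spectral statement can be checked explicitly: the translation mode must be annihilated by $K$. The sketch below is our own verification; it substitutes the quoted $U(x)$ and $\psi_0(x)$ and evaluates the residual $-\psi_0''+U\psi_0$ at sample points.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.sinh(x/2)**3/sp.cosh(x)**sp.Rational(3, 2)   # double kink
psi0 = sp.diff(phi, x)                                # translation mode
U = (1 + 1/(2*sp.sinh(x/2)**2) - 5/(2*sp.cosh(x))
       - sp.Rational(35, 4)/sp.cosh(x)**2)
residual = -sp.diff(psi0, x, 2) + U*psi0
for x0 in (0.3, 1.0, 2.5):
    print(sp.N(residual.subs(x, x0), 10))             # ~0 at each point
\end{verbatim}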
\section{Introduction} \label{sec:introduction} In order to build a proper artificial electric sense, it is essential to understand how it works in biology. In this regard, the study of active electrolocation in weakly electric fish is crucial. Indeed, from an electric potential of about $1 \, \textrm{mV}$ oscillating at about $1$--$10 \, \textrm{kHz}$, weakly electric fish are able to recognize an object in their surroundings (for a review, see~\cite{moller1995electric} and references therein). Hence, they are undoubtedly a source of great inspiration for neuro-ethology, underwater robotics, and signal processing, as well as for applied mathematics. Several species of fish share this remarkable sense. They are classified in various families, all belonging to two different orders: Gymnotiforms in South America, and Mormyriforms in Africa. Moreover, according to the time representation of their Electric Organ Discharges (EODs), they are also divided into two types: wave-type species (such as \emph{Apteronotus albifrons}) and pulse-type species (\emph{e.g.} \emph{Gnathonemus petersii}). Known for several centuries, the electrogenesis and electroreception abilities of the wider set of species called \emph{electric fish} have been studied extensively~\cite{finger2011shocking}. In 1958, Lissmann and Machin showed that for the weakly electric fish, this ability is used for electrolocation rather than for stunning prey~\cite{lissmann1958mechanism}. Furthermore, they gave the physical principles on which this electric sense relies: using cylinders in the water tanks of their \emph{Gymnarchus niloticus}, they showed that these objects disturb the self-emitted electric field like an electric dipole. The electroreceptors allow the fish to detect such a difference, which in turn is a clue for electrolocation. However, one important question remains: how to estimate the location of the object from the measurement of this difference? Experimental, modelling, and numerical approaches have been carried out since this discovery by Lissmann and Machin. From behavioral studies, we now know that these fish are able to estimate the distance~\cite{von1998electric}, recognize the shape~\cite{von2007distance}, and estimate the electric capacitance and conductivity~\cite{von1999active} of an object in their surroundings. In these studies, a fish is placed in front of two doors, each one hiding a different object. The fish is then trained to choose one of the two objects, in a reward/punishment setup. More theoretically, the electric dipole formula has been investigated in more detail. Indeed, Bacher in 1983~\cite{bacher1983new} argued that the electric dipole formula given by Lissmann and Machin did not explain the phase shift observed when the electric permittivity of the object differs from the electric permittivity of the water, although this phase difference is measured accurately by some species such as those in the \emph{Eigenmannia} genus~\cite{rose1985temporal}. Then, in 1996, Rasnow~\cite{rasnow1996effects} solved this issue by considering a complex-valued conductivity; he also extended the range of shapes to which the dipolar approximation can be applied. From the numerical simulation point of view, various works have been carried out since the 70's, for example finite-difference schemes in 1975 by Heiligenberg~\cite{heiligenberg1975theoretical} and finite elements in 1980 by Hoshimiya \emph{et al.}~\cite{hoshimiya1980theapteronotus}.
In the latter, a simplified geometry of fish was used: it was represented as an ellipse, divided into two areas (the low-conductivity, thin skin, and the body). Their aim was to optimize the non-uniform values of the skin's conductivity, and by optimizing it according to the experimentally measured field, they concluded that the tail region is more conductive than the head region. These models were then improved, as can be seen in~\cite{babineau2006modeling,maciver2001computational,migliaro2005theoretical}. Finally, in the 90's, Chris Assad considered a boundary element method to solve this numerical simulation issue~\cite{assad1997electric}. His model took into account the highly resistive and thin skin, as well as the time dependence of the EOD. He compared his model to several \emph{in vivo} experiments involving different species of fish~\cite{rasnow1988simulation}. Those advances in the field of neuro-ethology inspired researchers in robotics to develop underwater probes and sensors in order to effectively navigate with the help of this electric sense. Two teams in particular gather most of the research on this stimulating subject: one in Nantes led by Frédéric Boyer~\cite{boyer2012model}, and another in Chicago led by Malcolm MacIver~\cite{maciver2004designing}. Just to mention a few of their studies, they face important challenges such as target location~\cite{lebastard2013environment,solberg2008active}, shape recognition~\cite{bai2015finding,lanneau2016object}, autonomous (or reactive) navigation~\cite{boyer2013underwater}, and docking~\cite{boyer2015underwater}. We refer the interested reader to these articles and the references therein for a more complete review of this area. Let us mention, however, that these works call for more quantitative assessments of the electric sense: how precisely an object disturbs the electric field, and how to recover its location and shape from the measurements. Mathematically speaking, this is called an \emph{inverse problem}, as opposed to a \emph{forward problem}. The latter would be to compute the electric field surrounding the fish, knowing everything about the object (position, shape, material). On the contrary, the inverse problem here is to recover as much information as possible about the object from the knowledge of the electric field at the surface of the fish's skin. Given the low frequencies of emission (see Section~\ref{sub:quasi_static_approximation}), this problem lies in the domain of Electrical Impedance Tomography (EIT). EIT is a non-invasive imaging technique in which an image of the conductivity or permittivity of a medium is inferred from surface electrode measurements. It is used in particular for medical imaging, non-destructive testing, and geophysical probing (see reviews in~\cite{borcea02,cheney1999electrical}). In the mathematical literature it is also known as Calder\'on's problem, from Calder\'on's pioneering contribution~\cite{calderon80}. This type of problem is \emph{ill-posed}, in the sense that existence, uniqueness, or continuity of the solution is not guaranteed~\cite{uhlmann09}. The resolution of the inverse problem is often formulated as a minimization problem, which consists in minimizing the error between the measured data and the synthetic data obtained by solving numerically the forward problem with a candidate object.
It requires careful discretization and regularization, and sometimes numerically intensive calculations; moreover, it is known to be sensitive to noise and to have poor spatial resolution~\cite{borcea02,brown03}. Since 2010, our team has been working on the mathematical modelling of active electrolocation. Indeed, hearing that electric fish are able to solve --~in some way~-- Calder\'on's problem piqued our curiosity. In other words, recovering the object from what the fish can feel is a very difficult problem, which makes it all the more fascinating that the fish seem to perform better than the most recent medical EIT devices, which typically consist of only a few tens of electrodes. Hence, we wanted to have equations governing the electric field surrounding the fish (\emph{i.e.} a model of the forward problem), so that we could imagine original solutions of the inverse problem. Our inspiration largely drew on the aforementioned works, and relevant references will be given throughout the text. The aim of this article is to summarize our works on target location estimation~\cite{ammari2013modeling} and shape identification~\cite{ammari2014shape}, as well as to make connections between our theoretical studies and what they are intended to model. We will show that it is possible to extract some information about the object in a quite robust and straightforward manner, even with fewer than one hundred electrodes. We wanted to make accessible some mathematical concepts that could be useful to researchers in biology and robotics, in order to engage further discussion and collaboration. The outline of this paper is as follows. In Section~\ref{sec:forward_problem}, we derive the equations governing the electric field emitted by the fish and show numerical simulations. In Section~\ref{sec:inverse_problem}, we explain the localization (Section~\ref{sub:localization}) and shape recognition (Section~\ref{sub:shape_recognition}) algorithms, which both rely on a formula called the dipolar approximation (Section~\ref{sub:dipolar_approximation}). \section{Forward Problem} \label{sec:forward_problem} Let us consider the model depicted in Figure~\ref{fig:model}. The body of the fish is modelled as a domain $\Omega \subset \mathbb{R}^d$, where $d \in \{2,3\}$. For the sake of clarity, we plot the results for $d=2$ in this paper. This mathematical model is of course extremely simplified compared to the complexity of a fish revealed by biology, but our objective is to extract the few ingredients that are sufficient to explain the fish electric sense, as was done for example in the numerical simulations of Hoshimiya \emph{et al.}~\cite{hoshimiya1980theapteronotus}. \begin{figure}[!ht] \centering \includegraphics[width=10cm]{img/model.png} \caption{Problem considered here: knowing the electric current $(\itbf{E}-\itbf{E}_0) \cdot {\boldsymbol{\nu}}$ at the surface of the skin $\partial \Omega$, how can we determine the target $D$?\label{fig:model}} \end{figure} The goal is to recover the object $D$, another subset of $\mathbb{R}^d$, away from $\Omega$. The localization is explained in Section~\ref{sub:localization}, whereas the shape identification is shown in Section~\ref{sub:shape_recognition}.
In this section, we focus on the so-called \emph{forward problem}: what are the equations governing the transdermal electric current $(\itbf{E}-\itbf{E}_0) \cdot {\boldsymbol{\nu}}$ on $\partial \Omega$ (Section~\ref{sub:quasi_static_approximation} and Section~\ref{sub:boundary_conditions}), and how can we compute it numerically (Section~\ref{sub:numerical_simulations})? The results presented here can be found with more mathematical details in~\cite{ammari2013modeling}. \subsection{Quasi-Static Approximation} \label{sub:quasi_static_approximation} In this subsection we show that, given the low frequency of the electric field, it can be derived from a complex-valued electric potential. From this point, let us specify that we will work in the frequency domain, so that time derivatives are simplified by Fourier transform. This has an impact on what we are modelling exactly, \emph{i.e.} from now on we can only discuss wave-type species such as \emph{Apteronotus albifrons}. For example, after discretization, we would have a discrete set of frequencies (the fundamental and its harmonics); see for example~\cite[fig. 12.3]{evans2006physiology} for such a transform in \emph{Eigenmannia virescens}. These frequencies are known to remain centred on a specific frequency unless the dominance status of the fish changes, or the temperature or pH of the water varies~\cite{dunlap1998diversity}. Having this in mind, let us start with the Maxwell system: \begin{align} \nabla \cdot \itbf{E} &= \frac{\rho}{\varepsilon}, \label{eq:Maxwell1} \\ \nabla \cdot \itbf{B} &= 0, \label{eq:Maxwell2} \\ \nabla \times \itbf{E} &= - i \omega \itbf{B}, \label{eq:Maxwell3} \\ \nabla \times \itbf{B} &= \mu (\itbf{j}_i + \itbf{j}_s + i\omega \varepsilon \itbf{E}), \label{eq:Maxwell4} \end{align} where $\varepsilon$ (resp. $\mu$) is the electric permittivity (resp. the magnetic susceptibility) of the medium and $\omega$ is the frequency. The sources are $\rho$ (density of electric charges) and $\itbf{j}_s$ (density of electric current). Whereas the former is zero, the latter needs to be considered carefully. Indeed, the current density comes from the electric organ of the fish. It is usually a long filament at the posterior part of the body~\cite{moller1995electric}. In any case, it can be modelled as a distribution contained in the body: $\textrm{supp} (\itbf{j}_s) \subset \Omega$. Finally, the electrical current $\itbf{j}_i$ in (\ref{eq:Maxwell4}) is the one induced by Ohm's law, \begin{equation} \itbf{j}_i = \sigma \itbf{E}, \end{equation} where $\sigma$ is the conductivity of the medium. The electro-quasistatic~(EQS) approximation states that, if the wavelength $\lambda$ is large compared to the typical length of the problem $L$, then $\itbf{E}$ can be considered as irrotational, \emph{i.e.}~the right-hand side of (\ref{eq:Maxwell3}) is neglected~\cite{klinkenbusch2011domains}. In other words, if $\lambda \gg L$, then $\nabla \times \itbf{E} \approx 0$. In nature, emission frequencies are always below $10\,\textrm{kHz}$~\cite{nelson2006sensory}, which means that $\lambda$ is always larger than $10\,\textrm{km}$; this is much larger than $L$ if we consider it as the typical size of the fish; indeed, the electrolocation range is known to be one body-length at maximum~\cite{moller1995electric}. Hence, the EQS approximation is very well suited to our case, and we can write $\nabla \times \itbf{E} = 0$ instead of (\ref{eq:Maxwell3}).
Then, it follows that there exists a frequency-dependent, complex-valued potential $u$ such that \begin{equation} \itbf{E} = \nabla u. \end{equation} Taking the divergence of~(\ref{eq:Maxwell4}), we finally obtain \begin{equation} \nabla \cdot \big( k(\omega) \nabla u \big) = f, \label{eq:conductivity} \end{equation} where we have defined $f = -\nabla \cdot \itbf{j}_s$ as the source coming from the electric organ, and $k(\omega) = \sigma + i \varepsilon \omega$ as the complex-valued conductivity; this complex-valued conductivity is the same idea as was used by Rasnow in~\cite{rasnow1996effects}. To conclude, equation~(\ref{eq:conductivity}) is the one governing the transdermal electric current. If we denote by $U$ the potential associated with the background electric field ($\itbf{E}_0 = \nabla U$), the transdermal electric current stated in Figure~\ref{fig:model} is translated into \begin{equation} (\itbf{E}-\itbf{E}_0) \cdot {\boldsymbol{\nu}} = \ddn{u} - \ddn{U}. \end{equation} \subsection{Boundary Conditions} \label{sub:boundary_conditions} Now that we have the partial differential equation governing the electric field, given in~(\ref{eq:conductivity}), let us focus on the boundary conditions. They strongly depend on the distribution of the conductivity, $k(\omega)$, which can be modelled as piecewise constant: $k_w$ in the water, $k_D$ in the object, $k_b$ in the body of the fish, and $k_s$ in the skin. Note that since the water, the body, and the skin are not dielectric materials, we have \begin{equation*} \Im(k_w) = \Im(k_b) = \Im(k_s) = 0. \end{equation*} Hence, two boundary conditions appear to matter: at the surface of the object $\partial D$ and over the skin $\partial \Omega$. \\ 1) The boundary conditions at the surface of the object can be addressed easily. Indeed, the piecewise constant conductivity imposes the following jump relations for $\itbf{x} \in \partial D$~\cite{ammari2004reconstruction} \begin{align} u(\itbf{x}^-) &= u(\itbf{x}^+), \label{eq:BC_D_1} \\ k_w \ddn{u}(\itbf{x}^+) &= k_D(\omega)\ddn{u}(\itbf{x}^-). \label{eq:BC_D_2} \end{align} The notation $\itbf{x}^{\pm}$ means the outer/inner limit on the boundary $\partial D$. More precisely, for a function $w$ defined on $\mathbb{R}^d$, one has \begin{equation} w(\itbf{x}^{\pm}) = \lim_{h \rightarrow 0} w(\itbf{x} \pm h {\boldsymbol{\nu}}), \; \; \itbf{x} \in \partial D, \end{equation} where ${\boldsymbol{\nu}}$ is the outward normal unit vector of $\partial D$.\\ 2) The boundary conditions over the skin are a bit more complicated (see Figure~\ref{fig:skin}). \begin{figure}[!ht] \centering \includegraphics[width=10cm]{img/skin.png} \caption{Boundary conditions over the skin.} \label{fig:skin} \end{figure} This is due to the fact that, compared to the water, which has a conductivity of the order of~$0.01 \, \textrm{S} \cdot \textrm{m}^{-1}$~\cite{maciver2001prey}, the skin is very resistive ($10^{-4} \, \textrm{S} \cdot \textrm{m}^{-1}$~\cite{budelli2000electric}) and the body is very conductive ($1 \, \textrm{S} \cdot \textrm{m}^{-1}$)~\cite{scheich1973coding}. In other words, one has \begin{equation} k_s \ll k_w \ll k_b. \end{equation} Furthermore, the skin is very thin: if we denote its thickness by $\delta$, we have~\cite{zakon1986electroreceptive} $$ \delta \approx 100\,\mu\textrm{m} \ll L,$$ where $L$ was defined as the body length in Section~\ref{sub:quasi_static_approximation}.
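A quick order-of-magnitude computation supports the claim that water, skin, and body are essentially resistive at EOD frequencies. The snippet below is our own estimate (the relative permittivity $\varepsilon_r \approx 80$ of water is an assumed textbook value, not a quantity from this paper).
\begin{verbatim}
import numpy as np

sigma_w = 0.01                      # S/m, water conductivity quoted in the text
eps_w = 80*8.854e-12                # F/m, permittivity of water (assumed)
for f in (1e3, 1e4):                # EOD frequency range, Hz
    omega = 2*np.pi*f
    print(f, omega*eps_w/sigma_w)   # Im(k)/Re(k) ~ 4e-4 .. 4e-3
\end{verbatim}
The ratio $\omega\varepsilon/\sigma$ stays well below one percent, so $\Im(k_w)\approx 0$ is indeed an excellent approximation; only the object, if dielectric, contributes a significant imaginary part to $k(\omega)$.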
In~\cite{ammari2013modeling} we have shown in the case $d=2$ that, when $\delta/L \ll 1$ and $k_s/k_w \ll 1$, but $\delta k_w / (L k_s)$ is of order one (or smaller), we have the following effective relation for $\itbf{x} \in \partial \Omega$: \begin{equation} u(\itbf{x}^+) - u(\itbf{x}^-) = \xi \ddn{u}(\itbf{x}^+), \label{eq:robin_BC} \end{equation} where $\xi = \delta k_w/k_s$ is called the \emph{effective thickness} in Assad's work~\cite{assad1997electric}. Indeed, equation~(\ref{eq:robin_BC}) is exactly the same as the one used in his model. On the other side, the limit $k_b / k_w \gg 1$ gives \begin{equation} \ddn{u}(\itbf{x}^-) = 0. \label{eq:neumann_BC} \end{equation} To get a well-posed problem, we should add the far-field condition $u(\itbf{x}) = O(|\itbf{x}|^{1-d})$ as $|\itbf{x}|\to \infty$ if the problem is formulated in an open medium, or any prescribed condition corresponding to the experimental configuration. \subsection{Numerical Simulations} \label{sub:numerical_simulations} Taken altogether, we have to solve a system composed of the partial differential equation (\ref{eq:conductivity}) with boundary conditions (\ref{eq:BC_D_1}), (\ref{eq:BC_D_2}), (\ref{eq:robin_BC}), and (\ref{eq:neumann_BC}). Hence, $u$ is the solution of the following system \begin{align} \nabla \cdot \big( k(\omega) \nabla u \big) &= f, \label{eq:u_first} \\ u(\itbf{x}^-) &= u(\itbf{x}^+), & \itbf{x} \in \partial D \\ k_w \ddn{u}(\itbf{x}^+) &= k_D(\omega)\ddn{u}(\itbf{x}^-), & \itbf{x} \in \partial D \\ u(\itbf{x}^+) - u(\itbf{x}^-) &= \xi \ddn{u}(\itbf{x}^+), & \itbf{x} \in \partial \Omega\\ \ddn{u}(\itbf{x}^-) &= 0, & \itbf{x} \in \partial \Omega \label{eq:u_last} \end{align} and the background solution $U$ is given by \begin{align} \nabla \cdot \big( \tilde{k}(\omega) \nabla U \big) &= f, \label{eq:U_first}\\ U(\itbf{x}^+) - U(\itbf{x}^-) &= \xi \ddn{U}(\itbf{x}^+), & \itbf{x} \in \partial \Omega\\ \ddn{U}(\itbf{x}^-) &= 0, & \itbf{x} \in \partial \Omega \label{eq:U_last} \end{align} where $\tilde{k}(\omega)$ is equal to $k_b$ in the body of the fish $\Omega$, and $k_w$ outside, \emph{i.e.} in the water. Using layer potential representations for the solutions of the systems (\ref{eq:u_first}-\ref{eq:u_last}) and (\ref{eq:U_first}-\ref{eq:U_last}), we have developed a MATLAB script for their numerical approximation\footnote{This script is now part of the package SIES (\emph{Shape Identification in Electro-Sensing}), which can be found at \url{https://github.com/ens2013/SIES/}}, using boundary element methods as in Assad's thesis~\cite{assad1997electric}. In Figure~\ref{fig:numerics}, we have plotted the solutions $u$ and $U$ when $d=2$, the body $\Omega$ is an ellipse, the object $D$ is a disk, and the source $f$ is a dipole. \begin{figure}[!ht] \centering \includegraphics[width=10cm]{img/numeric_background.eps} \\ \includegraphics[width=10cm]{img/numeric_object.eps} \caption{Numerical simulations of the background electric potential $U$ (system (\ref{eq:U_first})--(\ref{eq:U_last}), upper image) and of the perturbed electric potential $u$ (system (\ref{eq:u_first})--(\ref{eq:u_last}), lower image).} \label{fig:numerics} \end{figure} \section{Inverse Problem} \label{sec:inverse_problem} In the previous section, we have derived the equations governing the transdermal electric current at the surface of the skin, and we have shown results of numerical simulations.
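The effective condition (\ref{eq:robin_BC}) has a simple one-dimensional interpretation: the Ohmic potential drop across the thin resistive skin equals $\xi$ times the normal derivative on the water side. The toy calculation below is ours (the current density value is arbitrary) and makes this explicit with the conductivities and thickness quoted above.
\begin{verbatim}
# 1D toy check of the skin condition u(+) - u(-) = xi * du/dn(+)
k_w, k_s = 0.01, 1e-4        # S/m, water and skin conductivities
delta = 100e-6               # m, skin thickness
J = 1e-3                     # A/m^2, arbitrary current density (our choice)

drop_exact = J*delta/k_s     # Ohmic drop across the thin skin
dudn_water = J/k_w           # normal derivative on the water side
xi = delta*k_w/k_s           # effective thickness: here 0.01 m = 1 cm
print(drop_exact, xi*dudn_water)   # identical: 0.001 V each
\end{verbatim}
Note that with the quoted values $\xi = 1\,\textrm{cm}$, which is macroscopic even though the skin itself is only $100\,\mu\textrm{m}$ thick; this is why the skin cannot simply be neglected.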
In this section, we show the two main results of our studies: how to localize the object $D$ based on the knowledge of $\ddn{u}-\ddn{U}$ (Section~\ref{sub:localization}), and how to recognize its shape when the fish has already memorized several objects (Section~\ref{sub:shape_recognition}). These methods are based on a dipolar approximation of the solution $u$, which is explained in Section~\ref{sub:dipolar_approximation}. \subsection{Dipolar Approximation} \label{sub:dipolar_approximation} The dipolar approximation states that if $D$ is small enough, the difference $u-U$ can be expressed as the electric potential coming from an electric dipole centered in $D$. In this subsection we only present this result numerically, in order to make things more intuitive. For a complete proof of the formula in this context, see~\cite{ammari2013modeling}; for a more detailed review of the dipolar approximation in general, see the book~\cite{ammari2007polarization}. Let us get back to the example in Section~\ref{sub:numerical_simulations}. In Figure~\ref{fig:u-U}, we have plotted the difference $u-U$. Qualitatively, we can see that this electric potential looks like the one emitted by an electric dipole coming from $D$. \begin{figure}[!ht] \centering \includegraphics[width=10cm]{img/poisson_difference.eps} \caption{The difference between $u$ and $U$, taken from Figure~\ref{fig:numerics}.} \label{fig:u-U} \end{figure} Quantitatively, we have shown in~\cite{ammari2013modeling} that, if $D$ is small enough and sufficiently far away from $\Omega$, then the fish feels a distortion that is similar to the one produced by a dipole $\itbf{p}_D$. More precisely, we have \begin{equation} \ddn{u}(\itbf{x}) - \ddn{U}(\itbf{x}) \approx \itbf{p}_D \cdot \nabla G(\itbf{x}-\itbf{z}), \quad \itbf{x} \in \partial \Omega, \label{eq:dipolar_formula} \end{equation} where $\itbf{z}$ is the center of mass of $D$ and $G$ is the Green function for the Laplacian in $\mathbb{R}^d$ (for instance $G(\itbf{x}) = \log (|\itbf{x}|)/(2\pi)$ in dimension $d=2$ or $G(\itbf{x}) = -1/(4\pi |\itbf{x}|)$ in dimension $d=3$). The vector $\itbf{p}_D$ is called the equivalent dipole, and it is given by \begin{equation} \itbf{p}_D \approx {\bf M}(\kDomega,D) \nabla U(\itbf{z}), \label{eq:equivalent_dipole} \end{equation} where ${\bf M}(\kDomega,D)$ is a $d \times d$ complex-valued matrix that depends only on the shape $D$ of the object and its complex conductivity $k_D(\omega)$, called the \emph{first-order polarization tensor}~\cite{ammari2007polarization}. This matrix maps the illuminating electric field $\nabla U(\itbf{z})$ to the equivalent dipole $\itbf{p}_D$. When the conductivity $k_D(\omega)$ is real, in other words when $\Im(k_D(\omega))=0$ and thus $\omega$ does not play any role anymore, one can find an ellipse or ellipsoid $\mathcal{E}$ such that \begin{equation} {\bf M}(\kDomega,D) = {\bf M}(k,\mathcal{E}). \end{equation} This means that the equivalent dipole would always be the same as the equivalent dipole of this ellipse or ellipsoid $\mathcal{E}$. The latter is then called the \emph{equivalent ellipse}~\cite{ammari2007polarization}. Note, however, that when $k_D(\omega)$ is frequency-dependent, the information that can be extracted is much richer. For single-frequency data, it is only possible to identify a few characteristics of the object. When multi-frequency data are available, it is possible to get a lot of information about the object from the frequency dependence of the observed first-order polarization tensors.
The need for multi-frequency data that we exhibit is in agreement with the complex emission patterns (pulse and wave) of fish. \subsection{Localization} \label{sub:localization} In this subsection, we show that the multi-frequency aspect of the measurements is sufficient to localize an object with precision. For the sake of simplicity we focus our attention on the example that gave Figures~\ref{fig:numerics}-\ref{fig:u-U} in dimension $d=2$. Indeed, the particular case of $D$ being a disk is easier since we have~\cite{ammari2007polarization} \begin{equation} {\bf M}(\kDomega,D) = \abs{D} \frac{k_D(\omega)-1}{2k_D(\omega)+1} {\bf I}_2, \end{equation} where $\abs{D}$ denotes the volume of $D$ and ${\bf I}_2$ is the identity matrix.\footnote{In $\mathbb{R}^3$, we have a similar formula, \emph{i.e.}~${\bf M}(\kDomega,D)$ is proportional to the identity~\cite[p. 83]{ammari2007polarization}.} Hence, equation~(\ref{eq:dipolar_formula}) becomes \begin{equation} \ddn{u}(\itbf{x},\omega) - \ddn{U}(\itbf{x},\omega) = \abs{D} \frac{k_D(\omega)-1}{2k_D(\omega)+1} \nabla U(\itbf{z}) \cdot \nabla G(\itbf{x}-\itbf{z}), \label{eq:dipolar_formula_fish} \end{equation} which is equivalent to the Lissmann--Machin~\cite{lissmann1958mechanism} and Rasnow~\cite{rasnow1996effects} formulas, showing that~(\ref{eq:dipolar_formula}) is a generalization of the latter. Hence, from~(\ref{eq:dipolar_formula_fish}) we can easily extract $\nabla U(\itbf{z}) \cdot \nabla G(\itbf{x}-\itbf{z})$. The reason why we can recover the location $\itbf{z}$ of $D$ is that the function $\itbf{z} \mapsto \nabla U(\itbf{z}) \cdot \nabla G(\itbf{x}-\itbf{z})$ is one-to-one. Therefore, it is possible to build an \emph{imaging functional} from the measured data, \emph{i.e.}~a function $\itbf{z}_s \mapsto \mathcal{I}(\itbf{z}_s)$ that has a strong peak at $\itbf{z}_s = \itbf{z}$ (see Figure~\ref{fig:SF-MUSIC}); a toy implementation of such a functional is sketched below. \begin{figure}[!ht] \centering \includegraphics[width=10cm]{img/MUSIC_global.eps} \caption{From the example shown in Figures~\ref{fig:numerics}-\ref{fig:u-U}, plot of the imaging functional presented in~\cite{ammari2013modeling}.} \label{fig:SF-MUSIC} \end{figure} \subsection{Shape Recognition} \label{sub:shape_recognition} Once the object $D$ is located (\emph{i.e.}~we know $\itbf{z}$), we would like to extract ${\bf M}(\kDomega,D)$ from the measurements. Indeed, we know that this polarization tensor contains all the necessary information about the shape of $D$~\cite{ammari2016shape}. More precisely, the function $\omega \mapsto {\bf M}(\kDomega,D)$ uniquely determines $D$ in some class of domains. However, it is not straightforward to determine $D$ from ${\bf M}(\kDomega,D)$. In other words, the shape recognition problem has been reduced to a simpler, yet still difficult, inverse problem: determine the shape of the object from the observed first-order polarization tensors. To solve this inverse problem we took inspiration from the behavioral studies (for example~\cite{von1998electric}) described in the introduction. Indeed, instead of trying to \emph{compute} a shape from the measurements, we wanted to use classification and machine learning techniques. For example, let us suppose that the fish has already encountered several shapes $D_l$, $l=1,\ldots, L$ (such as the shapes plotted in Figure~\ref{fig:dico}), and that it knows all the polarization tensors ${\bf M}(k_{D_l}(\omega_j),D_l)$, where $\omega_j$ are the different frequencies emitted by the electric organ.
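To make the imaging functional of Section~\ref{sub:localization} concrete, here is a minimal synthetic sketch of our own (it is not the SIES code): measurements on a circle of receptors are generated directly from the dipole formula (\ref{eq:dipolar_formula}), and the functional correlates the data with the test pattern $\nabla U(\itbf{z}_s)\cdot\nabla G(\itbf{x}-\itbf{z}_s)$ on a search grid. The uniform illuminating field $\nabla U=(1,0)$ and all numerical values are assumptions made for the illustration.
\begin{verbatim}
import numpy as np

t = np.linspace(0, 2*np.pi, 64, endpoint=False)   # 64 receptors on a circle
X = np.stack([2*np.cos(t), 2*np.sin(t)], axis=1)
gradU = np.array([1.0, 0.0])                      # uniform field (assumption)
z_true = np.array([0.9, -0.4])                    # true object location

def pattern(z):
    # x -> gradU . grad G(x - z), with G = log|x| / (2 pi)
    d = X - z
    return (d @ gradU)/(2*np.pi*np.sum(d**2, axis=1))

rng = np.random.default_rng(0)
data = 0.05*pattern(z_true) + 1e-4*rng.standard_normal(64)

grid = np.linspace(-1.5, 1.5, 61)                 # search grid
best, I_best = None, -np.inf
for zx in grid:
    for zy in grid:
        p = pattern(np.array([zx, zy]))
        I = abs(data @ p)/np.linalg.norm(p)       # normalized correlation
        if I > I_best:
            best, I_best = (zx, zy), I
print(best)                                       # close to (0.9, -0.4)
\end{verbatim}
The normalization by $\|p\|$ is what prevents spurious peaks near the receptors, where the test pattern itself blows up.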
\begin{figure}[!ht] \centering \includegraphics[width=10cm]{img/dico.pdf} \caption{A dictionary of shapes, used to identify the object $D$. The distance units here are arbitrary. Note that these shapes are often encountered in experimental studies~\cite{von2007distance} showing that the fish are able to recognize shapes.} \label{fig:dico} \end{figure} It is not possible to extract ${\bf M}(\kDomega,D)$ from one single measurement of $\ddn{u} - \ddn{U}$; instead, we use the fact that the fish actively swim around their prey when hunting, leading to particular swimming patterns called \emph{probing motor acts}~\cite{toerring1979motor}. Extracting ${\bf M}(k_D(\omega_j),D)$ from (\ref{eq:dipolar_formula}) measured for several positions of the fish then amounts to solving a simple linear system~\cite{ammari2014shape}. Hence, $D$ can be identified as \begin{equation} D = \arg \min_{D_l} \sum_j \norm{{\bf M}(k_D(\omega_j),D) - {\bf M}(k_{D_l}(\omega_j),D_l)}{}. \end{equation} A toy implementation of this matching step is sketched below. In Figure~\ref{fig:pnas}, one can see the performance of this technique in terms of robustness against measurement noise. Note that the number $64$ of electrodes is not very large in the numerical simulations, but $10$ frequencies and $20$ different positions of the fish around the object are exploited: the different positions allow for good extraction of the polarization tensors, and the different frequencies allow for good classification of the object given the estimated polarization tensors. That is how robustness is achieved, and this remark may be of interest for the design of EIT devices. \begin{figure}[!ht] \centering \includegraphics[width=12cm]{img/ellipse_complete_stability.pdf} \caption{Performance of classification with respect to measurement noise. Each point represents the probability of correct detection, inferred from $10^5$ realizations of the same experiment, consisting of the fish swimming around the object on a circular trajectory. The first-order polarization tensors ${\bf M}(\kDomega,D)$ were extracted for $10$ frequencies (from $1\,\textrm{kHz}$ to $10\,\textrm{kHz}$). Then these tensors were compared to the dictionary presented in Figure~\ref{fig:dico}. On the $x$-axis, the strength of the measurement noise is indicated, as a percentage of $\norm{\ddn{u}-\ddn{U}}{}$. Measurement noise was modeled as a white Gaussian random process for each point, with standard deviation equal to what we call the \emph{strength}. Originally published in~\cite{ammari2014shape}.} \label{fig:pnas} \end{figure} \section{Conclusion and Perspectives} \label{sec:conclusion_and_perspectives} In this short review, we aimed at summarizing our main results on the mathematical modelling of active electrolocation. After deriving the partial differential equations and their boundary conditions, we have shown numerical simulations of the forward problem. Then, based on the dipolar approximation, we have detailed our localization algorithm and our shape recognition process. Both are made possible thanks to multi-frequency measurements. Moreover, shape recognition additionally requires movement of the fish. Note that our model is an extremely simplified version of what actually happens in real life; our aim was to extract the relevant features of the problem in order to generalize to other topics, such as medical imaging or robotics. More realistic simulations and algorithms for target localization in active electro-sensing can be found for example in~\cite{babineau2006modeling,babineau2007spatial,lewis2001neuronal}.
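As a complement to Section~\ref{sub:shape_recognition}, the dictionary-matching step announced above can be prototyped in a few lines. In the sketch (entirely ours) we use first-order polarization tensors of ellipses, for which a closed form is classical~\cite{ammari2007polarization}: for an ellipse with semi-axes $A$, $B$ and contrast $k$, ${\bf M}$ is diagonal in the principal axes with entries $(k-1)|D|(A+B)/(A+kB)$ and $(k-1)|D|(A+B)/(B+kA)$, up to normalization conventions. The frequencies, shapes, dispersion model, and noise level are our own choices, and the rotation of the object is ignored for brevity.
\begin{verbatim}
import numpy as np

def ellipse_PT(A, B, k):
    # first-order polarization tensor of an ellipse, principal axes
    area = np.pi*A*B
    return (k - 1)*area*np.diag([(A + B)/(A + k*B), (A + B)/(B + k*A)])

omegas = 2*np.pi*np.array([1e3, 2e3, 5e3, 1e4])   # probing frequencies, Hz
k_of = lambda w: (0.05 + 1j*w*1e-7)/0.01          # (sigma_D + i w eps_D)/sigma_w

shapes = {'disk': (0.1, 0.1), 'ellipse 2:1': (0.14, 0.07),
          'ellipse 4:1': (0.2, 0.05)}
dico = {s: [ellipse_PT(A, B, k_of(w)) for w in omegas]
        for s, (A, B) in shapes.items()}

rng = np.random.default_rng(0)
truth = 'ellipse 2:1'
measured = [M + 0.02*np.linalg.norm(M)*rng.standard_normal((2, 2))
            for M in dico[truth]]                 # noisy extracted tensors

scores = {s: sum(np.linalg.norm(Mm - Md) for Mm, Md in zip(measured, Ms))
          for s, Ms in dico.items()}
print(min(scores, key=scores.get))                # -> 'ellipse 2:1'
\end{verbatim}
Summing the misfit over several frequencies is what separates shapes whose single-frequency tensors are close: with only one real frequency, any shape is indistinguishable from its equivalent ellipse, exactly as discussed in Section~\ref{sub:dipolar_approximation}.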
Shape identification remains an open challenge, although some algorithms are beginning to emerge in the robotics community~\cite{bai2015finding,lanneau2016object}. Since the publication of our two algorithms, several directions of research have been taken. First, we have extended the shape recognition algorithm to the time-domain formulation of the problem~\cite{ammari2016time}, and to the echolocation problem (as observed in dolphins and bats, for example)~\cite{ammari2014shapeecho}. Then, we have used wavelet methods in order to improve the accuracy of recognition~\cite{ammari2016wavelet}. Finally, we have raised the question of tracking a moving object~\cite{ammari2013tracking}. \subsection*{Acknowledgement} The authors would like to thank the reviewers for their very constructive remarks. \bibliographystyle{plain}
\section{Introduction} A major obstacle to the construction of superstring field theories has been formulating an action for the Ramond sector. The last few months have seen remarkable progress on this problem. In~\cite{complete} a complete action for open superstring field theory was formulated by restricting the off-shell state space of the Ramond field so as to reproduce the correct integration over the fermionic modulus in the Ramond propagator.\footnote{The construction of~\cite{complete} is based on a very old idea for formulating the free action for the Ramond string field~\cite{West1,West2,Terao,Yamaron,Kugo,Belopolsky,Kunitomo}, which with the proper understanding is equivalent to the formulation of Witten \cite{Witten}. However, the construction of \cite{complete} gives the first consistent nonlinear extension of this free action. } A closely related approach has recently been developed by Sen \cite{1PIR,SenBV}, which has a somewhat simpler worldsheet realization at the cost of introducing spurious free fields. We are now in a position to complete the construction of all classical superstring field theories. The construction of \cite{complete} was realized by extending the Neveu-Schwarz (NS) open superstring field theory of Berkovits \cite{Berkovits1,Berkovits2} to include the Ramond (R) sector. The Berkovits theory gives an elegant Wess-Zumino-Witten-like (WZW-like) action for the NS sector in the large Hilbert space~\cite{FMS} and is a suitable starting point for the study of tachyon condensation and classical solutions \cite{BSZ,supermarg,Oksupermarg,Okrealsupermarg,FKsuper,KOsuper,simplesupermarg,supervac}. However, the question of recent interest is how to construct other superstring field theories and how to quantize them. In this capacity the Berkovits formulation is not ideal, since it does not immediately generalize to type II closed superstrings,\footnote{Some attempts to provide a WZW-like formulation of closed type II superstring field theory are described in~\cite{Matsunaga1,Matsunaga2}. For heterotic string field theory a WZW-like formulation in the large Hilbert space is well established~\cite{heterotic}, and its extension to the Ramond sector would be interesting to consider \cite{KunHeterotic1,KunHeterotic2,KunHeterotic3}.} and, despite some attempts \cite{superBV,BerkBV,Torii1,Torii2,Torii3,superBV2}, it is not known how to properly define the gauge-fixed path integral. For this reason, in this paper we turn our attention to a different form of open string field theory which uses the small Hilbert space and realizes a cyclic $A_\infty$ structure. The construction of superstring field theories based on $A_\infty$ and $L_\infty$ algebras is attractive since all forms of superstring field theory can in principle be described in this language \cite{Muenster,WittenSS,ClosedSS,Ramond,SenBV}. In addition, the definition of the gauge-fixed path integral is straightforward thanks to the close relation between homotopy algebras, Batalin-Vilkovisky quantization, and the Feynman-graph decomposition of moduli spaces of Riemann surfaces (or their supergeometrical extension\footnote{The manner in which picture changing operators in the vertices implement integration over odd moduli has not yet been made fully explicit, though the computation of the four-point amplitude in \cite{INOT} has given some preliminary insight. 
However, it follows from the computation of the $S$-matrix \cite{Konopka} that the tree-level actions and equations of motion constructed so far correctly integrate over the supermoduli spaces of punctured disks and spheres.}) \cite{Zwiebach,SZ1,SZ2}. Our construction of open superstring field theory extends the NS open superstring field theory of \cite{WittenSS} to include the Ramond sector, and the interactions are built from Witten's open string star product dressed with picture changing insertions. Part of the work for constructing this theory was done in \cite{Ramond}, which gives classical equations of motion describing the interactions between the NS and R sectors. Our task is to modify the equations of motion so that they can be derived from an action. This requires, specifically, that the equations of motion realize a cyclic $A_\infty$ structure, where the notion of cyclicity is provided by the inner products defining the NS and R kinetic terms. Interestingly, the action we find for the Ramond sector turns out to be identical to that of \cite{complete} after the appropriate translation of NS degrees of freedom \cite{OkWB,WB,WBlarge}. This paper is organized as follows. In section \ref{sec:bckd} we review the formulation of the Ramond sector kinetic term used in \cite{complete} and the NS and Ramond equations of motion described in \cite{Ramond}. In section~\ref{sec:small} we construct an action by requiring compatibility of the equations of motion with the bilinear form defining the Ramond sector kinetic term. First we describe the picture changing insertion which plays a central role in defining the vertices. Then we give an explicit discussion of the 2-string product, generalize to the higher string products, and provide a proof that the resulting $A_\infty$ structure is cyclic. We also describe how the construction can be translated into the formulation of the Ramond kinetic term used by Sen \cite{1PIR,SenBV}. In section \ref{sec:large} we relate our construction to the WZW-based formulation developed by Kunitomo and one of the authors \cite{complete}. We end with some concluding remarks. \bigskip \noindent{\bf Note Added:} While this paper was in preparation, we were informed of independent work by Konopka and Sachs addressing the same problem. Their work should appear concurrently \cite{SK}. See also \cite{Matsunaga3} for related discussion. \section{Background} \label{sec:bckd} In this section we review the Ramond kinetic term \cite{complete} and equations of motion \cite{Ramond}. To describe compositions of string products and their interrelations in an efficient manner, we will make extensive use of the coalgebra formalism. The coalgebra formalism expresses string products in terms of {\it coderivations} or {\it cohomomorphisms} acting on the tensor algebra generated from the open string state space $\widetilde{\mathcal{H}}$: \begin{equation}T\widetilde{\mathcal{H}} \equiv \widetilde{\mathcal{H}}^{\otimes 0}\ \oplus\ \widetilde{\mathcal{H}}\ \oplus\ \widetilde{\mathcal{H}}^{\otimes 2}\ \oplus\ \widetilde{\mathcal{H}}^{\otimes 3}\ \oplus\ ...\ \ .\end{equation} Coderivations will be denoted in boldface, and cohomomorphisms with a ``hat" and in boldface. String fields can be described by {\it group-like elements} of the tensor algebra. The coalgebra formalism works efficiently if we use a shifted grading of the open string field called {\it degree}. 
The degree of a string field $A$ is defined to be its Grassmann parity $\epsilon(A)$ plus one: \begin{equation}\mathrm{deg}(A) = \epsilon(A) + 1\ \ \ \mathrm{mod}\ \mathbb{Z}_2.\end{equation} For a detailed description of all the relevant definitions, formulas, and the notational conventions, see \cite{WB}. \subsection{Ramond Kinetic Term} \label{subsec:kinetic} Let us start by summarizing what is needed to have a consistent open string field theory kinetic term from the perspective of an action realizing a cyclic $A_\infty$ structure. We need three things: \begin{description} \item{(A)} A state space $\mathcal{H}$, perhaps a subspace of the full CFT state space, which is closed under the action of the BRST operator $Q$. The BRST cohomology at ghost number 1 computed in $\mathcal{H}$ reproduces the appropriate spectrum of open string states. \item{(B)} A symplectic form $\omega$ on the state space $\mathcal{H}$. This is a linear map from two copies of the state space into complex numbers, \begin{equation}\omega:\mathcal{H}\otimes\mathcal{H}\to\mathbb{C},\end{equation} which is graded antisymmetric, \begin{equation}\omega(A,B) = -(-1)^{\mathrm{deg}(A)\mathrm{deg}(B)}\omega(B,A),\end{equation} and nondegenerate. We sometimes write the symplectic form as $\langle \omega|$, and write $\omega(A,B)\equiv \langle\omega|A\otimes B$. We assume that $\omega$ is nonzero only when acting on states whose ghost number adds up to $3$. \item{(C)} The BRST operator must be {\it cyclic} with respect to the symplectic form $\omega$: \begin{equation}\omega(QA,B) = -(-1)^{\mathrm{deg}(A)}\omega(A,QB).\end{equation} Equivalently \begin{equation}\langle\omega|(Q\otimes\mathbb{I}+\mathbb{I}\otimes Q) = 0,\end{equation} where $\mathbb{I}$ is the identity operator on the state space. \end{description} If these three criteria are met, a string field theory kinetic term can be written as \begin{equation}S= \frac{1}{2}\omega(\Psi,Q\Psi),\end{equation} where $\Psi$ is a degree even and ghost number 1 dynamical string field in $\mathcal{H}$. Variation of the action produces the expected equations of motion $Q\Psi = 0$, and the action has the linearized gauge invariance \begin{equation}\Psi' = \Psi +Q\Lambda,\end{equation} where $\Lambda \in \mathcal{H}$ is degree odd and carries ghost number 0. Let us see how this story applies to the NS and R sectors of the open superstring. We consider the RNS formulation of the open superstring, described by a $c=15$ matter boundary superconformal field theory tensored with the $c=-15$ ghost boundary superconformal field theory $b,c,\beta,\gamma$. The $\beta\gamma$ system may be bosonized to the $\xi,\eta,e^\phi$ system \cite{FMS}. We will write the eta zero mode $\eta_0$ as $\eta$. The state space of the open superstring is the direct sum of an NS component $\mathcal{H}_{\mathrm{NS}}$ and a Ramond component $\mathcal{H}_{\mathrm{R}}$: \begin{equation}\widetilde{\mathcal{H}}= \mathcal{H}_{\mathrm{NS}}\oplus\mathcal{H}_{\mathrm{R}}.\end{equation} We use $\widetilde{\mathcal{H}}$ to denote the combined state space. Formulating the NS kinetic term requires a subspace of $\mathcal{H}_\mathrm{NS}$ consisting of states at picture $-1$ and in the small Hilbert space. The BRST operator preserves this subspace, and has the correct cohomology at ghost number 1. 
The symplectic form can be defined by the small Hilbert space BPZ inner product (up to a sign from the shifted grading):\footnote{The elementary correlator in the small Hilbert space will be normalized as $\langle c\partial c\partial^2 ce^{-2\phi}(0)\rangle_S=-2\times Z^{\mathrm{matter}}$, where $Z^{\mathrm{matter}}$ is the disk partition function in the matter boundary conformal field theory. In the large Hilbert space the elementary correlator will be normalized as $\langle\xi c\partial c\partial^2 ce^{-2\phi}(0)\rangle_L=2\times Z^{\mathrm{matter}}$, with the opposite sign.} \begin{equation}\omega_S(A,B) \equiv (-1)^{\mathrm{deg}(A)}\langle A,B\rangle_S,\end{equation} where the subscript $S$ denotes the small Hilbert space. Furthermore, the BRST operator is cyclic with respect to $\omega_S$. Since conditions (A), (B) and (C) are met, we can write the NS kinetic term as \begin{equation}S = \frac{1}{2}\omega_S(\Psi_\mathrm{NS},Q\Psi_\mathrm{NS}),\end{equation} where the dynamical NS string field $\Psi_\mathrm{NS}\in\mathcal{H}_\mathrm{NS}$ is in the small Hilbert space ($\eta\Psi_\mathrm{NS}=0$), is degree even, and carries ghost number 1 and picture $-1$. Though it is not needed to formulate the NS kinetic term, it will be useful to consider the large Hilbert space symplectic form $\omega_L$ defined in terms of the large Hilbert space BPZ inner product by \begin{equation}\omega_L(A,B) \equiv (-1)^{\mathrm{deg}(A)}\langle A,B\rangle_L,\end{equation} where the subscript $L$ denotes the large Hilbert space. Now let us describe the Ramond kinetic term. The major technical problem in this respect is defining an appropriate symplectic form. For this purpose we introduce two picture changing operators: \begin{eqnarray} \mathscr{X} \!\!\!\!\!\!\!\!&& \equiv -\delta(\beta_0)G_0 + \delta'(\beta_0)b_0,\label{eq:scrX}\\ \mathscr{Y} \!\!\!\!\!\!\!\!&& \equiv -c_0\delta'(\gamma_0), \end{eqnarray} where $G_0$ is the zero mode of the supercurrent. The operator $\mathscr{X}$ is degree even and carries ghost number $0$ and picture $1$, and $\mathscr{Y}$ is degree even and carries ghost number $0$ and picture $-1$. Since these operators depend on $\beta\gamma$ zero modes, they only act on states in the Ramond sector. Moreover, it is clear that $\mathscr{X}$ should not act on states which are annihilated by $\beta_0$ and $\mathscr{Y}$ should not act on states which are annihilated by $\gamma_0$. For this reason we will always assume that $\mathscr{X}$ and $\mathscr{Y}$ act on states in the small Hilbert space at the following pictures: \begin{eqnarray} \mathscr{X}:\!\!\!\!\!\!\!\!&& \ \mathrm{small\ Hilbert\ space},\ \mathrm{picture}\ -\!3/2,\nonumber\\ \mathscr{Y}:\!\!\!\!\!\!\!\!&& \ \mathrm{small\ Hilbert\ space},\ \mathrm{picture}\ -\!1/2. \label{eq:XYrest} \end{eqnarray} In particular, all pictures besides picture $-3/2$ either contain states annihilated by $\beta_0$ or are BPZ conjugate to pictures containing states annihilated by $\beta_0$. Similarly, all pictures besides picture $-1/2$ either contain states annihilated by $\gamma_0$ or are BPZ conjugate to pictures containing states annihilated by $\gamma_0$. 
Assuming $\mathscr{X}$ and $\mathscr{Y}$ act on the appropriate picture as above, they satisfy \begin{equation}\mathscr{X}\mathscr{Y}\mathscr{X}= \mathscr{X},\ \ \ \ \mathscr{Y}\mathscr{X}\mathscr{Y} = \mathscr{Y},\ \ \ \ [Q,\mathscr{X}] = 0, \label{eq:preproj}\end{equation} and are BPZ even: \begin{equation} \langle \omega_S| \mathscr{X}\otimes\mathbb{I} = \langle \omega_S| \mathbb{I}\otimes \mathscr{X},\ \ \ \ \langle \omega_S| \mathscr{Y}\otimes\mathbb{I}= \langle \omega_S| \mathbb{I}\otimes \mathscr{Y}. \end{equation} Note that \eq{preproj} implies that the operator $\mathscr{X}\mathscr{Y}$ is a projector: \begin{equation}(\mathscr{X}\mathscr{Y})^2 = \mathscr{X}\mathscr{Y}.\end{equation} This projector selects a subspace \begin{equation}\mathcal{H}_{\mathrm{R}}^{\mathrm{restricted}}\subset \mathcal{H}_\mathrm{R}\end{equation} of Ramond states which satisfy \begin{equation}\mathscr{X}\mathscr{Y} A = A,\ \ \ \ A\in\mathcal{H}_{\mathrm{R}}^{\mathrm{restricted}}.\end{equation} We will call this the {\it restricted space}. To ensure that the action of $\mathscr{X}\mathscr{Y}$ is well defined, we will assume that the restricted space only contains states in the small Hilbert space and at picture $-1/2$. We claim that the restricted space allows for the definition of a Ramond kinetic term, and to see it, we check conditions (A), (B) and (C). First note that the restricted space is preserved by the action of the BRST operator: \begin{equation} \mathscr{X}\mathscr{Y}QA = \mathscr{X}\mathscr{Y}Q\mathscr{X}\mathscr{Y} A =\mathscr{X}\mathscr{Y}\mathscr{X}Q\mathscr{Y}A = \mathscr{X}Q\mathscr{Y}A = Q\mathscr{X}\mathscr{Y} A = QA,\ \ \ \ A\in \mathcal{H}_{\mathrm{R}}^{\mathrm{restricted}}. \end{equation} Moreover, the cohomology of $Q$ computed in $\mathcal{H}_{\mathrm{R}}^{\mathrm{restricted}}$ reproduces the correct physical spectrum~\cite{coh}. Therefore condition (A) is met. Next, we define a symplectic form on $\mathcal{H}_{\mathrm{R}}^{\mathrm{restricted}}$ by \begin{equation}\omega_S(\mathscr{Y} A,B),\ \ \ \ A,B\in\mathcal{H}_{\mathrm{R}}^{\mathrm{restricted}}.\end{equation} Graded antisymmetry follows from the fact that $\mathscr{Y}$ is BPZ even and the fact that $\omega_S$ is graded antisymmetric. Nondegeneracy follows from the fact that $\mathscr{Y}A = 0$ implies $A = 0$ upon operating with $\mathscr{X}$, and $\omega_S$ is nondegenerate on the subspace of Ramond states at pictures $-1/2$ and $-3/2$. Therefore condition (B) is met. Finally, we have \begin{eqnarray} \omega_S(\mathscr{Y}A,QB) \!\!\!\!\!\!\!\!&& = \omega_S(\mathscr{Y}A,Q\mathscr{X}\mathscr{Y}B) \nonumber\\ \!\!\!\!\!\!\!\!&& = \omega_S(\mathscr{Y}A,\mathscr{X}Q\mathscr{Y}B)\nonumber\\ \!\!\!\!\!\!\!\!&& =\omega_S(\mathscr{X}\mathscr{Y}A,Q\mathscr{Y}B) \nonumber\\ \!\!\!\!\!\!\!\!&& = \omega_S(A,Q\mathscr{Y}B)\nonumber\\ \!\!\!\!\!\!\!\!&& = - (-1)^{\mathrm{deg}(A)}\omega_S(\mathscr{Y} QA,B),\ \ \ \ \ \ A,B\in\mathcal{H}_{\mathrm{R}}^{\mathrm{restricted}}, \end{eqnarray} so condition (C) is met. Therefore, we can write a free action for the Ramond string field as \begin{equation} S= \frac{1}{2}\omega_S(\mathscr{Y}\Psi_\mathrm{R},Q\Psi_\mathrm{R}), \end{equation} where the dynamical Ramond string field $\Psi_\mathrm{R}$ is in the small Hilbert space ($\eta\Psi_\mathrm{R} = 0$), is degree even, carries ghost number 1 and picture $-1/2$, and satisfies $\mathscr{X}\mathscr{Y}\Psi_\mathrm{R} = \Psi_\mathrm{R}$. 
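As a quick consistency check of this kinetic term, note that the ghost and picture numbers saturate correctly: $\mathscr{Y}\Psi_\mathrm{R}$ carries ghost number $1$ and picture $-3/2$, while $Q\Psi_\mathrm{R}$ carries ghost number $2$ and picture $-1/2$, so the total ghost number is $3$ and the total picture is $-2$, consistent with the normalization of the elementary correlator in the small Hilbert space. Moreover, since conditions (A), (B) and (C) are satisfied, the free Ramond action inherits the linearized gauge invariance described earlier, \begin{equation}\Psi_\mathrm{R}' = \Psi_\mathrm{R} + Q\Lambda_\mathrm{R},\end{equation} where the gauge parameter $\Lambda_\mathrm{R}$ is degree odd, carries ghost number $0$ and picture $-1/2$, and lies in the restricted space. Note that $Q\Lambda_\mathrm{R}$ remains in the restricted space since, as shown above, the restricted space is preserved by the BRST operator.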
We can package the dynamical NS and R string fields together into a string field: \begin{equation} \widetilde{\Psi} =\Psi_\mathrm{NS}+\Psi_\mathrm{R}. \end{equation} We call this the ``composite string field." It is an element of the state space \begin{equation}\widetilde{\mathcal{H}}^{\mathrm{restricted}} = \mathcal{H}_\mathrm{NS}^{\mathrm{restricted}}\oplus \mathcal{H}_{\mathrm{R}}^{\mathrm{restricted}},\end{equation} which we call the ``composite restricted space." In the NS sector, the space $\mathcal{H}_\mathrm{NS}^{\mathrm{restricted}}$ consists of states in the small Hilbert space at picture $-1$. In the Ramond sector, the space $\mathcal{H}_{\mathrm{R}}^{\mathrm{restricted}}$ is defined as above. We define a ``composite symplectic form" \begin{equation}\widetilde{\omega}:\widetilde{\mathcal{H}}^{\mathrm{restricted}}\otimes\widetilde{\mathcal{H}}^{\mathrm{restricted}} \to\mathbb{C}\end{equation} by \begin{equation}\langle \widetilde{\omega}| \equiv \langle \omega_S|_0| + \langle \omega_S|_2|\mathscr{Y}\otimes\mathbb{I},\end{equation} where, following notation to be introduced in a moment, $\langle \omega_S|_0|$ is nonzero only when contracting two NS states, and $\langle \omega_S|_2|$ is nonzero only when contracting two Ramond states. From the above discussion, it is clear that the composite restricted space together with the composite symplectic form satisfy conditions (A), (B) and (C), so we can write the kinetic term as \begin{equation} S = \frac{1}{2}\widetilde{\omega}(\widetilde{\Psi},Q\widetilde{\Psi}), \end{equation} which describes the free propagation of both the NS and R states. \subsection{Ramond Equations of Motion} \label{subsec:EOM} Now that we have a free action for the NS and R sectors, our task will be to add interactions. The structure of interactions at the level of the equations of motion was described in \cite{Ramond}. It is helpful to review this before considering the action. The equations of motion are characterized by a sequence of degree odd multi-string products: \begin{equation}\widetilde{M}_1\equiv Q,\ \ \widetilde{M}_2,\ \ \widetilde{M}_3,\ \ \widetilde{M}_4,\ \ ...\ .\end{equation} We call these ``composite products" since they encapsulate the multiplication of both NS and R states. We require three properties: \begin{description} \item{(I)} The composite products satisfy the relations of an $A_\infty$ algebra. Equivalently, if $\widetilde{{\bf M}}_{n+1}$ is the coderivation corresponding to $\widetilde{M}_{n+1}$, the sum \begin{equation}\widetilde{{\bf M}}\equiv \widetilde{{\bf M}}_1+\widetilde{{\bf M}}_2 + \widetilde{{\bf M}}_3 + \widetilde{{\bf M}}_4+...\end{equation} defines a nilpotent coderivation on the tensor algebra:\footnote{Commutators of multi-string products are always graded with respect to degree \cite{WittenSS}. Commutators of string fields, computed with the open string star product, are graded with respect to Grassmann parity. When taking commutators of operators (or equivalently commutators of 1-string products) the degree and Grassmann gradings are equivalent.} \begin{equation}[\widetilde{{\bf M}},\widetilde{{\bf M}}]= 0.\end{equation} \item{(II)} The composite products are defined in the small Hilbert space. 
Equivalently, the coderivation $\widetilde{{\bf M}}$ commutes with the coderivation ${\bm \upeta}$ representing the eta zero mode: \begin{equation}[{\bm \upeta},\widetilde{{\bf M}}] = 0.\end{equation} \item{(III)} The composite products carry the required ghost and picture number so that the equations of motion, \begin{equation} 0 = Q\widetilde{\Psi} + \widetilde{M}_2(\widetilde{\Psi},\widetilde{\Psi}) +\widetilde{M}_3(\widetilde{\Psi},\widetilde{\Psi},\widetilde{\Psi})+...,\label{eq:EOM} \end{equation} have an NS component at ghost number $2$ and picture $-1$, and a Ramond component at ghost number $2$ and picture $-1/2$. \end{description} When we write the equations of motion, the dynamical Ramond string field does not have to be in the restricted space. Formulating the equations of motion in the restricted space is closely related to constructing the action, and will be described later. However, we still assume that $\Psi_\mathrm{R}$ is in the small Hilbert space, is degree even, and carries ghost number 1 and picture $-1/2$. We will construct the composite products by placing picture changing insertions on Witten's associative star product: \begin{equation}m_2(A,B) \equiv (-1)^{\mathrm{deg}(A)} A*B.\end{equation} The generalization to other forms of open string multiplication (for example, the star product with ``stubs" \cite{ClosedSS,Ramond}) is closely related to the generalization to heterotic and type II superstring field theories, and will be left for future work. The BRST operator, the eta zero mode, and the star product satisfy \begin{eqnarray} 0\!\!\!\!\!\!\!\!&& = [{\bf Q},{\bf Q}],\ \ \ \ \, 0 = [{\bm \upeta},{\bf Q}],\ \ \ \ \, 0 = [{\bm \upeta},{\bm \upeta}],\nonumber\\ 0\!\!\!\!\!\!\!\!&& = [{\bf Q},{\bf m}_2], \ \ \ 0 = [{\bm \upeta},{\bf m}_2],\ \ \ 0 = [{\bf m}_2,{\bf m}_2]. \end{eqnarray} This says that $Q$ and $\eta$ are nilpotent and commute, that $Q$ and $\eta$ are derivations of the star product, and that the star product is associative. Equivalently, ${\bf Q},{\bm \upeta}$ and ${\bf m}_2$ define three mutually commuting $A_\infty$ structures. Though it is not important for the equations of motion, we note that the star product is cyclic with respect to the small (and large) Hilbert space symplectic form: \begin{equation}\langle\omega_S|(m_2\otimes\mathbb{I}+\mathbb{I}\otimes m_2) = 0.\end{equation} Similarly the eta zero mode is cyclic with respect to the large Hilbert space symplectic form. Because $\Psi_\mathrm{NS}$ and $\Psi_\mathrm{R}$ carry different picture, the composite products $\widetilde{M}_{n+1}$ must provide a different amount of picture depending on how many NS and R states are being multiplied. To keep track of this, it will be useful to invoke the concept of {\it Ramond number}. A multi-string product has Ramond number $r$ if it is nonvanishing only when the number of Ramond inputs minus the number of Ramond outputs is equal to $r$. We will write the Ramond number of a product using a vertical slash followed by an index indicating the Ramond number. For example, $b_m|_r$ is an $m$-string product of Ramond number $r$. The definition of Ramond number implies that the product $b_m|_r$ has the property \begin{eqnarray} b_m|_r\Big(\ r\ \mathrm{Ramond\ states}\ \Big) \!\!\!\!\!\!\!\!&& = \mathrm{NS\ state},\nonumber\\ b_m|_r\Big(\ r\!+\!1\ \mathrm{Ramond\ states}\ \Big) \!\!\!\!\!\!\!\!&& = \mathrm{R\ state},\nonumber\\ b_m|_r\Big(\ \mathrm{otherwise}\ \Big)\!\!\!\!\!\!\!\!&& = 0. 
\end{eqnarray} Any product can be written as a unique sum of products of definite Ramond number: \begin{equation}b_m = b_m|_{-1} + b_m|_0 + b_m|_1 + ... + b_m|_m.\end{equation} The Ramond number of $b_m$ is bounded between $-1$ and $m$ since $b_m$ can have at most $m$ Ramond inputs and at most $1$ Ramond output. Since Ramond number is conserved when composing products, it is conserved when taking commutators of coderivations: \begin{equation}[{\bf b}_m,{\bf c}_n]|_s = \sum_{r=-1}^s [{\bf b}_m|_r,{\bf c}_n|_{s-r}], \end{equation} with the understanding that commutators in this sum vanish if the Ramond number exceeds the number of inputs of the product. As an example of this identity, note that associativity of the star product implies \begin{eqnarray} 0\!\!\!\!\!\!\!\!&& =[{\bf m}_2,{\bf m}_2]|_0 = [{\bf m}_2|_0,{\bf m}_2|_0],\label{eq:m2R0}\\ 0\!\!\!\!\!\!\!\!&& =[{\bf m}_2,{\bf m}_2]|_2 = 2 [{\bf m}_2|_0,{\bf m}_2|_2],\label{eq:m2R2}\\ 0\!\!\!\!\!\!\!\!&& = [{\bf m}_2,{\bf m}_2]|_4 = [{\bf m}_2|_2,{\bf m}_2|_2], \end{eqnarray} where the star product is broken into components of definite Ramond number as \begin{equation}{\bf m}_2 = {\bf m}_2|_0 + {\bf m}_2|_2.\end{equation} The components of the star product with odd Ramond number vanish identically. We are now ready to describe the equations of motion constructed in \cite{Ramond}. The composite products $\widetilde{M}_{n+2}$ have a component at Ramond number $0$ and a component at Ramond number $2$: \begin{equation}\widetilde{M}_{n+2} = M_{n+2}|_0+m_{n+2}|_2,\label{eq:comp}\end{equation} which carry the following picture and ghost numbers: \begin{eqnarray} M_{n+2}|_0:\!\!\!\!\!\!\!\!&&\ \ \mathrm{picture}\ n+1, \ \ \mathrm{ghost\ number}\ -n,\\ m_{n+2}|_2:\!\!\!\!\!\!\!\!&&\ \ \mathrm{picture}\ n,\ \ \ \ \ \ \ \,\! \mathrm{ghost\ number}\ -n. \end{eqnarray} The 1-string product $M_1|_0$ is identified with the BRST operator \begin{equation}M_1|_0\equiv Q,\end{equation} and $m_2|_2$ is the Ramond number 2 component of Witten's open string star product. We also define {\it bare products} of odd degree and {\it gauge products} of even degree: \begin{eqnarray} \mathrm{bare\ products}\ \ m_{n+2}|_0:\!\!\!\!\!\!\!\!&&\ \ \mathrm{picture}\ n,\ \ \ \ \ \ \ \mathrm{ghost\ number}\ -n,\\ \mathrm{gauge\ products}\ \ \ \mu_{n+2}|_0:\!\!\!\!\!\!\!\!&&\ \ \mathrm{picture}\ n+1, \ \ \mathrm{ghost\ number}\ -n-1. \end{eqnarray} The bare product $m_2|_0$ is the Ramond number zero component of Witten's open string star product. 
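As a simple check of these ghost and picture assignments, consider the quadratic terms in the equations of motion \eq{EOM}. With $\Psi_\mathrm{NS}$ at picture $-1$ and $\Psi_\mathrm{R}$ at picture $-1/2$, the term $M_2|_0(\Psi_\mathrm{NS},\Psi_\mathrm{NS})$ carries picture $(-1)+(-1)+1 = -1$ and the term $m_2|_2(\Psi_\mathrm{R},\Psi_\mathrm{R})$ carries picture $(-\tfrac{1}{2})+(-\tfrac{1}{2})+0 = -1$, as appropriate for the NS component of the equations of motion, while $M_2|_0(\Psi_\mathrm{NS},\Psi_\mathrm{R})$ carries picture $(-1)+(-\tfrac{1}{2})+1 = -\tfrac{1}{2}$, as appropriate for the Ramond component. All of these terms carry ghost number $1+1+0 = 2$, in agreement with property (III).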
We define generating functions \begin{eqnarray} {\bf M}|_0(t) \!\!\!\!\!\!\!\!&& \equiv \sum_{n=0}^\infty t^n {\bf M}_{n+1}|_0,\label{eq:Mgen}\\ {\bf m}|_2(t) \!\!\!\!\!\!\!\!&& \equiv \sum_{n=0}^\infty t^n {\bf m}_{n+2}|_2,\\ {\bf m}|_0(t) \!\!\!\!\!\!\!\!&& \equiv \sum_{n=0}^{\infty} t^n {\bf m}_{n+2}|_0,\\ {\bm \upmu}|_0(t) \!\!\!\!\!\!\!\!&& \equiv \sum_{n=0}^{\infty} t^n {\bm \upmu}_{n+2}|_0,\label{eq:mpgen} \end{eqnarray} which are postulated to satisfy the differential equations \begin{eqnarray} \frac{d}{dt}{\bf M}|_0(t) \!\!\!\!\!\!\!\!&& = [{\bf M}|_0(t),{\bm \upmu}|_0(t)],\label{eq:Mdiff}\\ \frac{d}{dt}{\bf m}|_2(t) \!\!\!\!\!\!\!\!&& = [{\bf m}|_2(t),{\bm \upmu}|_0(t)],\label{eq:mdiff}\\ \frac{d}{dt}{\bf m}|_0(t) \!\!\!\!\!\!\!\!&& = [{\bf m}|_0(t),{\bm \upmu}|_0(t)],\label{eq:mpdiff}\\ \ [{\bm \upeta},{\bm \upmu}|_0(t)] \!\!\!\!\!\!\!\!&& = {\bf m}|_0(t).\phantom{\Big(}\label{eq:mum} \end{eqnarray} Expanding in powers of $t$ gives a recursive system of equations which determine higher products in terms of sums of commutators of lower ones. A crucial step in solving this system of equations concerns \eq{mum}, which defines the gauge product $\mu_{n+2}|_0$ in terms of the bare product $m_{n+2}|_0$. The solution of \eq{mum} requires a choice of contracting homotopy of ${\bm \upeta}$.\footnote{In this context, a contracting homotopy for ${\bm \upeta}$ is a degree odd linear operator $\Xi\circ$ acting on the vector space of coderivations which satisfies $[{\bm \upeta},\Xi\circ{\bf D}] + \Xi\circ[{\bm \upeta},{\bf D}] = {\bf D}$ for an arbitrary coderivation ${\bf D}$. } This choice influences the configuration of picture changing insertions which appear in the vertices, and will determine whether or not the equations of motion can be derived from an action. The products can be usefully characterized by the cohomomorphism \begin{equation} {\bf \hat{G}}(t) \equiv \mathcal{P}\exp\left[\int_0^t ds\, {\bm \upmu}|_0(s)\right],\label{eq:Gt} \end{equation} where the path ordering is in sequence of increasing $s$ from left to right. In particular, the generating functions take the form \begin{eqnarray} {\bf M}|_0(t) \!\!\!\!\!\!\!\!&& = {\bf \hat{G}}(t)^{-1}{\bf Q}{\bf \hat{G}}(t),\phantom{\Big(}\label{eq:M0G}\\ {\bf m}|_2(t) \!\!\!\!\!\!\!\!&& = {\bf \hat{G}}(t)^{-1}{\bf m}_2|_2{\bf \hat{G}}(t),\phantom{\Big(}\label{eq:m2G}\\ {\bf m}|_0(t) \!\!\!\!\!\!\!\!&& = {\bf \hat{G}} (t)^{-1}{\bf m}_2|_0{\bf \hat{G}}(t),\phantom{\Big(}\label{eq:m0G}\\ {\bm \upmu}|_0(t)\!\!\!\!\!\!\!\!&& = {\bf \hat{G}}(t)^{-1}\frac{d}{dt}{\bf \hat{G}}(t).\label{eq:mu0G} \end{eqnarray} Also, using \eq{mum} and \eq{m0G} it is straightforward to show that \cite{OkWB} \begin{eqnarray} {\bm \upeta} \!\!\!\!\!\!\!\!&& = {\bf \hat{G}}^{-1}({\bm \upeta}-{\bf m}_2|_0){\bf \hat{G}}.\label{eq:etaG} \end{eqnarray} Here and in what follows, all objects are evaluated at $t=1$ when the dependence on $t$ is not explicitly indicated. The coderivation representing the composite products is \begin{eqnarray}\widetilde{{\bf M}}\!\!\!\!\!\!\!\!&& = {\bf M}|_0 + {\bf m}|_2 \nonumber\\ \!\!\!\!\!\!\!\!&& = {\bf \hat{G}}^{-1}({\bf Q}+{\bf m}_2|_2){\bf \hat{G}}.\end{eqnarray} From this expression it immediately follows that \begin{equation}[\widetilde{{\bf M}},\widetilde{{\bf M}}] = 0,\ \ \ \ [{\bm \upeta},\widetilde{{\bf M}}] = 0,\label{eq:Ainfsmall}\end{equation} because ${\bf Q},{\bf m}_2$ and ${\bm \upeta}$ are mutually commuting $A_\infty$ structures. 
Therefore the composite products satisfy $A_\infty$ relations and are in the small Hilbert space. \section{The Action} \label{sec:small} Now we can bring the Ramond kinetic term and equations of motion together to define an action: \begin{equation} S = \frac{1}{2}\widetilde{\omega}(\widetilde{\Psi},Q\widetilde{\Psi}) + \frac{1}{3}\widetilde{\omega}(\widetilde{\Psi},\widetilde{M}_2(\widetilde{\Psi},\widetilde{\Psi}))+\frac{1}{4}\widetilde{\omega}(\widetilde{\Psi},\widetilde{M}_3(\widetilde{\Psi},\widetilde{\Psi},\widetilde{\Psi}))+...,\label{eq:action} \end{equation} where $\widetilde{\Psi}$ is the composite string field and $\widetilde{M}_{n+1}$ are the composite products introduced in subsection~\ref{subsec:EOM}. Since we now consider the action, the dynamical Ramond string field must belong to the restricted space. When we vary the action, we should reproduce the equations of motion \begin{equation}0=Q\widetilde{\Psi} + \widetilde{M}_2(\widetilde{\Psi},\widetilde{\Psi}) + \widetilde{M}_3(\widetilde{\Psi},\widetilde{\Psi},\widetilde{\Psi}) + ...\ .\end{equation} However, this requires that the composite products be {\it cyclic} with respect to the composite symplectic form: \begin{equation}\langle\widetilde{\omega}|\big(\widetilde{M}_{n+1}\otimes\mathbb{I}+\mathbb{I}\otimes \widetilde{M}_{n+1}\big)= 0\ \ \ \ \mathrm{on}\ \ \widetilde{\mathcal{H}}^{\mathrm{restricted}}.\end{equation} Thus the composite products define a cyclic $A_\infty$ algebra. Cyclicity does not follow automatically from the construction of the equations of motion given in subsection \ref{subsec:EOM}, but requires a special choice of picture changing insertions inside the vertices. More technically, it requires a special choice of contracting homotopy for ${\bm \upeta}$ in the solution of \eq{mum}, and our task is to find it. \subsection{Picture Changing Insertion} \label{subsec:PCO} The picture changing insertions in the action are defined with the operator \begin{equation} \tilde{\xi }: \mathrm{degree\ odd},\ \ \mathrm{ghost\ number}\ -\!1,\ \ \mathrm{picture}\ 1, \end{equation} which has the following properties: \begin{description} \item{\ \ \ \ \ 1)} $\tilde{\xi }$ is a contracting homotopy for $\eta$:\ \ $[\eta,\tilde{\xi }] = 1$, \item{\ \ \ \ \ 2)} $\tilde{\xi } $ is BPZ even:\ \ $\langle \omega_L|\tilde{\xi } \otimes\mathbb{I} = \langle\omega_L|\mathbb{I}\otimes\tilde{\xi } $, \item{\ \ \ \ \ 3)} $[Q,\tilde{\xi } ] = \mathscr{X}$ when acting on a Ramond state at picture $-3/2$ in the small Hilbert space, \item{\ \ \ \ \ 4)} $\tilde{\xi }^2=0$. \end{description} Property 1) is needed to define a contracting homotopy for ${\bm \upeta}$ in the solution of \eq{mum}. Properties~2) and 3) will be needed in the proof of cyclicity. Property 4) will not be essential for our purposes, but we would like to have it anyway. A natural candidate for $\tilde{\xi } $ is the operator $\Theta(\beta_0)$ as used in \cite{complete}, which in particular satisfies \begin{equation}[Q,\Theta(\beta_0)]=\mathscr{X}.\end{equation} However, we must be careful to avoid acting $\Theta(\beta_0)$ on states annihilated by $\beta_0$. This means that $\Theta(\beta_0)$ can only act ``safely" on the states: \begin{equation}\Theta(\beta_0):\ \mathrm{small\ Hilbert\ space,\ picture}\, -\! 3/2.\label{eq:Thsm}\end{equation} It may seem somewhat unnatural to require that $\Theta(\beta_0)$ acts on the small Hilbert space, since generically it maps into the large Hilbert space. Let us explain why this is necessary.
Suppose $\Theta(\beta_0)$ could act on an arbitrary state $A$ at picture $-3/2$ in the large Hilbert space. Then we should be able to contract with a state $B$ at picture $-1/2$, \begin{equation}\langle \Theta(\beta_0)A,B\rangle_L,\end{equation} and obtain a finite result. Now suppose $A=QA'$ and $B'=QB$. Then using the BPZ even property of $\mathscr{X}$ gives \begin{equation} \langle \Theta(\beta_0)A,B\rangle_L= \langle A',\mathscr{X} B\rangle_L+(-1)^{\epsilon(A')+1} \langle \Theta(\beta_0)A',B'\rangle_L. \end{equation} We have assumed that the left hand side is finite, and the second term on the right hand side should be finite by the same assumption. However, this contradicts the fact that the first term on the right hand side can be infinite if $B$ is annihilated by $\beta_0$. Therefore, the action of $\Theta(\beta_0)$ in the large Hilbert space must generally be singular. This causes problems with a direct attempt to identify $\Theta(\beta_0)$ with the operator $\tilde{\xi } $. Nevertheless, it was shown in \cite{complete} that $\Theta(\beta_0)$ at least formally satisfies properties $1)-4)$. However, in \cite{complete} it was assumed that $\Theta(\beta_0)$ never acts on states annihilated by $\beta_0$. Here we would like to provide a setting where this assumption is justified. First, note that \eq{Thsm} implies that we can define operators $\Theta(\beta_0)\eta$ and $\eta\Theta(\beta_0)$ acting on the following states: \begin{eqnarray} \!\!\!\!\!\!\!\!&& \Theta(\beta_0)\eta:\ \mathrm{large\ Hilbert\ space,\ picture}\, -\! 1/2,\nonumber\\ \!\!\!\!\!\!\!\!&& \eta\Theta(\beta_0):\ \mathrm{large\ Hilbert\ space,\ picture}\, -\! 1/2. \end{eqnarray} The operator $\Theta(\beta_0)\eta$ is well defined since $\eta$ maps from the large Hilbert space at picture $-1/2$ into the small Hilbert space at picture $-3/2$, after which we can act with $\Theta(\beta_0)$. The operator $\eta\Theta(\beta_0)$ is defined by BPZ conjugation of $\Theta(\beta_0)\eta$. Therefore we have \begin{equation}\langle \omega_L| \eta\Theta(\beta_0)\otimes\mathbb{I} = \langle\omega_L|\mathbb{I}\otimes \Theta(\beta_0)\eta\end{equation} when acting on states in the large Hilbert space at picture $-1/2$. We also have \begin{equation}\eta\Theta(\beta_0) + \Theta(\beta_0)\eta = 1\label{eq:Thid1}\end{equation} when acting in the large Hilbert space at picture $-1/2$. We can also say that $\Theta(\beta_0)$ is nilpotent in the sense that \begin{equation}\eta\Theta(\beta_0)^2\eta = 0,\label{eq:Thid3}\end{equation} which similarly holds on states in the large Hilbert space at picture $-1/2$. Having understood the limitations of $\Theta(\beta_0)$, we can search for a more acceptable alternative. For this purpose we introduce the operator \cite{INOT,WittenSS} \begin{equation}\xi \equiv\oint_{|z|=1}\frac{dz}{2\pi i} f(z)\xi (z),\end{equation} where the function $f(z)$ is holomorphic in the vicinity of the unit circle. The function $f(z)$ can be chosen so that $\xi $ is BPZ even and commutes with $\eta$ to give 1: \begin{equation} \langle \omega_L| \xi \otimes\mathbb{I} = \langle \omega_L|\mathbb{I}\otimes\xi ,\ \ \ \ [\eta,\xi ] = 1. \end{equation} In addition $\xi ^2=0$. Therefore $\xi $ realizes properties 1), 2) and 4), but it does not realize property~3). Rather, the BRST variation gives the operator \begin{equation} X\equiv [Q,\xi ], \label{eq:X} \end{equation} which is not the same as $\mathscr{X}$. 
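Concretely, since $[Q,\xi(z)] = X(z)$ is the local picture changing operator, the BRST variation of $\xi $ is a contour integral of local picture changing insertions, \begin{equation} X = \oint_{|z|=1}\frac{dz}{2\pi i}\, f(z) X(z), \end{equation} whereas, as we discuss below, $\mathscr{X}$ is built from $\beta\gamma$ zero modes and cannot be expressed in an elementary way in terms of $X(z)$ and $\xi(z)$.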
This can be fixed by defining a ``hybrid" operator between $\xi $ and $\Theta(\beta_0)$: \begin{equation} \tilde{\xi } \equiv \xi + (\Theta(\beta_0)\eta\xi - \xi ) P_{-3/2} + (\xi \eta\Theta(\beta_0) -\xi )P_{-1/2}, \end{equation} where $P_{n}$ projects onto states at picture $n$. Note that $\Theta(\beta_0)$ always appears here in allowed combinations with $\eta$ acting on allowed pictures. Note also that $\tilde{\xi }$ reduces to $\xi $ when acting on NS states, as is appropriate for defining the NS superstring field theory \cite{WittenSS}. It is also clear that $\tilde{\xi }$ is BPZ even, and so realizes property 2). To see that property 3) is realized, let us define the picture changing operator \begin{equation}\widetilde{X} \equiv[Q,\tilde{\xi }].\end{equation} Note that in general $\widetilde{X}$ is different from $\mathscr{X}$ defined in \eq{scrX} and $X$ defined in \eq{X}. However, $\widetilde{X}$ is identical to $\mathscr{X}$ when it acts on a state $A$ in the small Hilbert space at picture $-3/2$: \begin{eqnarray} \widetilde{X} A\!\!\!\!\!\!\!\!&& = [Q,\Theta(\beta_0)\eta\xi ] A \nonumber\\ \!\!\!\!\!\!\!\!&& = \Big(\mathscr{X}\eta\xi + \Theta(\beta_0)\eta X\Big) A\nonumber\\ \!\!\!\!\!\!\!\!&& = \Big(\mathscr{X}[\eta,\xi ] + \Theta(\beta_0)[\eta, X]\Big) A\nonumber\\ \!\!\!\!\!\!\!\!&& = \mathscr{X} A, \end{eqnarray} so property 3) is realized. Now let us confirm properties 1) and 4). Note \begin{equation}P_{n} \eta = \eta P_{n+1},\end{equation} and compute \begin{eqnarray} [\eta,\tilde{\xi }] \!\!\!\!\!\!\!\!&& = 1+ \eta\Big(\Theta(\beta_0)\eta\xi - \xi \Big) P_{-3/2}\nonumber\\ \!\!\!\!\!\!\!\!&& \ \ \ \ \ + \Big[\eta\Big(\xi \eta\Theta(\beta_0) -\xi \Big)+\Big(\Theta(\beta_0)\eta\xi - \xi \Big)\eta\Big]P_{-1/2}\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ +\Big(\xi \eta\Theta(\beta_0) -\xi \Big)\eta P_{1/2}\nonumber\\ \!\!\!\!\!\!\!\!&& = 1+ (\eta\xi - \eta\xi ) P_{-3/2}\nonumber\\ \!\!\!\!\!\!\!\!&& \ \ \ \ \ + \Big[\eta\Theta(\beta_0) -\eta\xi +\Theta(\beta_0)\eta - \xi \eta\Big]P_{-1/2}\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ +(\xi \eta -\xi \eta) P_{1/2}\nonumber\\ \!\!\!\!\!\!\!\!&& = 1, \end{eqnarray} where we used \eq{Thid1} and $[\eta,\xi ]=1$. Finally let us check property 4): \begin{eqnarray} \tilde{\xi } ^2 \!\!\!\!\!\!\!\!&& = \xi ^2 + \Big(\Theta(\beta_0)\eta\xi - \xi \Big) P_{-3/2}\Big(\Theta(\beta_0)\eta\xi - \xi \Big) P_{-3/2}\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \, +\, \Big(\xi \eta\Theta(\beta_0) -\xi \Big)P_{-1/2}\Big(\xi \eta\Theta(\beta_0) -\xi \Big)P_{-1/2}\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \, +\, \xi \Big(\Theta(\beta_0)\eta\xi - \xi \Big) P_{-3/2}+ \Big(\Theta(\beta_0)\eta\xi - \xi \Big) P_{-3/2}\xi \nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \, +\, \xi \Big(\xi \eta\Theta(\beta_0) -\xi \Big)P_{-1/2}\!+\!\Big(\xi \eta\Theta(\beta_0) -\xi \Big)P_{-1/2}\xi \nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \,+\,\Big(\Theta(\beta_0)\eta\xi - \xi \Big) P_{-3/2}\Big(\xi \eta\Theta(\beta_0) -\xi \Big)P_{-1/2}\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \,+\,\Big(\xi \eta\Theta(\beta_0) -\xi \Big)P_{-1/2}\Big(\Theta(\beta_0)\eta\xi - \xi \Big) P_{-3/2}\nonumber\\ \!\!\!\!\!\!\!\!&& = \xi ^2 +\xi \Big(\Theta(\beta_0)\eta\xi - \xi \Big) P_{-3/2}+ \Big(\Theta(\beta_0)\eta\xi - \xi \Big)\xi P_{-5/2} + \xi \Big(\xi \eta\Theta(\beta_0) -\xi \Big)P_{-1/2}\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \,+\,\Big(\xi \eta\Theta(\beta_0) -\xi \Big)\xi P_{-3/2} +\Big(\xi \eta\Theta(\beta_0) -\xi \Big)\Big(\Theta(\beta_0)\eta\xi - \xi \Big) P_{-3/2}. 
\end{eqnarray} In the second step we commuted all projectors to the right and dropped terms with a pair of projections into incompatible pictures. Using $\xi ^2=0$ this further simplifies \begin{eqnarray} \tilde{\xi } ^2 \!\!\!\!\!\!\!\!&& =\xi \Theta(\beta_0)\eta\xi P_{-3/2} +\xi \eta\Theta(\beta_0)\xi P_{-3/2} +\Big(\xi \eta\Theta(\beta_0)^2\eta\xi -\xi \eta\Theta(\beta_0)\xi -\xi \Theta(\beta_0)\eta\xi \Big)P_{-3/2}\nonumber\\ \!\!\!\!\!\!\!\!&& = \xi \eta\Theta(\beta_0)^2\eta\xi P_{-3/2}\nonumber\\ \!\!\!\!\!\!\!\!&& = 0, \end{eqnarray} which vanishes as a consequence of \eq{Thid3}. Therefore we have a definition of the picture changing insertion $\tilde{\xi }$ with all necessary properties. It is worth mentioning that $\mathscr{X}$ and $\Theta(\beta_0)$ cannot be expressed in an elementary way in terms of the local picture changing insertions $X(z)$ and $\xi(z)$. Therefore, the computation of correlation functions with $\mathscr{X}$ and $\Theta(\beta_0)$ does not appear to be straightforward. However, a recipe for computations with such operators was given in \cite{revisited} in the context of $\beta\gamma$ correlation functions, where they may be represented as formal integrals \begin{equation} \mathscr{X} \equiv \int d\zeta \int d\tilde{\zeta}\, e^{\zeta G_0 -\tilde{\zeta}\beta_0},\ \ \ \ \Theta(\beta_0) \equiv -\int d\tilde{\zeta}\,\frac{e^{-\tilde{\zeta}\beta_0}}{\tilde{\zeta}},\label{eq:intrep} \end{equation} where $\zeta$ is an odd integration variable and $\tilde{\zeta}$ is an even integration variable. The key point is that the integral over the even variable $\tilde{\zeta}$ should be understood algebraically, analogous to the Berezin integral over the odd variable $\zeta$, rather than as an ordinary integral in the sense of analysis. One difficulty, however, is the appearance of a singular factor $\tilde{\zeta}^{-1}$ in the integral for $\Theta(\beta_0)$. This is related to the fact that $\Theta(\beta_0)$ is an operator in the large Hilbert space, and therefore its precise definition must go slightly beyond the formalism of \cite{revisited}. Here we give one prescription for dealing with this. We may express $\Theta(\beta_0)$ in the form \begin{equation} \Theta(\beta_0) = \xi_0 +\Delta, \end{equation} where \begin{equation}\Delta \equiv \Theta(\beta_0) - \xi_0,\end{equation} and $\xi_0$ is the zero mode of the $\xi$ ghost. The term $\Delta$ can be represented as an algebraic integral \begin{eqnarray} \Delta = -\int d\tilde{\zeta}\,\frac{e^{-\tilde{\zeta}\beta_0}}{\tilde{\zeta}} + \oint_{|z|=1} \frac{dz}{2\pi i}\frac{1}{z}\int d\tilde{\zeta}\,\frac{e^{-\tilde{\zeta}\beta(z)}}{\tilde{\zeta}}. \end{eqnarray} Since the first term is independent of $z$, we can write $\Delta$ as \begin{equation} \Delta = \oint_{|z|=1} \frac{dz}{2\pi i}\frac{1}{z} \int d\tilde{\zeta}\,\frac{1}{\tilde{\zeta}}\Big(-e^{-\tilde{\zeta}\beta_0} + e^{-\tilde{\zeta}\beta(z)}\Big). \end{equation} Finally, we represent the integrand as the integral of a total derivative, \begin{equation} \Delta = \oint_{|z|=1} \frac{dz}{2\pi i}\frac{1}{z} \int d\tilde{\zeta}\,\frac{1}{\tilde{\zeta}}\int_0^1 dt\,\frac{d}{dt}e^{-\tilde{\zeta}(t\beta(z)+(1-t)\beta_0)}, \end{equation} and taking the derivative with respect to $t$ gives \begin{eqnarray} \Delta = \int_0^1 dt \oint_{|z|=1} \frac{dz}{2\pi i}\frac{1}{z} \int d\tilde{\zeta}(\beta_0-\beta(z)) e^{-\tilde{\zeta}(t\beta(z)+(1-t)\beta_0)}. \end{eqnarray} Note that the problematic factor $\tilde{\zeta}^{-1}$ is canceled. 
The upshot is that we have defined $\Theta(\beta_0)$ as a sum of $\xi_0$, which can be understood in the bosonized $\beta\gamma$ system, and $\Delta$, which can be evaluated following \cite{revisited}. To see how this definition can be applied, note that the computation of a typical open string field theory vertex requires evaluating correlation functions with multiple insertions of $\Theta(\beta_0)$: \begin{equation} \Theta^{(1)}\Theta^{(2)}...\Theta^{(n)}, \end{equation} where $\Theta^{(i)}$ represent appropriate conformal transformations of $\Theta(\beta_0)$. Writing $\Theta(\beta_0) = \xi_0+\Delta$ produces cross terms of the form \begin{equation} \xi^{(1)}\xi^{(2)}...\,\xi^{(m)}\Delta^{(m+1)}\Delta^{(m+2)}...\,\Delta^{(n)}, \end{equation} where $\xi^{(i)}$ and $\Delta^{(i)}$ represent appropriate conformal transformations of $\xi_0$ and $\Delta$, respectively. Since $(\xi^{(1)})^2 = 0$, we can replace these insertions with \begin{equation} \xi^{(1)}(\xi^{(2)}-\xi^{(1)})...\,(\xi^{(m)}-\xi^{(1)})\Delta^{(m+1)}\Delta^{(m+2)}...\,\Delta^{(n)}. \end{equation} We can now drop the factor $\xi^{(1)}$, which only serves to saturate the $\xi$ zero mode in the large Hilbert space, and evaluate the remaining factors using $\beta\gamma$ correlation functions as in \cite{revisited}. An important question is whether our choice of picture changing insertions $\tilde{\xi }$ and $\widetilde{X}$ avoids contact divergences in vertices and amplitudes, as appear, for example, when we use a local picture changing insertion in the cubic vertex \cite{Wendt}. In the NS sector such divergences are absent since the picture changing insertions appear as holomorphic contour integrals \cite{INOT,WittenSS}. In the Ramond sector, the picture changing insertions appear as $\Theta(\beta_0)$ and $\mathscr{X}$; to our knowledge, such operators can only be divergent in the presence of a zero mode of the path integral associated with $\beta_0$. We have taken some care to ensure that $\Theta(\beta_0)$ and $\mathscr{X}$ operate on states of pictures where such zero modes are absent, and therefore the vertices are expected to be finite. Explicit calculations with similar operators will be discussed in upcoming work \cite{OO}, and no contact divergences appear. \subsection{The 2-String Product} \label{subsec:2string} We are ready to construct the products defining the action. Let us start by expanding the equations of motion out to second order in the string field and in NS and R components: \begin{eqnarray} 0\!\!\!\!\!\!\!\!&& =Q\Psi_\mathrm{NS} + M_2|_0(\Psi_\mathrm{NS},\Psi_\mathrm{NS}) + m_2|_2(\Psi_\mathrm{R},\Psi_\mathrm{R})+\ ... \ ,\\ 0\!\!\!\!\!\!\!\!&& =Q\Psi_\mathrm{R} + M_2|_0(\Psi_\mathrm{NS},\Psi_\mathrm{R})+M_2|_0(\Psi_\mathrm{R},\Psi_\mathrm{NS})+\ ...\ .\label{eq:REOM2} \end{eqnarray} In \cite{WittenSS} the product of two NS states was defined by \begin{equation} M_2|_0 = \frac{1}{3}\Big(Xm_2|_0 + m_2|_0(X\otimes \mathbb{I}+ \mathbb{I}\otimes X)\Big) \ \ \ \ \ \ \ \ (\mathrm{multiplying\ NS\ states}).\label{eq:NSM20} \end{equation} This definition does not work for multiplying an NS and an R state, since it does not multiply into the restricted space in the Ramond sector. For this reason we take \begin{equation} M_2|_0 = \mathscr{X} m_2|_0 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (\mathrm{multiplying\ NS\ and\ R\ state\ in}\ \widetilde{\mathcal{H}}^{\mathrm{restricted}}). 
\end{equation} Because $\mathscr{X}\mathscr{Y}\mathscr{X} = \mathscr{X}$, this product satisfies $\mathscr{X}\mathscr{Y}M_2|_0 = M_2|_0$ and therefore maps into the restricted space. Note that this definition of $M_2|_0$ differs from \cite{Ramond}, where it was assumed that $M_2|_0$ multiplies an NS and an R state in the same way as it multiplies two NS states. To make notation uniform it is helpful to write $X$ and $\mathscr{X}$ together using the picture changing operator $\widetilde{X}$, so we define \begin{equation} M_2|_0 \equiv \left\{\begin{matrix*}[l] & {\displaystyle \frac{1}{3}}\Big(\widetilde{X} m_2|_0 + m_2|_0(\widetilde{X}\otimes \mathbb{I}+ \mathbb{I}\otimes \widetilde{X})\Big) &\ \ \ (0\ \mathrm{Ramond\ inputs}) \phantom{\Bigg]}\\ & \widetilde{X} m_2|_0 &\ \ \ (1\ \mathrm{Ramond\ input})\phantom{\Bigg]} \end{matrix*}\right..\label{eq:M202} \end{equation} The full composite 2-product is then \begin{equation} \widetilde{M}_2 \equiv \left\{\begin{matrix*}[l] & {\displaystyle \frac{1}{3}}\Big(\widetilde{X} m_2|_0 + m_2|_0(\widetilde{X}\otimes \mathbb{I}+ \mathbb{I}\otimes \widetilde{X})\Big) &\ \ \ (0\ \mathrm{Ramond\ inputs}) \phantom{\Bigg]}\\ & \widetilde{X} m_2|_0 &\ \ \ (1\ \mathrm{Ramond\ input})\phantom{\Bigg]}\\ & m_2|_2 &\ \ \ (2\ \mathrm{Ramond\ inputs})\phantom{\Bigg]} \end{matrix*}\right..\label{eq:subsection} \end{equation} Note that using $\widetilde{X}$ gives a definition of the product $M_2|_0$ between arbitrary states in $\widetilde{\mathcal{H}}$. Following the discussion of subsection \ref{subsec:EOM}, the product $M_2|_0$ should be derived from a gauge 2-product $\mu_2|_0$ and bare 2-product $m_2|_0$ satisfying the formulas \begin{eqnarray} {\bf M}_2|_0 \!\!\!\!\!\!\!\!&& = [{\bf Q},{\bm \upmu}_2|_0],\\ \ [{\bm \upeta},{\bm \upmu}_2|_0]\!\!\!\!\!\!\!\!&& = {\bf m}_2|_0. \end{eqnarray} The last equation defines $\mu_2|_0$ in terms of $m_2|_0$ with an appropriate choice of contracting homotopy for ${\bm \upeta}$. The choice of contracting homotopy which produces our preferred definition of $M_2|_0$ is realized by the following gauge 2-product: \begin{equation} \mu_2|_0 \equiv \left\{\begin{matrix*}[l] & {\displaystyle \frac{1}{3}}\Big(\tilde{\xi } m_2|_0 - m_2|_0(\tilde{\xi }\otimes \mathbb{I}+ \mathbb{I}\otimes \tilde{\xi })\Big) &\ \ \ (0\ \mathrm{Ramond\ inputs}) \phantom{\Bigg]}\\ & \tilde{\xi } m_2|_0 &\ \ \ (1\ \mathrm{Ramond\ input})\phantom{\Bigg]} \end{matrix*}\right..\label{eq:mu20} \end{equation} This completes the definition of the equations of motion up to second order. Now we want to see that the equations of motion can be derived from an action. This requires that the composite 2-product be cyclic: \begin{equation}\langle\widetilde{\omega}| \mathbb{I}\otimes\widetilde{M}_2=-\langle\widetilde{\omega}| \widetilde{M}_2\otimes\mathbb{I} \ \ \ \mathrm{on}\ \widetilde{\mathcal{H}}^{\mathrm{restricted}}.\end{equation} Note that cyclicity only needs to hold when the vertex is evaluated on the composite restricted space, since this is the space of the dynamical string field appearing in the action. Outside this space the products will not be cyclic, and in fact the notion of cyclicity itself is somewhat problematic since $\mathscr{Y}$ may act on a state of the wrong picture. The demonstration of cyclicity goes slightly differently depending on the arrangement of NS and R states in the vertex. 
Let us discuss for example the case \begin{equation}\langle\widetilde{\omega}| (\mathbb{I}\otimes\widetilde{M}_2) (R_1\otimes R_2 \otimes N_1),\end{equation} where $R_1,R_2$ are Ramond states and $N_1$ is an NS state in $\widetilde{\mathcal{H}}^{\mathrm{restricted}}$. Expanding into components of definite Ramond number, we have \begin{eqnarray} \langle\widetilde{\omega}| (\mathbb{I}\otimes\widetilde{M}_2) (R_1\otimes R_2 \otimes N_1) \!\!\!\!\!\!\!\!&& = \Big(\langle\omega_S|_0| + \langle \omega_S|_2|\mathscr{Y}\otimes\mathbb{I}\Big) \Big(\mathbb{I}\otimes(M_2|_0+m_2|_2)\Big) (R_1\otimes R_2 \otimes N_1)\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_S|_2|(\mathscr{Y} \otimes M_2|_0)(R_1\otimes R_2\otimes N_1). \end{eqnarray} The product $m_2|_2$ drops out since it does not multiply a sufficient number of Ramond states, and $\langle \omega_S|_0|$ drops out since it contracts too many Ramond states. Plugging in \eq{M202} we obtain \begin{eqnarray} \langle\widetilde{\omega}| (\mathbb{I}\otimes\widetilde{M}_2) (R_1\otimes R_2 \otimes N_1) \!\!\!\!\!\!\!\!&& = \langle \omega_S|_2|(\mathscr{Y} \otimes \widetilde{X} m_2|_0 )(R_1\otimes R_2\otimes N_1)\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_S|_2|(\mathscr{Y} \otimes \mathscr{X}m_2|_0 )(R_1\otimes R_2\otimes N_1)\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_S|_2|(\mathscr{X}\mathscr{Y} \otimes m_2|_0 )(R_1\otimes R_2\otimes N_1)\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_S|_2|(\mathbb{I} \otimes m_2|_0 )(R_1\otimes R_2\otimes N_1)\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_S|(\mathbb{I} \otimes m_2 )(R_1\otimes R_2\otimes N_1). \end{eqnarray} In the second step we noted that $\widetilde{X} $ acts on a state of picture $-3/2$ in the small Hilbert space, and therefore can be replaced by $\mathscr{X}$. In the third step we used that $\mathscr{X}$ is BPZ even and in the fourth step we used the fact that $R_1$ is in the restricted space. Finally we dropped the Ramond number labels since in this context they are redundant. Note that in these steps it is important to assume that the states are in $\widetilde{\mathcal{H}}^{\mathrm{restricted}}$. Next consider \begin{eqnarray} \langle\widetilde{\omega}| (\widetilde{M}_2\otimes\mathbb{I}) (R_1\otimes R_2 \otimes N_1) \!\!\!\!\!\!\!\!&& = \Big(\langle \omega_S|_0|+ \langle \omega_S|_2|\mathscr{Y}\otimes\mathbb{I}\Big)\Big((M_2|_0+m_2|_2)\otimes \mathbb{I}\Big)(R_1\otimes R_2\otimes N_1)\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_S|_0|(m_2|_2\otimes \mathbb{I})(R_1\otimes R_2\otimes N_1)\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_S|(m_2\otimes \mathbb{I})(R_1\otimes R_2\otimes N_1). \end{eqnarray} We therefore have \begin{equation} \langle\widetilde{\omega}| (\widetilde{M}_2\otimes\mathbb{I}+\mathbb{I}\otimes\widetilde{M}_2) (R_1\otimes R_2 \otimes N_1) = \langle \omega_S|(m_2\otimes \mathbb{I}+\mathbb{I}\otimes m_2)(R_1\otimes R_2\otimes N_1) = 0, \end{equation} which vanishes because the open string star product is cyclic. The proof of cyclicity for the other combinations $R_1\otimes N_1\otimes R_2$ and $N_1\otimes R_1\otimes R_2$ goes similarly. When all inputs are NS states, cyclicity follows from the construction of the NS open superstring field theory in \cite{WittenSS}. Therefore we have a cubic vertex consistent with a cyclic $A_\infty$ structure. \subsection{Higher Products} Now let us discuss the generalization to higher string products. 
Defining the higher products requires a choice of contracting homotopy for ${\bm \upeta}$ in the solution of the equation \begin{equation}[{\bm \upeta},{\bm \upmu}_{n+2}|_0]={\bf m}_{n+2}|_0.\end{equation} The contracting homotopy we choose defines the gauge products as follows: \begin{equation} \mu_{n+2}|_0 \equiv \left\{\begin{matrix*}[l] & {\displaystyle \frac{1}{n+3}}\Big(\tilde{\xi } m_{n+2}|_0 - m_{n+2}|_0(\tilde{\xi } \otimes \mathbb{I}^{\otimes n+1}+...+ \mathbb{I}^{\otimes n+1}\otimes \tilde{\xi } )\Big) &\ \ \ (0\ \mathrm{Ramond\ inputs}) \phantom{\Bigg]}\\ & \tilde{\xi } m_{n+2}|_0 &\ \ \ (1\ \mathrm{Ramond\ input})\phantom{\Bigg]} \end{matrix*}\right..\label{eq:mun02} \end{equation} It is not immediately obvious that this leads to a cyclic $A_\infty$ structure. We will prove that it does in the next subsection. For now, we demonstrate two important properties, which follow from this definition: \begin{eqnarray} M_{n+2}|_0 \!\!\!\!\!\!\!\!&& = \widetilde{X} m_{n+2}|_0\ \ \ \ (1\ \mathrm{Ramond\ input}),\phantom{\Big]}\label{eq:Xm}\\ m_{n+2}|_2 \!\!\!\!\!\!\!\!&& = 0\ \ \ \ \ \ \ \ \ \ \ \ \ \ \, (3\ \mathrm{Ramond\ inputs}).\phantom{\Big]}\label{eq:mp0} \end{eqnarray} The first equation generalizes \eq{M202}, and implies that the interactions are consistent with the projection onto the restricted space in the Ramond sector. The second equation addresses a puzzle raised in \cite{Ramond} concerning the existence of cubic terms in the Ramond string field in the equations of motion. The existence of such terms is consistent with $A_\infty$ relations, but is not compatible with the existence of an action since the equations of motion do not possess quartic terms in the Ramond string field. (Recall that $\widetilde{M}_n$ has no component with Ramond number 4.) Therefore, the fact that $m_{n+2}|_2$ vanishes with three Ramond inputs is expected and in fact necessary to derive the equations of motion from an action. In total, then, we find that the composite products appear as follows: \begin{equation} \widetilde{M}_{n+2} = \left\{\begin{matrix*}[l] & \displaystyle{\frac{1}{n+3}}\Big(\widetilde{X} m_{n+2}|_0+m_{n+2}|_0(\widetilde{X}\!\otimes\!\mathbb{I}^{\otimes n+1}+...+ \mathbb{I}^{\otimes n+1}\!\otimes\! \widetilde{X})\Big) & (0\ \mathrm{Ramond\ inputs}) \phantom{\bigg]}\\ & \widetilde{X} m_{n+2}|_0 & (1\ \mathrm{Ramond\ input})\phantom{\bigg]}\\ & m_{n+2}|_2 & (2\ \mathrm{Ramond\ inputs})\phantom{\bigg]}\\ & 0 & (\mathrm{otherwise})\phantom{\bigg]} \end{matrix*}\right. . \end{equation} The products $m_{n+2}|_0$ and $m_{n+2}|_2$ above are determined recursively by solving \eq{mdiff} and \eq{mpdiff} with our choice of gauge products \eq{mun02}. To streamline the proof of \eq{Xm} and \eq{mp0}, it will be useful to introduce the projection operator \begin{equation}\pi_n^{r}: T\widetilde{\mathcal{H}}\to T\widetilde{\mathcal{H}},\label{eq:pinr}\end{equation} which selects $n$-string states containing $r$ Ramond factors (and therefore $n-r$ NS factors). This projector commutes in a simple way through coderivations of products with definite Ramond number: \begin{eqnarray} \pi_{m+1}^r \, {\bf b}_{n}|_s \!\!\!\!\!\!\!\!&& = {\bf b}_{n}|_s\, \pi_{m+n}^{s+r}.\label{eq:codpi} \end{eqnarray} We also define \begin{equation}\pi_n = \sum_{r=0}^n \pi_n^r,\end{equation} which projects onto $n$-string states with an undetermined number of Ramond factors. With these projectors we can express \eq{Xm} and \eq{mp0} in a more useful form using coderivations. 
First we write \begin{eqnarray} {\bf M}_{n+2}|_0\pi_{n+2}^1 \!\!\!\!\!\!\!\!&& = \widetilde{\bf X} {\bf m}_{n+2}|_0\pi_{n+2}^1,\\ {\bf m}_{n+2}|_2\pi_{n+2}^3 \!\!\!\!\!\!\!\!&& = 0, \end{eqnarray} where $\widetilde{\bf X}$ is the coderivation corresponding to $\widetilde{X}$. Commuting the projectors through the coderivations using \eq{codpi} gives \begin{eqnarray} \pi_1^1{\bf M}_{n+2}|_0 \!\!\!\!\!\!\!\!&& = \widetilde{X} \pi_1^1 {\bf m}_{n+2}|_0 ,\\ \pi_1^1 {\bf m}_{n+2}|_2 \!\!\!\!\!\!\!\!&& = 0. \end{eqnarray} Summing over $n$ then implies \begin{eqnarray} \pi_1^1 ({\bf M}|_0 - {\bf Q}) \!\!\!\!\!\!\!\!&& = \widetilde{X} \pi_1^1 {\bf m}|_0\phantom{\Big[},\label{eq:Xm2}\\ \pi_1^1 {\bf m}|_2 \!\!\!\!\!\!\!\!&& = 0\phantom{\Big]}.\label{eq:mp02} \end{eqnarray} In the first equation we subtract $Q$ since \eq{Xm2} only applies to the 2-string product and higher. To prove \eq{Xm2} and \eq{mp02} it is helpful to first derive the form of ${\bf \hat{G}}^{-1}$ when it produces one Ramond output: \begin{equation}\pi_1^1 {\bf \hat{G}}^{-1}.\end{equation} To compute this, note that \begin{eqnarray} \frac{d}{dt}\Big[\pi_1^1{\bf \hat{G}}(t)^{-1}\Big] = -\pi_1^1{\bm \upmu}|_0(t){\bf \hat{G}}(t)^{-1} \end{eqnarray} from the definition of the path ordered exponential. Our choice of contracting homotopy for ${\bm \upeta}$ in the Ramond sector \eq{mun02} implies \begin{equation}\pi_1^1{\bm \upmu}|_0(t) = \pi_1^1 \tilde{\bm{\upxi}} {\bf m}|_0(t),\end{equation} where $\tilde{\bm{\upxi}}$ is the coderivation corresponding to $\tilde{\xi }$. Plugging in gives \begin{eqnarray} \frac{d}{dt}\Big[\pi_1^1 {\bf \hat{G}}(t)^{-1}\Big]\!\!\!\!\!\!\!\!&& = -\pi_1^1\tilde{\bm{\upxi}} {\bf m}|_0(t) {\bf \hat{G}}(t)^{-1} = -\pi_1^1 \tilde{\bm{\upxi}} {\bf \hat{G}}(t)^{-1}{\bf m}_2|_0, \end{eqnarray} where we used ${\bf m}|_0(t) = {\bf \hat{G}}(t)^{-1}{\bf m}_2|_0{\bf \hat{G}}(t)$. Therefore we obtain \begin{equation} \frac{d}{dt}\Big[\pi_1^1{\bf \hat{G}}(t)^{-1}\Big] = -\tilde{\xi } \Big[\pi_1^1 {\bf \hat{G}}(t)^{-1}\Big]{\bf m}_2|_0. \label{eq:diffRG} \end{equation} The solution is subject to the initial condition ${\bf \hat{G}}(0)^{-1} = \mathbb{I}_{T\widetilde{\mathcal{H}}}$, where $\mathbb{I}_{T\widetilde{\mathcal{H}}}$ is the identity operator on the tensor algebra. This determines the solution to be \begin{equation}\pi_1^1{\bf \hat{G}}(t)^{-1} = \pi_1^1\Big[\mathbb{I}_{T\widetilde{\mathcal{H}}} - t\tilde{\bm{\upxi}} {\bf m}_2|_0\Big].\end{equation} This satisfies \eq{diffRG} since $({\bf m}_2|_0)^2 = 0$ by \eq{m2R0}. Setting $t=1$ we have \begin{equation}\pi_1^1{\bf \hat{G}}^{-1} = \pi_1^1\Big[\mathbb{I}_{T\widetilde{\mathcal{H}}} - \tilde{\bm{\upxi}} {\bf m}_2|_0\Big].\label{eq:GinvR}\end{equation} This identity will play a central role in the following analysis, as it is the basis for our proof of cyclicity and the relations \eq{Xm2} and \eq{mp02}, and it provides a crucial link to the WZW-based theory in section~\ref{sec:large}. Note that expanding the path ordered exponential \eq{Gt} and integrating over the parameter in the generating function gives a general expression for $\pi_1^1{\bf \hat{G}}^{-1}$: \begin{equation} \pi_1^1{\bf \hat{G}}^{-1} = \pi_1^1\left(\mathbb{I}_{T\widetilde{\mathcal{H}}} - {\bm \upmu}_2|_0 -\frac{1}{2}{\bm \upmu}_3|_0 +\frac{1}{2}{\bm \upmu}_2|_0{\bm \upmu}_2|_0+\ ...\ \right). \label{eq:exGinvR1} \end{equation} This is substantially more elaborate than \eq{GinvR}. 
With our choice of contracting homotopy for ${\bm \upeta}$, the higher order products in $\pi_1^1{\bf \hat{G}}^{-1}$ drop out, giving a closed form expression. Now we are ready to prove \eq{Xm2} and \eq{mp02}. First note that the bare products with one Ramond output simplify to \begin{eqnarray}\pi_1^1{\bf m}|_0 \!\!\!\!\!\!\!\!&& = \pi_1^1{\bf \hat{G}}^{-1} {\bf m}_2|_0{\bf \hat{G}}\nonumber\\ \!\!\!\!\!\!\!\!&& = \pi_1^1{\bf m}_2|_0 {\bf \hat{G}}, \label{eq:m1R}\end{eqnarray} since the second term in \eq{GinvR} cancels by associativity of the star product. Now consider ${\bf M}|_0$ with one Ramond output: \begin{eqnarray} \pi_1^1 {\bf M}|_0 \!\!\!\!\!\!\!\!&& = \pi_1^1 {\bf \hat{G}}^{-1}{\bf Q}{\bf \hat{G}} \nonumber\\ \!\!\!\!\!\!\!\!&& = \pi_1^1 \Big[\mathbb{I}_{T\widetilde{\mathcal{H}}} - \tilde{\bm{\upxi}} {\bf m}_2|_0\Big] {\bf Q} {\bf \hat{G}} \nonumber\\ \!\!\!\!\!\!\!\!&& = \pi_1^1 \Big[{\bf Q}{\bf \hat{G}} - {\bf Q}\tilde{\bm{\upxi}} {\bf m}_2|_0 {\bf \hat{G}} +\widetilde{\bf X} {\bf m}_2|_0{\bf \hat{G}} \Big]\nonumber\\ \!\!\!\!\!\!\!\!&& = \pi_1^1 {\bf Q} \Big[\mathbb{I}_{T\widetilde{\mathcal{H}}} - \tilde{\bm{\upxi}} {\bf m}_2|_0\Big]{\bf \hat{G}} +\widetilde{X} \pi_1^1 {\bf m}_2|_0 {\bf \hat{G}} \nonumber\\ \!\!\!\!\!\!\!\!&& = \pi_1^1{\bf Q} +\widetilde{X}\pi_1^1{\bf m}_2|_0 {\bf \hat{G}}. \end{eqnarray} From this we conclude \begin{equation}\pi_1^1({\bf M}|_0-{\bf Q}) = \widetilde{X} \pi_1^1{\bf m}|_0,\label{eq:Xmc}\end{equation} establishing \eq{Xm}. Next consider \begin{eqnarray} \pi_1^1 {\bf m}|_2 \!\!\!\!\!\!\!\!&& = \pi_1^1{\bf \hat{G}}^{-1}{\bf m}_2|_2 {\bf \hat{G}} \nonumber\\ \!\!\!\!\!\!\!\!&& = \pi_1^1\Big[\mathbb{I}_{T\widetilde{\mathcal{H}}} - \tilde{\bm{\upxi}} {\bf m}_2|_0\Big]{\bf m}_2|_2 {\bf \hat{G}} \nonumber\\ \!\!\!\!\!\!\!\!&& = \pi_1^1 {\bf m}_2|_2 {\bf \hat{G}} + \tilde{\xi }\pi_1^1 {\bf m}_2|_2 {\bf m}_2|_0{\bf \hat{G}}, \end{eqnarray} where in the third line we used \begin{equation}{\bf m}_2|_0{\bf m}_2|_2 = -{\bf m}_2|_2{\bf m}_2|_0\end{equation} from \eq{m2R2}. Now note \begin{equation}\pi_1^1{\bf m}_2|_2 = m_2|_2\pi_2^3 = 0.\end{equation} This holds because the 2-string component of the state space cannot have three Ramond factors. Therefore \begin{equation}\pi_1^1{\bf m}|_2 = 0,\label{eq:mp0c}\end{equation} which establishes \eq{mp0}. \subsection{Proof of Cyclicity} \label{subsec:cyclic} Having constructed the products, we are ready to demonstrate cyclicity: \begin{equation}\langle \widetilde{\omega}|(\widetilde{M}_{n+1}\otimes\mathbb{I}+ \mathbb{I}\otimes\widetilde{M}_{n+1}) = 0\ \ \ \ \mathrm{on}\ \ \widetilde{\mathcal{H}}^{\mathrm{restricted}}. \end{equation} We will need to simplify this equation somewhat before we arrive at the key property responsible for cyclicity and provide its proof. Note that the cyclicity of $\widetilde{M}_1=Q$ was already demonstrated in subsection \ref{subsec:kinetic}. When the vertex acts only on NS states, cyclicity follows from the construction of the NS open superstring field theory in \cite{WittenSS}. When the vertex acts on one or three Ramond states, it vanishes identically since the symplectic form and composite products do not carry odd Ramond number. When the vertex acts on four or more Ramond states, it vanishes identically since the composite products vanish when multiplying three or more Ramond states. 
Therefore, all that we need to show is that the vertex is cyclic when it acts on two Ramond states: \begin{equation}\langle \widetilde{\omega}|(\widetilde{M}_{n+2}\otimes\mathbb{I}+ \mathbb{I}\otimes\widetilde{M}_{n+2})\pi^2_{n+3} = 0\ \ \ \ \mathrm{on}\ \ \widetilde{\mathcal{H}}^{\mathrm{restricted}}.\end{equation} Expanding $\widetilde{M}_{n+2}$ into components of definite Ramond number, this reads \begin{eqnarray} \!\!\!\!\!\!\!\!&& \!\!\!\!\!\!\!\!\!\langle \omega_S| (m_{n+2}|_2 \otimes\mathbb{I}+\mathbb{I}\otimes m_{n+2}|_2)\pi_{n+3}^2 \nonumber\\ \!\!\!\!\!\!\!\!&& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\langle \omega_S|(\mathscr{Y}\otimes\mathbb{I}) (M_{n+2}|_0 \otimes\mathbb{I}+\mathbb{I}\otimes M_{n+2}|_0)\pi_{n+3}^2 = 0 \ \ \ \ \mathrm{on}\ \ \widetilde{\mathcal{H}}^{\mathrm{restricted}}.\label{eq:cycC1} \end{eqnarray} In the first term, both Ramond states must be channeled into the input of $m_{n+2}|_2$. In the second term, the Ramond states split between the input of $M_{n+2}|_0$ and the symplectic form. This means that we can simplify the second term using \eq{Xm}: \begin{eqnarray} \langle \omega_S|(\mathscr{Y}\otimes\mathbb{I})(M_{n+2}|_0\otimes\mathbb{I})\pi_{n+3}^2 \!\!\!\!\!\!\!\!&& = \langle \omega_S|(\mathscr{Y}\widetilde{X}m_{n+2}|_0\otimes\mathbb{I})\pi_{n+3}^2\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_S|(\mathscr{Y}\mathscr{X}m_{n+2}|_0\otimes\mathbb{I})\pi_{n+3}^2\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_S|(m_{n+2}|_0\otimes\mathscr{X}\mathscr{Y})\pi_{n+3}^2\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_S|(m_{n+2}|_0\otimes\mathbb{I})\pi_{n+3}^2\ \ \ \ \mathrm{on}\ \ \widetilde{\mathcal{H}}^{\mathrm{restricted}}.\ \ \ \ \end{eqnarray} In the first step we used \eq{Xm}; in the second step we used the fact that $\widetilde{X} = \mathscr{X}$ when acting on a state in the small Hilbert space at picture $-3/2$; in the third step we used that $\mathscr{X}$ and $\mathscr{Y}$ are BPZ even; in the fourth step we used that $\mathscr{X}\mathscr{Y}=1$ when acting on states in the restricted space. Then the statement of cyclicity reduces to \begin{equation} \langle \omega_S| \Big((m_{n+2}|_0 +m_{n+2}|_2)\otimes\mathbb{I}+\mathbb{I}\otimes (m_{n+2}|_0+m_{n+2}|_2)\Big)\pi_{n+3}^2=0 \ \ \ \ \mathrm{on}\ \ \widetilde{\mathcal{H}}^{\mathrm{restricted}}.\label{eq:cycC2} \end{equation} Therefore $m_{n+2}|_0+m_{n+2}|_2$ should be cyclic with respect to the small Hilbert space symplectic form when the vertex acts on $\widetilde{\mathcal{H}}^{\mathrm{restricted}}$ including two Ramond states. Actually, we wish to make a slightly stronger hypothesis: $m_{n+2}|_0+m_{n+2}|_2$ is cyclic with respect to the {\it large} Hilbert space symplectic form when the vertex acts on the {\it large} Hilbert space including two Ramond states: \begin{equation} \langle \omega_L| \Big((m_{n+2}|_0 +m_{n+2}|_2)\otimes\mathbb{I}+\mathbb{I}\otimes (m_{n+2}|_0+m_{n+2}|_2)\Big)\pi_{n+3}^2=0.\label{eq:cycC3} \end{equation} This relation is the nontrivial property required for the proof of cyclicity. We will provide a demonstration in a moment, but first let us explain why \eq{cycC3} implies \eq{cycC2}. The small and large Hilbert space symplectic forms can be related by \begin{equation}\langle \omega_S| = \langle\omega_L|\xi\otimes\mathbb{I},\end{equation} where $\xi$ satisfies $[\eta,\xi]=1$. The precise form of $\xi$ is not important since its only role is to saturate the $\xi$ zero mode in the large Hilbert space CFT correlator.
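To see this more explicitly, suppose $\xi'$ is another operator satisfying $[\eta,\xi']=1$. The difference satisfies $[\eta,\xi'-\xi]=0$, so $(\xi'-\xi)A$ is in the small Hilbert space whenever $A$ is, and \begin{equation}\langle\omega_L|\big((\xi'-\xi)\otimes\mathbb{I}\big) = 0\end{equation} when evaluated on states in the small Hilbert space, since a large Hilbert space correlator of small Hilbert space states vanishes for lack of a $\xi$ zero mode. Any two admissible choices of $\xi$ therefore define the same $\langle\omega_S|$.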
The left hand side of \eq{cycC2} can be expressed as \begin{eqnarray} \!\!\!\!\!\!\!\!&& \langle \omega_S| \Big((m_{n+2}|_0 +m_{n+2}|_2)\otimes\mathbb{I}+\mathbb{I}\otimes (m_{n+2}|_0+m_{n+2}|_2)\Big)\pi_{n+3}^2 \nonumber\\ \!\!\!\!\!\!\!\!&& \ \ \ \ \ \ =\langle \omega_L| (\xi\otimes\mathbb{I})\Big((m_{n+2}|_0 +m_{n+2}|_2)\otimes\mathbb{I}+\mathbb{I}\otimes (m_{n+2}|_0+m_{n+2}|_2)\Big)\pi_{n+3}^2, \end{eqnarray} where for current purposes we assume that this equation acts on the small Hilbert space, including states in $\widetilde{\mathcal{H}}^{\mathrm{restricted}}$. Now in front of $\pi_{n+3}^2$ insert the identity operator in the form \begin{equation}\mathbb{I}^{\otimes n+3} = \eta\xi\otimes\mathbb{I}^{\otimes n+2},\end{equation} where $\eta\xi$ is equivalent to the identity since it acts on a state in the small Hilbert space. Moving $\eta$ to the left, it will commute with $\xi$ to give $1$ and otherwise act on states in the small Hilbert space to give zero. Thus we have \begin{eqnarray} \!\!\!\!\!\!\!\!&& \langle \omega_S| \Big((m_{n+2}|_0 +m_{n+2}|_2)\otimes\mathbb{I}+\mathbb{I}\otimes (m_{n+2}|_0+m_{n+2}|_2)\Big)\pi_{n+3}^2 \nonumber\\ \!\!\!\!\!\!\!\!&& \ \ \ \ \ \ =-\langle \omega_L| \Big((m_{n+2}|_0 +m_{n+2}|_2)\otimes\mathbb{I}+\mathbb{I}\otimes (m_{n+2}|_0+m_{n+2}|_2)\Big)\pi_{n+3}^2 (\xi\otimes\mathbb{I}^{\otimes n+2}). \end{eqnarray} From this we can see that \eq{cycC3} implies \eq{cycC2} when operating on $\widetilde{\mathcal{H}}^{\mathrm{restricted}}$. We can proceed to prove \eq{cycC3} using the recursive definition of the products. However, the proof in this form requires consideration of several different cases depending on the arrangement of NS and R inputs on the left hand side of \eq{cycC3}. Earlier we encountered a similar inconvenience in the proof of cyclicity of $\widetilde{M}_2$ at the end of subsection \ref{subsec:2string}. A more efficient route to the proof uses the coalgebra formalism, and therefore it is useful to review how cyclicity is described in this language. An $n$-string product $D_n$ is cyclic with respect to a symplectic form $\omega$ if \begin{equation}\langle \omega| (D_n\otimes\mathbb{I} +\mathbb{I}\otimes D_n)= 0.\end{equation} If we have a sequence of cyclic $n$-string products $D_0,D_1,D_2,...$ of the same degree, the corresponding coderivation ${\bf D} = {\bf D}_0+{\bf D}_1+{\bf D}_2+...$ will satisfy \begin{equation} \langle \omega|\pi_2 {\bf D} = 0. \label{eq:codcyc} \end{equation} We then say that the coderivation ${\bf D}$ is {\it cyclic} with respect to the symplectic form $\omega$. A cohomomorphism ${\bf \hat{H}}$ is {\it cyclic} with respect to $\omega$ if it satisfies \begin{equation}\langle \omega|\pi_2 {\bf \hat{H}} = \langle\omega|\pi_2. \label{eq:cohcyc}\end{equation} An example of a cyclic cohomomorphism is \begin{equation}{\bf \hat{H}} = \mathcal{P}\exp\left[\int_0^1 ds\, {\bf h}(s)\right],\end{equation} where ${\bf h}(s)$ is a one-parameter family of degree even cyclic coderivations. To prove that ${\bf \hat{H}}$ in this form is cyclic, consider ${\bf \hat{H}}(u)$ obtained by replacing the lower limit $s=0$ in the path ordered exponential above with $s=u$. Taking the derivative with respect to $u$ we find \begin{equation}\frac{d}{du} \langle \omega|\pi_2 {\bf \hat{H}}(u) = -\langle \omega|\pi_2 {\bf h}(u) {\bf \hat{H}}(u) = 0. \end{equation} This vanishes on the assumption that ${\bf h}(s)$ is cyclic. Therefore, the object $\langle \omega|\pi_2 {\bf \hat{H}}(u)$ is independent of $u$.
Setting $u=0$ and $u=1$ reproduces \eq{cohcyc}. The construction of the NS superstring field theory~\cite{WittenSS} implies that the gauge products are cyclic with respect to the large Hilbert space symplectic form when acting on NS states. Therefore we have \begin{eqnarray} \langle \omega_L|\pi_2^0 {\bm \upmu}|_0(t) = 0. \end{eqnarray} Using \eq{Gt} this also implies \begin{equation} \langle \omega_L|\pi_2^0 {\bf \hat{G}} = \langle \omega_L|\pi_2^0.\label{eq:Gcyc} \end{equation} Therefore ${\bf \hat{G}}$ is cyclic in the large Hilbert space when acting on NS states. Next it is helpful to recall a few things about the ``triangle formalism'' of the product and coproduct introduced in \cite{WB}. For this purpose we will need to think about ``tensor products'' of tensor algebras, which we denote with the symbol $\otimes'$ to avoid confusion with the tensor product~$\otimes$ defining $T\widetilde{\mathcal{H}}$. The product $\inverttriangle$ is a linear map from two copies of $T\widetilde{\mathcal{H}}$ into $T\widetilde{\mathcal{H}}$: \begin{equation}\inverttriangle: T\widetilde{\mathcal{H}}\otimes'T\widetilde{\mathcal{H}} \to T\widetilde{\mathcal{H}},\end{equation} and the coproduct $\triangle$ is a linear map from one copy of $T\widetilde{\mathcal{H}}$ into two copies of $T\widetilde{\mathcal{H}}$: \begin{equation}\triangle: T\widetilde{\mathcal{H}} \to T\widetilde{\mathcal{H}}\otimes'T\widetilde{\mathcal{H}}.\end{equation} The coproduct is defined by its action on tensor products of states: \begin{equation} \triangle A_1\otimes ... \otimes A_n = \sum_{k=0}^n (A_1\otimes ...\otimes A_k) \otimes' (A_{k+1}\otimes...\otimes A_n), \end{equation} where at the extremes of summation $\otimes'$ multiplies the identity element $1_{T\widetilde{\mathcal{H}}}$ of the tensor algebra. The product $\inverttriangle$ acts by replacing the primed tensor product $\otimes'$ with the tensor product $\otimes$. A coderivation ${\bf D}$ and a cohomomorphism ${\bf \hat{H}}$ satisfy the following compatibility conditions with respect to the coproduct: \begin{eqnarray} \triangle {\bf D} \!\!\!\!\!\!\!\!&& = ({\bf D}\otimes' \mathbb{I}_{T\widetilde{\mathcal{H}}} + \mathbb{I}_{T\widetilde{\mathcal{H}}}\otimes'{\bf D})\triangle,\label{eq:cod}\\ \triangle {\bf \hat{H}}\!\!\!\!\!\!\!\!&& = ({\bf \hat{H}}\otimes'{\bf \hat{H}})\triangle.\label{eq:coh} \end{eqnarray} These are in fact the defining properties of coderivations and cohomomorphisms. The useful identity for our computations is \begin{equation}\pi_{m+n} = \inverttriangle\!\! \Big[\pi_m\otimes'\pi_n\Big]\triangle.\end{equation} A generalization which also accounts for a projection onto $r$ Ramond factors is \begin{equation}\pi_{m+n}^r = \sum_{k=0}^r \inverttriangle\!\! \Big[\pi_m^{r-k}\otimes'\pi_n^k\Big]\triangle,\end{equation} with the understanding that $\pi_n^k$ vanishes if $k>n$. Using coalgebra notation, the key equation \eq{cycC3} can be expressed as follows: \begin{equation}\langle \omega_L| \big(\pi_2^2{\bf m}|_0+\pi_2^0 {\bf m}|_2\big)=0.\label{eq:cyc3}\end{equation} To prove this, consider the second term on the left hand side: \begin{eqnarray}\langle \omega_L|\pi_2^0 {\bf m}|_2 \!\!\!\!\!\!\!\!&& = \langle \omega_L|\pi_2^0 {\bf \hat{G}}^{-1}{\bf m}_2|_2{\bf \hat{G}}\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_L|\pi_2^0 {\bf \hat{G}}{\bf \hat{G}}^{-1}{\bf m}_2|_2{\bf \hat{G}}\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle \omega_L|\pi_2^0{\bf m}_2|_2{\bf \hat{G}}.
\end{eqnarray} In the second step we used the fact that ${\bf \hat{G}}$ is cyclic with respect to $\omega_L$ when it only has NS outputs, as in \eq{Gcyc}. Next consider the first term of \eq{cyc3}. Expressing $\pi_2^2$ in terms of the product and coproduct gives \begin{eqnarray} \langle \omega_L|\pi_2^2 {\bf m}|_0 \!\!\!\!\!\!\!\!&& = \langle \omega_L|\!\!\inverttriangle\!\!\Big[\pi_1^1\otimes' \pi_1^1\Big]\triangle\, {\bf m}|_0\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle\omega_L|\!\!\inverttriangle\!\!\Bigg[(\pi_1^1\otimes'\pi_1^1)({\bf m}|_0\otimes'\mathbb{I}_{T\widetilde{\mathcal{H}}} + \mathbb{I}_{T\widetilde{\mathcal{H}}}\otimes'{\bf m}|_0)\Bigg]\triangle\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle\omega_L|\!\!\inverttriangle\!\!\Bigg[\Big(\pi_1^1{\bf m}|_0\Big)\otimes' \pi_1^1 + \pi_1^1\otimes'\Big(\pi_1^1{\bf m}|_0\Big)\Bigg]\triangle. \end{eqnarray} The form of ${\bf m}|_0$ with one Ramond output is given in \eq{m1R}. Plugging in gives \begin{equation}\langle \omega_L|\pi_2^2 {\bf m}|_0 = \langle\omega_L|\!\!\inverttriangle\!\!\Big[(\pi_1^1{\bf m}_2|_0{\bf \hat{G}})\otimes' \pi_1^1 + \pi_1^1\otimes'(\pi_1^1{\bf m}_2|_0{\bf \hat{G}})\Big]\triangle.\end{equation} The factor $\pi_1^1$ in the two terms above can be written as \begin{equation} \pi_1^1 = \pi_1^1 {\bf \hat{G}}^{-1}{\bf \hat{G}} = \pi_1^1{\bf \hat{G}} - \tilde{\xi }\pi_1^1{\bf m}_2|_0{\bf \hat{G}}, \end{equation} where we used \eq{GinvR}. Therefore we have \begin{eqnarray} \langle \omega_L|\pi_2^2 {\bf m}|_0 \!\!\!\!\!\!\!\!&& = \langle\omega_L|\!\!\inverttriangle\!\!\Bigg[\big(\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big)\otimes' \big(\pi_1^1{\bf \hat{G}}\big) + \big(\pi_1^1{\bf \hat{G}}\big)\otimes'\big(\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big)\Bigg]\triangle\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ - \langle\omega_L|\!\!\inverttriangle\!\!\Bigg[\big(\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big)\otimes' \big(\tilde{\xi }\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big) + \big(\tilde{\xi }\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big)\otimes'\big(\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big)\Bigg]\triangle. \end{eqnarray} The second term above can be simplified as follows: \begin{eqnarray} \!\!\!\!\!\!\!\!&& \langle\omega_L|\!\!\inverttriangle\!\!\Bigg[\Big(\mathbb{I}\otimes'\tilde{\xi }\Big)\Big(\big(\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big)\otimes' \big(\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big)\Big) - \Big(\tilde{\xi }\otimes'\mathbb{I}\Big)\Big(\big(\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big)\otimes'\big(\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big)\Big)\Bigg]\triangle\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\langle\omega_L|(\mathbb{I}\otimes\tilde{\xi }-\tilde{\xi }\otimes\mathbb{I})\!\!\inverttriangle\!\!\Bigg[\big(\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big)\otimes' \big(\pi_1^1{\bf m}_2|_0{\bf \hat{G}}\big) \Bigg]\triangle, \end{eqnarray} which vanishes since $\tilde{\xi }$ is BPZ even. 
With what is left we can disentangle the product and coproduct: \begin{eqnarray} \langle \omega_L|\pi_2^2 {\bf m}|_0 \!\!\!\!\!\!\!\!&& = \langle\omega_L|\!\!\inverttriangle\!\!\Bigg[(\pi_1^1\otimes'\pi_1^1)({\bf m}_2|_0\otimes'\mathbb{I}_{T\widetilde{\mathcal{H}}}+\mathbb{I}_{T\widetilde{\mathcal{H}}}\otimes'{\bf m}_2|_0)({\bf \hat{G}}\otimes'{\bf \hat{G}})\Bigg]\triangle\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle\omega_L|\!\!\inverttriangle\!\!\Bigg[(\pi_1^1\otimes'\pi_1^1)({\bf m}_2|_0\otimes'\mathbb{I}_{T\widetilde{\mathcal{H}}}+\mathbb{I}_{T\widetilde{\mathcal{H}}}\otimes'{\bf m}_2|_0)\Bigg]\triangle {\bf \hat{G}}\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle\omega_L|\!\!\inverttriangle\!\!\Big[\pi_1^1\otimes'\pi_1^1\Big]\triangle {\bf m}_2|_0 {\bf \hat{G}}\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle\omega_L|\pi_2^2 {\bf m}_2|_0{\bf \hat{G}}. \end{eqnarray} Bringing the first and second terms in \eq{cyc3} together therefore gives \begin{equation} \langle \omega_L| \big(\pi_2^2{\bf m}|_0+\pi_2^0 {\bf m}|_2\big) = \langle \omega_L|(\pi_2^2 {\bf m}_2|_0+\pi_2^0{\bf m}_2|_2){\bf \hat{G}}. \end{equation} Commuting the projectors past the star product in the two terms gives \begin{eqnarray} \langle \omega_L| \big(\pi_2^2{\bf m}|_0+\pi_2^0 {\bf m}|_2\big) \!\!\!\!\!\!\!\!&& = \langle \omega_L|\pi_2({\bf m}_2|_0+{\bf m}_2|_2)\pi_3^2{\bf \hat{G}}\nonumber\\ \!\!\!\!\!\!\!\!&& = \langle\omega_L|\pi_2{\bf m}_2\pi_3^2{\bf \hat{G}}\nonumber\\ \!\!\!\!\!\!\!\!&& = 0, \end{eqnarray} which vanishes since the star product is cyclic with respect to the large Hilbert space symplectic form. This completes the proof of cyclicity. \subsection{Relation to Sen's Formulation} \label{subsec:Sen} Here we would like to spell out the relation between our treatment of the Ramond sector and the approach developed by Sen \cite{1PIR,SenBV}. The main advantage of Sen's approach is that it utilizes simpler picture changing insertions, which may facilitate calculations. On the other hand, the theory propagates spurious free fields and does not directly display a cyclic $A_\infty$ structure. Sen's approach requires two dynamical string fields \begin{eqnarray} \widetilde{\Psi}\!\!\!\!\!\!\!\!&& = \Psi_\mathrm{NS}+\Psi_\mathrm{R},\\ \widetilde{\Pi}\!\!\!\!\!\!\!\!&& = \Pi_\mathrm{NS}+\Pi_\mathrm{R}. \end{eqnarray} The NS fields $\Psi_\mathrm{NS}$ and $\Pi_\mathrm{NS}$ are in the small Hilbert space, degree even, and carry ghost number 1 and picture $-1$. The Ramond fields $\Psi_\mathrm{R}$ and $\Pi_\mathrm{R}$ are in the small Hilbert space, degree even, and at ghost number 1, but carry different pictures: $\Psi_\mathrm{R}$ carries picture $-1/2$ and $\Pi_\mathrm{R}$ carries picture $-3/2$. In this approach it is not necessary to assume that $\mathscr{X}\mathscr{Y}\Psi_\mathrm{R} = \Psi_\mathrm{R}$. The action takes the form \begin{equation} S = -\frac{1}{2}\omega_S(\widetilde{\Pi},\mathcal{G}Q\widetilde{\Pi}) + \omega_S(\widetilde{\Pi},Q\widetilde{\Psi}) + \frac{1}{3}\omega_S(\widetilde{\Psi},\widetilde{b}_2(\widetilde{\Psi},\widetilde{\Psi})) + \frac{1}{4}\omega_S(\widetilde{\Psi},\widetilde{b}_3(\widetilde{\Psi},\widetilde{\Psi},\widetilde{\Psi}))+..., \end{equation} where $\widetilde{b}_{n+2}$ are degree odd multi-string products which appropriately multiply NS and R states, and the operator $\mathcal{G}$ is defined by \begin{eqnarray} \mathcal{G} \!\!\!\!\!\!\!\!&& = \mathbb{I}\ \ \ \ \ (\mathrm{acting\ on\ NS\ state}),\nonumber\\ \mathcal{G} \!\!\!\!\!\!\!\!&& = X\ \ \ \ (\mathrm{acting\ on\ R\ state}). 
\end{eqnarray} For present purposes we can assume that the picture changing operator $X$ is defined as in \eq{X}. In particular, $\mathcal{G}$ is BPZ even and $[Q,\mathcal{G}] = 0$. The action does not realize a cyclic $A_\infty$ structure in the standard sense, but the products $\widetilde{b}_{n+2}$ satisfy a hierarchy of closely related algebraic identities. To describe them, we introduce a sequence of degree odd multi-string products \begin{equation} \widetilde{M}_1\equiv Q,\ \ \widetilde{M}_2,\ \ \widetilde{M}_3,\ \ \widetilde{M}_4,\ \ ..., \end{equation} where \begin{equation}\widetilde{M}_{n+2} \equiv \mathcal{G}\widetilde{b}_{n+2}\ \ \ \ \ (n=0,1,2,...).\label{eq:bc}\end{equation} The relation to the composite products introduced earlier will be clear in a moment. The first few algebraic relations satisfied by the multi-string products are \begin{eqnarray} 0\!\!\!\!\!\!\!\!&& = Q\widetilde{b}_2(A,B) + \widetilde{b}_2(QA,B) +(-1)^{\mathrm{deg}(A)}\widetilde{b}_2(A,QB),\\ 0\!\!\!\!\!\!\!\!&& = Q\widetilde{b}_3(A,B,C) + \widetilde{b}_3(QA,B,C)+(-1)^{\mathrm{deg}(A)}\widetilde{b}_3(A,QB,C)+(-1)^{\mathrm{deg}(A)+\mathrm{deg}(B)}\widetilde{b}_3(A,B,QC)\ \ \ \ \ \ \nonumber\\ \!\!\!\!\!\!\!\!&&\ \ +\widetilde{b}_2(\widetilde{M}_2(A,B),C) + (-1)^{\mathrm{deg}(A)}\widetilde{b}_2(A,\widetilde{M}_2(B,C)),\\ \!\!\!\!\!\!\!\!&&\vdots\ \ .\nonumber \end{eqnarray} More abstractly, the full set of algebraic relations can be described using the coderivations \begin{eqnarray} \widetilde{{\bf b}}\!\!\!\!\!\!\!\!&& \equiv \widetilde{{\bf b}}_2 + \widetilde{{\bf b}}_3 +\widetilde{{\bf b}}_4 + ...,\\ \widetilde{{\bf M}}\!\!\!\!\!\!\!\!&& \equiv {\bf Q} + \widetilde{{\bf M}}_2 + \widetilde{{\bf M}}_3 +\widetilde{{\bf M}}_4 + ..., \end{eqnarray} as \begin{equation}\pi_1({\bf Q}\widetilde{{\bf b}} + \widetilde{{\bf b}}\widetilde{{\bf M}}) = 0.\label{eq:bcid} \end{equation} In addition, gauge invariance requires that the products $\widetilde{b}_{n+2}$ are cyclic with respect to the small Hilbert space symplectic form: \begin{equation}\langle \omega_S|\pi_2\widetilde{{\bf b}} = 0.\end{equation} Note that \eq{bc} implies \begin{equation} \mathcal{G}\pi_1\widetilde{{\bf b}} = \pi_1(\widetilde{{\bf M}}-{\bf Q}). \end{equation} Multiplying \eq{bcid} by $\mathcal{G}$ gives \begin{eqnarray} 0 \!\!\!\!\!\!\!\!&& = \mathcal{G}\pi_1({\bf Q}\widetilde{{\bf b}} + \widetilde{{\bf b}}\widetilde{{\bf M}})\nonumber\\ \!\!\!\!\!\!\!\!&& =\pi_1\Big({\bf Q}(\widetilde{{\bf M}}-{\bf Q}) + (\widetilde{{\bf M}}-{\bf Q})\widetilde{{\bf M}}\Big)\nonumber\\ \!\!\!\!\!\!\!\!&& = \pi_1\widetilde{{\bf M}}^2, \end{eqnarray} which implies that the products $\widetilde{M}_{n+1}$ satisfy $A_\infty$ relations: \begin{equation} [\widetilde{{\bf M}},\widetilde{{\bf M}}] = 0. \end{equation} However, the products $\widetilde{M}_{n+1}$ are not required to be cyclic. Rather, cyclicity is realized by the products $\widetilde{b}_{n+2}$ which appear in the action. We will explain why this formulation leads to a gauge invariant action in appendix \ref{app:Sen}. As suggested by the notation, it is natural to identify $\widetilde{M}_{n+1}$ with the composite products constructed earlier. 
Indeed the composite products can be written in the form \begin{equation} \widetilde{M}_{n+2} = \widetilde{\mathcal{G}}\ \widetilde{b}_{n+2}\ \ \ \ \ (n=0,1,2,...)\label{eq:bct} \end{equation} for some products $\widetilde{b}_{n+2}$, where \begin{eqnarray} \widetilde{\mathcal{G}} \!\!\!\!\!\!\!\!&& = \mathbb{I}\ \ \ \ \ (\mathrm{acting\ on\ NS\ state}),\nonumber\\ \widetilde{\mathcal{G}} \!\!\!\!\!\!\!\!&& = \widetilde{X}\ \ \ \ (\mathrm{acting\ on\ R\ state}). \end{eqnarray} This differs from \eq{bc} only in that the picture changing operator $X$ is replaced by $\widetilde{X}$. Therefore it is natural to construct the products as before but replacing the picture changing insertion in \eq{mun02} as \begin{equation}\tilde{\xi }\to\xi.\end{equation} Then the composite products satisfy \eq{bc}, where $\widetilde{b}_{n+2}$ takes the form \begin{eqnarray} \widetilde{b}_{n+2}= \left\{\begin{matrix*}[l] \ \ \displaystyle{\frac{1}{n+3}}\Big(X m_{n+2}|_0+m_{n+2}|_0(X\!\otimes\!\mathbb{I}^{\otimes n+1}+...+ \mathbb{I}^{\otimes n+1}\!\otimes\! X)\Big) & (\mathrm{0\ Ramond\ inputs}) \phantom{\bigg[}\\ \ \ m_{n+2}|_0 & (\mathrm{1\ Ramond\ input}) \phantom{\bigg[}\\ \ \ m_{n+2}|_2 & (\mathrm{2\ Ramond\ inputs})\phantom{\bigg[} \\ \ \ 0& (\mathrm{otherwise})\phantom{\bigg[} \end{matrix*}\right. \nonumber\\ \end{eqnarray} with the understanding that $m_{n+2}|_0$ and $m_{n+2}|_2$ are constructed out of $\xi$ rather than $\tilde{\xi }$. We can show that $\widetilde{b}_{n+2}$ satisfies \eq{bcid} by pulling a factor of $\mathcal{G}$ out of the $A_\infty$ relations for $\widetilde{M}_{n+2}$.\footnote{Note that the products $\widetilde{M}_{n+1}$ satisfy $A_\infty$ relations regardless of whether or not $X$ has a kernel. This can only be true if \eq{bcid} holds regardless of whether $X$ has a kernel. However, it is not difficult to check \eq{bcid} directly.} Furthermore, the cyclicity of $\widetilde{b}_{n+2}$ follows from the proof of \eq{cycC3} in the previous subsection with the replacement of $\tilde{\xi }$ with~$\xi$. \section{Relation to the WZW-based Formulation} \label{sec:large} In this section we explain the relation of our construction to the WZW-based formulation of \cite{complete}. The relation between the NS sectors was considered in \cite{OkWB,WB,WBlarge}, and our task will be to extend this analysis to the Ramond sector. The WZW-based theory uses an NS dynamical field \begin{equation}\widehat{\Phi}_\mathrm{NS},\end{equation} which is Grassmann even, carries ghost and picture number zero, and lives in the large Hilbert space (generically $\eta\widehat{\Phi}_\mathrm{NS}\neq0$). The dynamical Ramond field \begin{equation}\widehat{\Psi}_\mathrm{R},\end{equation} is the same kind of state as the Ramond field $\Psi_\mathrm{R}$ from the $A_\infty$ theory; it is Grassmann odd, carries ghost number 1 and picture $-1/2$, and lives in the restricted space in the Ramond sector. We will always denote objects in the WZW-based theory with a ``hat'' to distinguish them from the corresponding objects defined in the $A_\infty$ theory.
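It is worth noting that the quantum numbers already match up in the expected way: since $\eta$ carries ghost number $1$ and picture $-1$, the combination $\eta\widehat{\Phi}_\mathrm{NS}$ carries ghost number $1$ and picture $-1$, precisely the quantum numbers of the dynamical NS field of the $A_\infty$ theory in the small Hilbert space. This simple counting anticipates the identification $\Psi_\mathrm{NS}=\eta\Phi_\mathrm{NS}$ used below.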
To write the NS sector of the action in WZW-like form, we introduce a one-parameter family of NS string fields $\widehat{\Phi}_\mathrm{NS}(t),t\in[0,1]$, subject to the boundary conditions \begin{equation}\widehat{\Phi}_\mathrm{NS}(0) = 0,\ \ \ \ \widehat{\Phi}_\mathrm{NS}(1) = \widehat{\Phi}_\mathrm{NS}.\end{equation} The WZW-based action of \cite{complete} can be written as \begin{equation}\widehat{S} = \frac{1}{2}\langle \mathscr{Y}\widehat{\Psi}_\mathrm{R},Q\widehat{\Psi}_\mathrm{R}\rangle_S - \int_0^1dt\, \langle \widehat{A}_t(t), Q \widehat{A}_\eta(t)+ (\widehat{F}(t)\widehat{\Psi}_\mathrm{R})^2\rangle_L.\label{eq:lHSaction} \end{equation} The ``potentials'' are defined by \begin{eqnarray} \widehat{A}_\eta(t)\!\!\!\!\!\!\!\!&& \equiv (\eta e^{\widehat{\Phi}_\mathrm{NS}(t)})e^{-\widehat{\Phi}_\mathrm{NS}(t)},\nonumber\\ \widehat{A}_t(t)\!\!\!\!\!\!\!\!&&\equiv\left(\frac{d}{dt} e^{\widehat{\Phi}_\mathrm{NS}(t)}\right) e^{-\widehat{\Phi}_\mathrm{NS}(t)}. \end{eqnarray} The object $\widehat{F}(t)$ is a linear operator acting on string fields, defined by \begin{equation} \widehat{F}(t) \equiv \frac{1}{\mathbb{I}-\tilde{\xi } \mathrm{ad}_{\widehat{A}_\eta(t)}}, \end{equation} where $\mathrm{ad}_{\widehat{A}_\eta(t)}$ refers to the adjoint action of $\widehat{A}_\eta(t)$: \begin{equation}\mathrm{ad}_{\widehat{A}_\eta(t)} \Psi \equiv [\widehat{A}_\eta(t),\Psi].\end{equation} All products of string fields are computed with the open string star product $AB=A*B$, and all commutators of string fields are graded with respect to Grassmann parity. The WZW-based action only depends on the value of $\widehat{\Phi}_\mathrm{NS}(t)$ at $t=1$. Variation of the action produces the equations of motion \cite{complete} \begin{eqnarray} 0\!\!\!\!\!\!\!\!&& = Q \widehat{A}_\eta + (\widehat{F}\widehat{\Psi}_\mathrm{R})^2,\label{eq:lNSEOM}\\ 0\!\!\!\!\!\!\!\!&& = Q \widehat{F}\widehat{\Psi}_\mathrm{R}. \label{eq:lREOM} \end{eqnarray} Unless the dependence on $t$ is explicitly indicated, we will assume $t=1$ here and in what follows. \subsection{Field Redefinition} The relation between these string field theories can be extracted by inspection of the equations of motion \cite{WB}. The equations of motion of the $A_\infty$ theory can be expressed in the form \begin{equation}0=\widetilde{{\bf M}}\frac{1}{1-\widetilde{\Psi}},\label{eq:cohEOM}\end{equation} where \begin{equation} \frac{1}{1-\widetilde{\Psi}} = 1_{T\widetilde{\mathcal{H}}}\, +\, \widetilde{\Psi}\, +\widetilde{\Psi}\otimes\widetilde{\Psi} + \widetilde{\Psi}\otimes\widetilde{\Psi}\otimes\widetilde{\Psi} +... \end{equation} denotes the group-like element generated by $\widetilde{\Psi}$. Since \begin{equation}\widetilde{{\bf M}} = {\bf \hat{G}}^{-1}({\bf Q}+{\bf m}_2|_2){\bf \hat{G}},\end{equation} multiplying \eq{cohEOM} by ${\bf \hat{G}}$ gives \begin{equation} 0 = ({\bf Q}+{\bf m}_2|_2){\bf \hat{G}} \frac{1}{1-\widetilde{\Psi}}.
\end{equation} Let us look at the component of this equation with one NS output: \begin{eqnarray} 0\!\!\!\!\!\!\!\!&& = \pi_1^0({\bf Q}+{\bf m}_2|_2){\bf \hat{G}} \frac{1}{1-\widetilde{\Psi}}\nonumber\\ \!\!\!\!\!\!\!\!&& = Q\pi_1^0{\bf \hat{G}}\frac{1}{1-\widetilde{\Psi}} + m_2\pi_2^2 {\bf \hat{G}}\frac{1}{1-\widetilde{\Psi}}\nonumber\\ \!\!\!\!\!\!\!\!&& = Q\left(\pi_1^0{\bf \hat{G}}\frac{1}{1-\widetilde{\Psi}}\right) + m_2\left(\pi_1^1 {\bf \hat{G}}\frac{1}{1-\widetilde{\Psi}},\pi_1^1 {\bf \hat{G}}\frac{1}{1-\widetilde{\Psi}}\right).\label{eq:BANSEOM1} \end{eqnarray} The component with one Ramond output is \begin{eqnarray} 0\!\!\!\!\!\!\!\!&& = \pi_1^1({\bf Q}+{\bf m}_2|_2){\bf \hat{G}} \frac{1}{1-\widetilde{\Psi}}\nonumber\\ \!\!\!\!\!\!\!\!&& = Q\pi_1^1{\bf \hat{G}}\frac{1}{1-\widetilde{\Psi}} + m_2\pi_2^3 {\bf \hat{G}}\frac{1}{1-\widetilde{\Psi}}\nonumber\\ \!\!\!\!\!\!\!\!&& = Q\left(\pi_1^1{\bf \hat{G}}\frac{1}{1-\widetilde{\Psi}}\right). \label{eq:BAREOM1} \end{eqnarray} Further note that \begin{eqnarray} \pi_1^0{\bf \hat{G}}\frac{1}{1-\widetilde{\Psi}}\!\!\!\!\!\!\!\!&& = \pi_1{\bf \hat{G}}\frac{1}{1-\Psi_\mathrm{NS}},\\ \pi_1^1{\bf \hat{G}}\frac{1}{1-\widetilde{\Psi}}\!\!\!\!\!\!\!\!&& = \pi_1{\bf \hat{G}}\frac{1}{1-\Psi_\mathrm{NS}}\otimes\Psi_\mathrm{R}\otimes\frac{1}{1-\Psi_\mathrm{NS}}, \end{eqnarray} and define \begin{eqnarray} A_\eta\!\!\!\!\!\!\!\!&& \equiv \pi_1{\bf \hat{G}}\frac{1}{1-\Psi_\mathrm{NS}},\label{eq:An}\\ F\Psi_\mathrm{R}\!\!\!\!\!\!\!\!&& \equiv \pi_1{\bf \hat{G}}\frac{1}{1-\Psi_\mathrm{NS}}\otimes\Psi_\mathrm{R}\otimes\frac{1}{1-\Psi_\mathrm{NS}}.\label{eq:FPsi} \end{eqnarray} Therefore \eq{BANSEOM1} and \eq{BAREOM1} reduce to \begin{eqnarray} 0\!\!\!\!\!\!\!\!&& = QA_\eta +(F\Psi_\mathrm{R})^2,\nonumber\\ 0\!\!\!\!\!\!\!\!&& = QF\Psi_\mathrm{R}. \end{eqnarray} These are the same as the equations of motion of the WZW-based theory, \eq{lNSEOM} and \eq{lREOM}, with the ``hats'' missing. It is therefore natural to suppose that the field redefinition between the theories is given by equating \begin{eqnarray} \widehat{A}_\eta \!\!\!\!\!\!\!\!&& = A_\eta,\\ \widehat{F}\widehat{\Psi}_\mathrm{R} \!\!\!\!\!\!\!\!&& = F\Psi_\mathrm{R}.\label{eq:fieldred} \end{eqnarray} In the NS sector, this only specifies the field redefinition up to a gauge transformation of the form \begin{equation}e^{\widehat{\Phi}_\mathrm{NS}'} = e^{\widehat{\Phi}_\mathrm{NS}} e^v,\ \ \ \ \eta v=0, \label{eq:4gt}\end{equation} where $v$ is a gauge parameter, since this transformation leaves $\widehat{A}_\eta$ invariant. This ambiguity can be removed by partial gauge fixing \cite{INOT,OkWB,WB}, or by lifting the NS sector of the $A_\infty$ theory to the large Hilbert space \cite{WBlarge}, as will be reviewed in the next subsection. To further simplify the field redefinition in the Ramond sector let us take a closer look at $F\Psi_\mathrm{R}$.
Consider the expression: \begin{equation}\pi_1^1 {\bf \hat{G}}^{-1}{\bf \hat{G}}\frac{1}{1-\Psi_\mathrm{NS}}\otimes\Psi_\mathrm{R}\otimes\frac{1}{1-\Psi_\mathrm{NS}}.\end{equation} Canceling ${\bf \hat{G}}^{-1}$ and ${\bf \hat{G}}$ and projecting onto the 1-string output gives \begin{equation}\pi_1^1 {\bf \hat{G}}^{-1}{\bf \hat{G}}\frac{1}{1-\Psi_\mathrm{NS}}\otimes\Psi_\mathrm{R}\otimes\frac{1}{1-\Psi_\mathrm{NS}} = \Psi_\mathrm{R}.\label{eq:4dr1}\end{equation} On the other hand, we can substitute \eq{GinvR} for $\pi_1^1{\bf \hat{G}}^{-1}$, obtaining \begin{eqnarray} \!\!\!\!\!\!\!\!&& \pi_1^1 {\bf \hat{G}}^{-1}{\bf \hat{G}} \frac{1}{1-\Psi_\mathrm{NS}}\otimes\Psi_\mathrm{R}\otimes\frac{1}{1-\Psi_\mathrm{NS}} \nonumber\\ \!\!\!\!\!\!\!\!&& \ \ \ \ = \pi_1^1(\mathbb{I}_{T\widetilde{\mathcal{H}}} - \tilde{\bm{\upxi}} {\bf m}_2|_0 ){\bf \hat{G}}\frac{1}{1-\Psi_\mathrm{NS}}\otimes\Psi_\mathrm{R}\otimes\frac{1}{1-\Psi_\mathrm{NS}}\nonumber\\ \!\!\!\!\!\!\!\!&& \ \ \ \ = \pi_1^1{\bf \hat{G}} \frac{1}{1-\Psi_\mathrm{NS}}\otimes\Psi_\mathrm{R}\otimes\frac{1}{1-\Psi_\mathrm{NS}} - \tilde{\xi } m_2|_0\pi_2^1{\bf \hat{G}} \frac{1}{1-\Psi_\mathrm{NS}}\otimes\Psi_\mathrm{R}\otimes\frac{1}{1-\Psi_\mathrm{NS}}.\label{eq:4dr2} \end{eqnarray} The first term on the right hand side is $F\Psi_\mathrm{R}$. By writing \begin{equation} \pi_2^1 = \inverttriangle\!\Big[\pi_1^1\otimes'\pi_1^0+\pi_1^0\otimes'\pi_1^1\Big]\triangle \end{equation} we can show that the second term on the right hand side is \begin{eqnarray} \tilde{\xi } m_2|_0 \pi_2^1{\bf \hat{G}} \frac{1}{1-\Psi_\mathrm{NS}}\otimes\Psi_\mathrm{R}\otimes\frac{1}{1-\Psi_\mathrm{NS}} \!\!\!\!\!\!\!\!&& = \tilde{\xi } m_2|_0\big(F\Psi_\mathrm{R}\otimes A_\eta + A_\eta\otimes F\Psi_\mathrm{R}\big)\nonumber\\ \!\!\!\!\!\!\!\!&& = \tilde{\xi } [A_\eta,F\Psi_\mathrm{R}],\label{eq:OkDer1} \end{eqnarray} where in the last step we switched from degree to Grassmann grading.\footnote{The coproduct $\triangle$ acts on a group-like element as~\cite{WB} \begin{equation} \triangle \frac{1}{1-A} = \frac{1}{1-A} \otimes' \frac{1}{1-A}. \end{equation} A straightforward generalization gives the formulas \begin{eqnarray} \triangle \frac{1}{1-A} \otimes B \otimes \frac{1}{1-A} \!\!\!\!\!\!\!\!&& = \frac{1}{1-A} \otimes' \frac{1}{1-A} \otimes B \otimes \frac{1}{1-A}+\frac{1}{1-A} \otimes B \otimes \frac{1}{1-A} \otimes' \frac{1}{1-A},\ \ \ \ \ \ \ \ \\ \triangle \frac{1}{1-A} \otimes B \otimes \frac{1}{1-A} \otimes C \otimes \frac{1}{1-A}\!\!\!\!\!\!\!\!&& = \frac{1}{1-A} \otimes' \frac{1}{1-A} \otimes B \otimes \frac{1}{1-A} \otimes C \otimes \frac{1}{1-A}\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ +\frac{1}{1-A} \otimes B \otimes \frac{1}{1-A} \otimes' \frac{1}{1-A} \otimes C \otimes \frac{1}{1-A}\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ +\frac{1}{1-A} \otimes B \otimes \frac{1}{1-A} \otimes C \otimes \frac{1}{1-A} \otimes' \frac{1}{1-A}. \end{eqnarray} We use the first formula in the derivation of \eq{OkDer1}, and later the second formula in the derivation of~\eq{OkDer2} and the calculation of \eq{OkDer4} from \eq{OkDer3}.} Equating \eq{4dr1} and \eq{4dr2} then implies \begin{equation}\Psi_\mathrm{R} = F\Psi_\mathrm{R} -\tilde{\xi }[A_\eta,F\Psi_\mathrm{R} ].\end{equation} This can be interpreted as a recursive formula for $F\Psi_\mathrm{R}$: \begin{equation} F\Psi_\mathrm{R} = \Psi_\mathrm{R} +\tilde{\xi } [A_\eta,F\Psi_\mathrm{R}]. 
\end{equation} Plugging this formula into itself implies \begin{equation} F\Psi_\mathrm{R} = \frac{1}{\mathbb{I}-\tilde{\xi }\mathrm{ad}_{A_\eta}}\Psi_\mathrm{R}. \end{equation} This is the same formula which defines $\widehat{F}\widehat{\Psi}_\mathrm{R}$, but with the ``hats'' missing. Since the field redefinition in the NS sector implies $\widehat{A}_\eta=A_\eta$, the field redefinition in the Ramond sector simplifies to \begin{equation}\widehat{\Psi}_\mathrm{R} = \Psi_\mathrm{R}.\end{equation} The Ramond fields are equal; there is no field redefinition between them. This was anticipated in~\cite{complete} and is not surprising for the following reason. Since the Ramond fields have identical kinetic terms, we can assume a field redefinition relating them takes the form \begin{equation}\widehat{\Psi}_\mathrm{R} = \Psi_\mathrm{R} + \mathscr{X}\Big(\widetilde{f}_2(\widetilde{\Psi},\widetilde{\Psi}) + \widetilde{f}_3(\widetilde{\Psi},\widetilde{\Psi},\widetilde{\Psi}) +...\Big),\end{equation} where $\widetilde{f}_2,\widetilde{f}_3,...$ are string products and the factor of $\mathscr{X}$ is needed to ensure that both fields live in the restricted space. Since the interaction vertices of both theories are built out of $Q,\tilde{\xi }$ and the open string star product, it is natural to assume that the field redefinition can be constructed from these operations. The $(n+2)$-product in the field redefinition $\widetilde{f}_{n+2}$ must carry ghost number $-n-1$. Therefore it must contain at least $n+1$ insertions of $\tilde{\xi }$, since no other operations carry negative ghost number. This implies that $\widetilde{f}_{n+2}$ carries at least picture $n+1$, and $\widetilde{f}_{n+2}(\widetilde{\Psi},...,\widetilde{\Psi})$ must have picture greater than or equal to $-1$. However, consistency of the field redefinition requires that $\widetilde{f}_{n+2}(\widetilde{\Psi},...,\widetilde{\Psi})$ carries picture $-3/2$. Therefore $\widetilde{f}_{n+2}$ must vanish, and the Ramond fields are equal. We therefore conclude that the field redefinition between the $A_\infty$ theory and WZW-based theory is \begin{eqnarray} \widehat{A}_\eta \!\!\!\!\!\!\!\!&& = A_\eta,\label{eq:fieldred2}\\ \widehat{\Psi}_\mathrm{R} \!\!\!\!\!\!\!\!&& = \Psi_\mathrm{R},\label{eq:fieldred3} \end{eqnarray} up to a gauge transformation of the form \eq{4gt}. It is important to note that the proposed field redefinition is consistent with the assumption that $\Psi_\mathrm{NS}$ and $\Psi_\mathrm{R}$ are in the small Hilbert space. In the Ramond sector this is obvious. In the NS sector it follows from the fact that $A_\eta$ and $\widehat{A}_\eta$ satisfy \begin{equation} \eta A_\eta - A_\eta*A_\eta = 0,\ \ \ \eta \widehat{A}_\eta - \widehat{A}_\eta*\widehat{A}_\eta = 0. \end{equation} See \cite{OkWB,WB}. \subsection{Equivalence of the Actions} Here we demonstrate that the field redefinition given by \eq{fieldred2} and \eq{fieldred3} relates the theories at the level of the action, not just the equations of motion. Following the analysis of \cite{OkWB,WBlarge}, this can be shown by expressing the $A_\infty$ action in the same form as the WZW-based action, including the contribution from the Ramond sector. Let us explain how this is done. The $(n+3)$-string vertex in the $A_\infty$ action is \begin{equation}\frac{1}{n+3}\widetilde{\omega}(\widetilde{\Psi},\widetilde{M}_{n+2}(\widetilde{\Psi},...,\widetilde{\Psi})).\end{equation} Let us expand $\widetilde{\Psi}$ into NS and R components.
Since the composite products multiply at most two Ramond states, the expanded vertex takes the form \begin{eqnarray} \!\!\!\!\!\!\!\!&& \frac{1}{n+3}\widetilde{\omega}(\widetilde{\Psi},\widetilde{M}_{n+2}(\widetilde{\Psi},...,\widetilde{\Psi}))= \frac{1}{n+3}\Bigg[\widetilde{\omega}(\Psi_\mathrm{NS},\widetilde{M}_{n+2}(\Psi_\mathrm{NS},...,\Psi_\mathrm{NS}))\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\sum_{k=0}^{n+1} \widetilde{\omega}(\Psi_\mathrm{R},\widetilde{M}_{n+2}(\underbrace{\Psi_\mathrm{NS},..., \Psi_\mathrm{NS}}_{k\ \mathrm{times}},\Psi_\mathrm{R},\Psi_\mathrm{NS},...,\Psi_\mathrm{NS}))\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\sum_{k=0}^n\sum_{j=0}^{n-k}\widetilde{\omega}(\Psi_\mathrm{NS},\widetilde{M}_{n+2}(\underbrace{\Psi_\mathrm{NS},...,\Psi_\mathrm{NS}}_{k\ \mathrm{times}},\Psi_\mathrm{R}, \underbrace{\Psi_\mathrm{NS},...,\Psi_\mathrm{NS}}_{j\ \mathrm{times}},\Psi_\mathrm{R},\Psi_\mathrm{NS},...,\Psi_\mathrm{NS}))\Bigg].\ \ \ \ \ \ \ \ \ \ \end{eqnarray} Many terms in these sums are redundant. In fact, using cyclicity we can write the sum in the second line as $2/(n+1)$ times the double sum in the third line. Therefore we have \begin{eqnarray} \!\!\!\!\!\!\!\!&& \frac{1}{n+3}\widetilde{\omega}(\widetilde{\Psi},\widetilde{M}_{n+2}(\widetilde{\Psi},...,\widetilde{\Psi}))= \frac{1}{n+3}\widetilde{\omega}(\Psi_\mathrm{NS},\widetilde{M}_{n+2}(\Psi_\mathrm{NS},...,\Psi_\mathrm{NS}))\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +\frac{1}{n+1}\sum_{k=0}^n\sum_{j=0}^{n-k}\widetilde{\omega}(\Psi_\mathrm{NS},\widetilde{M}_{n+2}(\underbrace{\Psi_\mathrm{NS},...,\Psi_\mathrm{NS}}_{k\ \mathrm{times}},\Psi_\mathrm{R}, \underbrace{\Psi_\mathrm{NS},...,\Psi_\mathrm{NS}}_{j\ \mathrm{times}},\Psi_\mathrm{R},\Psi_\mathrm{NS},...,\Psi_\mathrm{NS})).\ \ \ \ \ \ \ \ \end{eqnarray} Next we introduce a one-parameter family of NS string fields $\Psi_\mathrm{NS}(t),t\in[0,1]$ subject to the boundary conditions \begin{equation}\Psi_\mathrm{NS}(0) = 0,\ \ \ \ \Psi_\mathrm{NS}(1) =\Psi_\mathrm{NS}.\end{equation} The $(n+3)$-string vertex can be written as the integral of a total derivative in $t$: \begin{eqnarray} \!\!\!\!\!\!\!\!&&\!\!\!\!\!\!\!\!\!\! \frac{1}{n+3}\widetilde{\omega}(\widetilde{\Psi},\widetilde{M}_{n+2}(\widetilde{\Psi},...,\widetilde{\Psi}))= \int_0^1 dt\,\frac{d}{dt}\bigg[\frac{1}{n+3}\widetilde{\omega}(\Psi_\mathrm{NS}(t),\widetilde{M}_{n+2}(\Psi_\mathrm{NS}(t),...,\Psi_\mathrm{NS}(t)))\nonumber\\ \!\!\!\!\!\!\!\!&&\!\!\!\!\!\!\!\!\!\! +\frac{1}{n+1}\!\sum_{k=0}^n\sum_{j=0}^{n-k}\widetilde{\omega}(\Psi_\mathrm{NS}(t),\widetilde{M}_{n+2}(\underbrace{\Psi_\mathrm{NS}(t),...,\Psi_\mathrm{NS}(t)}_{k\ \mathrm{times}},\Psi_\mathrm{R}, \underbrace{\Psi_\mathrm{NS}(t),...,\Psi_\mathrm{NS}(t)}_{j\ \mathrm{times}},\Psi_\mathrm{R},\Psi_\mathrm{NS}(t),...,\Psi_\mathrm{NS}(t)))\!\bigg].\nonumber\\ \end{eqnarray} Acting with $d/dt$ produces $n+3$ terms with $\dot{\Psi}_\mathrm{NS}(t)=d\Psi_\mathrm{NS}(t)/dt$ in the first line, and in the second line it produces $n+1$ terms with $\dot{\Psi}_\mathrm{NS}(t)$. All of these terms are related by cyclicity, and therefore we can bring $\dot{\Psi}_\mathrm{NS}(t)$ to the first entry of the symplectic form and cancel the factors $1/(n+3)$ and $1/(n+1)$: \begin{eqnarray} \!\!\!\!\!\!\!\!&&\!\!\!\!\!\!\!\!\!\!
\frac{1}{n+3}\widetilde{\omega}(\widetilde{\Psi},\widetilde{M}_{n+2}(\widetilde{\Psi},...,\widetilde{\Psi}))= \int_0^1 dt \bigg[\omega_S(\dot{\Psi}_{\mathrm{NS}}(t),\widetilde{M}_{n+2}(\Psi_\mathrm{NS}(t),...,\Psi_\mathrm{NS}(t)))\nonumber\\ \!\!\!\!\!\!\!\!&&\!\!\!\!\!\!\!\!\!\! +\sum_{k=0}^n\sum_{j=0}^{n-k}\omega_S(\dot{\Psi}_\mathrm{NS}(t),\widetilde{M}_{n+2}(\underbrace{\Psi_\mathrm{NS}(t),...,\Psi_\mathrm{NS}(t)}_{k\ \mathrm{times}},\Psi_\mathrm{R}, \underbrace{\Psi_\mathrm{NS}(t),...,\Psi_\mathrm{NS}(t)}_{j\ \mathrm{times}},\Psi_\mathrm{R},\Psi_\mathrm{NS}(t),...,\Psi_\mathrm{NS}(t)))\bigg].\nonumber\\ \end{eqnarray} On the right hand side we replaced $\widetilde{\omega}$ with $\omega_S$ since only NS states are contracted. We can simplify this expression using coderivations and group-like elements: \begin{eqnarray} \!\!\!\!\!\!\!\!&&\frac{1}{n+3}\widetilde{\omega}(\widetilde{\Psi},\widetilde{M}_{n+2}(\widetilde{\Psi},...,\widetilde{\Psi}))= \int_0^1 dt \left[\omega_S\left(\dot{\Psi}_{\mathrm{NS}}(t),\pi_1{\bf M}_{n+2}|_0\frac{1}{1-\Psi_\mathrm{NS}(t)}\right)\right.\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ +\left.\omega_S\left(\dot{\Psi}_\mathrm{NS}(t),\pi_1{\bf m}_{n+2}|_2\frac{1}{1-\Psi_\mathrm{NS}(t)}\otimes \Psi_\mathrm{R}\otimes \frac{1}{1-\Psi_\mathrm{NS}(t)}\otimes\Psi_\mathrm{R}\otimes \frac{1}{1-\Psi_\mathrm{NS}(t)}\right)\right].\ \ \ \ \ \ \ \ \ \ \ \end{eqnarray} Summing over the vertices, the action can therefore be expressed as \begin{eqnarray} S \!\!\!\!\!\!\!\!&& = \frac{1}{2}\widetilde{\omega}(\widetilde{\Psi},Q\widetilde{\Psi}) +\sum_{n=0}^\infty \frac{1}{n+3} \widetilde{\omega}(\widetilde{\Psi},\widetilde{M}_{n+2}(\widetilde{\Psi},...\widetilde{\Psi}))\nonumber\\ \!\!\!\!\!\!\!\!&& = \frac{1}{2}\omega_S(\mathscr{Y}\Psi_\mathrm{R},Q\Psi_\mathrm{R})+ \frac{1}{2}\omega_S(\Psi_\mathrm{NS},Q\Psi_\mathrm{NS}) + \int_0^1 dt \left[\omega_S\!\!\left(\dot{\Psi}_{\mathrm{NS}}(t),\pi_1({\bf M}|_0-{\bf Q})\frac{1}{1-\Psi_\mathrm{NS}(t)}\right)\right.\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ +\left.\omega_S\left(\dot{\Psi}_\mathrm{NS}(t),\pi_1{\bf m}|_2\frac{1}{1-\Psi_\mathrm{NS}(t)}\otimes \Psi_\mathrm{R}\otimes \frac{1}{1-\Psi_\mathrm{NS}(t)}\otimes\Psi_\mathrm{R}\otimes \frac{1}{1-\Psi_\mathrm{NS}(t)}\right)\right]. \end{eqnarray} We can absorb the NS kinetic term into the integral over $t$, obtaining \begin{eqnarray} S\!\!\!\!\!\!\!\!&& = \frac{1}{2}\omega_S(\mathscr{Y}\Psi_\mathrm{R},Q\Psi_\mathrm{R}) + \int_0^1 dt \left[\omega_S\left(\dot{\Psi}_{\mathrm{NS}}(t),\pi_1{\bf M}|_0\frac{1}{1-\Psi_\mathrm{NS}(t)}\right)\right.\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ +\left.\omega_S\left(\dot{\Psi}_\mathrm{NS}(t),\pi_1{\bf m}|_2\frac{1}{1-\Psi_\mathrm{NS}(t)}\otimes \Psi_\mathrm{R}\otimes \frac{1}{1-\Psi_\mathrm{NS}(t)}\otimes\Psi_\mathrm{R}\otimes \frac{1}{1-\Psi_\mathrm{NS}(t)}\right)\right].\ \ \ \ \ \ \ \ \ \ \ \end{eqnarray} Because this form of the action was constructed from the integral of a total derivative, it only depends on the value of $\Psi_\mathrm{NS}(t)$ at $t=1$. Next it will be helpful to reformulate the theory in the large Hilbert space. We replace $\Psi_\mathrm{NS}$ with a new NS string field $\Phi_\mathrm{NS}$ in the large Hilbert space according to \begin{equation}\Psi_\mathrm{NS} = \eta\Phi_\mathrm{NS}.\end{equation} The new field $\Phi_\mathrm{NS}$ is degree odd (because it is Grassmann even) and carries ghost and picture number zero. 
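Note that the lift is not unique: since $\eta^2=0$, the shift \begin{equation}\Phi_\mathrm{NS}\to\Phi_\mathrm{NS}+\eta\Lambda\end{equation} leaves $\Psi_\mathrm{NS}=\eta\Phi_\mathrm{NS}$ unchanged for any $\Lambda$. This redundancy, which is analogous to the extra gauge symmetry \eq{4gt} of the WZW-based theory, is harmless here since the action depends on $\Phi_\mathrm{NS}(t)$ only through $\eta\Phi_\mathrm{NS}(t)$ at $t=1$.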
We also introduce a corresponding family of string fields $\Phi_\mathrm{NS}(t),t\in[0,1]$ such that $\eta\Phi_\mathrm{NS}(t) = \Psi_\mathrm{NS}(t)$. Plugging into the action gives \begin{eqnarray} S\!\!\!\!\!\!\!\!&& = \frac{1}{2}\omega_S(\mathscr{Y}\Psi_\mathrm{R},Q\Psi_\mathrm{R}) + \int_0^1 dt \left[\omega_L\left(\dot{\Phi}_{\mathrm{NS}}(t),\pi_1{\bf M}|_0\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\right)\right.\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ +\left.\omega_L\left(\dot{\Phi}_\mathrm{NS}(t),\pi_1{\bf m}|_2\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\otimes \Psi_\mathrm{R}\otimes \frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\otimes\Psi_\mathrm{R}\otimes \frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\right)\right].\ \ \ \ \ \ \ \ \ \ \ \label{eq:BAact1} \end{eqnarray} Here we replaced the small Hilbert space symplectic form with the large Hilbert space symplectic form using the relation \begin{equation} \omega_S(\eta \Phi,\Psi)=\omega_L(\Phi,\Psi), \end{equation} where $\Phi$ is in the large Hilbert space and $\Psi$ is in the small Hilbert space. Next we use the identity~\cite{OkWB,WBlarge} \begin{equation} \omega(B,C)=\omega\left(\pi_1{\bf \hat{H}}\frac{1}{1-A}\otimes B\otimes\frac{1}{1-A},\pi_1{\bf \hat{H}}\frac{1}{1-A}\otimes C\otimes\frac{1}{1-A}\right), \end{equation} where $B$ and $C$ are string fields, $A$ is a degree even string field, and the cohomomorphism ${\bf \hat{H}}$ is cyclic with respect to $\omega$. In the current application we identify \begin{equation}A\to \eta\Phi_\mathrm{NS}(t),\ \ \ {\bf \hat{H}}\to{\bf \hat{G}},\ \ \ \omega\to\omega_L. \end{equation} Note, in particular, that ${\bf \hat{G}}$ is cyclic with respect to the large Hilbert space symplectic form when it receives no Ramond inputs. Thus we can rewrite the action as follows: \begin{eqnarray} S \!\!\!\!\!\!\!\!&& = \frac{1}{2}\omega_S(\mathscr{Y}\Psi_\mathrm{R},Q\Psi_\mathrm{R})\nonumber\\ \!\!\!\!\!\!\!\!&& \ \ \ + \int_0^1 dt\,\omega_L\left(\pi_1{\bf \hat{G}}\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\otimes \dot{\Phi}_{\mathrm{NS}}(t)\otimes \frac{1}{1-\eta\Phi_\mathrm{NS}(t)},\right.\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.\pi_1{\bf \hat{G}}\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\otimes\left(\pi_1{\bf M}|_0\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\right)\otimes\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\right) \nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ + \int_0^1 dt\,\omega_L\left(\pi_1{\bf \hat{G}}\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\otimes \dot{\Phi}_{\mathrm{NS}}(t)\otimes \frac{1}{1-\eta\Phi_\mathrm{NS}(t)},\pi_1{\bf \hat{G}}\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)} \otimes \right.\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.\left(\pi_1{\bf m}|_2\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\! \Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\!\Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\right)\!\otimes\!\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\right).\nonumber\\ \end{eqnarray} We can simplify the term with ${\bf M}|_0$ by writing \begin{equation} \frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\otimes\left(\pi_1{\bf M}|_0\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\right)\otimes\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}={\bf M}|_0\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}. \end{equation} The term with ${\bf m}|_2$ can also be simplified using \begin{eqnarray} \!\!\!\!\!\!\!\!&& \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\!\left(\pi_1{\bf m}|_2\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\! \Psi_\mathrm{R}\!\otimes\! 
\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\!\Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\right)\!\otimes\!\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ = {\bf m}|_2\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\! \Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\!\Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}. \end{eqnarray} Therefore, we have \begin{eqnarray} S \!\!\!\!\!\!\!\!&& = \frac{1}{2}\omega_S(\mathscr{Y}\Psi_\mathrm{R},Q\Psi_\mathrm{R})\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ + \int_0^1 dt\,\omega_L\left(\pi_1{\bf \hat{G}}\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\otimes \dot{\Phi}_{\mathrm{NS}}(t)\otimes \frac{1}{1-\eta\Phi_\mathrm{NS}(t)}, \pi_1{\bf \hat{G}}{\bf M}|_0\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\right) \nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ + \int_0^1 dt\,\omega_L\!\left(\pi_1{\bf \hat{G}}\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\! \dot{\Phi}_{\mathrm{NS}}(t)\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)},\right.\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.\pi_1{\bf \hat{G}}{\bf m}|_2\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\! \Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\!\Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\right).\nonumber\\ \end{eqnarray} Now using \begin{eqnarray} \pi_1{\bf \hat{G}}{\bf M}|_0 \!\!\!\!\!\!\!\!&& = \pi_1{\bf Q}{\bf \hat{G}} = Q\pi_1{\bf \hat{G}},\\ \pi_1{\bf \hat{G}}{\bf m}|_2 \!\!\!\!\!\!\!\!&& = \pi_1{\bf m}_2|_2{\bf \hat{G}} = m_2\pi_2^2{\bf \hat{G}}, \end{eqnarray} we further obtain \begin{eqnarray} S \!\!\!\!\!\!\!\!&& = \frac{1}{2}\omega_S(\mathscr{Y}\Psi_\mathrm{R},Q\Psi_\mathrm{R})\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ + \int_0^1 dt\,\omega_L\left(\pi_1{\bf \hat{G}}\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\otimes \dot{\Phi}_{\mathrm{NS}}(t)\otimes \frac{1}{1-\eta\Phi_\mathrm{NS}(t)}, Q\pi_1{\bf \hat{G}}\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\right) \nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ + \int_0^1 dt\,\omega_L\!\left(\pi_1{\bf \hat{G}}\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\! \dot{\Phi}_{\mathrm{NS}}(t)\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)},\right.\nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \left.m_2\pi_2^2{\bf \hat{G}}\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\! \Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\!\Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\right). \end{eqnarray} Using $\pi_2^2 = \inverttriangle\![\pi_1^1\otimes'\pi_1^1]\triangle$, one can show that \begin{eqnarray} \!\!\!\!\!\!\!\!&& \pi_2^2{\bf \hat{G}}\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\! \Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\!\Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)} = \Big(F(t)\Psi_\mathrm{R}\Big) \otimes \Big(F(t)\Psi_\mathrm{R}\Big),\label{eq:OkDer2} \end{eqnarray} where \begin{equation} F(t)\Psi_\mathrm{R} \equiv \pi_1{\bf \hat{G}}\frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}\!\otimes\! \Psi_\mathrm{R}\!\otimes\! \frac{1}{1\!-\!\eta\Phi_\mathrm{NS}(t)}. 
\end{equation} Switching from degree to Grassmann grading, the action is therefore expressed as \begin{equation} S = \frac{1}{2}\langle\mathscr{Y}\Psi_\mathrm{R},Q\Psi_\mathrm{R}\rangle_S- \int_0^1 dt\,\langle A_t(t), QA_\eta(t) + (F(t)\Psi_\mathrm{R})^2\rangle_L,\label{eq:sHSaction} \end{equation} where following \cite{OkWB,WBlarge} we define the potentials by \begin{eqnarray} A_t(t)\!\!\!\!\!\!\!\!&& \equiv \pi_1{\bf \hat{G}}\frac{1}{1-\eta\Phi_\mathrm{NS}(t)}\otimes \dot{\Phi}_{\mathrm{NS}}(t)\otimes \frac{1}{1-\eta\Phi_\mathrm{NS}(t)},\\ A_\eta(t) \!\!\!\!\!\!\!\!&& \equiv \pi_1{\bf \hat{G}} \frac{1}{1-\eta\Phi_\mathrm{NS}(t)}. \end{eqnarray} Thus the $A_\infty$ action is expressed in the same form as \eq{lHSaction} but with the ``hats'' missing. Now we can show that the action of the $A_\infty$ theory is related to the action of the WZW-based theory by field redefinition. We postulate that the two theories are related by \begin{equation} \widehat{A}_t(t) = A_t(t),\ \ \ \ \widehat{\Psi}_\mathrm{R} = \Psi_\mathrm{R}.\label{eq:tid} \end{equation} Equating the $t$-potentials provides an invertible map between $\Phi_\mathrm{NS}(t)$ and $\widehat{\Phi}_\mathrm{NS}(t)$, and automatically equates the $\eta$-potentials \cite{WBlarge}: \begin{equation}\widehat{A}_\eta(t) = A_\eta(t).\end{equation} With these identifications it is identically true that the actions \eq{lHSaction} and \eq{sHSaction} are equal. Moreover, since the $A_\infty$ action is only a function of $\Psi_\mathrm{NS}(t) = \eta\Phi_\mathrm{NS}(t)$ at $t=1$, the identification \eq{tid} is equivalent to \begin{equation} \widehat{A}_\eta = A_\eta,\ \ \ \ \widehat{\Psi}_\mathrm{R} = \Psi_\mathrm{R}, \end{equation} which is the field redefinition anticipated in the previous subsection. \section{Conclusions} In this paper we have constructed the NS and R sectors of open superstring field theory realizing a cyclic $A_\infty$ structure. This means, in particular, that we have an explicit solution of the classical Batalin-Vilkovisky master equation, \begin{equation} \{S,S\} = 0, \end{equation} after relaxing the ghost number constraint on the NS and R string fields. Therefore, for the purpose of tree level amplitudes we have a consistent definition of the gauge-fixed path integral, and for the first time we are prepared to consider quantum effects in superstring field theory. However, the absence of explicit closed string fields and the appearance of spurious singularities at higher genus may make quantization subtle. Therefore it is desirable to give a construction of superstring field theory realizing a more general decomposition of the bosonic moduli space than is provided by the Witten vertex. This in turn is closely related to the generalization to heterotic and type II closed superstring field theories. The appropriate construction of NS actions and Ramond equations of motion is described in \cite{ClosedSS,Ramond}, and in principle all that is needed is to implement cyclicity. For example, in the closely related open string field theory with stubs \cite{ClosedSS,Ramond}, it is not difficult to see that the gauge products with one Ramond output and zero picture deficit should be defined by \begin{equation}\mu_{n+2}^{(n-r+1)}|_{2r}=\tilde{\xi } M_{n+2}^{(n-r)}|_{2r}\ \ \ \ (2r+1\ \mathrm{Ramond\ inputs}),\end{equation} so that the equations of motion are consistent with the projection onto the restricted space in the Ramond sector.
However, a full specification of the vertices requires many additional gauge products of varying Ramond numbers and picture deficits. Solving the entire recursive system of products consistent with cyclicity is a much more challenging problem, which we hope to consider soon. \vspace{.5cm} \noindent{\bf Acknowledgments} \vspace{.25cm} \noindent T.E. would like to thank S. Konopka and I. Sachs for discussion. The work of T.E. was supported in part by the DFG Transregional Collaborative Research Centre TRR 33 and the DFG cluster of excellence Origin and Structure of the Universe. The work of Y.O. was supported in part by a Grant-in-Aid for Scientific Research (B) No.~25287049 and a Grant-in-Aid for Scientific Research~(C) No.~24540254 from the Japan Society for the Promotion of Science (JSPS). \begin{appendix} \section{Gauge Invariance in Sen's Formulation} \label{app:Sen} In Sen's formulation of the Ramond sector \cite{1PIR,SenBV}, the action does not realize a cyclic $A_\infty$ structure in the standard sense. Therefore it is worth explaining why it is gauge invariant. The infinitesimal gauge transformation can be written in the form \begin{eqnarray} \delta\widetilde{\Pi} \!\!\!\!\!\!\!\!&& = Q\widetilde{\Omega} + \pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes \widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}},\\ \delta\widetilde{\Psi} \!\!\!\!\!\!\!\!&& = \pi_1\widetilde{{\bf M}}\frac{1}{1-\widetilde{\Psi}}\otimes \widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}}, \end{eqnarray} where $\widetilde{\Omega}$ and $\widetilde{\Lambda}$ are degree odd gauge parameters in the small Hilbert space, at ghost number zero, and with the appropriate picture in the NS and R sectors. The variation of the action is \begin{equation} \delta S = -\omega_S(\delta\widetilde{\Pi},\mathcal{G}Q\widetilde{\Pi}) + \omega_S(\delta\widetilde{\Psi},Q\widetilde{\Pi})+ \omega_S(\delta\widetilde{\Pi},Q\widetilde{\Psi})+\omega_S\left(\delta\widetilde{\Psi},\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\right). \end{equation} The gauge parameter $\widetilde{\Omega}$ immediately drops out since $Q\widetilde{\Omega}$ always appears in the symplectic form contracted with a BRST invariant state. Substituting the infinitesimal gauge transformation then gives \begin{eqnarray} \delta S \!\!\!\!\!\!\!\!&& = -\omega_S\left(\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes\widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}},\mathcal{G}Q\widetilde{\Pi}\right) + \omega_S\left(\pi_1\widetilde{{\bf M}}\frac{1}{1-\widetilde{\Psi}}\otimes\widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}},Q\widetilde{\Pi}\right)\nonumber\\ \!\!\!\!\!\!\!\!&& \ \ \ + \omega_S\left(\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes\widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}},Q\widetilde{\Psi}\right)+\omega_S\left(\pi_1\widetilde{{\bf M}}\frac{1}{1-\widetilde{\Psi}}\otimes\widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}},\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\right).\ \ \ \ \end{eqnarray} The first and second terms cancel upon using the BPZ even property of $\mathcal{G}$ and converting $\widetilde{{\bf b}}$ into $\widetilde{{\bf M}}-{\bf Q}$. 
In the last term we replace $\pi_1\widetilde{{\bf M}}$ with $\pi_1{\bf Q}+\mathcal{G}\pi_1\widetilde{{\bf b}}$: \begin{equation} \delta S = \omega_S\left(\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes\widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}},Q\widetilde{\Psi}\right)+\omega_S\left(Q\widetilde{\Lambda} + \mathcal{G}\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes\widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}},\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\right). \end{equation} Next use the BPZ even property of $\mathcal{G}$ and again convert $\widetilde{{\bf b}}$ into $\widetilde{{\bf M}}-{\bf Q}$: \begin{eqnarray} \delta S\!\!\!\!\!\!\!\!&& = \omega_S\left(\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes\widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}},Q\widetilde{\Psi}\right)+\omega_S\left(Q\widetilde{\Lambda},\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\right) \nonumber\\ \!\!\!\!\!\!\!\!&&\ \ \ + \omega_S\left(\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes\widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}},\pi_1(\widetilde{{\bf M}}-{\bf Q})\frac{1}{1-\widetilde{\Psi}}\right)\nonumber\\ \!\!\!\!\!\!\!\!&& = \omega_S\left(\widetilde{\Lambda},\pi_1{\bf Q}\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\right)+ \omega_S\left(\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes\widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}},\pi_1\widetilde{{\bf M}}\frac{1}{1-\widetilde{\Psi}}\right).\label{eq:Sengv1} \end{eqnarray} Using cyclicity of $\widetilde{{\bf b}}$ we can rewrite the second term as \begin{equation} \omega_S\left(\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes\widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}},\pi_1\widetilde{{\bf M}}\frac{1}{1-\widetilde{\Psi}}\right)= \omega_S\left(\widetilde{\Lambda},\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes\left(\pi_1\widetilde{{\bf M}}\frac{1}{1-\widetilde{\Psi}}\right)\otimes \frac{1}{1-\widetilde{\Psi}}\right).\label{eq:OkDer4} \end{equation} This follows from the relation \begin{equation} 0 = \langle \omega_S| \pi_2\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes\widetilde{\Lambda}\otimes\frac{1}{1-\widetilde{\Psi}}\otimes\left(\pi_1\widetilde{{\bf M}}\frac{1}{1-\widetilde{\Psi}}\right)\otimes\frac{1}{1-\widetilde{\Psi}},\label{eq:OkDer3} \end{equation} after representing $\pi_2 = \inverttriangle\!\Big[\pi_1\otimes'\pi_1\Big]\triangle$ and acting with the coproduct. Therefore the gauge variation of the action produces \begin{eqnarray} \delta S\!\!\!\!\!\!\!\!&& = \omega_S\left(\widetilde{\Lambda},\pi_1{\bf Q}\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\right)+ \omega_S\left(\widetilde{\Lambda},\pi_1\widetilde{{\bf b}}\frac{1}{1-\widetilde{\Psi}}\otimes\left(\pi_1\widetilde{{\bf M}}\frac{1}{1-\widetilde{\Psi}}\right)\otimes\frac{1}{1-\widetilde{\Psi}}\right)\nonumber\\ \!\!\!\!\!\!\!\!&& = \omega_S\left(\widetilde{\Lambda},\pi_1({\bf Q}\widetilde{{\bf b}} + \widetilde{{\bf b}}\widetilde{{\bf M}})\frac{1}{1-\widetilde{\Psi}}\right)\nonumber\\ \!\!\!\!\!\!\!\!&& = 0, \end{eqnarray} which vanishes as a consequence of \eq{bcid}. \end{appendix}
\section{Introduction} Epilepsy is a chronic neurological disorder characterized by recurrent seizures. Seizures arise from excessive electrical discharges in groups of neurons; the associated waveform is known as a spike. A spike consists of short bursts of high-amplitude, synchronized, multi-phasic activity, in which the polarity changes several times. Spikes manifest themselves at or around the epileptic focus and stand out from the background EEG activity. Electroencephalography (EEG) is currently the main technique to record electrical activity in the brain. Neurologists trained in EEG can establish an epilepsy diagnosis by analyzing the different types of spikes in the so-called \emph{rhythmic activity} of the brain. Automatic methods for detecting epileptic events in EEG signals can greatly outperform visual inspection. These methods focus on interictal spikes \cite{Bergstrom2013,Bhuyan2017}, seizure onset detection \cite{QuinteroRincon2016a} or epileptic waveform patterns \cite{Gajic2014,Navakatikyan2006}. A wide variety of EEG signal processing features has been used, such as spatial-temporal analysis \cite{Ossadtchi2010}, frequency-temporal analysis \cite{Wilson2002}, wavelet decomposition \cite{Bergstrom2013}, spectrograms \cite{VanHese2009,Pearce2014}, the Hilbert transform \cite{Kamath2014}, neural networks \cite{Puspita2017}, the Hurst exponent \cite{Indiradevi2009} or statistical models \cite{QuinteroRincon2017b}. In general these features are combined with high-performance machine learning classifiers such as Support Vector Machines \cite{Siuly2015}, logistic regression \cite{Subasi2005}, decision trees \cite{QuinteroRincon2017c}, $k$-Nearest Neighbors \cite{DiGesu2009,Rezaee2016} or Random Forests \cite{Donos2015}. Spike-and-wave discharge (SWD) is a generalized EEG discharge pattern whose waveform has a regular and symmetric morphology. This morphology can be mathematically described by a Morlet wavelet transform that generates a time-frequency representation of the EEG signal \cite{Subasi2005,Xanthopoulos2009,Sitnikova2009,Richard2015}. The \emph{spike} component of a SWD is associated with neuronal firing and the \emph{wave} component is associated with neuronal inhibition or hyperpolarization of neurons \cite{Pollen1964}. SWDs are widely studied in mice \cite{VanHese2009,Ovchinnikov2010,Bergstrom2013,Rodgers2015}; human studies, however, remain limited. Mice have a predisposition for generalized SWDs at 7-12 Hz \cite{Pearce2014} and typically exhibit spontaneous absence-seizure-like events. An intact cortex, thalamus and their interconnections are necessary to record them \cite{Blumenfeld2005,Avoli2012}. Recent works in humans couple feature extraction of the SWD pattern with machine learning classification: \cite{QuinteroRincon2017b} uses the t-location-scale distribution as feature extraction together with a $k$-NN classifier, \cite{QuinteroRincon2017c} uses cross-correlation coupled with decision trees, and \cite{Puspitaa2017} uses Bayesian classification based on the Walsh transformation, which defines the SWD morphological characteristic.
In other works, a Hilbert-Huang transform is estimated to assess the characteristics of time-frequency energy distributions \cite{Zhu2015}, complex networks of neuronal oscillators are used \cite{Medvedeva2018}, or different parameters are extracted, such as the variance, the sum of wave amplitudes, the slope of the wave and the mobility of the waveform \cite{Olejarczyk2009}. The remainder of this paper is structured as follows. Section \ref{sec:method} presents the proposed method and explains the Morlet wavelet, the generalized Gaussian distribution (GGD) model and the SWD detection by a $k$-NN classifier based on the extracted parameters. Experimental results are reported in Section \ref{sec:results}, where the classifier vector is composed of the scale parameter of the GGD and the variance and median of the wavelet coefficients from EEG data. Lastly, the conclusion, remarks, and future perspectives are presented in Section \ref{sec:conclusion}. \section{Proposed method} \label{sec:method} This section presents a new statistical method to detect spike-and-wave discharges (SWD) in EEG signals. The methodology is computationally very efficient, suitable for real-time automation, and can be used to perform spike-and-wave detection on-line. First, the database used in this study is introduced; then the methodology is explained. \subsection{Database} \label{subsec:dB} A database with 212 monopolar 256 Hz signals was created for off-line training of the classifier: 106 SWD signals and 106 non-SWD signals, measured from six patients of the \emph{Fundaci\'on Lucha contra las Enfermedades Neurol\'ogicas Infantiles} (FLENI). The SWD signals have different durations and waveforms but their morphology is preserved, while the non-SWD signals have normal waveforms. See Figure \ref{fig:swdmorlet} for an example of a typical SWD signal. \begin{figure}[!t] \centering \subfigure[SWD and Morlet Wavelet]{\includegraphics[width=100mm]{swd_morlet.pdf}} \subfigure[EEG example]{\includegraphics[width=100mm]{raw2.png}} \caption{(a) SWD signal and Morlet wavelet, respectively; note the symmetric and regular morphology of both signals. (b) Example of six channels of one monopolar raw EEG; several SWDs are visible in all channels.} \label{fig:swdmorlet} \end{figure} Analyzing each SWD in the frequency domain, we observed that they are restricted to a narrow frequency band between 1 and 3 Hz. Each EEG was acquired with a 22-channel array using the standard $10/20$ system through the following channels: Fp1, Fp2, F7, F3, Fz, F4, F8, T3, C3, Cz, C4, T4, T5, P3, Pz, P4, T6, O1, O2, Oz, FT10 and FT9. All new segments to be analyzed contain different spike-and-wave events, whose onset and duration were labeled by an expert neurologist. We used these expert annotations to extract from each recording a short epoch focused on the spike-and-wave events in long-term EEG signals (the epochs used have a duration of the order of 1 minute). Note that, for each new patient to be analyzed, ten new SWDs are added to the database, which permits patient-specific seizure detection. \subsection{Methodology} \label{subsec:methodology} Let $\boldsymbol X \in \mathbb{R}^{N\times M}$ denote the matrix gathering $M$ EEG signals $\boldsymbol{x}_{m} \in \mathbb{R}^{N\times 1}$ measured simultaneously on different channels at $N$ discrete time instants. The proposed method is composed of four stages.
The first stage splits the original signal $\boldsymbol X$ into several segments of 2 seconds with 1 second overlap using a rectangular sliding window, so that \begin{align} \boldsymbol X_{t} &= \boldsymbol \Omega_{t} \boldsymbol X\\ \boldsymbol \Omega_{t} &= \left[\boldsymbol0^{L\times tL}, \boldsymbol{I}^{L\times L},\boldsymbol0^{L\times N-tL-L}\right] \end{align} where $\boldsymbol 0^{N\times M} \in \mathbb R^{N\times M}$ is the null matrix, $\boldsymbol I^{N\times N} \in \mathbb R^{N\times N}$ is the identity matrix and $L$ is the number of measurements obtained in 2 seconds. The second stage consists of representing each segment $\boldsymbol X_{t}$ using a time-frequency Morlet decomposition. The purpose of this decomposition is to evaluate the energy distribution throughout the SWD frequency band, which is restricted to a narrow 1-3 Hz frequency window. A time/scale relationship is then applied in order to find the Morlet wavelet coefficients. In the third stage, the statistical distribution of the Morlet wavelet coefficients is represented by a zero-mean generalized Gaussian distribution. Each set of wavelet coefficients is summarized by estimating the statistical parameters \emph{scale ($\varsigma$)} and \emph{shape ($\tau$)} of the generalized Gaussian distribution, as in our previous works \cite{QuinteroRincon2016a,QuinteroRincon2016b,QuinteroRincon2017a,QuinteroRincon2018a,QuinteroRincon2018b}. In those works we found that the scale parameter $\varsigma$ closely relates to the variability of the brain activity and is therefore a good descriptor for performing seizure detection; the scale parameter (which depends on the shape parameter) is thus enough to detect seizures in EEG signals. \subsubsection{Morlet Wavelet} The Continuous Wavelet Transform is given by \begin{align} W_{f}(t,a,b) &= \int_{-\infty}^{\infty} \boldsymbol X_{t} \; \psi^{*}_{a,b}(t) \,\text{d}t \label{eq:wav} \\ \psi^{*}_{a,b}(t) &=\frac{1}{\sqrt{a}}\; \psi \left(\frac{t-b}{a}\right) \label{eq:wavmother}\\ \psi(t) &= e^{- \frac{t^{2}}{2}}\cos(5t) \label{eq:morlet} \end{align} where $a$ is the scaling parameter, $b$ is the shifting parameter, \eqref{eq:wavmother} is the mother wavelet function, $(*)$ denotes the complex conjugate operation and \eqref{eq:morlet} is the analytic expression of the Morlet wavelet \cite{Ahuja2005}. In order to associate the Morlet wavelet with a purely periodic signal of frequency $F_{c}$, we use the relationship between scale and frequency \begin{align} F_{a} = \frac{F_{c}}{a \Delta} \label{eq:scal2frq} \end{align} where $a$ is the scale, $\Delta$ is the sampling period, $F_{c}$ is the center frequency of the Morlet wavelet in Hz and $F_{a}$ is the pseudo-frequency corresponding to the scale $a$ in Hz. The center frequency-based approximation captures the main wavelet oscillations, and the center frequency is therefore a convenient and simple characterization of the dominant frequency of the wavelet \cite{Abry1997}. Note that the wavelet scales are chosen according to the restricted 1-3 Hz narrow frequency band of the SWD database.
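These first two stages are straightforward to reproduce. The following minimal Python sketch is our own illustration (not the original implementation); it assumes a single 256 Hz channel and the PyWavelets package, with the scale grid obtained by inverting \eqref{eq:scal2frq} over the 1-3 Hz SWD band:
\begin{verbatim}
import numpy as np
import pywt

fs = 256.0                 # sampling rate of the database (Hz)
dt = 1.0 / fs

def segments(x, win_s=2, hop_s=1):
    # stage 1: 2-second rectangular windows with 1-second overlap
    L, H = int(win_s * fs), int(hop_s * fs)
    for start in range(0, len(x) - L + 1, H):
        yield x[start:start + L]

# stage 2: scales covering 1-3 Hz via F_a = F_c / (a * dt)
fc = pywt.central_frequency('morl')    # Morlet center frequency F_c
freqs = np.linspace(1.0, 3.0, 16)      # target pseudo-frequencies (Hz)
scales = fc / (freqs * dt)

x = np.random.randn(int(60 * fs))      # stand-in for one EEG channel
for seg in segments(x):
    coeffs, f = pywt.cwt(seg, scales, 'morl', sampling_period=dt)
    # coeffs has shape (len(scales), L): Morlet coefficients in 1-3 Hz
\end{verbatim}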
\subsubsection{Generalized Gaussian distribution} \label{ssec:ggd} The univariate generalized Gaussian distribution (GGD) is a flexible statistical model for one-dimensional signals that has found numerous applications in science and engineering. The distribution of the Morlet wavelet coefficients $\boldsymbol C_{t}$ can be represented by a zero-mean GGD statistical model \cite{Do2002} with probability density function (PDF) given by \begin{align} \label{eq:PDFGGD2} f_{\textnormal{GGD}}(x;\varsigma,\tau) = \frac{\tau}{2\varsigma\Gamma(\tau^{-1})} \exp\left(-\left|\frac{x}{\varsigma}\right|^\tau\right) \end{align} where $\varsigma \in \mathbb{R}^+$ is a scale parameter, $\tau \in \mathbb{R}^+$ is a shape parameter that controls the shape of the density tail and $\Gamma\left(\cdot\right)$ is the Gamma function. From \eqref{eq:PDFGGD2}, the statistical properties of a wavelet coefficient matrix $\boldsymbol C_{t}$ can be summarized with the maximum likelihood parameter vector $\boldsymbol\Theta_{\boldsymbol C_{t}}$: \begin{align} \boldsymbol\Theta_{\boldsymbol C_{t}} &= \left[\varsigma_{t},\tau_t\right]^{T} = \argmax_{\left[\varsigma,\tau\right]^{T}} f_\textnormal{GGD}(\boldsymbol C_{t};\varsigma,\tau) \end{align} For more details about the GGD parameters, we refer the reader to our previous works \cite{QuinteroRincon2016a,QuinteroRincon2016b,QuinteroRincon2017a,QuinteroRincon2018a, QuinteroRincon2018b}.
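As an aside, the maximum likelihood fit of \eqref{eq:PDFGGD2} is available off the shelf: SciPy's generalized normal distribution implements the same density, with its shape parameter playing the role of $\tau$ and its scale that of $\varsigma$ (this correspondence is our reading of the SciPy parameterization). A minimal sketch, pinning the location at zero to enforce the zero-mean model:
\begin{verbatim}
import numpy as np
from scipy.stats import gennorm

def ggd_params(coeffs):
    # ML fit of the zero-mean GGD to the Morlet coefficients C_t
    c = np.asarray(coeffs).ravel()
    tau, _, varsigma = gennorm.fit(c, floc=0.0)  # (shape, loc, scale)
    return varsigma, tau

# sanity check on synthetic GGD data: the fit recovers the parameters
sample = gennorm.rvs(1.5, loc=0.0, scale=2.0, size=5000)
print(ggd_params(sample))   # approximately (2.0, 1.5)
\end{verbatim}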
\subsubsection{Spike-and-wave detection using a k-nearest neighbors classifier} Consider a classification into two possible classes $c=0$ and $c=1$; then, for a feature vector $\boldsymbol \Theta_{\boldsymbol C_{t}}$, the class-conditional density of each class is given by \begin{align} \nonumber \rho\left(\boldsymbol\Theta_{\boldsymbol C_{t}}|c=0\right) &= \frac{1}{N_{0}}\sum_{n \in \textnormal{class 0}}\mathcal{N}\left(\boldsymbol\Theta_{\boldsymbol C_{t}}|\boldsymbol\Theta_{\boldsymbol C_{t}}^n,\sigma^{2} \boldsymbol I\right) \\ \label{eq:class1} &=\frac{1}{N_{0}\left(2\pi\sigma^{2}\right)^{D/2}}\sum_{n \in \textnormal{class 0}} \exp\left(-\frac{\left(\boldsymbol\Theta_{\boldsymbol C_{t}}-\boldsymbol\Theta_{\boldsymbol C_{t}}^n\right)^{2}}{2\sigma^{2}}\right) \\ \nonumber \rho\left(\boldsymbol\Theta_{\boldsymbol C_{t}}|c=1\right) &= \frac{1}{N_{1}}\sum_{n \in \textnormal{class 1}}\mathcal{N}\left(\boldsymbol\Theta_{\boldsymbol C_{t}}|\boldsymbol\Theta_{\boldsymbol C_{t}}^n,\sigma^{2} \boldsymbol I\right) \\ \label{eq:class2} &=\frac{1}{N_{1}\left(2\pi\sigma^{2}\right)^{D/2}}\sum_{n \in \textnormal{class 1}} \exp\left(-\frac{\left(\boldsymbol\Theta_{\boldsymbol C_{t}}-\boldsymbol\Theta_{\boldsymbol C_{t}}^n\right)^{2}}{2\sigma^{2}}\right) \end{align} where $D$ is the dimension of a datapoint $\boldsymbol\Theta_{\boldsymbol C_{t}}$, $N_{0}$ and $N_{1}$ are the numbers of training points of class $0$ and class $1$ respectively, and $\sigma^{2}$ is the variance. Using Bayes' rule to classify a new datapoint $\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}$, the posterior probability of each class is obtained; for class $0$, \begin{align} \rho\left(c=0|\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}\right) =\frac{\rho\left(\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}|c=0\right)\rho\left(c=0\right)}{\rho\left(\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}|c=0\right)\rho\left(c=0\right)+\rho\left(\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}|c=1\right)\rho\left(c=1\right)} \label{eq:bayes} \end{align} The maximum likelihood setting of $\rho(c = 0)$ is $N_{0}/(N_{0}+N_{1})$, and $\rho(c = 1) = N_{1}/(N_{0}+N_{1})$. An analogous expression to equation \eqref{eq:bayes} is obtained for $\rho\left(c=1|\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}\right)$. To determine which class is most likely, the ratio between both expressions is used, which simplifies as follows: \begin{align} \frac{\rho\left(c=0|\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}\right)}{\rho\left(c=1|\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}\right)} = \frac{\rho\left(\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}|c=0\right)\rho\left(c=0\right)}{\rho\left(\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}|c=1\right)\rho\left(c=1\right)} \label{eq:ratio} \end{align} If this ratio is greater than one, $\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}$ is classified as $c=0$; otherwise it is classified as $c=1$. It is important to note that when $\sigma^{2}$ is very small in \eqref{eq:ratio}, both the numerator and the denominator are dominated by the term for which the datapoint $\boldsymbol\Theta_{\boldsymbol C_{t}}^{n_{0}}$ in class $0$ or $\boldsymbol\Theta_{\boldsymbol C_{t}}^{n_{1}}$ in class $1$ is closest to the point $\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}$, such that \begin{align} \nonumber \frac{\rho\left(c=0|\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}\right)}{\rho\left(c=1|\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}\right)} &= \frac{\exp\left(-\frac{\left(\boldsymbol\Theta_{\boldsymbol C_{t}}^{*} - \boldsymbol\Theta_{\boldsymbol C_{t}}^{n_{0}}\right)^{2}}{2\sigma^{2}}\right)\rho\left(c=0\right)/N_{0}}{\exp\left(-\frac{\left(\boldsymbol\Theta_{\boldsymbol C_{t}}^{*} - \boldsymbol\Theta_{\boldsymbol C_{t}}^{n_{1}}\right)^{2}}{2\sigma^{2}}\right)\rho\left(c=1\right)/{N_{1}}} \\ &= \frac{\exp\left(-\frac{\left(\boldsymbol\Theta_{\boldsymbol C_{t}}^{*} - \boldsymbol\Theta_{\boldsymbol C_{t}}^{n_{0}}\right)^{2}}{2\sigma^{2}}\right)}{\exp\left(-\frac{\left(\boldsymbol\Theta_{\boldsymbol C_{t}}^{*} - \boldsymbol\Theta_{\boldsymbol C_{t}}^{n_{1}}\right)^{2}}{2\sigma^{2}}\right)} \label{eq:ratiored} \end{align} In the limit $\sigma^{2} \to 0$, $\boldsymbol\Theta_{\boldsymbol C_{t}}^{*}$ is classified as class $0$ if its nearest point in the class-$0$ data is closer than its nearest point in the class-$1$ data. The nearest (single) neighbor method is therefore recovered as the limiting case of a probabilistic generative model. We refer the reader to \cite{Bishop2006,BayesMachineLearning2012} for a comprehensive treatment of the mathematical properties of $k$-nearest neighbors.
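This limiting behavior is easy to verify numerically. The sketch below is our own minimal implementation of \eqref{eq:class1}-\eqref{eq:ratio}, not the training code used in Section \ref{sec:results}; it works in log space for numerical stability, and shrinking $\sigma^{2}$ reproduces the nearest-neighbor decision:
\begin{verbatim}
import numpy as np

def log_kernel_sum(theta, train, sigma2):
    # log sum_n exp(-|theta - theta_n|^2 / (2 sigma2)); the factors
    # 1/N_c and (2 pi sigma2)^{-D/2} cancel against the maximum
    # likelihood priors N_c/(N_0+N_1) in the class ratio
    d2 = np.sum((train - theta) ** 2, axis=1)
    return np.logaddexp.reduce(-d2 / (2.0 * sigma2))

def classify(theta, class0, class1, sigma2):
    log_ratio = (log_kernel_sum(theta, class0, sigma2)
                 - log_kernel_sum(theta, class1, sigma2))
    return 0 if log_ratio > 0 else 1   # ratio > 1  <=>  class 0

rng = np.random.default_rng(0)
class0 = rng.normal(0.0, 1.0, size=(50, 3))  # stand-in feature vectors
class1 = rng.normal(3.0, 1.0, size=(50, 3))
theta = np.array([2.9, 3.1, 3.0])
for s2 in (10.0, 1.0, 1e-6):       # 1e-6: nearest-neighbor limit
    print(s2, classify(theta, class0, class1, s2))
\end{verbatim}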
\section{Results} \label{sec:results} In the training stage, the annotated database introduced in Section \ref{subsec:dB} was utilized. These 212 monopolar signals were used to train a $k$-nearest neighbors classifier off-line on a feature vector $[\varsigma, \sigma^{2}, \widetilde{x}] \in \mathbb{R}^3$ collecting the parameters associated with the Morlet wavelet coefficients of each 2-second segment with 1 second overlap, where $\varsigma$ is the scale parameter of the generalized Gaussian distribution, and $\sigma^{2}$ and $\widetilde{x}$ are the variance and the median of the Morlet wavelet coefficients. Table \ref{tab:range} contains the bounds of each parameter; note that both the minimum and the maximum values of $\varsigma$, $\sigma^{2}$ and $\widetilde{x}$ are larger for SWD than for non-SWD signals. This observation suggests that a threshold could be implemented to detect SWD patterns, as a clear discrimination exists between spike-and-wave and non-spike-and-wave events. To illustrate this, Figure \ref{fig:Training} shows scatter plots of the three parameters in the following pairs: \begin{enumerate} \item Scale parameter ($\varsigma$) vs variance ($\sigma^{2}$): for class 1 (SWD), a direct relationship between the variance and the scale can be detected, with both parameters growing proportionally. For class 0 (non-SWD), both parameters remain in a limited range of values. \item Scale parameter ($\varsigma$) vs median ($\widetilde{x}$): as the scale grows, the median spreads both upward and downward for SWD and non-SWD, but more widely for SWD; a cone-shaped pattern can be identified. \item Variance ($\sigma^{2}$) vs median ($\widetilde{x}$): as the variance grows, the median spreads for SWD, while for non-SWD it remains in a small range (cluster). \end{enumerate} \vspace*{-1em} \begin{table}[H] \caption{Range of values of the scale ($\varsigma$), variance ($\sigma^{2}$) and median ($\widetilde{x}$) parameters for class 0 (non-spike-and-wave) and class 1 (spike-and-wave).} \centering \begin{tabular}{||c|| c||c||c||} \hline \hline Metric & Scale ($\varsigma$) & Variance ($\sigma^{2}$) & Median ($\widetilde{x}$) \\ \hline \hline Class 0 & [$12$, $1300$] & [$950$, $32\times10^6$] & [$-28\times10^3$, $22\times10^3$] \\ \hline \hline Class 1 & [$31$, $1800$] & [$2800$, $43\times10^6$] & [$-73\times10^3$, $74\times10^3$] \\ \hline \hline \end{tabular} \label{tab:range} \end{table} The performance of the $k$-nearest neighbors classification method using 10 neighbors and the 3 predictors $\varsigma$, $\sigma^{2}$ and $\widetilde{x}$ was evaluated on a dataset consisting of 69 new annotated measurements. These correspond to 69 segments extracted from six new EEG signals of different subjects from the \emph{Fundaci\'on Lucha contra las Enfermedades Neurol\'ogicas Infantiles} (FLENI). The assessment of the results was undertaken in terms of the overall accuracy of the classification. The classifier achieved a 100\% sensitivity (True Positive Rate) and specificity (True Negative Rate) for SWD detection. \section{Conclusions} \label{sec:conclusion} This paper presents a new classification method to detect spike-and-wave events in long-term EEG signals. The proposed method is based on the scale parameter of the generalized Gaussian distribution, augmented with the variance and the median of the Morlet wavelet coefficients from EEG data, and a $k$-nearest neighbors classification scheme that discriminates spike-and-wave from non-spike-and-wave events. The performance of the method was evaluated by training the algorithm with a real dataset containing 212 signal recordings with both spike-and-wave and non-spike-and-wave events. The classification performance was assessed utilizing 69 annotated segments and achieved a 100\% accuracy for spike-and-wave detection. This result sheds light on the potential for new research opportunities into the underlying causes of the so-called \emph{absence epilepsy} in long-term EEG recordings.
\begin{figure}[H] \centering \subfigure[scale ($\varsigma$) vs variance ($\sigma^{2}$)]{\includegraphics[width=100mm]{scatterT1.pdf}} \subfigure[scale ($\varsigma$) vs median ($\widetilde{x}$)]{\includegraphics[width=100mm]{scatterT2.pdf}} \subfigure[variance ($\sigma^{2}$) vs median ($\widetilde{x}$)]{\includegraphics[width=100mm]{scatterT3.pdf}} \caption{Scatter plots of the off-line training classification of the database signals, for the $\varsigma$, $\sigma^{2}$ and $\widetilde{x}$ parameters, for spike-and-wave events (SWD = class 1 = red dots) and non-spike-and-wave events (non-SWD = class 0 = blue dots), showing the data dispersion of the proposed approach. (a) Scale parameter ($\varsigma$) vs variance ($\sigma^{2}$): for class 1 (SWD) there is a direct relationship between the variance and the scale, with both growing proportionally, while for class 0 (non-SWD) both parameters remain in a limited range of values. (b) Scale parameter ($\varsigma$) vs median ($\widetilde{x}$): as the scale grows, the median spreads both upward and downward for SWD and non-SWD, but more widely for SWD. (c) Variance ($\sigma^{2}$) vs median ($\widetilde{x}$): as the variance grows, the median spreads for SWD, while for non-SWD it remains in a small range.} \label{fig:Training} \end{figure} The main advantage is that the proposed algorithm can be implemented in real time and classifies, with high accuracy, the spike-and-wave pattern in epileptic signals. We attribute these excellent results to the fact that, for each new patient analyzed, ten new SWD patterns are added to the database before training; once the enlarged database is trained, the prediction becomes patient-specific seizure detection. The main limitation is defining the ideal sliding time-window and segment overlap, given the highly dynamic nature of epileptic signals. Future work will focus on other epileptic waveform patterns, as well as on an extensive evaluation of the proposed approach, comparison with other methods in the literature, the implementation of a clinician-friendly interface with automatic event counting, and enlarging the spike-and-wave database for on-line detection in long-term EEG signals. \section*{Acknowledgments} Part of this work was supported by the DynBrain project of the STICAmSUD international program, and was conducted while AQR was pursuing a Ph.D. at the Buenos Aires Institute of Technology (ITBA). We would also like to thank Ivana Zorgno for her assistance in the writing. \bibliographystyle{unsrt}
\section{\label{sec1}Introduction} Recently, novel tunneling phenomena of the Bogoliubov phonon have been theoretically predicted in superfluid Bose gases. Kovrizhin and co-workers\cite{Kov2,Kovrizhin,Kagan} clarified the perfect transmission of the low-energy Bogoliubov phonon across a potential barrier, which is referred to as the anomalous tunneling effect. Danshita and co-workers\cite{Danshita} showed that the anomalous tunneling effect also occurs in the supercurrent state, as long as the magnitude of the supercurrent is less than the critical current\cite{Danshita}. In the critical supercurrent state, the perfect transmission is not obtained\cite{Danshita}, irrespective of the relative direction between the momentum of the Bogoliubov phonon and the superflow. They also extended their work to a Bose condensate in an optical lattice\cite{Danshita2,Danshita3}. Although the anomalous tunneling effect has not been observed yet, a cold atom gas may be useful to examine this interesting tunneling phenomenon. In this regard, we briefly note that a double-well trap has been realized in cold atom physics\cite{Andrews}, and a kind of Josephson effect has been observed\cite{Albiez}. \par For the mechanism of the anomalous tunneling effect, various key issues have been discussed, such as resonance tunneling\cite{Kagan}, a localized state near the potential barrier\cite{Danshita}, the similarity between the wavefunction of the Bogoliubov mode and the condensate wavefunction in the low energy limit\cite{Kato}, and the coupling of the quasiparticle current with the supercurrent near the barrier\cite{Tsuchiya}. However, despite these great efforts, no consistent explanation for the anomalous tunneling, and for the breakdown of this effect in the critical supercurrent state, has been given yet. Since the appearance of the Bogoliubov phonon is one of the most fundamental phenomena in the superfluid phase\cite{Bogoliubov}, clarifying the physical properties of this collective mode is a very important issue in the research of superfluidity. \par In this paper, we investigate tunneling properties of Bogoliubov excitations in a weakly interacting Bose superfluid at $T=0$. We treat the condensate wavefunction and the Bogoliubov excitations within the Gross-Pitaevskii (GP) equation and the Bogoliubov equations, respectively. Using an exactly solvable model, we analytically show that the tunneling mechanism of the low-energy Bogoliubov phonon is the same as that of the ordinary supercurrent associated with the condensate. Since the supercurrent is well known to tunnel through a barrier without reflection, our result immediately explains the anomalous tunneling effect associated with the Bogoliubov phonon. However, in contrast to the ordinary supercurrent, the Bogoliubov phonon consists of two current components ($\equiv J_u$ and $J_v$), whose directions are opposite to each other. These counterpropagating currents have recently been observed experimentally by using Bragg spectroscopy\cite{Bragg}. In this paper, we show that they have the same upper limit, which equals the upper limit of the ordinary supercurrent (the critical supercurrent $J_c$). In the critical supercurrent state, $J_u$ or $J_v$ is shown to always exceed $J_c$, so that the supercurrent behavior of the Bogoliubov mode (perfect transmission) is destroyed. This result gives a simple and physical explanation for the breakdown of the anomalous tunneling effect in the critical supercurrent state predicted in Ref. \cite{Danshita}. \par This paper is organized as follows. In Sec.
II, we present our tunneling model with a $\delta$-functional potential barrier, and summarize the exact solutions of the GP and Bogoliubov equations for this model derived in Refs. \cite{Kovrizhin,Danshita}. In Sec. III, we consider the case in the absence of the supercurrent. Here, we show that the anomalous tunneling effect can be explained as a result of the supercurrent behavior of the Bogoliubov mode. This result is extended to the supercurrent state in Sec. IV. In Sec. V, we discuss how the breakdown of the anomalous tunneling effect in the critical supercurrent state can be understood based on our results obtained in Secs. III and IV. Throughout this paper, we set $\hbar=1$. \par \begin{figure} \centerline{\includegraphics[width=10cm]{fig1.ps}} \caption{Schematic picture of our model. A Bose superfluid is separated by the barrier potential $V\delta(x)$, and we consider the tunneling of the Bogoliubov phonon injected from $x=-\infty$. We examine both the cases with and without the supercurrent $J_s$ ($>0$) carried by the condensate. The quasiparticle current $J_{qp}$ carried by the Bogoliubov phonon is described by the sum of two currents, $J_{qp}=J_u+J_v$, while the probability current density $J$ is given by $J=J_u-J_v$, where $J_u$ and $J_v$ are defined by Eq. (\ref{eq.22}). We note that the flow direction of $J_v$ is opposite to that of $J_u$.} \label{fig1} \end{figure} \section{Model tunneling problem, condensate wavefunction and Bogoliubov excitations} \par We consider a superfluid Bose gas at $T=0$, which is separated by the potential barrier $V(x)=V\delta(x)$ (Fig. \ref{fig1}). Since the system is uniform in the $y$- and $z$-directions, our model is essentially one-dimensional; we thus only retain the $x$-direction, and ignore the $y$- and $z$-directions. We also ignore effects of a harmonic trap potential, for simplicity. The latter simplification is allowed when a box-shaped trap is considered\cite{Meyrath}. \par In this model, exact solutions for the GP equation and the Bogoliubov equations have been derived in Refs. \cite{Kovrizhin,Danshita}. Since we use these solutions in later sections, we summarize them here; for detailed derivations of the exact solutions, we refer to Refs. \cite{Kovrizhin,Danshita}. \par The Gross-Pitaevskii (GP) equation for the condensate wavefunction $\Psi(x)$ is given by\cite{Pitaevskii} \begin{eqnarray} \Bigl( -{1 \over 2m}{d^2 \over dx^2}-\mu+V\delta(x)+g|\Psi(x)|^2 \Bigr)\Psi(x)=0. \label{eq.1} \end{eqnarray} Here, $m$ is the mass of a Bose atom, $\mu$ is the chemical potential, and $g$ is the repulsive interaction between Bose atoms. Introducing the scaled variables, \begin{eqnarray} {\bar \Psi}({\bar x})\equiv {\Psi(x) \over \sqrt{n_0}},~~{\bar \mu}\equiv {\mu \over gn_0},~~{\bar V}\equiv{V \over gn_0\xi},~~{\bar x}\equiv{x \over \xi}, \label{eq.2} \end{eqnarray} we can write Eq. (\ref{eq.1}) in the dimensionless form \begin{eqnarray} \Bigl( -{1 \over 2}{d^2 \over d{\bar x}^2}-{\bar \mu}+{\bar V}\delta({\bar x})+|{\bar \Psi({\bar x})}|^2 \Bigr){\bar \Psi}({\bar x})=0. \label{eq.3} \end{eqnarray} In Eq. (\ref{eq.2}), $n_0\equiv|\Psi(x=\pm\infty)|^2$ is the condensate density far away from the barrier, and $\xi\equiv 1/\sqrt{mgn_0}$ is the healing length. The scaled chemical potential ${\bar \mu}$ equals unity in the absence of supercurrent.
In the supercurrent state with momentum $q$, one finds $\mu=1+q^2/2$\cite{Danshita}. In the following, we simply write $({\bar \Psi},{\bar \mu},{\bar V},{\bar x})$ as $(\Psi,\mu,V,x)$. \par The boundary conditions at $x=0$ are given by \begin{eqnarray} \Psi(+0)=\Psi(-0), \label{eq.4} \end{eqnarray} \begin{eqnarray} \Bigl({d\Psi \over dx}\Bigr)_{x=+0} - \Bigl({d\Psi \over dx}\Bigr)_{x=-0} =2V\Psi(0). \label{eq.5} \end{eqnarray} We solve the GP equation (\ref{eq.3}) together with the boundary conditions in Eqs. (\ref{eq.4}) and (\ref{eq.5}). The condensate wavefunction $\Psi_q(x)$ in the supercurrent state is obtained as\cite{Danshita} \begin{equation} \Psi_q(x)=e^{i[qx-{\rm sgn}(x)\theta_q]} [\gamma(x)-iq{\rm sgn}(x)], \label{eq.6} \end{equation} where \begin{equation} \gamma(x)\equiv\sqrt{1-q^2}\tanh(\sqrt{1-q^2}[|x|+x_0]), \label{eq.7} \end{equation} \begin{equation} e^{i\theta_q}\equiv {\gamma(0)-iq \over \sqrt{\gamma(0)^2+q^2}}. \label{eq.8} \end{equation} $x_0$ in Eq. (\ref{eq.7}) is determined by \begin{equation} \gamma(0)^3+V\Bigl(\gamma(0)^2+q^2\Bigr)-\Bigl(1-q^2\Bigr)\gamma(0)=0. \label{eq.9} \end{equation} Equation (\ref{eq.9}) is obtained from the boundary condition in Eq. (\ref{eq.5}). \par We note that the magnitude of the condensate wavefunction $\Psi_q(x)$ in Eq. (\ref{eq.6}) is suppressed near the barrier. However, the supercurrent density $J_s$ is independent of $x$: \begin{equation} J_s={\rm Im}\Bigl[\Psi_q^*(x){d \over dx}\Psi_q(x)\Bigr]=q. \label{eq.10} \end{equation} Namely, the supercurrent is conserved in the whole system. \par For a given condensate wavefunction $\Psi_q(x)$, the two-component wavefunction $(u,v)$ of the Bogoliubov mode is obtained from the Bogoliubov equations\cite{Pitaevskii}. Using the scaled variables in Eq. (\ref{eq.2}), one can write the Bogoliubov equations in the dimensionless forms \begin{equation} \Bigl( -{1 \over 2}{d^2 \over d{\bar x}^2}-{\bar \mu}+{\bar V}\delta({\bar x})+2|{\bar \Psi}_q({\bar x})|^2 \Bigr){\bar u}({\bar x}) -{\bar \Psi}_q({\bar x})^2{\bar v}({\bar x})={\bar E}{\bar u}({\bar x}), \label{eq.11} \end{equation} \begin{equation} \Bigl( -{1 \over 2}{d^2 \over d{\bar x}^2}-{\bar \mu}+{\bar V}\delta({\bar x})+2|{\bar \Psi}_q({\bar x})|^2 \Bigr){\bar v}({\bar x}) -{\bar \Psi}_q^*({\bar x})^2{\bar u}({\bar x})=-{\bar E}{\bar v}({\bar x}). \label{eq.12} \end{equation} Here, the Bogoliubov wavefunction $(u(x),v(x))$ and the energy $E$ have been scaled as $({\bar u}({\bar x}),{\bar v}({\bar x}))\equiv (u(x),v(x))/\sqrt{n_0}$ and ${\bar E}\equiv E/gn_0$, respectively. In the following, we omit the bars in the Bogoliubov equations (\ref{eq.11}) and (\ref{eq.12}). The boundary conditions for the Bogoliubov mode $(u,v)$ at $x=0$ are given by \begin{equation} u(+0)=u(-0), ~~~ \Bigl({du(x) \over dx}\Bigr)_{x=+0}-\Bigl({du(x) \over dx}\Bigr)_{x=-0}=2Vu(0), \label{eq.13} \end{equation} \begin{equation} v(+0)=v(-0), ~~~ \Bigl({dv(x) \over dx}\Bigr)_{x=+0}-\Bigl({dv(x) \over dx}\Bigr)_{x=-0}=2Vv(0).\label{eq.14} \end{equation} \par To construct $(u,v)$ satisfying the boundary conditions in Eqs. (\ref{eq.13}) and (\ref{eq.14}), we need particular solutions of the Bogoliubov equations (\ref{eq.11}) and (\ref{eq.12}) for $x\ge 0$ and $x\le 0$.
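All of these solutions are parameterized by $\gamma(0)$ and $x_0$, which follow from a single numerical step. As an aside, here is a minimal sketch (our own, in the dimensionless units of Eq. (\ref{eq.2})) that solves the cubic Eq. (\ref{eq.9}) and inverts Eq. (\ref{eq.7}):
\begin{verbatim}
import numpy as np

def gamma0_and_x0(V, q):
    # cubic in gamma(0): gamma^3 + V gamma^2 - (1-q^2) gamma + V q^2 = 0;
    # we take the largest real root in (0, sqrt(1-q^2)), which connects
    # smoothly to the barrier-free limit gamma(0) -> sqrt(1-q^2) as V -> 0
    roots = np.roots([1.0, V, -(1.0 - q**2), V * q**2])
    k = np.sqrt(1.0 - q**2)
    g0 = max(r.real for r in roots
             if abs(r.imag) < 1e-12 and 0.0 < r.real < k)
    x0 = np.arctanh(g0 / k) / k     # invert gamma(0) = k tanh(k x0)
    return g0, x0

g0, x0 = gamma0_and_x0(V=1.0, q=0.1)
# the supercurrent density is J_s = q everywhere, independently of V
\end{verbatim}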
For a given condensate wavefunction $\Psi_q(x)$ and energy $E$, the coupled equations (\ref{eq.11}) and (\ref{eq.12}) have four particular solutions $(u_n,v_n)$ $(n=1,2,3,4)$, given by\cite{Kovrizhin,Danshita} \begin{eqnarray} \displaystyle \left\{ \begin{array}{l} \displaystyle u_n=e^{i[(p_n+q)x-{\rm sgn}(x)\theta_q]} \Bigl[ \Bigl(1+{p_n^2 \over 2E}\Bigr)\gamma(x) -i{\rm sgn}(x) \Bigl( q+{p_n \over 2E}(1-q^2-\gamma(x)^2+E) +{p_n^3 \over 4E} \Bigr) \Bigr], \\ \displaystyle v_n=e^{i[(p_n-q)x+{\rm sgn}(x)\theta_q]} \Bigl[ \Bigl(1-{p_n^2 \over 2E}\Bigr)\gamma(x) +i{\rm sgn}(x) \Bigl( q+{p_n \over 2E}(1-q^2-\gamma(x)^2-E) +{p_n^3 \over 4E} \Bigr) \Bigr]. \end{array} \right. \nonumber \\ \label{eq.16} \end{eqnarray} (We note that Eq. (\ref{eq.16}) is not normalized.) The momenta $p_n$ ($n=1,2,3,4$) are obtained from the expression for the Bogoliubov excitation spectrum\cite{Danshita} \begin{equation} E=p_nq+\sqrt{{p_n^2 \over 2}\Bigl({p_n^2 \over 2}+2\Bigr)}. \label{eq.17} \end{equation} Among the four solutions, two of them ($n=1$ and 2) describe propagating waves characterized by real momenta ($p_1$ and $p_2$). The remaining two solutions ($n=3$ and 4) describe localized states having complex momenta ($p_3$ and $p_4$). While only the propagating solutions are necessary in considering a uniform system, one has to also take into account the localized solutions in the present inhomogeneous system. Indeed, in Sec. III, we show that the localized states appear near the barrier. \par In the low energy region ($E\ll 1$), $p_n$ ($n=1,2,3,4$) reduce to\cite{note}, within the accuracy of $O(E)$, \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle p_1={E \over 1+q},\\ \displaystyle p_2=-{E \over 1-q},\\ \displaystyle p_3=2i\sqrt{1-q^2}+{qE \over 1-q^2},\\ \displaystyle p_4=-2i\sqrt{1-q^2}+{qE \over 1-q^2}. \end{array} \label{eq.17b} \right. \end{eqnarray} \par Using these particular solutions, we construct the Bogoliubov wavefunction $(u,v)$. Assuming that the incident Bogoliubov phonon comes from $x=-\infty$, we set \begin{eqnarray} \left( \begin{array}{l} u(x)\\v(x) \end{array} \right) = \left( \begin{array}{l} u_<(x)\\v_<(x) \end{array} \right) \theta(-x)+ \left( \begin{array}{l} u_>(x)\\v_>(x) \end{array} \right) \theta(x), \label{eq.16b} \end{eqnarray} where $\theta(x)$ is the step function, and \begin{eqnarray} \left( \begin{array}{l} u_<(x)\\v_<(x) \end{array} \right) = \left( \begin{array}{l} u_1(x)\\v_1(x) \end{array} \right)+ A \left( \begin{array}{l} u_2(x)\\v_2(x) \end{array} \right) + B\left( \begin{array}{l} u_4(x)\\v_4(x) \end{array} \right)~~~~~(x\le 0), \label{eq.18} \end{eqnarray} \begin{eqnarray} \left( \begin{array}{l} u_>(x)\\v_>(x) \end{array} \right) = C \left( \begin{array}{l} u_1(x)\\v_1(x) \end{array} \right) + D\left( \begin{array}{l} u_3(x)\\v_3(x) \end{array} \right)~~~~~~~~~~~~~~~(x\ge0). \label{eq.19} \end{eqnarray} The coefficients $(A,B,C,D)$ are determined so that the boundary conditions in Eqs. (\ref{eq.13}) and (\ref{eq.14}) can be satisfied. We will give their detailed expressions in later sections. \par Once $(A,B,C,D)$ are determined, the transmission probability is obtained from the ratio of the probability current density $J\equiv J_u-J_v$ of the transmitted wave to that of the incident wave. Here, $J_u$ and $J_v$ are given by\cite{note9} \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle J_u={\rm Im}\Bigl[u(x)^*{d \over dx}u(x)\Bigr],\\ \displaystyle J_v=-{\rm Im}\Bigl[v(x)^*{d \over dx}v(x)\Bigr]. \end{array} \right.
\label{eq.20} \end{eqnarray} We note that the quasiparticle current density $J_{\rm qp}$ carried by the Bogoliubov phonon has a form different from the probability current density: $J_{\rm qp}=J_u+J_v$. To see the difference between $J$ and $J_{\rm qp}$, it is useful to consider a uniform system, in which the Bogoliubov equations give the plane wave solution\cite{Pitaevskii} \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle u_p=\sqrt{{1 \over 2}\Bigl({p^2/2+1 \over E}+1\Bigr)}e^{ipx},\\ \displaystyle v_p=\sqrt{{1 \over 2}\Bigl({p^2/2+1 \over E}-1\Bigr)}e^{ipx}. \end{array} \right. \label{eq.21} \end{eqnarray} Here, $E=\sqrt{(p^2/2)(p^2/2+2)}$ is the Bogoliubov excitation spectrum. Substituting Eq. (\ref{eq.21}) into Eq. (\ref{eq.20}), we obtain ($p>0$), \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle J_u={1 \over 2}\Bigl({p^2/2+1 \over E}+1\Bigr)p~~~(>0),\\ \displaystyle J_v=-{1 \over 2}\Bigl({p^2/2+1 \over E}-1\Bigr)p~~~(<0). \end{array} \right. \label{eq.22} \end{eqnarray} Equation (\ref{eq.22}) shows that the leading terms of $J_u$ and $J_v$ with respect to $p$ are constant in the low energy limit. While they dominantly contribute to the probability current density $J$, they are cancelled out in the quasiparticle current $J_{\rm qp}$, because the flow directions of $J_u$ and $J_v$ are opposite to each other. The contribution to $J_{\rm qp}$ comes from the higher order terms of $J_u$ and $J_v$ in $p$. In Sec. III, we find that the cancellation of the leading terms of $J_u$ and $J_v$ also occurs in the presence of the barrier, as schematically shown in Fig. \ref{fig1}. Thus, in considering the tunneling of the low-energy Bogoliubov phonon, the tunneling properties of each component $J_u$ and $J_v$ are more crucial than those of the quasiparticle tunneling current $J_{\rm qp}$ given by their sum. In Secs. III and IV, we show that each of $J_u$ and $J_v$ has the same tunneling properties as the supercurrent in the low energy region, leading to the anomalous tunneling of the Bogoliubov phonon. \par \section{Supercurrent behavior of low-energy Bogoliubov mode} \par In this section, we first consider the tunneling of low-energy Bogoliubov phonon in the absence of supercurrent. In this case, the condensate wavefunction in Eq. (\ref{eq.6}) reduces to \begin{equation} \Psi_{q=0}(x)=\gamma(x)=\tanh(|x|+x_0). \label{eq.3.1} \end{equation} Since we are interested in the anomalous tunneling effect, we consider the low energy region ($E\ll 1$). In this regime, the Bogoliubov excitation spectrum has the linear dispersion $E=p$, where $p$ ($>0$) is the momentum of incident wave coming from $x=-\infty$. For the momenta of the four particular solutions in Eq. (\ref{eq.17b}), we may take $(p_1,p_2,p_3,p_4)=(p,-p,2i,-2i)$ within the accuracy of $O(p)$. Determining the coefficients $(A,B,C,D)$ in Eqs. (\ref{eq.18}) and (\ref{eq.19}), we obtain\cite{Kovrizhin,Danshita}, to the accuracy of $O(p)$, \begin{eqnarray} \left( \begin{array}{l} u_<(x)\\v_<(x) \end{array} \right) = \left( \begin{array}{l} u_I(x)\\v_I(x) \end{array} \right)e^{ipx} + i\alpha p \left( \begin{array}{l} u_R(x)\\v_R(x) \end{array} \right)e^{-ipx} + \Bigl({i \over 2}+\beta p\Bigr)\left( \begin{array}{l} u_L(x)\\v_L(x) \end{array} \right), \label{eq.3.2} \end{eqnarray} \begin{eqnarray} \left( \begin{array}{l} u_>(x)\\v_>(x) \end{array} \right) = (1-i\eta p) \left( \begin{array}{l} u_T(x)\\v_T(x) \end{array} \right)e^{ipx} - \Bigl({i \over 2}+\beta p\Bigr)\left( \begin{array}{l} u_L(x)\\v_L(x) \end{array} \right).
\label{eq.3.3} \end{eqnarray} The coefficients $(\alpha,~\beta,~\eta)$ are given by \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle \alpha=-{1 \over 2}(1-\gamma_0) {2+\gamma_0+\gamma_0^2 \over (1+\gamma_0^2)\gamma_0},\\ \displaystyle \beta={1 \over 2}\Bigl({1 \over 2}-{1 \over \gamma_0}\Bigr),\\ \displaystyle \eta=\alpha-{2\gamma_0 \over 1+\gamma_0^2}, \end{array} \right. \label{eq.3.4} \end{eqnarray} where we have simply written $\gamma_0=\gamma(x=0)$. In Eqs. (\ref{eq.3.2}) and (\ref{eq.3.3}), the incident wave $(u_I,v_I)$ and transmitted wave $(u_T,v_T)$ are obtained from the particular solution $(u_1,v_1)$ in Eq. (\ref{eq.16}), while the reflected wave $(u_R,v_R)$ is obtained from $(u_2,v_2)$. In the low energy limit, they are given by \begin{eqnarray} \left( \begin{array}{c} u_I \\v_I \end{array} \right) = \left( \begin{array}{l} \displaystyle \gamma(x)+{i \over 2}[1-\gamma(x)^2]+{p \over 2}[\gamma(x)+i]\\ \displaystyle \gamma(x)-{i \over 2}[1-\gamma(x)^2]-{p \over 2}[\gamma(x)-i] \end{array} \right), \label{eq.3.5} \end{eqnarray} \begin{eqnarray} \left( \begin{array}{c} u_R \\v_R \end{array} \right) = \left( \begin{array}{c} u_T \\v_T \end{array} \right) = \left( \begin{array}{l} \displaystyle \gamma(x)-{i \over 2}[1-\gamma(x)^2]+{p \over 2}[\gamma(x)-i]\\ \displaystyle \gamma(x)+{i \over 2}[1-\gamma(x)^2]-{p \over 2}[\gamma(x)+i] \end{array} \right). \label{eq.3.6b} \end{eqnarray} The localized component $(u_L,v_L)$ in Eqs. (\ref{eq.3.2}) and (\ref{eq.3.3}) is obtained from $(u_3,v_3)$ and $(u_4,v_4)$, which has the form \begin{eqnarray} \left( \begin{array}{c} u_L \\v_L \end{array} \right) = \left( \begin{array}{l} \displaystyle -[1-\gamma(x)^2]+p[1-\gamma(x)]\\ \displaystyle [1-\gamma(x)^2]+p[1-\gamma(x)] \end{array} \right). \label{eq.3.6c} \end{eqnarray} We note that, in the last terms of Eqs. (\ref{eq.3.2}) and (\ref{eq.3.3}), the exponential factors $e^{\pm 2x}$ appearing in $(u_3,v_3)$ and $(u_4,v_4)$ have been absorbed into $(u_L,v_L)$ by using the identity \begin{equation} e^{-2(|x|+x_0)}[1+\gamma(x)]=1-\gamma(x). \label{eq.3.7} \end{equation} \par To show the supercurrent behavior of low-energy Bogoliubov phonon in the tunneling process, we expand Eqs. (\ref{eq.3.2}) and (\ref{eq.3.3}) in terms of $p$ to the first order. For example, $u_>(x)$ then becomes \begin{eqnarray} u_>(x) &=&\gamma(x)+ip{\gamma(x)-\gamma_0 \over \gamma_0}+ipx\gamma(x) \nonumber \\ &+& ip{\gamma_0 \over 1+\gamma_0^2}\gamma(x) +{p \over 2}{\gamma_0 \over 1+\gamma_0^2}[1-\gamma(x)^2] +{p \over 2}\gamma(x)+{p \over 2}x[1-\gamma(x)^2]. \label{eq.3.8} \end{eqnarray} Using the identity \begin{equation} {d \over dx}\gamma(x)={\rm sgn}(x) \Bigl(1-\gamma(x)^2\Bigr), \label{eq.3.9} \end{equation} we find that Eq. (\ref{eq.3.8}) is equivalent to the expression, \begin{equation} u_>(x)=\sqrt{1+p}e^{ip{\gamma_0 \over 1+\gamma_0^2}}e^{ipx} \Bigl[ \gamma(\sqrt{1+p}x+{p\gamma_0 \over 2(1+\gamma_0^2)}) +ip{\gamma(x)-\gamma_0 \over \gamma_0} \Bigr], \label{eq.3.10} \end{equation} within the accuracy of $O(p)$. Expanding the condensate wave function in Eq. (\ref{eq.6}) in the long wave length limit, we obtain ($x>0$) \begin{equation} \Psi_q(x)=e^{iqx}\Bigl(\gamma(x)+iq{\gamma(x)-\gamma_0 \over \gamma_0}\Bigr). \label{eq.3.11} \end{equation} Comparing Eq. (\ref{eq.3.10}) with (\ref{eq.3.11}), we find that $u_>(x)$ essentially has the same form as the condensate wavefunction $\Psi_{q=p}(x)$ in the {\it supercurrent state} with momentum $p$. The same conclusion is also obtained for $u_<(x)$. 
Introducing $y_+\equiv\sqrt{1+p}x$ and \begin{eqnarray} {\tilde u}(y_+)\equiv {1 \over \sqrt{1+p}}e^{-ip{\gamma_0 \over 1+\gamma_0^2}}u(x=y_+/\sqrt{1+p}), \label{eq.3.12} \end{eqnarray} we obtain \begin{equation} {\tilde u}(y_+)={\tilde \Psi}_p(y_+,Y_+). \label{eq.3.13} \end{equation} Here, ${\tilde \Psi}_p$ is given by Eq. (\ref{eq.6}), where $x_0$ in $\gamma$ is replaced by \begin{equation} Y_+= x_0+{\gamma_0 \over 2(1+\gamma_0^2)}p. \label{eq.3.13b} \end{equation} \par The same analysis is applicable to $v(x)$. From Eqs. (\ref{eq.3.2})-(\ref{eq.3.6c}), $v_p(x)$ is found to be related to $u_p(x)$ as $v_p(x)=u_{-p}(x)^*$. Using this, we obtain \begin{eqnarray} {\tilde v}(y_-)={\tilde \Psi}_p(y_-,Y_-), \label{eq.3.14} \end{eqnarray} where $y_-=\sqrt{1-p}x$, \begin{equation} Y_-=x_0-{\gamma_0 \over 2(1+\gamma_0^2)}p, \label{eq.3.14b} \end{equation} and \begin{eqnarray} {\tilde v}(y_-)\equiv {1 \over \sqrt{1-p}}e^{-ip{\gamma_0 \over 1+\gamma_0^2}}v(x=y_-/\sqrt{1-p}). \label{eq.3.15} \end{eqnarray} \par Equations (\ref{eq.3.13}) and (\ref{eq.3.14}) clearly show that, in the low energy region, $u(x)$ and $v(x)$ have the same properties as the condensate wavefunction in the supercurrent state. Namely, the currents $J_u$ and $J_v$ tunnel through the barrier without reflection in the low energy region. The tunneling of the low-energy Bogoliubov phonon described by the wavefunction $(u,v)$ is also not accompanied by reflection, which naturally explains the anomalous tunneling effect\cite{Kovrizhin}. The coefficient of the reflected wave component given by the second term in the right hand side of Eq. (\ref{eq.3.2}) is proportional to $p$, so that the reflection probability is proportional to $p^2$. Namely, deviation from the perfect transmission starts from $O(p^2)$. \par Since $u(x)$ and $v(x)$ have the same form as the condensate wavefunction, the currents $J_u$ and $J_v$ are immediately obtained from Eq. (\ref{eq.10}) as \begin{eqnarray} J_u=-J_v=p. \label{eq.3.16} \end{eqnarray} Namely, $J_u$ and $J_v$ are conserved everywhere in the low energy region. When we evaluate $J_u$ and $J_v$ of the incident wave, we find $J_u=-J_v=p+O(p^2)$ for $x\to -\infty$. This also confirms the perfect transmission of $J_u$ and $J_v$, as well as of the Bogoliubov phonon. \par We note that, although Eq. (\ref{eq.3.16}) looks different from Eq. (\ref{eq.22}) (where $J_u$ and $J_v$ are constant in the low energy limit), this is simply due to the assumed magnitude of the incident wave in Eq. (\ref{eq.3.2}). When we choose the magnitude of the incident wave $(u_I,v_I)$ far away from the barrier so as to be equal to the plane wave solution $(u_p,v_p)$ given by Eq. (\ref{eq.21}), Eq. (\ref{eq.3.16}) is replaced by $J_u=-J_v=1/2$. Under this normalization, a finite quasiparticle current $J_{\rm qp}$ obtained from higher order terms in $J_u$ and $J_v$ with respect to $p$ is proportional to $p$, as in the uniform system discussed in Sec. II. It has been shown\cite{Tsuchiya} that this finite $J_{\rm qp}$ is not conserved near the barrier due to the induction of supercurrent counterflow. \par \begin{figure} \centerline{\includegraphics[width=10cm]{fig2.ps}} \caption{Spatial variation of $u(x)$ given by Eqs. (\ref{eq.3.2}) and (\ref{eq.3.3}). We also show the incident wave component $U_I(x)\equiv u_I(x)e^{ipx}$, reflected wave $U_R(x)=i\alpha p u_R(x)e^{-ipx}$, transmitted wave $U_T(x)=(1-i\eta p)u_T(x)e^{ipx}$, and localized component $U_L(x)= {\rm sgn}(x)(i/2+\beta p)u_L(x)$. We take $p=0.05$ and $x_0=0.5$.
} \label{fig2} \end{figure} \par Figure \ref{fig2} shows the incident, reflected, transmitted, and localized components of $u(x)$ in Eqs. (\ref{eq.3.2}) and (\ref{eq.3.3}). Although the perfect transmission looks as if the potential barrier is transparent for the low-energy Bogoliubov phonon, the reflected component $U_R(x)\equiv i\alpha pu_R(x)e^{-ipx}$, as well as the localized component $U_L(x)\equiv{\rm sgn}(x)(i/2+\beta p)u_L(x)$, actually contribute to the solution. Indeed, these components are necessary to satisfy the boundary conditions at $x=0$. However, since the coefficient of the reflected component $U_R(x)$ is proportional to $p$, the plane wave factor $e^{-ipx}$ in this component is actually irrelevant in the present treatment within $O(p)$. Thus, although Eq. (\ref{eq.3.12}) involves the reflected wave component, the reflected ``plane wave" is not actually involved in it. \par We note that, while the localized state described by the last terms in Eqs. (\ref{eq.3.2}) and (\ref{eq.3.3}) does not carry current by itself, it still contributes to the tunneling current through the coupling with the propagating wave in the wavefunction. To see this, we divide $J_u$ into contributions coming from each wave component and their couplings. Then we obtain \begin{eqnarray} J_u=J_P+J_{PL}. \label{eq.3.17} \end{eqnarray} Here, $J_P\equiv{\rm Im}[U_P(x)^*\partial_x U_P(x)]$ is the contribution from the right-going wave $U_P(x)\equiv \theta(-x)u_I(x)e^{ipx}+\theta(x)(1-i\eta p)u_T(x)e^{ipx}$, while $J_{PL}\equiv{\rm Im}[U_P(x)^*\partial_x U_L(x)+U_L(x)^*\partial_x U_P(x)]$ describes coupling effects between the right-going wave and the localized state. The other current components involving the reflected wave $U_R(x)$, such as $J_R\equiv{\rm Im}[U_R(x)^*\partial_x U_R(x)]$, $J_{RP}\equiv{\rm Im}[U_R(x)^*\partial_x U_P(x)+U_P(x)^*\partial_x U_R(x)]$, and $J_{RL}\equiv{\rm Im}[U_R(x)^*\partial_x U_L(x)+U_L(x)^*\partial_x U_R(x)]$, can be ignored in the low energy limit. Calculating $J_P$ and $J_{PL}$, we obtain \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle J_P=p+{1 \over 4}\Bigl(1-\gamma(x)^4\Bigr),\\ \displaystyle J_{PL}=-{1 \over 4}\Bigl(1-\gamma(x)^4\Bigr). \end{array} \right. \label{eq.3.18} \end{eqnarray} \par Figure \ref{fig3} shows the contributions of $J_P$ and $J_{PL}$ to the current $J_u$. When we ignore effects of the localized state $U_L(x)$, the current $J_u$ $(=J_P)$ is not conserved near the potential barrier, where the suppression of the condensate wavefunction $\Psi_{q=0}(x)$ is significant. The enhancement of $J_P$ is cancelled out by the counterflow $J_{PL}$ $(<0)$ originating from the coupling of the propagating wave $U_P(x)$ with the localized state $U_L(x)$. \begin{figure} \centerline{\includegraphics[width=10cm]{fig3.ps}} \caption{Effects of the localized state $U_L(x)$ on $J_u$. We take $p=0.05$ and $x_0=0.5$. } \label{fig3} \end{figure} \par Since $u(x)$ and $v(x)$ have the same form as the condensate wavefunction $\Psi_p(x)$, we can expect that the Bogoliubov equations (\ref{eq.11}) and (\ref{eq.12}) are also related to the GP equation (\ref{eq.3}) in the low energy region. To see this, it is convenient to note that the Bogoliubov equations (\ref{eq.11}) and (\ref{eq.12}) can be {\it formally} written as \begin{equation} \Bigl( -{1 \over 2}{d^2 \over dx^2}-(\mu+E)+V\delta(x)+W_u(x) \Bigr)u(x)=0, \label{eq.3.19} \end{equation} \begin{equation} \Bigl( -{1 \over 2}{d^2 \over dx^2}-(\mu-E)+V\delta(x)+W_v(x) \Bigr)v(x)=0.
\label{eq.3.20} \end{equation} Here $W_u$ and $W_v$ are given by \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle W_u=2|\Psi_q(x)|^2-\Psi_q(x)^2{v(x) \over u(x)},\\ \displaystyle W_v=2|\Psi_q(x)|^2-\Psi_q^*(x)^2{u(x) \over v(x)}.\\ \end{array} \right. \label{eq.3.21} \end{eqnarray} In the absence of the supercurrent ($q=0$), setting $\mu=1$, one can rewrite Eqs. (\ref{eq.3.19}) and (\ref{eq.3.20}) in the form \begin{equation} \Bigl( -{1 \over 2}{d^2 \over dy_+^2}-1+{V \over \sqrt{1+p}}\delta(y_+)+{W_u(x) \over 1+p} \Bigr){\tilde u}(y_+)=0, \label{eq.3.22} \end{equation} \begin{equation} \Bigl( -{1 \over 2}{d^2 \over dy_-^2}-1+{V \over \sqrt{1-p}}\delta(y_-)+{W_v(x) \over 1-p} \Bigr){\tilde v}(y_-)=0. \label{eq.3.23} \end{equation} When $W_u(x)/(1+p)$ and $W_v(x)/(1-p)$ coincide with the nonlinear term in the GP equation, Eqs. (\ref{eq.3.22}) and (\ref{eq.3.23}) reproduce the GP equation (\ref{eq.3}). Indeed, substituting Eqs. (\ref{eq.3.1})-(\ref{eq.3.7}) into Eqs. (\ref{eq.3.21}), we obtain $W_u(x)/(1+p)=|{\tilde u}(y_+)|^2$ and $W_v(x)/(1-p)=|{\tilde v}(y_-)|^2$, within the accuracy of $O(p)$. Thus, as expected, Eqs. (\ref{eq.3.22}) and (\ref{eq.3.23}) have the same form as the GP equation in the supercurrent state with momentum $p$\cite{note2}. \par We note that the potential barrier is modified as $V/\sqrt{1\pm p}$ in Eqs. (\ref{eq.3.22}) and (\ref{eq.3.23}), which explains the shift of $x_0$ to $Y_\pm$ in Eqs. (\ref{eq.3.13b}) and (\ref{eq.3.14b}). For example, when one solves the ``GP" equation (\ref{eq.3.22}), the equation for $Y_+$ is obtained as \begin{equation} 1-\gamma(0,Y_+)^2={V \over \sqrt{1+p}}\gamma(0,Y_+)\simeq \Bigl(1-{p \over 2}\Bigr)V\gamma(0,Y_+), \label{eq.3.24} \end{equation} where we have ignored terms of higher order than $O(p)$. In Eq. (\ref{eq.3.24}), we are using the notation $\gamma(0,Y_+)$ to emphasize that $x_0$ is different from $Y_+$. Noting that $x_0$ is obtained from the equation $1-\gamma(0,x_0)^2=V\gamma(0,x_0)$, one can solve Eq. (\ref{eq.3.24}) within the accuracy of $O(p)$. The result is \begin{eqnarray} \gamma(0,Y_+) &=& \gamma(0,x_0)+{1-\gamma(0,x_0)^2 \over 2(1+\gamma(0,x_0)^2)}\gamma(0,x_0)p \nonumber \\ &\simeq& \gamma(0,x_0+{\gamma(0,x_0) \over 2(1+\gamma(0,x_0)^2)}p). \label{eq.3.24b} \end{eqnarray} Here, we have used the identity in Eq. (\ref{eq.3.9}). Equation (\ref{eq.3.24b}) reproduces $Y_+$ in Eq. (\ref{eq.3.13b}). In the same manner, one can confirm that the shift $Y_-$ of $x_0$ in ${\tilde v}(y_-)$ is due to the modified potential barrier $V/\sqrt{1-p}$ in the ``GP" equation (\ref{eq.3.23}). \par \section{Quasiparticle tunneling in the supercurrent state} \par In this section, we examine the effects of the supercurrent on the tunneling of the Bogoliubov phonon. Assuming that the momentum $q$ carried by the condensate, as well as the incident momentum $p$ of the Bogoliubov mode, are very small, we treat them to first order. We also assume that the supercurrent $J_s$ is much smaller than the critical current. The critical supercurrent state is considered in Sec. V. \par Within the accuracy of $O(q)$, we may still ignore $q$ in Eqs. (\ref{eq.7}) and (\ref{eq.9}). Namely, $\gamma(x)$ and $x_0$ are not modified by the supercurrent. In addition, we can also set $E=p$ and $(p_1,p_2,p_3,p_4)=(p,-p,2i,-2i)$ under the assumption of small $p$ and $q$.
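These approximate momenta are easy to check directly: squaring Eq. (\ref{eq.17}) turns the dispersion relation into a quartic in $p_n$ that can be solved numerically. A minimal sketch of ours (squaring can introduce a spurious branch, so the roots should be verified against Eq. (\ref{eq.17}) itself):
\begin{verbatim}
import numpy as np

def bogoliubov_momenta(E, q):
    # E - p q = sqrt((p^2/2)(p^2/2 + 2)), squared, gives
    # p^4/4 + (1 - q^2) p^2 + 2 E q p - E^2 = 0
    return np.roots([0.25, 0.0, 1.0 - q**2, 2.0 * E * q, -E**2])

E, q = 0.01, 0.05
for p in sorted(bogoliubov_momenta(E, q), key=lambda z: abs(z.imag)):
    print(p)
# two real roots ~ E/(1+q) and -E/(1-q) (propagating waves), and two
# roots ~ +-2i sqrt(1-q^2) + qE/(1-q^2) (localized states), matching
# the low-energy expansion quoted above
\end{verbatim}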
The Bogoliubov wavefunction has the form \begin{eqnarray} \left( \begin{array}{c} u(x)\\ v(x) \end{array} \right) = \left( \begin{array}{l} e^{i(qx+\theta_q)}{\hat u}_<(x)\\ e^{-i(qx+\theta_q)}{\hat v}_<(x) \end{array} \right)\theta(-x) + \left( \begin{array}{l} e^{i(qx-\theta_q)}{\hat u}_>(x)\\ e^{-i(qx-\theta_q)}{\hat v}_>(x) \end{array} \right)\theta(x), \label{eq.4.1} \end{eqnarray} where \begin{eqnarray} \left( \begin{array}{l} {\hat u}_<(x)\\{\hat v}_<(x) \end{array} \right) = \left( \begin{array}{l} u_I(x)\\v_I(x) \end{array} \right)e^{ipx} + i\alpha p \left( \begin{array}{l} u_R(x)\\v_R(x) \end{array} \right)e^{-ipx} + \Bigl({i \over 2}+\beta p-{i \over 2}q\Bigr)\left( \begin{array}{l} u_L(x)\\v_L(x) \end{array} \right), \label{eq.4.2} \end{eqnarray} \begin{eqnarray} \left( \begin{array}{l} {\hat u}_>(x)\\{\hat v}_>(x) \end{array} \right) = (1-i\eta p) \left( \begin{array}{l} u_T(x)\\v_T(x) \end{array} \right)e^{ipx} - \Bigl({i \over 2}+\beta p-{i \over 2}q\Bigr)\left( \begin{array}{l} u_L(x)\\v_L(x) \end{array} \right). \label{eq.4.3} \end{eqnarray} Here, the coefficients $(\alpha,\beta,\eta)$ and the localized component $(u_L,v_L)$ are not affected by the supercurrent, and they are given by Eqs. (\ref{eq.3.4}) and (\ref{eq.3.6c}), respectively. In contrast, the incident wave $(u_I,v_I)$, reflected wave $(u_R,v_R)$, and transmitted wave ($u_T,v_T$) are modified to be \begin{eqnarray} \left( \begin{array}{c} u_I \\v_I \end{array} \right) = \left( \begin{array}{l} \displaystyle \gamma(x)+{i \over 2}[1-\gamma(x)^2]+{p \over 2}[\gamma(x)+i] +i{q \over 2}[1+\gamma(x)^2]\\ \displaystyle \gamma(x)-{i \over 2}[1-\gamma(x)^2]-{p \over 2}[\gamma(x)-i] -i{q \over 2}[1+\gamma(x)^2] \end{array} \right), \label{eq.4.4} \end{eqnarray} \begin{eqnarray} \left( \begin{array}{c} u_R \\v_R \end{array} \right) = \left( \begin{array}{l} \displaystyle \gamma(x)-{i \over 2}[1-\gamma(x)^2]+{p \over 2}[\gamma(x)-i] +i{q \over 2}[1+\gamma(x)^2] \\ \displaystyle \gamma(x)+{i \over 2}[1-\gamma(x)^2]-{p \over 2}[\gamma(x)+i] -i{q \over 2}[1+\gamma(x)^2] \end{array} \right), \label{eq.4.5} \end{eqnarray} \begin{eqnarray} \left( \begin{array}{c} u_T \\v_T \end{array} \right) = \left( \begin{array}{l} \displaystyle \gamma(x)-{i \over 2}[1-\gamma(x)^2]+{p \over 2}[\gamma(x)-i] -i{q \over 2}[1+\gamma(x)^2] \\ \displaystyle \gamma(x)+{i \over 2}[1-\gamma(x)^2]-{p \over 2}[\gamma(x)+i] +i{q \over 2}[1+\gamma(x)^2] \end{array} \right). \label{eq.4.6} \end{eqnarray} \par To see the supercurrent behavior of the low-energy Bogoliubov phonon in the supercurrent state, we expand Eq. (\ref{eq.4.1}) in terms of $p$ and $q$ to the first order. In this procedure, we also expand the phase factor $e^{\pm i\theta_q}$ defined in Eq. (\ref{eq.8}) as \begin{equation} e^{\pm i\theta_q}=1\mp i{q \over \gamma_0}+O(q^2). \label{eq.4.7} \end{equation} For example, $u(x\ge 0)$ becomes \begin{eqnarray} u(x\ge0) &=&\gamma(x)+i(p+q){\gamma(x)-\gamma_0 \over \gamma_0}+i(p+q)x\gamma(x) \nonumber \\ &+& ip{\gamma_0 \over 1+\gamma_0^2}\gamma(x) +{p \over 2}{\gamma_0 \over 1+\gamma_0^2}[1-\gamma(x)^2] +{p \over 2}\gamma(x)+{p \over 2}x[1-\gamma(x)^2]. \label{eq.4.8} \end{eqnarray} Comparing Eq. (\ref{eq.4.8}) with Eq. (\ref{eq.3.8}), we find that Eq. (\ref{eq.4.8}) can be written as, within the accuracy of $O(p)$ and $O(q)$, \begin{equation} u(x\ge 0)=\sqrt{1+p}e^{ip{\gamma_0 \over 1+\gamma_0^2}}e^{i(p+q)x} \Bigl[ \gamma(\sqrt{1+p}x+{p\gamma_0 \over 2(1+\gamma_0^2)}) +i(p+q){\gamma(x)-\gamma_0 \over \gamma_0} \Bigr].
\label{eq.4.9} \end{equation} This is essentially the same form as the condensate wavefunction $\Psi_{p+q}(x)$ in Eq. (\ref{eq.3.11}). We also reach the same conclusion for $u(x\le 0)$. As a result, we obtain \begin{equation} {\tilde u}(y_+)={\tilde \Psi}_{p+q}(y_+,Y_+). \label{eq.4.10} \end{equation} Using the relation $v_p(x)=u_{-p}^*(x)$, we also find \begin{equation} {\tilde v}(y_-)={\tilde \Psi}_{p-q}(y_-,Y_-). \label{eq.4.11} \end{equation} \par Equations (\ref{eq.4.10}) and (\ref{eq.4.11}) indicate that the tunneling properties of the currents $J_u$ and $J_v$ are the same as those of the supercurrent in the low-energy region. Namely, the Bogoliubov phonon still shows the perfect transmission in the supercurrent state\cite{Danshita}. Equations (\ref{eq.4.10}) and (\ref{eq.4.11}) also show that the supercurrent $J_s=q$ affects $J_u$ and $J_v$ as $J_u=p+q$ and $J_v=-p+q$. This implies that, in the critical supercurrent state, the ``supercurrent'' $J_u$ or $J_v$ exceeds the critical current, leading to the breakdown of the perfect transmission. We will confirm this in Sec. V. \par We note that Eqs. (\ref{eq.3.22}) and (\ref{eq.3.23}) are also valid for the supercurrent state. Substituting Eqs. (\ref{eq.4.10}) and (\ref{eq.4.11}) into Eqs. (\ref{eq.3.21}), we obtain $W_u/(1+p)=|{\tilde u}(y_+)|^2$ and $W_v/(1-p)=|{\tilde v}(y_-)|^2$. Thus, Eqs. (\ref{eq.3.22}) and (\ref{eq.3.23}) reduce to the GP equation, as expected. \par \section{Breakdown of anomalous tunneling effect in the critical supercurrent state} \par In Ref. \cite{Danshita}, the breakdown of the anomalous tunneling is predicted in the critical supercurrent state, when the barrier $V$ is high enough that the tunneling of the supercurrent can be regarded as the Josephson effect. In this section, we consider the same situation to see whether the absence of the anomalous tunneling in the critical supercurrent state can be understood as arising because the current $J_u$ or $J_v$ exceeds the critical value. \par We briefly summarize the critical supercurrent $J_c~(=q_c)$ in the Josephson coupling regime. To derive the current-phase relation in this regime, it is convenient to write the condensate wavefunction $\Psi_q(x)$ in the form \begin{equation} \Psi_q(x)=\sqrt{q^2+\gamma(x)^2}e^{iqx} e^{i{\rm sgn}(x){\phi(x) \over 2}}. \label{eq.5.1} \end{equation} Here, the phase $\phi(x)$ is defined by \begin{equation} \phi(x)=2 \Bigl( \tan^{-1}{\gamma(x) \over q}-\tan^{-1}{\gamma_0 \over q} \Bigr). \label{eq.5.2} \end{equation} Equation (\ref{eq.5.2}) can be rewritten as \begin{equation} q=\gamma_0\tan{\Phi(x) \over 2}, \label{eq.5.3} \end{equation} where $\Phi(x)=\phi(x)-2[\tan^{-1}(\gamma(x)/q)-\pi/2]$. When the barrier potential $V$ is very large, $q$ and $\gamma_0$ are very small and one can expand them in terms of $V^{-1}$\cite{Danshita}. Within the accuracy of $O(V^{-1})$, Eq. (\ref{eq.9}) gives \begin{equation} \gamma_0=V(\gamma_0^2+q^2). \label{eq.5.4} \end{equation} From Eqs. (\ref{eq.5.3}) and (\ref{eq.5.4}), one finds \begin{eqnarray} \left\{ \begin{array}{l} q={1 \over 2V}\sin\Phi,\\ \gamma_0={1 \over 2V}(1+\cos\Phi). \end{array} \right. \label{eq.5.5} \end{eqnarray} Since the supercurrent is uniform, the $x$ dependence of $\Phi(x)$ in Eq. (\ref{eq.5.5}) is actually irrelevant. When we take the phase $\phi(x)$ at $x=\infty$ ($\equiv\phi_0$), we obtain the well-known Josephson current-phase relation, $J_s=(1/2V)\sin\phi_0$, within the accuracy of $O(V^{-1})$\cite{Danshita}.
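\par For completeness, we note how Eq. (\ref{eq.5.5}) follows: substituting $q=\gamma_0\tan(\Phi/2)$ from Eq. (\ref{eq.5.3}) into Eq. (\ref{eq.5.4}) gives $\gamma_0=V\gamma_0^2[1+\tan^2(\Phi/2)]=V\gamma_0^2/\cos^2(\Phi/2)$, so that $\gamma_0=\cos^2(\Phi/2)/V=(1/2V)(1+\cos\Phi)$ and, in turn, $q=\gamma_0\tan(\Phi/2)=(1/2V)\sin\Phi$.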
Equation (\ref{eq.5.5}) gives the critical current $J_c$ in the Josephson coupling regime as \begin{equation} J_c=q_c={1 \over 2V}. \label{eq.5.6} \end{equation} \par Now we construct the Bogoliubov wavefunction $(u,v)$. In the following discussion, we always assume $p\ll q\sim \gamma_0\sim 1/2V \ll 1$. The Bogoliubov wavefunction satisfying the boundary conditions is given by Eqs. (\ref{eq.4.1})-(\ref{eq.4.6}), where the coefficient $\beta$ in Eq. (\ref{eq.4.3}) is replaced by $\beta'$. The coefficients $(\alpha,\beta,\beta',\eta)$ are given by \begin{eqnarray} \left\{ \begin{array}{l} \displaystyle \alpha={1 \over 2}-(1+q){\gamma_0 \over \gamma_0^2-q^2},\\ \displaystyle \beta={1 \over 4}-{1-2q \over 2}{\gamma_0 \over \gamma_0^2-q^2},\\ \displaystyle \beta'={1 \over 4}-{1-q \over 2}{\gamma_0 \over \gamma_0^2-q^2},\\ \displaystyle \eta={1 \over 2}-(1-4q){\gamma_0 \over \gamma_0^2-q^2}. \end{array} \right. \label{eq.5.7} \end{eqnarray} We note that Eq. (\ref{eq.5.5}) gives $\gamma_0=q_c=1/2V$ in the critical supercurrent state. In Sec. IV, we have expanded the Bogoliubov wavefunction $(u,v)$ in terms of $p$ and $q$, where we have implicitly assumed $q/\gamma_0\ll 1$. (See, for example, Eq. (\ref{eq.4.7}).) In the present case ($J_s\lesssim J_c$), although we can still assume $q\ll 1$, we have to treat $\gamma_0$ as being of the same order as $q$. Namely, the expansion in terms of $q/\gamma_0$, such as Eq. (\ref{eq.4.7}), is not allowed in the present case. In addition, because $q\simeq \gamma_0$, we have to treat the factor $1/(\gamma_0^2-q^2)$ in Eq. (\ref{eq.5.7}) carefully. However, even in this case, we can show that $u$ and $v$ still have the supercurrent properties {\it near the barrier} at $x=0$. \par To show this, we first consider $u(x)$. Within the accuracy of $O(x)$, ${\hat u}(x\ge 0)$ reduces to \begin{eqnarray} {\hat u}(x\ge 0)= \Bigl( 1+ip{\gamma_0 \over \gamma_0^2-q^2} \Bigr) \Bigl( \gamma(x)-iQ_+-2pq{\gamma_0 \over \gamma_0-q^2} \Bigr), \label{eq.5.8} \end{eqnarray} where $Q_+=p+q$. We also expand the factor $e^{-i\theta_q}$ in Eq. (\ref{eq.4.1}) to $O(p)$, which gives \begin{eqnarray} e^{-i\theta_q} = {\gamma_0+iq \over \sqrt{\gamma_0^2+q^2}} = {\gamma_0+iQ_+ \over \sqrt{\gamma_0^2+Q_+^2}} \Bigl( 1-ip{\gamma_0 \over \gamma_0^2+q^2} \Bigr). \label{eq.5.9} \end{eqnarray} Using Eqs. (\ref{eq.5.8}) and (\ref{eq.5.9}), we obtain\cite{note3} \begin{equation} u(x\ge 0)={\gamma(0,Z_+)+iQ_+ \over \sqrt{\gamma(0,Z_+)^2+Q_+^2}}e^{iQ_+x} [\gamma(x,Z_+)-iQ_+], \label{eq.5.10} \end{equation} where $\gamma(x,Z_+)$ equals $\gamma(x)$ with $x_0$ replaced by \begin{equation} Z_+=x_0-2pq{\gamma(0,Z_+) \over \gamma(0,Z_+)-q^2}. \label{eq.5.11} \end{equation} \par In the same way, we can rewrite $u(x\le 0)$ as \begin{equation} u(x\le 0)={\gamma(0,Z_+)-iQ_+ \over \sqrt{\gamma(0,Z_+)^2+Q_+^2}}e^{iQ_+x} [\gamma(x,Z_+)+iQ_+]. \label{eq.5.12} \end{equation} Comparing Eqs. (\ref{eq.5.10}) and (\ref{eq.5.12}) with Eq. (\ref{eq.6}), we find, near the barrier ($x\ll 1$), \begin{equation} u(x)={\tilde \Psi}_{p+q}(x,Z_+). \label{eq.5.13} \end{equation} Applying the same analysis to $v(x)$, we obtain \begin{equation} v(x)={\tilde \Psi}_{p-q}(x,Z_-), \label{eq.5.13b} \end{equation} where \begin{equation} Z_-=x_0+2pq{\gamma(0,Z_-) \over \gamma(0,Z_-)-q^2}. \label{eq.5.14} \end{equation} Thus, we conclude that $u$ and $v$ still have the same properties as those of the condensate wavefunction near the barrier at $x=0$.
Their currents are given by $J_u=p+q$ and $J_v=-p+q$ at $x\ll 1$. \par Using the fact that $\gamma_0$ satisfies Eq. (\ref{eq.5.4}), we obtain, within the accuracy of $O(p)$, \begin{equation} \gamma(0,Z_\pm)=V\Bigl(\gamma(0,Z_\pm)^2+Q_\pm^2\Bigr). \label{eq.5.15} \end{equation} We define the phase $\phi_\pm(x)$ ($x\ll 1$) by \begin{equation} \phi_\pm(x)=2 \Bigl( \tan^{-1}{\gamma(x,Z_\pm) \over Q_\pm} - \tan^{-1}{\gamma(0,Z_\pm) \over Q_\pm} \Bigr). \label{eq.5.16} \end{equation} This is analogous to the phase $\phi(x)$ introduced in Eq. (\ref{eq.5.2}). From Eqs. (\ref{eq.5.15}) and (\ref{eq.5.16}), we obtain \begin{eqnarray} \left\{ \begin{array}{l} Q_\pm={1 \over 2V}\sin\Phi_\pm,\\ \gamma(0,Z_\pm)={1 \over 2V}(1+\cos\Phi_\pm), \end{array} \right. \label{eq.5.17} \end{eqnarray} where $\Phi_\pm(x)=\phi_\pm(x)-2[\tan^{-1}(\gamma(x,Z_\pm)/Q_\pm)-\pi/2]$. Since the currents $J_u=Q_+=p+q$ and $J_v=-Q_-=-p+q$ are uniform near the barrier, the $x$ dependence of $\Phi_\pm$ in Eq. (\ref{eq.5.17}) can actually be ignored as long as $x\ll 1$. Equation (\ref{eq.5.17}) shows that both $J_u$ and $J_v$ must be smaller than the upper limit of the supercurrent $J_c=1/2V$. However, this condition cannot be satisfied in the critical supercurrent state ($q=J_c=1/2V$). Namely, either $J_u$ or $J_v$ always exceeds $J_c$; which one does depends on the direction of the momentum $p$ of the Bogoliubov phonon. As a result, the perfect transmission is not obtained in the critical supercurrent state. We emphasize that this gives a simple explanation for the breakdown of the anomalous tunneling in the critical supercurrent state predicted in Ref. \cite{Danshita}. \par We note that the coefficients $(\alpha,\beta,\beta',\eta)$ in Eqs. (\ref{eq.5.7}) diverge when $q=\gamma_0=1/2V$. This means that the components $u$ and $v$ in the Bogoliubov wavefunction no longer have the same form as the condensate wavefunction in the critical supercurrent state. In this case, $u$ and $v$ would have different forms from the condensate wavefunction, leading to a finite reflection probability at the barrier. \par \section{Summary} In this paper, we have investigated tunneling properties of low-energy Bogoliubov excitations in a Bose superfluid. Using the exactly solvable tunneling problem through a $\delta$-function potential barrier, we have clarified that, in the low-energy region, the components $u(x)$ and $v(x)$ in the Bogoliubov wavefunction $(u(x),v(x))$ have the same form as the condensate wavefunction in the supercurrent state. Based on this result, we have given a physical explanation for the anomalous tunneling effect of the Bogoliubov phonon, as well as for the absence of this phenomenon in the critical supercurrent state, in a consistent manner. \par The currents $J_u$ and $J_v$ associated with $u(x)$ and $v(x)$, respectively, tunnel through the barrier without reflection, as in the case of supercurrent tunneling. This gives a physical explanation for the perfect transmission of the Bogoliubov phonon predicted recently. We also showed that the supercurrent behaviors of $J_u$ and $J_v$, as well as the resulting perfect transmission of the Bogoliubov phonon, are also obtained in the supercurrent state unless the magnitude of the supercurrent reaches the critical value. \par In the Josephson coupling regime (which is realized when the tunneling barrier is very high), the upper limit of $J_u$ and $J_v$ is given by the critical current $J_c$ of the ordinary supercurrent.
In the critical supercurrent state, either $J_u$ or $J_v$ always exceeds $J_c$, irrespective of the direction of the momentum of the Bogoliubov phonon. Because of this, the perfect transmission of the Bogoliubov phonon is no longer obtained in the critical supercurrent state. This explains why the breakdown of the anomalous tunneling effect occurs in the critical supercurrent state. \par In this paper, we have used a simple $\delta$-function potential barrier. While this model enables us to treat the GP and Bogoliubov equations analytically, any realistic potential barrier has a finite width. Nevertheless, we expect that the essence of our results remains valid for more realistic barrier potentials. We will extend our study to more general and realistic cases in a future paper. Since the Bogoliubov phonon is one of the most fundamental excitations of the superfluid phase, the observation of the anomalous tunneling effect would be important for understanding basic properties of this collective mode. \begin{acknowledgments} We would like to thank I. Danshita for useful discussions on the anomalous tunneling effect in the supercurrent state. This work was supported by a Grant-in-Aid for Scientific Research from MEXT (18043005) and the CTC program. \end{acknowledgments}
\section{Conclusion and Future Work} This work provides a promising foundation for optimizing and evaluating marker placement for improved visual localization. Our OMP algorithm defines localizability scores for different areas in the scene and uses a greedy algorithm to find the best marker placements in the sense of increased localizability scores. We applied the OMP algorithm to three scenes and demonstrated that OMP consistently improves camera localization recall compared to random marker placements and no marker placement. The OMP algorithm only considers placing markers in the scene model (i.e., mapped areas in the scene); however, locations that are hard to map are probably the best places for markers. The algorithm can be extended with a hole-filling approach that prioritizes marker placements in unmapped areas (i.e., holes on model surfaces) if needed. Further research is also needed to compute more accurate localizability scores and explore more efficient optimization methods beyond the greedy algorithm, including: (1) joint optimization of marker poses and sizes, (2) extending the single-layer ground plane to multi-layer planes, (3) using non-Gaussian distribution estimation techniques to compute localizability scores, and (4) applying submodular optimization to jointly select multiple markers with fewer iterations. \section{Experimental Setup} \subsection{Implementation} We implemented all three key techniques and Algorithm~\ref{algo:imp} in Sec.~\ref{sec:key-tech} in Python with the assistance of a few open-source software packages. We used the Unreal Engine 4.27 \cite{unrealengine} and the AirSim library (v1.8.1) \cite{shah2018airsim} to simulate and collect images from 3D models. We used the Open3D library \cite{Zhou2018} to downsample scan points to get candidate marker locations. We used the GTSAM library \cite{gtsam} to create factor graphs and estimate covariances in Gaussian approximations of camera pose distributions. Additionally, we implemented a simulation system for testing marker placement algorithms and a camera localization module for estimating camera poses of test images. Fig.~\ref{fig:system} presents a flowchart of the system. The system adds markers to a scene model at positions planned by marker placement algorithms and then acquires test images from the same set of camera poses for different marker placements, for fairness of comparison. We stress three advantages of the simulation system over real-world pipelines for performing camera localization experiments: 1) reproducible data collection by other researchers for future development of marker placement algorithms, 2) a large number of test images that cover the scene, and 3) consistent camera poses for acquiring test images in scenes with different marker placements. \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{figs/loc_module.pdf} \caption{The localization module using fiducial marker detection. The numbers indicate the order of different operations. The map data consists of RGB images (mapping images), camera poses, and depth for computing 3D points. The test data only includes RGB images (test images). We detect markers, compute the VLAD descriptor, and extract features for any RGB image. The markers and the VLAD descriptor of a test image are used to find the $N$ best matches in the mapping images.
Feature points in the matched images are used to estimate the pose of the test image.} \label{fig:loc-module} \end{figure} \subsection{Evaluation} \begin{figure*}[t!] \centering \includegraphics[width=1.0\linewidth]{figs/all_model_res2.pdf} \caption{Results for three scene models: (a) 3D scene models, (b) ground plane space with no markers, where darker dots indicate lower localizability scores, (c) optimized marker placements, where the red arrows represent the placed markers and the numbers beside the arrows indicate the order of marker placements, and (d) the recall in camera localization experiments, where we generated 5 random placements for each scene and report the mean and standard deviation.} \label{fig:all-model-res} \end{figure*} \subsubsection{Methods for comparison} We compare our algorithm OMP with 1) no marker placement and 2) random marker placements. Random marker placements refer to uniformly weighted samples from feasible marker poses. We generated 5 versions of random placements for each scene, and all randomly placed markers were manually inspected in the scene models to ensure reasonable quality of the random placements. \subsubsection{Scene models} The method comparison is performed on three scene models, named apartment, studio, and office, as seen in Fig.~\ref{fig:all-model-res}. The first two models are pre-built dense maps of real-world spaces provided by the Habitat-Matterport 3D (HM3D) Research Dataset \cite{ramakrishnan2021hm3d}, while the last model is an Unreal Engine simulation environment that resembles typical real-world offices\protect\footnotemark. Table~\ref{model-spec} lists specifics of these models. \subsubsection{The localization module} Fig.~\ref{fig:loc-module} presents the flowchart of our localization module. The localization module is similar to standard approaches \cite{sarlin2019coarse} but with an extra function of fiducial marker detection. The fiducial marker detection was provided by the AprilTag library \cite{olson2011apriltag}. Our implementation of VLAD descriptors \cite{jegou2011aggregating} was adapted from \cite{vlad-repo}. The tag detection and VLAD descriptors were sequentially employed to find 10 matched images in the map data (i.e., $N=10$ in Fig.~\ref{fig:loc-module}). Camera poses were estimated using P3P~\cite{gao2003complete} with RANSAC~\cite{fischler1981random} followed by Levenberg-Marquardt optimization~\cite{opencv_library}. The rotation error $\delta_{\rot}$ is defined as the angular distance between the estimated rotation matrix $\est{\rot}$ and the groundtruth rotation $\TrueRot$, while the translation error $\delta_{\tran}$ is defined as the Euclidean distance between the estimated translation $\est{\tran}$ and the groundtruth translation $\true{\tran}$, as seen in \begin{align} \delta_{\rot}&=\big| \text{arccos}\bigl( \frac{\tr({\est{\rot}\transpose \TrueRot})-1}{2} \bigr) \big|,\\ \delta_{\tran}&=\big\| \est{\tran} - \TrueTran \big\|_{2}. \end{align} \subsubsection{The map and test data}\label{sec:test-data} The camera poses for collecting the map data are the same as the feasible camera poses in the ground plane space. The camera poses for collecting test images are sampled from the feasible camera poses with weights and then perturbed by translation and rotation noise subject to a uniform distribution in $[-0.5, 0.5]$. The weights in the sampling correlate with localizability scores so that more test images are generated around low-scoring camera poses.
Let $\locSet=\{\locVal(\camVal):\camVal \in \camSet \}$ be the set of localizability scores of feasible camera poses in the ground plane space with no markers. The weights are defined as \begin{equation} \weights = \{ 2\locVal^{\star} - \overline{l} - \locVal(\camVal):\camVal \in \camSet \}, \end{equation} where $\locVal^{\star}$ is the maximal score in $\locSet$ and $\overline{l}$ is the mean of all scores. Thus all weights are non-negative, and a lower score incurs a greater weight. \footnotetext{The serial number of the apartment model is \href{https://aihabitat.org/datasets/hm3d/00770-NBg5UqG3di3/index.html}{00770-NBg5UqG3di3} in the HM3D dataset and that of the studio model is \href{https://aihabitat.org/datasets/hm3d/00254-YMNvYDhK8mB/index.html}{00254-YMNvYDhK8mB}. The office model is the \href{https://www.unrealengine.com/marketplace/en-US/product/threedee-office}{ThreeDee Office} project in the Unreal Engine Marketplace.} \section{Introduction} Visual localization is a foundational technique for applications including AR/VR, autonomous driving, and robotic navigation and manipulation. A typical problem in visual localization is to estimate the camera pose of a query image, provided a pre-built map. While the problem has long been investigated in many fields~\cite{zhang2021reference}, visual localization still struggles in challenging scenes such as textureless walls and repetitive structures (e.g., Rooms A and B in Fig.~\ref{fig:demo-challenges}). One common solution to these challenges is to place fiducial markers as additional texture and identifiers in the scene~\cite{munoz2020ucoslam,DeGol_2018_ECCV}; however, placing fiducial markers in larger environments is a time-consuming process, and the resulting performance improvement depends on marker positions. Thus, optimizing marker placement is valuable for robust visual localization. This work proposes an automatic approach to optimizing marker placement such that 1) the resulting marker positions yield improved accuracy in visual localization and 2) a human user will be able to place markers at positions planned by the approach (e.g., no markers on the ceiling). Specifically, the approach computes optimized marker positions, given a predetermined set of markers and a scene model. The key contributions of this work include: \begin{enumerate} \item This is the first work that optimizes marker placement for visual localization based on scene features and fiducial markers. \item We propose a novel framework that models localizability of camera poses in a scene and computes localizability scores. \item We develop a greedy algorithm that optimizes marker positions with the goal of increased localizability scores. \item We design a simulation framework for testing marker placement algorithms on 3D scene models that enables others to reproduce and build on our work. \item We demonstrate that optimized marker placement by our approach can improve the localization rate by up to 20 percentage points on three different scenes. \end{enumerate} \begin{figure}[!t] \centering \includegraphics[width=.9\linewidth]{figs/showcase_hard.pdf} \caption{Three challenging examples for visual localization within the same scene. The images on the left and middle show two almost identical rooms in the scene, whereas the image on the right depicts a very weakly textured surface. Marker placements\protect\footnotemark in this scene guided by our optimized marker placement approach led to improved visual localization on these examples.
} \label{fig:demo-challenges} \end{figure} \footnotetext{Fiducial markers in the examples are AprilTags~\cite{olson2011apriltag}, but our algorithm is general and can be used with any existing family of fiducial markers.} \subsection*{Notation} Deterministic values are denoted by lowercase letters while random variables are indicated by uppercase letters. A set of values is denoted by calligraphic font (e.g., $\mathcal{L}=\{l\}$). We use $p(X)$ to denote the probability density function $p_X(\boldsymbol{\cdot})$ of random variable $X$. \section{Methods} \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{figs/overview.pdf} \caption{An overview of our approach for optimizing marker placements. We first create a set of feasible camera poses and marker poses by discretizing space in the 3D model. Then we evaluate localizability scores of feasible camera poses in the 3D model and update the scores once a feasible marker pose is selected to place a marker. A marker placement is selected by a greedy algorithm as the best among trial placements at the vacancies (unselected marker poses). The trial placements yield localizability-score gains and are ranked by these gains.} \label{fig:overview} \end{figure} We aim to compute $k$ 3D locations in the scene for placing $k$ fiducial markers such that, after marker placement, the camera localization performance improves for query images from anywhere within the scene. We describe our optimized marker placement (OMP) approach in the following sections. \subsection{Assumptions} This work makes two assumptions: \begin{enumerate} \item A textured 3D model of the scene is available. \item Markers and cameras are located on a 3D plane parallel to the ground plane at roughly the eye level of a person with average height. \end{enumerate} Note that the textured model can be a 3D simulation environment or a dense reconstruction of a scene. We will collect images (e.g., RGB, depth, and surface normal) and corresponding camera poses from the model and take them as input to our approach for optimizing marker placement. The second assumption ensures that our marker placement will be reachable by a human user and constrains the number of feasible camera and marker locations for the sake of computational efficiency. \subsection{Key Techniques} \label{sec:key-tech} Fig.~\ref{fig:overview} shows an overview of our approach. The approach is composed of three key techniques: 1) discretization, 2) evaluation of camera localizability, and 3) a greedy algorithm for selecting marker placements. \begin{figure}[t!] \centering \includegraphics[width=.9\linewidth]{figs/discretization2.pdf} \caption{Discretization of one of the 3D models from the Habitat-Matterport 3D dataset\protect\footnotemark. We select a ground plane in the 3D model at roughly eye level of a human user and create an occupancy grid map of the plane using depth images.
The discretized space of the ground plane consists of feasible marker poses (red arrows), which are sampled from scan points on the ground plane perimeter, and feasible camera locations (blue dots), which are centers of unoccupied cells.} \label{fig:discrete} \end{figure} \footnotetext{The 3D model in the figure is provided by the Habitat-Matterport 3D Research Dataset (model name: \href{https://aihabitat.org/datasets/hm3d/00770-NBg5UqG3di3/index.html}{00770-NBg5UqG3di3}) \cite{ramakrishnan2021hm3d}.} \begin{figure*}[ht] \centering \includegraphics[width=.9\linewidth]{figs/loc_score.pdf} \caption{Evaluation of localizability scores and the information gain brought by a marker placement. On the left we show a grid of feasible camera poses. Feasible camera poses are positioned at the center of cells with orientations shown as the red arrows. The field of view of camera pose $\camVal$ covers feature points $p_1$, $p_2$, and $p_3$ in the 3D model and a marker placement on the discretized perimeter of the level set of the ground plane. We synthesize measurements $\measVal$ of feature points to create a camera localization problem using scene features. The problem is represented by factor graph 1 and the distribution $p(\camVar|\measVal)$, from which we can compute the entropy as well as the localizability score of the camera pose seeing no markers. We penalize contributions of repetitive structures to the localizability score via the similarity analysis over scene features. With additional measurements $\markerVal$ of the marker, we create another localization problem, which is represented by factor graph 2 and the distribution $p(\camVar|\measVal, \markerVal)$. The new problem leads to a new entropy and a new localizability score. The difference between the old and new entropies defines the information gain at the camera pose yielded by the marker placement.} \label{fig:loc-score} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=0.9\linewidth]{figs/info_gain_res3.pdf} \caption{Results of localizability scores: (a) no markers, (b) a trial marker placement (red arrow), and (c) the information gain. The score (or gain) at a dot is the mean score (or gain) of camera poses with all feasible orientations at the location of the dot. The number of feasible orientations is 8. Darker dots stress low localizability scores in (a) and (b) and high information gains in (c).} \label{fig:info-gain-res} \end{figure*} \subsubsection{Discretization} We first convert the ground plane in the 3D model to a discretized space of camera and marker poses, as shown in Fig.~\ref{fig:discrete}. The conversion is implemented by occupancy grid mapping. Specifically, given the 3D model, we synthesize pseudo-laser scans on the ground plane and use them to create an occupancy grid. Centers of unoccupied grid cells are designated as feasible camera locations (dots in Fig.~\ref{fig:discrete}) while scan points form the perimeter of the free space (lines in Fig.~\ref{fig:discrete}). We uniformly downsample the scan points to generate a set of feasible marker poses $\markerSet$ on the perimeter of the ground plane (arrows in Fig.~\ref{fig:discrete}) whose orientations are determined by surface normals in the 3D model. We derive a set of feasible camera poses $\camSet$ from the feasible camera locations. Each of the camera locations yields $n$ camera poses whose optical axes are parallel to the ground plane and evenly spaced in $[0, 2\pi)$ (the default is $n=8$).
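For concreteness, this discretization step can be sketched in a few lines of Python (a minimal illustration under the stated assumptions; the function and variable names are ours, and our actual implementation additionally relies on Open3D for the downsampling):
\begin{verbatim}
import numpy as np

def discretize_ground_plane(scan_pts, scan_normals, free_mask,
                            origin, cell=0.25, n_orient=8, stride=10):
    # Feasible marker poses: uniformly downsampled perimeter scan
    # points, each oriented along its local surface normal.
    marker_poses = list(zip(scan_pts[::stride], scan_normals[::stride]))
    # Feasible camera poses: centers of unoccupied grid cells, each
    # paired with n_orient yaw angles evenly spaced in [0, 2*pi).
    yaws = 2.0 * np.pi * np.arange(n_orient) / n_orient
    cam_poses = []
    for i, j in zip(*np.nonzero(free_mask)):   # free_mask: True = free
        center = origin + cell * (np.array([j, i]) + 0.5)
        cam_poses += [(center, yaw) for yaw in yaws]
    return marker_poses, cam_poses
\end{verbatim}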
\subsubsection{Camera localizability score} We compute camera localizability scores by evaluating uncertainty in localizing feasible camera poses. Specifically, for any feasible camera pose $\camVal \in \camSet$ (the corresponding random variable is $\camVar$), we synthesize measurements $\measVal$ to create a camera localization problem, estimate the distribution of the camera pose $p(\camVar|\measVal)$, and define the localizability score of the camera pose $\locVal(\camVal)$ as the negation of the entropy of the distribution, as shown in \begin{equation} \locVal(\camVal) = -H(p(\camVar|\measVal))=\Expectation[\ln p(\camVar|\measVal)]. \label{eq:loc-score} \end{equation} If a new fiducial marker is added in the field of view (FOV) of the camera pose, the new synthetic measurement regarding the marker will change the entropy of the camera pose distribution, resulting in an information gain that quantifies the impact of the marker placement. Fig.~\ref{fig:loc-score} summarizes the steps for evaluating the localizability score and the information gain. These steps are explained in detail in the following paragraphs. \textbf{Synthetic localization problems for computing the localizability score and information gain}: The leftmost part of Fig.~\ref{fig:loc-score} illustrates feature points and a feasible marker pose (i.e., a trial marker placement) that are in the FOV of a feasible camera pose. We collect RGB and depth images at the camera pose in the 3D model. These images will be used to compute 3D points and descriptors of features (e.g., SIFT~\cite{Lowe04ijcv}). We use these known poses and points to synthesize measurements and estimate probability density functions (PDFs) of the camera pose variable. The PDF enables computation of the entropy as well as the localizability score in \eqref{eq:loc-score}. For example, measurements $\mathbf{z}$ in Fig.~\ref{fig:loc-score} contain the camera pose, the 3D points of features, and bearings between the camera pose and the 3D points. Thus the PDF $p(C|\mathbf{z})$, which is represented by factor graph 1, expresses the distribution of the camera pose constrained by features. Placing a marker in the FOV of the camera leads to new synthetic measurements $\mathbf{m}$ of the marker pose and the relative pose between the marker and the camera. As a result, the camera pose is further constrained by measurements $\mathbf{m}$ and is thus described by a new PDF $p(C|\mathbf{z}, \mathbf{m})$ represented by factor graph 2 in Fig.~\ref{fig:loc-score}. We use an approach that is similar to the one proposed by Stachniss et al. \cite{stachniss2005information} to define the information gain of a marker placement. The information gain is defined as the change of entropy that the marker placement $\markerVal$ yields at the camera pose $\camVal$, as seen in \begin{equation} I(\markerVal,\camVal)=H(p(\camVar|\measVal))-H(p(\camVar|\measVal,\markerVal)). \end{equation} Fig.~\ref{fig:info-gain-res}a shows localizability scores of camera poses in the original ground plane with no marker placement. Note that the score at a dot in the figure is the mean score of camera poses with all feasible orientations. Fig.~\ref{fig:info-gain-res}b shows localizability scores after adding a marker (the arrow) to the ground plane perimeter. The marker increases scores in the region around the marker, indicated by the brighter dots in the region in Fig.~\ref{fig:info-gain-res}b and the information gain in Fig.~\ref{fig:info-gain-res}c.
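For concreteness, these two quantities can be sketched in Python, assuming the marginal covariance of the camera pose has already been extracted from the factor graph (e.g., with GTSAM) and using the Gaussian-entropy formula given later in this section; the function names are illustrative only:
\begin{verbatim}
import numpy as np

def gaussian_entropy(cov):
    # H = (1/2) ln|Sigma| + (d/2)(1 + ln(2*pi)); d = 6 for 6DOF poses.
    d = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * logdet + 0.5 * d * (1.0 + np.log(2.0 * np.pi))

def localizability_score(cov):
    # Score of a camera pose: the negated entropy of p(C|z).
    return -gaussian_entropy(cov)

def information_gain(cov_no_marker, cov_with_marker):
    # I(m, c) = H(p(C|z)) - H(p(C|z, m)).
    return gaussian_entropy(cov_no_marker) - gaussian_entropy(cov_with_marker)
\end{verbatim}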
\begin{algorithm}[t] \fontsize{9pt}{9pt}\selectfont \DontPrintSemicolon \KwIn{The number of markers $k$, the list of feasible marker poses $\mathcal{M}$, the ground plane space $\mathcal{S}$} \KwOut{$k$ marker poses} \SetKwFor{RepTimes}{repeat}{times}{end} Initialize an empty list for storing selected marker poses $\mathcal{O}$ \RepTimes{$k$}{ Initialize the best marker pose $T^{\star}=\emptyset$ Initialize the highest localizability gain $g^{\star}=-\infty$ Evaluate localizability scores $\mathcal{L}^{\star}$ of camera poses in space $\mathcal{S}$ \For{Pose $T$ in $\mathcal{M}$}{ Place a marker at pose $T$ in space $\mathcal{S}$ Evaluate localizability scores $\mathcal{L}$ of camera poses Compute information gains $\mathcal{I}=\mathcal{L}-\mathcal{L}^{\star}$ Evaluate localizability gain $g$ of the marker by \eqref{eq:marker-loc-gain} \If{$g>g^{\star}$}{ $T^{\star}=T$ $g^{\star}=g$ } Remove the marker from space $\mathcal{S}$ } Push $T^{\star}$ to $\mathcal{O}$ Place a marker at pose $T^{\star}$ in space $\mathcal{S}$ Remove $T^{\star}$ from $\mathcal{M}$ } \Return{List of marker poses $\mathcal{O}$}\; \caption{Optimized Marker Placement (OMP)} \label{algo:imp} \end{algorithm} \textbf{Similarity analysis over feature points}: Repetitive structures in scenes cause similar features across RGB images and can result in localizing to a wrong location. To reduce the contribution of repetitive structures to localizability scores, we penalize the localizability score of a camera pose if similar features appear in the FOV of the camera. Specifically, when modeling similar feature points in factor graphs, we set greater uncertainty in noise models of feature point factors to encode the fact that similar features are ambiguous and less informative. Equation \eqref{eq:point-factor} shows the feature point factor that formulates the difference between the noisy 3D location $\noisy{\mathbf{p}}$ and the true 3D location $\mathbf{p}$ using a Gaussian distribution \begin{equation} p(\noisy{\mathbf{p}}|\mathbf{p})=\mathcal{N}(\noisy{\mathbf{p}}-\mathbf{p};\mathbf{0},\Sigma_{\mathbf{p}})\label{eq:point-factor} \end{equation} where $\Sigma_{\mathbf{p}}$ is the covariance we set for modeling noise. For example, in the leftmost part of Fig.~\ref{fig:loc-score}, feature points $\mathbf{p}_1$ and $\mathbf{p}_3$ are visually similar, so we set a large covariance in the feature point factors of $\mathbf{p}_1$ and $\mathbf{p}_3$. Informally speaking, factors with large covariances impose loose constraints on the camera pose distribution, leading to lower contributions to the localizability score. Thus the negative effect of repetitive structures is considered in the localizability score by modeling similar feature points in factor graphs with greater uncertainty. We perform a similarity analysis over scene features to determine noise models in feature point factors (i.e., $\Sigma_{\mathbf{p}}$ in \eqref{eq:point-factor}), as shown in the flow chart in Fig.~\ref{fig:loc-score}. The similarity analysis counts the number of similar feature points for each feature point. The resulting covariance $\Sigma_{\mathbf{p}}$ for the query point $\mathbf{p}$ is formulated as \begin{equation} \Sigma_{\mathbf{p}}=(1+n_{\mathbf{p}})\Sigma_0 \end{equation} where $\Sigma_0$ is a base covariance (e.g., $\mathrm{diag}(2.5,2.5,2.5)\times 10^{-3}\ \mathrm{m}^{2}$ in our experiments) and $n_{\mathbf{p}}$ denotes the number of similar feature points to the query point $\mathbf{p}$.
Feature points observed by all feasible camera poses are filtered to select the feature points similar to a query feature point. The selection is determined by two criteria: 1) the selected feature points have similar descriptors to the query feature point and 2) the 3D locations of selected feature points are not too close to the 3D location of the query feature point. The intuition is that, if two areas in the scene look similar but are far away from each other, a wrong place recognition would incur a huge localization error, so it is necessary to reflect such a situation in the localizability score. \textbf{Estimation of camera pose distributions}: We use the Laplace approximation~\cite[Ch. 4.4]{bishop2006pattern} to estimate a Gaussian distribution that approximates the camera pose distribution encountered in the synthetic localization problem. The Gaussian distribution is centered on the mode of the camera pose distribution, which is simply the known feasible camera pose. The covariance $\Sigma$ is the only unknown and can be approximated by the inverse of an estimated Hessian of the negative logarithm of the camera pose distribution at the mode (see \cite[Sec. 2]{kaess2009covariance} for the estimation of the covariance). Thus the entropy encountered in the synthetic localization problem can be approximated by the entropy of the Gaussian distribution, as seen in \begin{equation} H(p(C|\cdot)) \approx \frac{1}{2}\ln |\Sigma| +\frac{d}{2}(1+\ln(2\pi)) \end{equation} where the dimensionality $d$ is 6 for 6DOF poses. \subsubsection{The greedy algorithm}\label{sec:greedy} The algorithm sequentially selects $k$ poses from the feasible marker poses $\markerSet$ (see Algorithm~\ref{algo:imp}). The algorithm executes $k$ loops to search for the best $k$ poses. In each loop, we update localizability scores, tentatively place a marker at each feasible marker pose, and compute localizability gains of the trial marker placements. The pose that earns the highest localizability gain is removed from the feasible marker poses and permanently occupied by the marker. The new marker placement will influence the next update of localizability scores. We summarize information gains for all feasible camera poses in the scene, using a single scalar quantity that we refer to as localizability gain. Informally, one could think of the localizability gain as the reward for placing an additional marker at a specific position. The localizability gain of any marker placement $\markerVal$ is defined as the $q^{th}$ percentile of information gains at all feasible camera poses $\camSet$, as seen in \begin{equation} g(\markerVal)= \inf\{i\in \R: F_{I}(i) \geq \frac{q}{100}\}, \label{eq:marker-loc-gain} \end{equation} where $F_I(\cdot)$ is the cumulative distribution function (CDF) of the information gains \begin{equation} \mathcal{I}=\{I(\markerVal, \camVal): \camVal \in \camSet\}. \end{equation} The choice of percentile $q\in [0,100]$ is crucial and depends on the environment (i.e., the ground plane). For example, in a large environment where any marker is only visible to a small fraction of feasible camera poses, a low percentile $q$ would likely incur zero localizability gain for all markers since the camera poses seeing no markers receive a zero information gain and constitute a large portion of the information gain distribution $\mathcal{I}$.
\begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figs/pctls.pdf} \caption{Histograms for the HM3D apartment model: (a) percentage of affected camera poses and (b) information gains at camera poses yielded by a marker. The most visible $90\%$ of markers (i.e., $v=90$) corresponds to the $10^{th}$ percentile in (a), determining the percentile $q=99.76$ by \eqref{eq:q-v-relation}. The $99.76^{th}$ percentile in (b) indicates a localizability gain of 25.21 for the marker by \eqref{eq:marker-loc-gain}.} \label{fig:pctls} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figs/implementation2.pdf} \caption{The flowchart of our system for performing camera localization experiments. Scenes with different marker placements share the same set of camera poses for acquiring test images and the same localization module.} \label{fig:system} \end{figure} We use an adaptive approach to determine the percentile $q$ before computing the localizability gain. The approach introduces a hyperparameter $v\in[0,100]$ and ensures that the most visible $v$ percent of markers earn nonzero localizability gains. A high $v$ allows more markers, even the ones stuck in corners, to effectively join in the selection of the best marker, while a low $v$ favors the most visible ones among feasible marker poses. In the ground plane space, for any marker $\markerVal$, we can find a set of affected camera poses $\camSet_{\markerVal}$ that see the marker in their FOV (i.e., receive a nonzero information gain). We can derive a CDF $F_{P}(p)$ using the percentages of affected camera poses for all markers \begin{equation} \mathcal{P}= \left\{ \frac{|\camSet_{\markerVal}|}{|\camSet|}\times 100: \markerVal \in \markerSet \right\}. \end{equation} To ensure only the most visible $v$ percent of markers earn nonzero localizability gains, the percentile $q$ is determined by the $(100-v)^{th}$ percentile in percentages of affected camera poses, as seen in \begin{equation} q=100 - \inf\left\{p\in [0,100]: F_{P}(p) \geq \frac{100-v}{100}\right\}.\label{eq:q-v-relation} \end{equation} Equation \eqref{eq:q-v-relation} indicates that $q$ is a non-decreasing function of $v$. When $v$ approaches 100, $q$ approaches 100 as well, so only markers that earn a greater maximum in information gains will be considered in the best marker selection (see \eqref{eq:marker-loc-gain}); when $v$ approaches 0, $q$ approaches 0 as well, so the best marker will only be selected from markers that influence large areas. Thus the choice of the hyperparameter $v$ reflects the trade-off between helping the worst single camera pose and influencing the most camera poses. Fig.~\ref{fig:pctls} shows an example of computing the percentile $q$ and the localizability gain for the marker placement in Fig.~\ref{fig:info-gain-res}. We set $v=90$ as the default setting so the most visible $90\%$ of markers receive nonzero localizability gains and are effective best marker candidates. This setting results in a marker placement strategy that tends to support the worst camera poses rather than area coverage, as shown in the optimized marker placement for the apartment model in Fig.~\ref{fig:all-model-res}. No markers are placed in the two big rooms on the right of the apartment since (i) camera poses in these rooms already enjoyed good localizability scores (see Fig.~\ref{fig:info-gain-res}a) and (ii) a large hyperparameter $v$ does not emphasize area coverage.
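For reference, the two percentile computations above can be sketched in Python as follows (\texttt{np.percentile} interpolates between samples, so this only approximates the infimum-based definitions; the names are illustrative):
\begin{verbatim}
import numpy as np

def adaptive_percentile(affected_percents, v):
    # q = 100 - ((100 - v)-th percentile of the percentages of camera
    # poses affected by each marker), cf. the definition of q above.
    return 100.0 - np.percentile(affected_percents, 100.0 - v)

def localizability_gain(info_gains, q):
    # g(m): the q-th percentile of the information gains over all
    # feasible camera poses, cf. the definition of g above.
    return np.percentile(info_gains, q)
\end{verbatim}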
\section{Related Work} We briefly review some recent work related to mapping and localization with fiducial markers and marker/landmark placement planning. Examples of fiducial markers include tag families with explicit IDs (e.g., ArUco markers~\cite{aruco2014}, AprilTag~\cite{olson2011apriltag}, ChromaTag~\cite{DeGol:ICCV:17}) and emerging learning-based marker designs~\cite{zhang2022deeptag}. Fiducial markers are widely recognized as an effective approach for improving localization and mapping accuracy. DeGol et al.~\cite{DeGol_2018_ECCV} demonstrate that marker IDs are useful in image matching and resectioning for structure from motion (SfM), leading to improvements in reconstruction results. The UcoSLAM system~\cite{munoz2020ucoslam} integrates marker detection with a bag-of-words approach and presents more robust tracking and relocalization than SLAM techniques with no marker detection~\cite{mur2017orb, gao2018ldso}. However, marker placements in these SfM or SLAM systems are manually determined and not planned by algorithms. To the best of our knowledge, there is no prior work on optimizing marker placement for visual localization based on scene features and fiducial markers. Existing related work focuses on landmark deployment for robotic localization without considering scene features \cite{chen2006practical,vitus2011sensor,jourdan2008optimal}. Beinhofer et al. \cite{beinhofer2013effective} explore optimal placement of artificial landmarks such that a robot equipped with range and/or bearing sensors repeatedly follows predetermined trajectories in planar environments with improved accuracy. Lei et al. \cite{lei2022tie} investigate landmark deployment for poses on $\SE(3)$ and demonstrate placing fiducial markers in a cubic environment; however, features in the scene are not involved in optimizing the marker placement. \section{Results} We present three sections of results. In the first section, we present results comparing different marker placement methods. Next, we show a parameter study about factors that can affect our algorithm and the localization performance. Finally, we present a sensitivity study about the influence of marker position and size deviations on the localization performance. The recall, the main metric we use, is defined as the percentage of test images that are localized. A test image is counted as localized if the translation error is lower than 5 cm and the rotation error is lower than 5 degrees. The default hyperparameter $v$ is 90. \begin{table} \caption{Specifics of models} \label{model-spec} \begin{tabularx}{\linewidth}{@{} l *{2}{C} c @{}} \toprule Model & Area ($\mathrm{m}^2$) & $\#$ of map images & $\#$ of test images\\ \midrule Apartment & 339.3 & 10856 & 10000 \\ Studio & 149.6 & 2832 & 3000 \\ Office & 108.3 & 1768 & 2000 \\ \bottomrule \end{tabularx} \end{table} \subsection{Comparison of Marker Placement Methods} \textbf{Optimized marker placements for three scene models can be found in Fig.~\ref{fig:all-model-res}.} It is evident that our algorithm focuses on placing markers around low-scoring areas and improves mean localizability scores by a large margin. For example, the largest room in the studio model receives only a single marker (marker 9 on the top right of the studio) since the room already possesses good localizability scores even with no markers.
\textbf{Optimized marker placements consistently outperform no marker placement and random placements in terms of recall.} Optimized marker placements and 5 sets of random marker placements were separately applied to each model for performing camera localization experiments. The recall of test images for the three models can be found in Fig.~\ref{fig:all-model-res}. After placing 20 markers, our algorithm improves the recall by over 1.5 percentage points for the apartment model, 3.0 percentage points for the studio model, and 20.0 percentage points for the office model. Note that the area of the apartment model is very large and the model already attains a high recall of $85\%$ without any markers, so the recall improvement for the apartment model was expected to be smaller than that for the other models. \subsection{Parameter Study} In the parameter study, we design three experiment groups and change one of the default parameters in each experiment group. The experiment groups are 1) different values of $v$ in the greedy algorithm, 2) enabling/disabling tag detection in the localization module, and 3) low-scoring/uniform test data, as seen in Table~\ref{table:param-study}. The default setting is with $v=90$, tag detection enabled, and the low-scoring test data that has more test images in low-scoring areas in the ground plane. For the parameter study, we use the office model. \begin{table} \caption{Parameter study about the hyperparameter $v$, the test data, and enabling/disabling tag detection. The default (df.) setting is with $v=90$, tag detection enabled, and the low-scoring test data. The hyperparameter $v$ means selecting the best poses among the most visible $v\%$ of feasible marker poses. The uniform test data means test images are uniformly sampled from the ground plane, while the low-scoring data means sampling test images with weights favoring low-scoring areas. No tag ID or relative pose is retrieved when tag detection is turned off.} \label{table:param-study} \begin{tabularx}{\linewidth}{@{} l *{5}{C} @{}} \toprule \multirow{2}{*}{Experiment group} & \multicolumn{5}{c}{Recall of test images with $k$ markers (\%)} \\ \cline{2-6} & $k=0$ & $5$ & $10$ & $15$ & $20$ \\ \midrule $v=90$ (df.) & \textbf{48.6} & \textbf{55.5} & \textbf{62.1} & \textbf{65.7} & \textbf{69.2} \\ $v=99$ & 48.6 & 55.5 & 60.4 & 64.5 & 67.4 \\ $v=70$ & 48.6 & 54.8 & 61.1 & 63.2 & 66.6 \\ $v=50$ & 48.6 & 54.3 & 57.6 & 63.9 & 66.8 \\ \midrule Tag detect. on (df.) & \textbf{48.6} & \textbf{55.5} & \textbf{62.1} & \textbf{65.7} & \textbf{69.2} \\ Tag detect. off & 48.6 & 55.2 & 60.7 & 64.2 & 67.5 \\ \midrule Low-scoring data (df.) & 48.6 & 55.5 & 62.1 & 65.7 & 69.2 \\ Unif. test data & \textbf{57.4} & \textbf{63.7} & \textbf{68.4} & \textbf{72.1} & \textbf{74.8} \\ \bottomrule \end{tabularx} \end{table} \textbf{Values of the hyperparameter $v$ that are too large or too small incur smaller improvements in the recall.} The hyperparameter was introduced to compute the localizability gain of a marker. As explained in Sec.~\ref{sec:greedy}, a lower $v$ favors markers that cover larger areas while a greater $v$ tends to stress the worst single camera pose. Table~\ref{table:param-study} shows that the default value ($v=90$) consistently outperforms the small value 50 and the large value 99, indicating that the default attains a good balance between area coverage and helping the worst cases.
\textbf{The localizability score can be a good indicator of localization errors.} As explained in Sec.~\ref{sec:test-data}, we sampled more test images in low-scoring areas by the default setting. We also examine the recall of test images that are uniformly sampled in the ground plane space. Table~\ref{table:param-study} shows that uniform test samples enjoy greater recall than test samples that stress low-scoring areas by at least 5 percentage points. \textbf{Both the texture and ID of tags are helpful for localization.} We disable tag detection in the localization module (Fig.~\ref{fig:loc-module}) to investigate the impact of tag detection on the recall. Table~\ref{table:param-study} shows that tags still improve the recall even though the tag detector is turned off (i.e., no tag ID and relative pose). The reason is that the texture of markers is still helpful for coarse localization and pose estimation in the localization module. \subsection{Sensitivity study of marker sizes and positions} It is quite likely that a user will not be able to place fiducial markers exactly at the positions computed by the OMP algorithm; meanwhile, different users may print fiducial markers with different sizes. Thus we investigate the impact of position deviations and marker sizes on the recall. For the sensitivity study, we used the office model. \textbf{Enlarging markers up to a certain size keeps increasing the recall.} Fig.~\ref{fig:sens-study}a shows that, under 50 cm, larger tag widths lead to greater recall (note that the 50 cm threshold is expected to depend on the environment). Excessively large sizes can degrade the recall because the markers become too big to be detected from nearby views. \textbf{Mild position deviations slightly degrade the performance of the optimized marker placement.} All 20 markers planned by the OMP algorithm were moved left or right by certain distances to implement position deviations. Fig.~\ref{fig:sens-study}b shows the recall can decrease by 2 percentage points in the presence of $\pm0.25$ meter position deviations and by 5 percentage points in the presence of $\pm$1 meter position deviations, compared with zero position deviation. However, marker placements with these position deviations still outperform no marker placement by a large margin ($\sim$15 percentage points in recall). \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{figs/sens.pdf} \caption{Sensitivity study: (a) re-sizing tags and (b) applying different position deviations to marker poses planned by the OMP algorithm.} \label{fig:sens-study} \end{figure}
\section{Introduction} Primordial black holes (PBHs) are black holes which may have formed in the early universe. The possible existence of PBHs was initially considered by Novikov and Zel'dovich \cite{1967SvA....10..602Z}, followed shortly by work from Hawking and Carr \cite{Hawking:1971ei,Carr:1974nx,Carr:1975}. PBHs are of great interest cosmologically because, in addition to being a viable dark matter candidate \cite{Carr:2009jm,Carr:2020xqk,Carr:2020gox}, PBHs provide unique constraints on the primordial power spectrum, and have been proposed to be the source of the gravitational waves from merging black holes observed by LIGO-Virgo \cite{Clesse:2017bsw,DeLuca:2019buf,Mirbabayi:2019uph,Postnov:2019tmw,Fernandez:2019kyb,He:2019cdb,LIGOScientific:2018jsj,LIGOScientific:2018mvr,DeLuca:2020bjf}. Numerous mechanisms have been proposed for their formation (for example, from cosmic strings \cite{Hawking:1987bn} or bubble collisions \cite{Hawking:1982ga}; see \cite{Green:2014faa} for a review), although we will here focus on PBHs which form from the gravitational collapse of large density perturbations. Carr's initial work \cite{Carr:1974nx} calculated that if the density contrast is above some threshold value $\delta_c$ then that region will collapse to form a PBH when it enters the horizon. An order-of-magnitude estimate was performed, finding that the density contrast should be greater than the equation-of-state parameter in order for a PBH to form, $\delta_c=\omega$ (with $\omega=1/3$ during radiation domination). However, there has since been an extensive amount of work to determine the collapse threshold, settling on a slightly larger value $\delta_c\simeq 0.5$ \cite{Musco:2004ak,Musco:2008hv,Musco:2012au,Musco:2018rwt,Harada:2015ewt,Harada:2015yda,Nakama:2013ica,Nakama:2014fra,Shibata:1999zs,Niemeyer:1999ak,Polnarev:2006aa,Escriva:2019phb,Escriva:2020tak}. This formation threshold is orders of magnitude larger than perturbations seen in the CMB, and so in order for a significant number of PBHs to form, the power spectrum on scales which form PBHs must also be orders of magnitude larger than observed on cosmological scales. There are many models which do make this prediction (for example, \cite{Drees:2011hb,Bugaev:2013fya,Ozsoy:2018flq,GarciaBellido:1996qt,Lyth:2012yp,Bugaev:2011wy,Ballesteros:2018wlw}, amongst many others). Researchers building models of inflation typically make predictions for the power spectrum (and higher-order correlation functions) in terms of the curvature perturbation $\zeta$, which appears as a perturbative quantity in the FLRW metric in the comoving uniform-density gauge as \begin{equation} \mathrm{d}s^2 = -\mathrm{d}t^2 + a^2(t)\exp\left( 2 \zeta \right)\mathrm{d}\mathbf{X}^2, \label{eqn:metric} \end{equation} where $\zeta$ can be seen to be an effective rescaling of the spatial coordinate $\mathbf{X}$. When we want to predict the abundance of PBHs given a particular model, or else constrain a particular model using constraints on PBHs, it is desirable to be able to relate the power spectrum and non-Gaussianity of $\zeta$ to the PBH abundance (see \cite{Carr:2020gox,Gow:2020bzo} for a recent compilation of constraints on the PBH abundance and consequent constraints on the power spectrum).
It has been shown that primordial non-Gaussianity can have a strong impact on the abundance of PBHs \cite{Bullock:1996at,Ivanov:1997ia,Byrnes:2012yx,Shandera:2012ke,Young:2013oia,Young:2015cyn,Franciolini:2018vbk,Yoo:2020dkz,Yoo:2019pma,Atal:2019cdz,Atal:2018neu,Riccardi:2021rlf,Kitajima:2021fpq}, as well as the mass function and initial clustering \cite{Young:2019gfc}. It has also been argued recently that, in models which predict large numbers of PBHs, there is expected to be a large amount of non-Gaussianity \cite{Figueroa:2020jkf,Biagetti:2021eep}. It is therefore important to correctly account for the effect of non-Gaussianity when deriving constraints on the power spectrum from PBHs. In this paper, we will reconsider the effect of non-Gaussianity on the PBH abundance, accounting for recent developments in the field. The layout of the paper is as follows: in section \ref{sec:criteria} we will discuss the formation criteria for PBHs in the context of local-type non-Gaussianity, in section \ref{sec:variables} we will discuss the technical details of the calculation and introduce variables, section \ref{sec:NGabundance} calculates the effect of local-type non-Gaussianity on the PBH abundance, a comparison is made to previous literature in section \ref{sec:literature}, and finally section \ref{sec:conclusions} contains the conclusions reached in the paper. \section{Formation criteria} \label{sec:criteria} The most suitable parameter to use to determine whether a perturbation will collapse to form a PBH is the compaction function $C$ \cite{Young:2014ana,Musco:2018rwt,Young:2019osy}, which is defined as \begin{equation} C(\mathbf{x},r) \equiv 2 \frac{\delta M(\mathbf{x},r,t)}{R(\mathbf{x},r,t)}, \end{equation} where $\delta M(\mathbf{x},r,t)=M(\mathbf{x},r,t)-M_b(\mathbf{x},r,t)$ is the mass excess within a sphere of areal radius $R(\mathbf{x},r,t)=a(t)\exp(\zeta(\mathbf{x}))r$ centred on spatial coordinate $\mathbf{x}$; $M(\mathbf{x},r,t)$ is the Misner-Sharp mass and the subscript $b$ denotes the background value in a region of unperturbed space. The compaction is closely related to the density contrast, and is used to describe the amplitude of density perturbations with a time-independent parameterisation (whilst the individual components of the compaction are time-dependent, the overall function is not). The compaction is discussed in more detail in appendix \ref{app:compaction}, and the benefits of using such a parameter are also described in detail in reference \cite{Young:2019osy}. If the compaction is above some critical value $C_\mathrm{th}$ in a region whilst it is super-horizon, a PBH will form once the region re-enters the horizon, with a mass given by \begin{equation} M_\mathrm{PBH}\left( C \right) = K M_\mathrm{H}\left( C - C_\mathrm{th} \right)^\gamma, \label{eqn:mass} \end{equation} where $M_\mathrm{H}$ is the horizon mass of the unperturbed background at the time when the horizon scale is equal to the smoothing scale used (in comoving units), and we take the values $K=4$, $C_\mathrm{th}=0.5$ and $\gamma=0.36$ \cite{Young:2019yug}. There are several different approaches which can be used to calculate the PBH abundance. The simplest, a Press-Schechter-type approach (also referred to as the threshold statistics approach), simply states that PBHs form in regions where the compaction at a given point is above the threshold. Peaks theory provides a more accurate calculation by adding the condition that PBHs will form at locations where the compaction function is at a maximum (i.e.
where there is a peak in the compaction). Recent developments have also included a condition to determine the scale of peaks in the compaction - since the mass of a PBH depends non-trivially on both the scale and amplitude of a perturbation \cite{Young:2020xmk,Germani:2019zez}. The peak constraint $c_\mathrm{pk}$ describes these criteria and gives the number of peaks (either 1 or 0) of height $\bar{C}$ and scale $r$ in the infinitesimal volume $\mathrm{d}^3x\mathrm{d}r$, \begin{equation} c_\mathrm{pk} = \delta_D\left(C-\bar{C}\right) \delta_D^{(3)}\left( \bar{\nabla}_i C \right) \Theta_H\left( \lambda_3 \right) \delta_D\left( \frac{\mathrm{d}C}{\mathrm{d}r} \right)\Theta_H\left(- \frac{\mathrm{d}^2 C}{\mathrm{d}r^2} \right), \label{eqn:peakConstraint} \end{equation} where $\delta_D^{(n)}$ is the $n$-dimensional Dirac-delta function, $\Theta_H$ is the Heaviside step function, and $\lambda_3$ is the smallest eigenvalue of $\nabla_i\nabla_jC$. For a sufficiently narrow spectrum, the criterion to determine the scale of the perturbation is unnecessary, essentially because all perturbations have the same characteristic scale, and the PBH abundance can be calculated by considering only this single scale \cite{Young:2020xmk,Germani:2019zez}. If the probability density functions (PDFs) of the compaction $C$ and its derivatives are known, the number of peaks of given height and scale can be calculated, and from there the PBH abundance. In general, this is problematic, because there is no analytic expression for the compaction in terms of $\zeta$. However, in the high-peak limit, the simplifying assumption is usually made that peaks are spherically symmetric, which is considered a suitable approximation since the rare peaks that form PBHs are expected to be spherically symmetric \cite{Bardeen:1985tr}. The validity of this assumption is discussed in appendix \ref{sec:validity}. Under the assumption of spherical symmetry, the compaction can be expressed in terms of the linear component $C_1(\mathbf{x},r)$ \cite{Harada:2015yda} (a brief derivation of this is found in appendix \ref{app:compaction}): \begin{align} C(\mathbf{x},r) &=- \frac{4}{3}r\zeta'(r)\left( 1+\frac{1}{2}r\zeta'(r) \right),\\ &= C_1(\mathbf{x},r)-\frac{3}{8}C_1(\mathbf{x},r)^2. \label{eqn:quadratic} \end{align} There is a maximum value for the compaction, $C_\mathrm{max}=2/3$, which occurs when $C_1=4/3$. Perturbations with $C_1<4/3$ are referred to as type I perturbations and, if above the threshold value, can form PBHs with a mass dependent on both the scale and amplitude of the perturbation. Perturbations with $C_1>4/3$ are referred to as type II perturbations (whereby the areal radius does not increase monotonically with the radial coordinate $r$). It was previously thought that such perturbations did not form PBHs (instead forming separate universes), but reference \cite{Kopp:2010sh} showed that, rather, type II perturbations always lead to the formation of PBHs. However, it is not possible to simulate the formation of such PBHs using the density, and as a result, the formation and resultant mass of such PBHs is not well understood. In addition, the abundance of type II perturbations is exponentially suppressed compared to type I perturbations, and so we will neglect type II perturbations for the remainder of this paper. With this in mind, inverting equation \eqref{eqn:quadratic} and keeping only the relevant solution gives \begin{equation} C_1 = \frac{4}{3}\left( 1 - \frac{ \sqrt{2-3 C} }{\sqrt{2}} \right).
\label{eqn:C1ofC} \end{equation} \subsection{Effect of non-Gaussianity} In this section, we will consider the effect of local-type non-Gaussianity on the compaction function. The effect of non-Gaussianity on the profile shape and threshold for collapse was studied in reference \cite{Kehagias:2019eil}, finding that non-Gaussianity has a small impact on the formation threshold, and we will thus neglect these effects for the remainder of this paper. We can express the effect of local-type non-Gaussianity by writing the curvature perturbation $\zeta$ as a series in terms of the Gaussian variable $\zeta_G$ \begin{equation} \zeta = \zeta_G+f\left( \zeta_G^2-\langle \zeta_G^2 \rangle \right)+g\zeta_G^3+\cdots, \label{eqn:localNG} \end{equation} where $f=3f_\mathrm{NL}^\mathrm{local}/5$ and $g=9g_\mathrm{NL}^\mathrm{local}/25$ describe the level of non-Gaussianity. Higher-order terms are neglected here, but were studied in reference \cite{Young:2013oia}, finding the effects of odd- or even-order terms to be qualitatively similar to the quadratic and cubic terms respectively (although our results here will suggest that they have a much smaller effect). The $\langle\zeta_G^2 \rangle$ term is included such that the expectation value remains zero, $\langle\zeta\rangle=0$. The quadratic term introduces skewness to the distribution, whilst the cubic term affects the kurtosis. It is worth noting that an expansion such as this may not be valid when studying PBHs, since higher-order terms can have a large impact and may not be negligible \cite{Young:2013oia,Atal:2019erb,DeLuca:2022rfz} - although we will see later in the calculation presented here that the effect of higher-order terms will be suppressed compared to previous calculations. Peaks in $\zeta_G$ typically correspond to peaks in $\zeta$, but can be troughs depending on the amplitude of the peak and the values of $f$ and $g$ (this will be discussed in more detail in section \ref{sec:NGabundance}). In addition, in the high-peak limit which will be relevant for studying PBH formation, we can continue to make the assumption that relevant peaks are spherically symmetric - in which case peaks in one variable will correspond to peaks (or troughs) in any other variable we will consider, such as the density or compaction. Neglecting higher-order terms, we can now express the linear component of the compaction as \begin{equation} C_1(\mathbf{x},r)=-\frac{4}{3}r\zeta'(r) =-\frac{4}{3} r\zeta_G'(r)\left(1+2 f\zeta_G(r)+3g \zeta_G(r)^2\right), \label{eqn:c1} \end{equation} which can be substituted into equation \eqref{eqn:quadratic} to give the full expression for the compaction. In principle, the number density of peaks of given scale and amplitude (which then gives the abundance of PBHs of given mass) can be calculated by numerically integrating the peak constraint, equation \eqref{eqn:peakConstraint}, over the probability density function (PDF) of the relevant variables and their first and second derivatives (see e.g.\ section III of \cite{Young:2020xmk} for more information). In the following sections, we will formulate a simple procedure to calculate the abundance of PBHs by taking the high-peak limit. \section{Variables, correlation factors, and probability density functions} \label{sec:variables} In the context of local-type non-Gaussianity, the compaction depends on the terms $-\frac{4}{3}r\zeta_G'(r)$ and $\zeta_G(r)$, and the PDF of the compaction $P_C(C)$ can therefore be expressed by making use of the (Gaussian) PDFs of these terms.
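As a concrete illustration of the relations introduced so far (the mass scaling relation, the quadratic mapping between $C$ and $C_1$, and the local-type expansion of $C_1$), the following is a minimal numerical sketch; it is our own illustration rather than code from the literature, written in Python with NumPy, and all function names are ours.
\begin{verbatim}
import numpy as np

K, C_TH, GAMMA = 4.0, 0.5, 0.36   # values adopted in the text

def mass_pbh(C, M_H=1.0):
    # Mass scaling relation M_PBH = K M_H (C - C_th)^gamma, for C > C_th
    C = np.asarray(C, dtype=float)
    return np.where(C > C_TH, K * M_H * np.abs(C - C_TH)**GAMMA, 0.0)

def C_of_C1(C1):
    # Full compaction from its linear component: C = C1 - (3/8) C1^2
    return C1 - 0.375 * C1**2

def C1_of_C(C):
    # Inverse mapping for type I perturbations (C1 < 4/3)
    return (4.0/3.0) * (1.0 - np.sqrt(2.0 - 3.0*C) / np.sqrt(2.0))

def C1_local_NG(C_G, zeta_G, f=0.0, g=0.0):
    # Linear compaction with local NG, where C_G = -(4/3) r zeta_G'(r)
    return C_G * (1.0 + 2.0*f*zeta_G + 3.0*g*zeta_G**2)

print(C1_of_C(0.5))   # threshold C_th = 0.5 maps to C1_th ~ 0.67
\end{verbatim}
For example, the final line recovers the threshold value $C_{1,th}\simeq 0.67$ used later in the paper.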
The term $-\frac{4}{3}r\zeta_G'(r)$ is the expression that one would obtain by convolving the linear expression for the density contrast, $\delta_l=-4 \nabla^2\zeta_G/9$, with a top-hat smoothing function $W(\mathbf{x},r)$ \cite{Young:2019yug} at the centre of a spherically symmetric peak. We will refer to this quantity as the Gaussian component of the compaction, $C_G$: \begin{equation} C_G(\mathbf{x},r) = -\frac{4}{9}r^2\int\mathrm{d}^3\mathbf{y} \nabla^2\zeta_G(\mathbf{y})W\left( \mathbf{x}-\mathbf{y} ,r \right) = -\frac{4}{3}r\zeta_G'(r), \end{equation} where $\mathbf{x}$ is the centre of a peak (corresponding to $r=0$), and spherical symmetry has been assumed in the second equality. The smoothing function is given by \begin{equation} W(\mathbf{x},r) = \frac{3}{4\pi r^3}\Theta_\mathrm{H}\left( r-x \right), \end{equation} where $\Theta_\mathrm{H}(x)$ is the Heaviside step function. The Fourier transform of this window function is \begin{equation} \tilde{W}(k,r) = 3\frac{\sin(kr)-kr\cos(kr)}{(kr)^3}. \label{eqn:FourierTH} \end{equation} Similarly, since the $\zeta_G(r)$ term originates from the surface term of an integral over a sphere of radius $r$, it can be expressed as a smoothing of the curvature perturbation with a spherical-shell function. We will refer to this term as $\zeta_r$: \begin{equation} \zeta_r(\mathbf{x}) = \int\mathrm{d}^3\mathbf{y} \zeta_G(\mathbf{y})W_s\left( \mathbf{x}-\mathbf{y} ,r \right) = \zeta_G(r), \end{equation} where $\mathbf{x}$ is the centre of a peak (corresponding to $r=0$), and spherical symmetry has been assumed in the second equality. The spherical-shell window function $W_s$ is given by \begin{equation} W_s(\mathbf{x},r) =\frac{1}{4\pi r^2} \delta_D\left(x-r \right), \end{equation} with Fourier transform given by \begin{equation} \tilde{W}_s(k,r) = \frac{\sin(kr)}{kr}. \end{equation} For ease of reference, we will now define some of the variables which will be used throughout the remainder of the paper. Firstly, we define the following integrals over the Gaussian component of the power spectrum, which will be needed for the covariance matrix and the calculation of the PBH abundance later: \begin{align} \begin{split} \sigma_{\zeta}^2 &= \int\limits_0^\infty \frac{\mathrm{d}k}{k}\mathcal{P}_{\zeta_G},\\ \sigma_c^2 &= \frac{16}{81}\int\limits_0^\infty \frac{\mathrm{d}k}{k}(kr)^4 \tilde{W}^2(k,r)\mathcal{P}_{\zeta_G},\\ \sigma_n^2 &= \frac{16}{81}\int\limits_0^\infty \frac{\mathrm{d}k}{k}(kr)^4 \tilde{W}^2(k,r)k^{2n}\mathcal{P}_{\zeta_G},\\ \sigma_r^2 &= \int\limits_0^\infty \frac{\mathrm{d}k}{k} \tilde{W}_s^2(k,r) \mathcal{P}_{\zeta_G},\\ \sigma_{cr}^2 &= \frac{4}{9}\int\limits_0^\infty \frac{\mathrm{d}k}{k}(kr)^2 \tilde{W}(k,r)\tilde{W}_s(k,r) \mathcal{P}_{\zeta_G},\\ \sigma_{c\zeta}^2 &= \frac{4}{9}\int\limits_0^\infty \frac{\mathrm{d}k}{k}(kr)^2 \tilde{W}(k,r) \mathcal{P}_{\zeta_G}. \end{split} \label{eqn:moments} \end{align} We will also define the following variables, with unit variance: \begin{equation} \nu_c = \frac{C_G}{\sigma_c}, \quad \nu_r=\frac{\zeta_r}{\sigma_r}, \quad z = \frac{\nu_r-\gamma_{cr}\nu_c}{\sqrt{1-\gamma_{cr}^2}}, \end{equation} where \begin{equation} \gamma_{cr}=\frac{\sigma_{cr}^2}{\sigma_c\sigma_r} \end{equation} is the correlation coefficient of $\nu_c$ and $\nu_r$. The reason for introducing the variable $z$ is to diagonalise the 2-variate Gaussian appearing in the next section.
\subsection{The 2-variate Gaussian probability density function} The PDF of $C_G$ and $\zeta_r$ can be described with a 2-variate Gaussian \begin{equation} \mathcal{N}\left( \mathbf{Y} \right) = (2\pi)^{-1/2}\det\left(\mathbf{\Sigma}\right)^{-1/2} \exp\left( -\frac{1}{2}\mathbf{Y}^T \mathbf{\Sigma}^{-1} \mathbf{Y} \right) \end{equation} where $\mathbf{Y}=\left[ \nu_c,\nu_r \right]$, and $\Sigma$ is the covariance matrix \footnote{When considering peaks theory, there are other relevant scalar variables which normally appear in the multi-variate Gaussian, such as the laplacian of $C_G$. However,they will not be important when we consider the high peak limit, and so we will not consider them further here.}. The PDF of the compaction, $P(C)$, can be calculated by integrating the 2-variate Gaussian over the range of values of $C_G$ and $\zeta_r$ which give the specified value of $C$, \begin{equation} P(C) = \int\mathrm{d}C_G\mathcal{N}\left(C_G,\zeta_r(C,\nu_c,f,g)\right), \end{equation} where $\zeta_r(C,C_G,f,g)$ is expressed as a function of $C,\nu_c,f$ and $g$. In theory, this is the solution of a quartic solution, but as we will see, it is not necessary to calculate this in the high-peak limit. To simplify the calculation, we can diagonalise the PDF by expressing it in terms of $\nu_c$ and $z$: \begin{equation} \mathcal{N}(\nu_c,\zeta_r) =\mathcal{N}\left(\nu_c\right)\mathcal{N}\left( z \right)= \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}\nu_c^2\right)\frac{1}{\sqrt{2\pi}}\exp \left( -\frac{1}{2}z^2\right), \end{equation} where $\mathcal{N}(z)$ can be expressed as \begin{equation} \mathcal{N}(z) = \mathcal{N}\left( \frac{\nu_r-\gamma_{cr}\nu_c}{\sqrt{1-\gamma_{cr}^2}} \right) = \frac{1}{\sqrt{2\pi(1-\gamma_{cr}^2)}}\exp\left( -\frac{\left( \nu_r - \gamma_{cr}\nu_c \right)^2}{2 \left(1-\gamma_{cr}^2\right)} \right). \end{equation} In the high-peak limit, $\nu_c\rightarrow \infty$, then $\mathcal{N}(z)$ can be approximated as a Dirac-delta function: \begin{equation} \mathcal{N}(\nu_r) = \delta_D\left( \nu_r - \gamma_{cr}\nu_c \right). \end{equation} This means that, when integrating over $\mathcal{N}(\nu_r)$, we can simply make the substition $\nu_r = \gamma_{cr}\nu_c$, or in terms of $C_G$ and $\zeta_r$ we can write, \begin{equation} \zeta_r = \gamma_{cr}\frac{\sigma_r}{\sigma_c}C_G. \label{eqn:variableRelation} \end{equation} The net result of this is that, in the high-peak limit, we can write the linear compaction as \begin{equation} C_1 = C_G + \tilde{f} C_G^2 + \tilde{g} C_G^3, \label{eqn:NGcomp} \end{equation} where we have introduced the notation $\tilde{f}=2f\gamma_{cr}\frac{\sigma_r}{\sigma_c}$ and $\tilde{g} = 3g\left(\gamma_{cr}\frac{\sigma_r}{\sigma_c}\right)^2$ for convenience. This takes a form similar to our original model of local-type non-Gaussianity, equation \eqref{eqn:localNG}. \section{Calculating the abundance of primordial black holes in the high-peak limit} \label{sec:NGabundance} We will here consider the case of a delta-function peak in the power spectrum (corresponding to $\Delta\rightarrow 0$) \begin{equation} \mathcal{P}_\zeta(k) = \mathcal{A}k\delta_D\left( k - k_p \right). \label{eqn:deltaPower} \end{equation} We note here that this is the power spectrum of the full, non-Gaussian, power spectrum. However, for the calculation of the PBH abundance, it will be much simpler to make use of the Gaussian component of the compaction. 
The variance of the Gaussian component of the compaction $\sigma_c^2$ to the variance of the linear, non-Gaussian, component of the compaction $\sigma_\mathrm{NG}^2$ as \begin{equation} \sigma_\mathrm{NG}^2 = \sigma_c^2 + 2\tilde{f}^2\sigma_c^4+6\tilde{g}\sigma_c^4+15\tilde{g}^2\sigma_c^6, \label{eqn:sigmaNG} \end{equation} where $\sigma_\mathrm{NG}^2$ can be calculated from the curvature perturbation power spectrum as \begin{equation} \sigma_\mathrm{NG}^2= \frac{16}{81}\int\limits_0^k \frac{\mathrm{d}k}{k}(kr)^4 \tilde{W}^2(k,r)\mathcal{P}_{\zeta}. \label{eqn:NGpower} \end{equation} This provides a simple method to determine the amplitude of the relevant moments of the power spectrum given in equation \eqref{eqn:moments}. Assuming this form for the power spectrum allows us to make a number of simplifications to calculation and maintain analytic control of the calculation, whilst still giving an accurate calculation of the abundance: \begin{enumerate} \item Considering a narrow spectrum ensures that the assumption of the high-peak limit and spherical symmetry is valid (this is discussed in more detail in appendix \ref{sec:validity}). \item Since there is only a single scale at which perturbations are large, we can neglect other scales. This means that we can neglect the criteria that perturbations have a particular scale, e.g. we can neglect the $\delta_D\left( \mathrm{d}C/\mathrm{d}r \right)$ term in equation \eqref{eqn:peakConstraint} - and use traditional peaks theory \cite{Bardeen:1985tr}. \item Additionally, we can conclude that, since peaks are spherically symmetric, we can apply the peak constraint to the Gaussian component of the compaction $C_G$ - assuming care is taken to integrate only over values corresponding to peaks and not troughs. \item Whilst the delta-function power spectrum is unphysical \cite{Byrnes:2018txb}, the abundance of PBHs for lognormal peaks in the power spectrum with a width less than $\Delta\lesssim 0.3$ is well described by using the delta function \cite{Gow:2020bzo}. Therefore, rather than considering a power spectrum of finite width, we can simply investigate a delta-function peak in the power spectrum without worrying about integrating over a range of scales at which PBHs form. \end{enumerate} We note that, whilst the calculation presented here could easily be extended to broad power spectrum (although one runs into the well-known problem that the variance $\sigma_c^2$ diverges for scale-invariant spectra), the consideration of such power spectra is left for future work. The results are expected to be qualitatively to previous work studying the effect of modal coupling in the context of broad power spectra and non-Gaussianity \cite{Young:2014oea,Tada:2015noa,Young:2015kda}. For the narrow power spectrum in equation \eqref{eqn:deltaPower}, and setting the smoothing scale $r=2.74/k_p$, we obtain the following values for the required integrals of the power spectrum and correlation functions, \begin{align} \begin{split} \sigma_c^2 &= (k_p r)^4 \tilde{W}^2(k_p,r)\mathcal{A}\simeq 2.01\mathcal{A},\\ \sigma_r^2 &= \tilde{W}_s^2(k_p,r)\mathcal{A} \simeq 0.141\mathcal{A},\\ \sigma_{cr}^2 &= \frac{4}{9}(k_p r)^2 \tilde{W}(k_p,r)\tilde{W}_s(k_p,r) \mathcal{A}\simeq 2.00\times 10^{-1}\mathcal{A}. 
\end{split} \end{align} Combining these gives us the factor appearing in equation \eqref{eqn:variableRelation}, \begin{align} \gamma_{cr}\frac{\sigma_r}{\sigma_c}\simeq 9.95 \times 10^{-2}, \end{align} gving us \begin{align} \tilde{f} \simeq 1.19 \times 10^{-1} f_\mathrm{NL}, \tilde{g}\simeq1.07\times 10^{-2}g_\mathrm{NL}. \label{eqn:NGparams} \end{align} That the factor $\gamma_{cr}\frac{\sigma_r}{\sigma_c}$ is significantly less than unity implies that the impact of local-type non-Gaussianity on PBH abundance will be significantly less than has been calculated previously (such as in \cite{Byrnes:2012yx}), especially for higher order terms. The key reason for the difference is that the compaction is volume-averaged over the scale of the perturbation - and we are thus sensitive to the values of $\zeta$ at the edge of the perturbation, rather than the centre. We note that, in this paper, we are neglecting changes to the profile shape of perturbations from non-Gaussianity (see \cite{Atal:2019erb} for a more detailed discussion of this effect). A changing profile shape would affect the threshold value for collapse as well as the mass scaling relationship (see equation \eqref{eqn:mass}). It would also affect the scale at which the compaction peaks, and would therefore affect the smoothing scale, and the correlation factor. Using equation \eqref{eqn:NGparams} can therefore underestimate (overestimate) the effect of non-Gaussianity in the case that the non-Gaussianity parameters become large and positive (negative). A similar effect is discussed in more detail in reference \cite{Kitajima:2021fpq}. In the high-peak limit, the number density of peaks of height in the range $C_G$ to $C_G+\mathrm{d}C_G$ is given by \cite{Bardeen:1985tr} \begin{equation} n(C_G) = \frac{1}{3^{3/2}(2\pi)^2}\left(\frac{\sigma_{1}}{\sigma_c}\right)^3 \left(\frac{C_G}{\sigma_c} \right)^3\exp\left( -\frac{C_G^2}{2\sigma_c^2} \right), \label{eqn:numberDensity} \end{equation} where $\sigma_1/\sigma_c = k_p^2$ for a delta-function power spectrum. We note that, due to the symmetry of a Gaussian field, the number density of peaks of height $C_G$ will be equal to the number density of troughs of depth $-C_G$. By considering that peaks in $C_G$ correspond to peaks (or troughs) in $C$, equation \eqref{eqn:numberDensity} will form the basis of our calculation going forwards. The mass fraction of the universe which will collapse to form PBHs at the time of horizon entry is given by \begin{equation} \beta = \left( 2\pi \right)^{3/2}r^3\int \mathrm{d}C_G \frac{M_\mathrm{PBH}\left( C_G \right)}{M_H}n(C_G), \label{eqn:beta} \end{equation} where the integral is performed over the range of values of $C_G$ which form PBHs. Recalling that we will only consider the formation of PBHs from type I perturbations, which means that we will integrate over values of $C_G$ corresponding to values in the range $C_{1,th}<C-1<4/3$. $C_{1,th}$ is the threshold value of the linear component of the compaction, given from the compaction threshold $C_{th}$ by equation \eqref{eqn:C1ofC}, where for $C_{th}=0.50$, we obtain $C_{1,th}\simeq 0.67$. For the Gaussian case, the integration is therefore over the range $0.67<C_G<4/3$. We note that, whilst equation \eqref{eqn:beta} is straightforwards to re-cast in terms of the compaction $C$, it is far simpler to perform the calculation using $C_G$ - which also allows us to differentiate between type I and type II perturbations. 
\begin{figure*} \centering \includegraphics[width=0.45\textwidth]{CvC1} \\ \includegraphics[width=0.45\textwidth]{CgVsF} \includegraphics[width=0.45\textwidth]{CgVsG} \caption{In all the plots, the blue region shows the values for PBH forming type I perturbations, and the red region shows the values for type II perturbations (which also form PBHs, although we neglect type II perturbations in the calculation). \emph{Top plot}: the relation between the full non-linear compaction $C$ and the linear component $C_1$. \emph{Bottom-left plot}: the values of the linear, Gaussian component of the compaction $C_G$ which form PBHs, as a function of $\tilde{f}$ (and assuming $\tilde{g}=0$). \emph{Bottom-right plot}: similarly, the values of the linear, Gaussian component of the compaction $C_G$ which form PBHs, as a function of $\tilde{g}$ (and assuming $\tilde{f}=0$). } \label{fig:intLimits} \vspace{1cm} \end{figure*} In the following sections, we will quantify the effect of local-type non-Gaussianity on PBH abundance by considering the quadratic and cubic terms independently. A consideration of combining the terms is again left for future study. \subsection{Quadratic non-Gaussianity} In this section, we will consider the effect of quadratic non-Gaussianity on the PBH abundance, setting $\tilde{g}=0$. In this case, there are two solutions for expressing $C_G$ as a function of $C_1$: \begin{equation} C_G\left(C\right) = C_\pm\left(C_1\right) = \frac{ -1 \pm \sqrt{1+4\tilde{f} C_1} }{ 2\tilde{f} }, \end{equation} where the two solutions will be identified using the subscript $\pm$, as used in middle equality above. The limits on the integral in equation \eqref{eqn:beta} depend on the value of $\tilde{f}$: \begin{itemize} \item $\tilde{f}>-\frac{3}{16}$ (excluding $\tilde{f}=0$): PBHs form in the range $C_+(C_{1,th})<C_G<C_+(4/3)$ as well as in the range $C_-(4/3)<C_G<C_-(C_{1,th})$. \item $-\frac{1}{4C_{1,th}} < \tilde{f} \leq -\frac{3}{16}$: type II perturbations do not form in this regime, and we instead integrate over the range $C_+(C_{1,th})<C_G<C_-(C_{1,th})$. \item $\tilde{f}\leq -3/8$: in this regime, there are no perturbations which form PBHs. \end{itemize} The integration limits are shown in the bottom left plot of figure \ref{fig:intLimits}. It is noteworthy that, except for large values $\tilde{f}\gg 1$, the abundance of PBHs is dominated by the value of $C_+(C_{t,th})$, and we could obtain an excellent approximation by simply integrating equation \eqref{eqn:beta} in the range $C_G>C_+(C_{t,th})$. \begin{figure*} \centering \includegraphics[width=0.6\textwidth]{AVsf} \caption{The amplitude of a delta-function power spectrum $\mathcal{A}$ required to produce a given initial abundance of PBHs $\beta$ as a function of the non-Gaussianity parameter $\tilde{f}$. } \label{fig:betaFnl} \end{figure*} This now allows us to relate the power spectrum $\mathcal{P}_\zeta$ to the variance of the Gaussian component of the compaction $\sigma_c^2$ using equations \eqref{eqn:sigmaNG} and \eqref{eqn:NGpower}, and then use equation \eqref{eqn:beta} to calculate the PBH abundance. Solving the integral numerically allows us to the amplitude of the power spectrum to the PBH abundance. Figure \ref{fig:betaFnl} shows the amplitude of the power spectrum required to produce PBHs in the abundance $\beta = 10^{-5},10^{-20}$ for varying values of $\tilde{f}$ (recalling $\tilde{f}\simeq 1.19 \times 10^{-1}f_\mathrm{NL}^\mathrm{local}$, for the specific case considered here). 
For negative $\tilde{f}$, the abundance of PBHs decreases rapidly, which means that a larger amplitude of the power spectrum $\mathcal{A}$ is required to produce the same number. For $\tilde{f}<-3/8$, there are no PBH forming perturbations, and thus the value of $\mathcal{A}$ diverges as we approach this limit. For positive $\tilde{f}$, the abundance of PBHs is significantly increased - and thus a smaller $\mathcal{A}$ is required to produce the same abundance. An alternative interpretation of the results is that, if one has a given bound on PBH abundance, for example, $\beta<10^{-20}$, then the constraints on the power spectrum become weaker (stronger) for negative (positive) $\tilde{f}$. \subsection{Cubic non-Gaussianity} We will now consider the effect of cubic non-Gaussianity on the PBH abundance, this time setting $\tilde{f}=0$. Since equation \eqref{eqn:NGcomp} is now cubic, there are 3 solutions for $C_G$ as a function of $C_1$, given by \begin{align} \begin{split} C_G(C_1) = C_{a}(C_1) = \frac{ \left( \frac{2}{3} \right)^{1/3} \exp\left(i \theta_a\right) }{ \lambda } - \frac{ \exp\left( -i\theta_a \right) \lambda }{ 2^{2/3} 3^{1/3} \tilde{g} },\\ \lambda = \left( 9\tilde{g}^2C_1 + \sqrt{ 12\tilde{g}^3+81\tilde{g}^4C_1^2 } \right)^{1/3}, \end{split} \end{align} where $\theta_a = \left[ \pi, \pi/3, -\pi/3 \right]$ for $a = \left[ i,j,k\right]$. As before, we find that the limits on the integral in equation \eqref{eqn:beta} depend on the value of $\tilde{g}$: \begin{itemize} \item $\tilde{g}\leq \tilde{g_c}$: PBHs form in the range $C_i(4/3)<C_G<C_i(C_{1,th})$. \item $-0.33 < \tilde{g} \leq -1/12$: PBHs form in the range $C_i(4/3)<C_G<C_i(C_{1,th})$ and $C_k(C_{1,th}<C_G<C_j(C_{1,th})$. \item $-1/12 < \tilde{g}\leq 0$: PBHs form in the range $C_i({4/3})<C_G<C_i(C_{1,th})$, $C_j(4/3)<C_G<C_j(C_{1,th})$ and $C_k(C_{1,th})<C_G<C_k(4/3)$. \item $\tilde{g} > 0$: PBHs form in the range $C_i(C_{1,th})<C_G<C_i(4/3)$. \end{itemize} where $\tilde{g}_c$ is given by \begin{equation} \tilde{g}_c = -\frac{4}{27C_{1.th}}, \end{equation} and we obtain $\tilde{g}_c\simeq-0.33$ for $C_{1,th}\simeq 0.67$. The integration limits are shown graphically in the bottom right plot of figure \ref{fig:intLimits}, and we again note that the PBH abundance is typically dominated by the solution $C_G(C_{1,th})$ with the smallest magnitude. For $\tilde{g}>0$, this is $C_i$, and for $\tilde{g}<0$, this is $C_k$. The integral in equation \eqref{eqn:beta} can now be solved numerically to calculate the PBH abundance as a function of the power spectrum. Figure \ref{fig:betaGnl} shows the amplitude of the power spectrum required to produce PBHs in the abundance $\beta = 10^{-5},10^{-20}$ for varying values of $\tilde{g}$ (recalling now that $\tilde{g}\simeq 1.07 \times 10^{-2}g_\mathrm{NL}^\mathrm{local}$). We find that the abundance of PBHs is increased for positive $\tilde{g}$ (resulting in a smaller amplitude of the power spectrum required to produce the same abundance). For slightly negative $\tilde{g}$, the abundance of PBHs decreases dramatically - resulting in a severe increase in the amplitude of the power spectrum required to produce the same abundance. This is due to the fact that, for $\tilde{g}<0$ there is a maximum amplitude of the compaction $C$ which can form from positive $C_G$, given by, \begin{equation} C_\mathrm{max} = \frac{ 1 }{ 18\tilde{g} }+\frac{ 2 i }{ 3\sqrt{ 3\tilde{g} } }. 
\end{equation} Switching from the regime where positive $C_G$ can form PBHs to the regime where they cannot results in the dramatic increase in the amplitude of the power spectrum $\mathcal{A}$ required to produce the same abundance of PBHs, as seen in figure \ref{fig:betaGnl}. For more negative values of $\tilde{g}$, we see that the abundance of PBHs starts to increase again, resulting in a smaller $\mathcal{A}$. For $\tilde{g}\rightarrow+\infty$ or $\tilde{g}\rightarrow-\infty$, the value of $\mathcal{A}$ asymptotes to the same value. This is because we can neglect the linear term and simply write $C_1 = \tilde{g}C_G^3$, which is invariant under the transformation $\tilde{g}\rightarrow-\tilde{g}$, $C_G\rightarrow-C_G$. \begin{figure*} \centering \includegraphics[width=0.6\textwidth]{AVsg} \caption{The amplitude of a delta-function power spectrum $\mathcal{A}$ required to produce a given initial abundance of PBHs $\beta$ as a function of the non-Gaussianity parameter $\tilde{g}$. } \label{fig:betaGnl} \end{figure*} \section{Comparison to previous literature} \label{sec:literature} Qualitatively, the results are most similar to previous work by Byrnes et al \cite{Byrnes:2012yx} (which was followed up by a series of papers by Young and Byrnes \cite{Young:2013oia,Young:2014oea,Young:2015cyn,Young:2015kda}). The paper made use of a Press-Schechter-type approach and used the curvature perturbation as the formation criterion. Whilst this can be considered a valid approach for narrow power spectra (as is also considered here), it has since been argued that the density, and specifically the compaction should be used as the formation criterion, although there are many methods for performing the calculation \cite{Young:2014ana,Musco:2018rwt,Young:2019osy,Yoo:2018kvb,Yoo:2020dkz}. Whilst the approaches used by Byrnes et al and this study are very different, the results are qualitatively very similar due to the similarity between equation \eqref{eqn:localNG} (which forms the basis for Byrnes et al) and equation \eqref{eqn:NGcomp} (which forms the basis for this study). Quantitatively, we find that the effect of local-type non-Gaussianity can be an order of magnitude smaller than Byrnes et al found, which is due to the fact that the compaction is sensitive to the value of the curvature at the edge of a perturbation, rather than the peak value in the centre. Riccardi et al \cite{Riccardi:2021rlf} makes use of the ``spiky enough" criteria to determine whether a perturbation in $\zeta$ will collapse to form a black hole. It achieves this by using equation \eqref{eqn:denCon} to relate $\zeta$ to the density contrast, and then effectively uses the density contrast as the formation criterion. Using this approach, it is found that a positive $f_\mathrm{NL}^\mathrm{local}$ would actually suppress the formation of PBHs - in contradiction to the findings here and in previous papers. The contradiction is due to using the non-linear expression for the density contrast, equation \eqref{eqn:denCon}, and specifically, it is due to the $\exp\left( -2\zeta\right)$ term in the equation. When one includes an additional positive quadratic term to $\zeta$ (as in the local-type expansion, equation \eqref{eqn:localNG}) then, for the large, positive perturbations which form PBHs, this increases the value of $\zeta$ - which can therefore decrease the magnitude of the density contrast. 
Since the addition of an $f_\mathrm{NL}^\mathrm{local}$ term decreases the amplitude of the density contrast, the conclusion is then that the abundance of PBHs will be decreased as well. However, the increased value of $\zeta$ also introduces a change to the horizon size whilst the perturbation is in the super-horizon regime. To illustrate this, let us consider the simple scenario of adding a constant value $\phi$ to the curvature perturbation, $\zeta\rightarrow \zeta+\phi$. This decreases the value of the density contrast everywhere by a factor $\exp(-2\phi)$. By applying the separate universe approach, we conclude that this should not affect the evolution of the universe, but instead simply introduces a time shift. A given perturbation will then take longer to enter the horizon - and grows by an additional factor $\exp(2\phi)$ before horizon entry - exactly cancelling the effect of $\phi$. This could be addressed, for example by smoothing over a specified areal radius, where one obtains an expression proportional to the compaction, and would then find results compatible with those presented here. Kitajima et al \cite{Kitajima:2021fpq} investigated the effect of local-type non-Gaussianity, finding that $f_\mathrm{NL}$ has a similar effect on the PBH abundance. Although similar, their approach does differ in several key regards. Rather than using the compaction to determine the threshold value for PBH formation, they use the averaged value of the compaction, which has been argued to minimise the dependence on the profile shape \cite{Escriva:2019phb}. The averaged compaction is related to the Laplacian of the curvature perturbation $-\Delta\zeta$ by assuming a typical profile shape for $\zeta$. Peaks theory is then used to calculate the number density of peaks in $-\Delta\zeta$ which form PBHs ($-\Delta\zeta$ is also used in the mass scaling relation instead of the compaction, as in equation \eqref{eqn:mass}). For the monochromatic power spectra considered, this approach is entirely valid, but would run into complications when broad power spectra are considered since no smoothing is performed (see \cite{Young:2019osy} for more discussion). \section{Conclusions} \label{sec:conclusions} The effect of local-type non-Gaussianity on the PBH abundance has been considered, which can also be applied to constraints on the primordial power spectrum derived from constraints on the PBH abundance. The effect of non-Gaussian corrections at second- and third-order were considered, with results broadly in line with previous work by Byrnes et al \cite{Byrnes:2012yx}. We have updated the calculation to account for recent developments in the field: \begin{itemize} \item the use of the compaction as the appropriate parameter to determine whether a perturbation will collapse to form a PBH; \item the non-linear relationship between the compaction and the curvature perturbation; \item the mass scaling relationship which relates the amplitude and scale of a perturbation to the eventual PBH mass; \item and we have also included peaks theory in the calculation rather than a Press-Schechter-type approach. \end{itemize} We find that the effect of the non-Gaussianity parameters $f_\mathrm{NL}^\mathrm{local}$ and $g_\mathrm{NL}^\mathrm{local}$ is qualitatively similar to that found previously. Positive $f_\mathrm{NL}^\mathrm{local}$ increases the PBH abundance (tightening constraints), whilst negative $f_\mathrm{NL}^\mathrm{local}$ decreases the abundance (weaking constraints). 
Positive $g_\mathrm{NL}^\mathrm{local}$ also increases the PBH abundance, whilst negative $g_\mathrm{NL}^\mathrm{local}$ can have varying effects. For small negative values, the PBH abundance is decreased significantly, but increases for large negative values. However, by considering the compaction as the relevant parameter for PBH formation, we find that, quantitatively, the non-Gaussianity parameters have a much weaker effect than found previously, and must be orders of magnitude larger to have the same effect. This is especially true when higher order terms are considered, and is due to the fact that the compaction is sensitive to the value of the curvature perturbation at the edge of the perturbation rather than the peak value (i.e. the compaction includes the term $\zeta(r)$ rather than $\zeta(0)$). Previous papers have also studied the effect of modal coupling on the PBH abundance, which required artifical insertion of long-wavelength modes into the calculation (often utilising the peak-background split). Whilst not considered here, if broad power spectra were to be considered, the formalism derived here automatically encodes the effect of such long-wavelength modes and the effect of modal coupling on the PBH abundance and mass function. One result of considering such mocal-coupling is the formation of dark matter isocurvature modes if the PBH abundance and non-Gaussianity is not negligible. Previous papers found that this would place extremely strong constraints on the local-type non-Gaussianity parameters if even a small amount of PBHs exist \cite{Tada:2015noa,Young:2015kda}. Scch constraints would be made considerably weaker once the updated calculation presented here is accounted for - especially for the higher-order non-Gaussianity parameters. We've assumed and justified spherically symmetry for the power spectrum considered here. However, as discussed in section \ref{sec:validity} we have shown that this assumption is not valid for broad power spectrum - revealing a problem with the calculation performed in many papers related to the assumption of the high-peak limit. Whilst it may be expected that the expression for C in equation \eqref{eqn:quadratic} still holds at least approximately true for broad power spectra, this is an important consideration worthy of further study and will be the subject of future research. \section*{Acknowledgements} SY is supported by an MCSA postdoctoral fellowships, and would like to thank Subodh Patil for his helpful comments on a draft of this work.
1,116,691,497,026
arxiv
\section{\label{}} \newcommand{$\frac{\mathcal{B}(X(3872)\rightarrow\pi^0\chi_{c0})}{\mathcal{B}(X(3872)\to\pi^0\chi_{c1})}$}{$\frac{\mathcal{B}(X(3872)\rightarrow\pi^0\chi_{c0})}{\mathcal{B}(X(3872)\to\pi^0\chi_{c1})}$} \section{Introduction} The XYZ states in the charmonium region have properties that make them inconsistent with the predicted charmonium spectrum. The first of these exotic hadron candidates to be discovered was the $X(3872)$ \cite{belle}. Its quantum numbers have been measured to be $J^{PC}=1^{++}$ \cite{lhcbNum}, but it has several properties that make it unlikely to be the $\chi_{c1}(2P)$. Its mass is below the predicted value for the $\chi_{c1}(2P)$, and its width has been measured to be $\Gamma=1.39\pm0.24\pm0.10$ MeV \cite{lhcbWidth}, which is narrower than expected for a pure charmonium state above $D\bar{D}$ threshold. In addition, the isospin violating decays $X(3872)\to\rho^0 J/\psi$ \cite{besRho} and $X(3872)\to\pi^0\chi_{c1}$ \cite{besPi} occur at a rate that is much larger than expected for a pure charmonium state. The measured mass of the $X(3872)$ is consistent with $D^{*0}\bar{D}^0$ threshold within experimental uncertainties, which has prompted predictions that the $X(3872)$ is a molecular meson, a compact tetraquark, the $\chi_{c1}(2P)$, or some mixture of these three scenarios \cite{xyzRev}. The vector meson $Y$ states overpopulate the predicted distribution for the charmonium system. Since the vector mesons have $J^{PC}=1^{--}$, they can be produced directly in $e^+e^-$ collisions, so their masses and widths can be determined by fitting the measured $e^+e^-$ cross section values. BESIII has used this method to resolve the single $Y(4260)$ resonance into two resonances, the $Y(4230)$ and the $Y(4360)$, using the measured $e^+e^-\to\pi^+\pi^-J/\psi$ cross section values \cite{besY}. The theoretical explanations for these states include compact tetraquarks, molecular mesons, and hybrid mesons \cite{xyzRev}. The $Z_c(3900)^\pm$ and $Z_c(3900)^0$ states were observed at BESIII in the processes $e^+e^-\to\pi\pi J/\psi$ in the invariant $\pi J/\psi$ spectrum \cite{zc,zc0}. Since the $Z_c(3900)$ is an isovector, it clearly cannot be described as a pure charmonium state, and must have at least four constituent quarks. The leading interpretations for the $Z_c$ states are molecular mesons or compact tetraquarks \cite{xyzRev}. To study the quark configuration of the XYZ states, we use data collected by the BESIII detector, which is located at the Beijing Electron Positron Collider (BEPCII). The BESIII detector records symmetric $e^+e^-$ collisions and covers 93\% of the $4\pi$ solid angle for photons and charged particles. This is an excellent environment for studying XYZ states because they are produced nearly at rest with relatively small background levels. This enables BESIII to reconstruct complicated decay modes of these states. In the remainder of this paper, we discuss the recent searches for new $X(3872)$ and $Y(4230)$ decay modes at BESIII. We also report the recent discovery of the $Z_{cs}(3985)^+$, the strange partner to the $Z_c(3900)$. \section{Search for $X(3872)\to\pi^0\chi_{c0}$} \begin{figure*} \includegraphics[scale=1.0]{figures/Xpi0chic0.pdf} \caption{Fit to the $\pi^0\chi_{c0}$ invariant mass distribution for the search of $X(3872)\to\pi^0\chi_{c0}$. The solid line is the fit with a signal component while the dashed line is a fit with just the background component. 
No significant signals are found.} \label{fig:pi0chic} \end{figure*} The decay $X(3872)\to\pi^0\chi_{c0}$ is predicted to be sensitive to the quark configuration of the $X(3872)$, since if the $X(3872)$ has four constituent quarks, the ratio of branching fractions is predicted to be $\frac{\mathcal{B}(X(3872)\rightarrow\pi^0\chi_{c0})}{\mathcal{B}(X(3872)\to\pi^0\chi_{c1})}$$\approx 3$, while for pure $c\bar{c}$, this decay should be forbidden. Using the production process $e^+e^-\to\gamma X(3872)$, this analysis searches for the decay $X(3872)\to\pi^0\chi_{c0}$, where the $\chi_{c0}$ decays hadronically to five final states. No significant signals are observed, so we place a 90\% confidence level upper limit of $\frac{\mathcal{B}(X(3872)\rightarrow\pi^0\chi_{c0})}{\mathcal{B}(X(3872)\to\pi^0\chi_{c1})}$$<4.5$ using the fit results in Fig. \ref{fig:pi0chic}. This is too large to rule out any interpretation of the $X(3872)$, but it is the most sensitive search for this decay mode to date \cite{mine}. We also perform the first searches for the decays $X(3872)\to\pi^+\pi^-\chi_{c0}$ and $X(3872)\to\pi^0\pi^0\chi_{c0}$ but find no significant signals. \section{Observation of $Y(4230)$ in $e^+e^-\to K^+K^- J/\psi$} \begin{figure} \includegraphics[scale=1.0]{figures/kkjpsi.pdf} \caption{Measured cross section for $e^+e^-\to K^+K^- J/\psi$ showing the total fit (red line) and the components due to the $Y(4230)$ (dashed blue) and $Y(4500)$ (dashed green).} \label{fig:kkjpsi} \end{figure} To probe the strange quark content of the $Y(4230)$, we measure the cross section of $e^+e^-\to K^+K^-J/\psi$ to search for resonant contributions. In addition to the $Y(4230)$, this analysis could be sensitive to predictions from conventional charmonium models with 5S-4D mixing, hadronic molecule models, and tetraquark models that all predict a state near 4.5 GeV$/c^2$. The measured cross section values are shown in Fig. \ref{fig:kkjpsi}, where we observe a clear $Y(4230)$ peak \cite{kkjpsi}. The cross section clearly rises after the $Y(4230)$ peak, so this rise is fit with a Breit-Wigner. The mass from the Breit-Wigner is consistent with a peak near 4.5 GeV$/c^2$, but more data is needed to draw firm conclusions about what is happening in this higher $E_{\rm cm}$ region. \section{Search for Charmoniumlike States Decaying to Light Hadrons} \begin{figure*} \includegraphics[scale=1.0]{figures/lightHadronA.pdf}\includegraphics[scale=1.0]{figures/lightHadronB.pdf} \caption{Measured cross section values (points) for 8 light hadron final states fit with $\sigma_{cont}^{expected}$ (red line). We do not observe any resonant structures for any of the measured cross sections.} \label{fig:light} \end{figure*} No light hadron decays have been found for any charmonium or charmoniumlike states above 4 GeV. The purpose of this analysis is to search for light hadron decays above 4 GeV to probe the light quark content of the $Y$ states. To do this, we measure the cross sections for eight light hadron final states. The measured values for each cross section are fit with $\sigma_{cont}^{expected}=|A_{cont}|^2$ where \begin{equation} A_{cont}=\sqrt{\frac{f_{cont}}{(E_{\rm cm}/4.226\textrm{ GeV})^n}} \end{equation} where $f_{cont}$ and $n$ are floating parameters in the fit and $E_{\rm cm}$ is the measured center-of-mass energy. We do not observe any resonant structures in these energy regions \cite{light}, as shown in Fig. \ref{fig:light}. 
\section{Measurement of $\sigma(e^+e^-\to D^{*+}D^{(*)-})$} The conventional vector meson charmonium states above open charm threshold predominantly decay to open charm final states. By contrast, the vector meson charmoniumlike states have large decay rates to hidden charm final states. Open charm cross section measurements can be used in a coupled channel analysis to determine the pole positions of resonances in these energy regions. BESIII recently measured more precise cross section values for $e^+e^-\to D^{*+}D^{-}$ and $e^+e^-\to D^{*+}D^{*-}$ \cite{dd}, as shown in Fig. \ref{fig:dd}. The cross section measurements are consistent with previous Belle measurements \cite{belleD} but have smaller error bars, which will improve the sensitivity of future coupled channel analyses of the charmonium system. \begin{figure} \includegraphics[scale=1.0,trim={6.5cm 0cm 0cm 0.06cm},clip]{figures/ddbarB.pdf}\\\includegraphics[scale=1.0,trim={6.5cm 0cm 0cm 0cm},clip]{figures/ddbarA.pdf} \caption{Measured open charm cross section values from Belle (black points) and BESIII (red points). The recent BESIII measurements have a good overall agreement with the Belle results, but with much smaller error bars. The error bars shown in the plot are the quadrature sum of the statistical and systematic uncertainties.} \label{fig:dd} \end{figure} \section{The Charged and Neutral $Z_{cs}(3985)$ at BESIII} BESIII recently reported a $5.3\sigma$ observation of the $Z_{cs}(3985)^\pm$ in $e^+e^-\to K^+(D_s^-D^{*0}+D_s^{*-}D^0)$ \cite{zcs}, as shown in Fig. \ref{fig:zcs} left. This is a candidate to be the strange partner of the $Z_c$ states, and would have a minimal quark content of $c\bar{c}s\bar{u}$. Additionally, BESIII has searched for the neutral partner to the $Z_{cs}^\pm$ in $e^+e^-\to K_s^0(D_s^-D^{*+}+D_s^{*-}D^+)$, and found $4.6\sigma$ evidence for the $Z_{cs}(3985)^0$ \cite{zcs0}, as shown in Fig. \ref{fig:zcs} right. The minimal quark content of the neutral state is $c\bar{c}s\bar{d}$. The measured mass and width for the charged and neutral $Z_{cs}$ candidates are shown in Table \ref{tab:zcs}. These are consistent with theoretical predictions that the neutral $Z_{cs}$ should have a higher mass than the charged $Z_{cs}$. \begin{table} \caption{Measured mass and width values for each of the $Z_{cs}$ candidates} \begin{tabular}{|c|c|c|}\hline State & Mass [MeV/$c^{2}$] & Width [MeV]\\\hline $Z_{cs}(3985)^+$ & $3985.2^{+2.1}_{-2.0}\pm 1.7$ & 13.8$^{+8.1}_{-5.2}\pm4.9$ \\\hline $Z_{cs}(3985)^0$ & $3992.2\pm1.7\pm1.6$ & $7.7^{+4.1}_{-3.8}\pm 4.3$\\\hline \end{tabular} \label{tab:zcs} \end{table} \begin{figure} \includegraphics[scale=0.8]{figures/zcsLegend.pdf}\\ \includegraphics[scale=0.8]{figures/zcsBES.pdf}\includegraphics[scale=0.85]{figures/zcs0.pdf} \caption{Measurements of $Z_{cs}(3985)^\pm$ (left) and $Z_{cs}(3985)^0$ (right). The charged structure has a significance of $5.3\sigma$, while the neutral case has a significance of $4.6\sigma$.} \label{fig:zcs} \end{figure} \section{Summary} BESIII continues to be active in studying the XYZ states in the charmonium system. We have recently searched for new decays of both the $X(3872)$ and the $Y(4230)$. No evidence was found for the decays $X(3872)\to\pi^0\chi_{c0}$, $X(3872)\to\pi\pi\chi_{c0}$, or $Y(4230)\to$ light hadrons, but the process $Y(4230)\to K^+K^-J/\psi$ was observed for the first time. 
The open charm cross sections $e^+e^-\to D^{*+}D^{-}$ and $e^+e^-\to D^{*+}D^{*-}$ have been measured with a higher precision than the previous Belle results, which will improve the sensitivity to the XYZ states in future coupled channel analyses. We recently observed the $Z_{cs}(3985)^\pm$ and found evidence for its neutral partner. An accelerator upgrade is planned for 2024 that will increase the luminosity by up to a factor of 3 and give access to energies up to 5.6 GeV. This will give the experiment access to more charmed baryons and the capability to search for new $Y$ states at higher energies. \bigskip
1,116,691,497,027
arxiv
\section{Introduction} Faced with the ever-growing problem of crime, prevention strategies have come to the fore as a key issue and one of the main challenges of Law Enforcement authorities. This increase has been observed mainly in large urban centers and even with huge masses of digitized data about police activities and reported crimes, the police institutions do not seem to use adequately this information to fight the growth of crime. Within this context, strategies of police allocation play an important role in crime prevention and a recent discovery by Caminha {\it et al.} \cite{caminha2017} motivated us to revisit the state of the art of this subject. Caminha {\it et al.} discovered a superlinear relationship \cite{bettencourt2010,gomez2012,ignazzi2014,hanley2016,alves2015a,arcaute2015, arcaute2016,cottineau2016,leitao2016,bettencourt2013,arbesman2009} between the flow of people and property crimes. In other words, the authors found that the increase of the floating population in a urban space implies a disproportionate growth of property crime in that space. Formally this relationship can be represented by the equation $Y=aX^\beta$, indicating a Power Law, where $Y$ quantifies property crimes, $X$ quantifies the volume of people flow, $a$ is a normalization constant and $\beta$ is the exponent that scales the relation, which in the case of a superlinear relation is assumed to be $\beta > 1$. The authors further assert that administrative territorial units that typically account for features of resident population, such as divisions by neighborhoods, census tracts or zones, are unable to precisely capture the effect that social relations have over crime. This fact has already been stated by several urban indicators in important scientific productions \cite{makse1998, rozenfeld2008, giesen2010, rozenfeld2011, duranton2013, gallos2012, duranton2015, eeckhout2004, oliveira2014}. Although over the years a series of scientific works have studied factors surrounding crime \cite{melo2014,gelman2007,agnew2007,caminha2012, furtado2012,guedes2014,kennedy1990,beato2014}, none of them have taken into consideration this finding that quantifies the relation between human mobility and crime. More specifically, they do not make use of the divisions of urban spaces estimated from the floating population to police allocation. In this article we seek to understand the impact, in the allocation of police resources, from the fact that the relationship between the movement of people and property crimes follows a Power Law. To estimate this impact, we use data from a big metropolis to build clusters of floating population that will be considered as the basis for the allocation strategy. The distribution of the police resources obtained from the application of this strategy is compared to a conventional allocation strategy, in which police officers are distributed into administrative territorial divisions. Doing so, we were able to show the difference between the distribution of allocated police according to the two strategies. This difference allows us to conclude that, under the light of these new evidences of cause-effect between floating population and property crimes, it is inaccurate to apply a conventional strategy of police allocation, which is based only on resident population. \section{State of Art} There is a vast selection of literature on police allocation in urban space to combat criminal activity. 
There was a growing interest in developing techniques using programs of spatial analysis to identify areas where the police resources are to be allocated. In a very general way, a typical strategy of allocation is to implement a heterogeneous model, in which the distribution of resources in a geographic area is directly proportional to the density of crimes of that region. Typically these areas are administrative regions ({ \it e.g.} census tract or neighbohoods) demarcated from features of the resident population \cite{sherman1989}. This perspective, is not totally in line with routine activity theory \cite{clarke1993,michael1933,cohen1979} and criminal career approaches \cite{blumstein1986},but ,for practical reasons, have been used for years \cite{sherman1995}. With the increase in the volume of digital data and the creation of more sophisticated mapping techniques, opportunities have appeared to go beyond the approaches where only the density of crimes in areas of resident population is considered \cite{weisburd2006,ratcliffe2006,wortley2016}. Nevertheless, much of the work in this area continued to focus on the concentration of crime in administrative territorial units \cite{groff2002,berk2011}. It is true that Kennedy {\it et al.} \cite{kennedy2011} developed an in-depth assessment of social factors that contribute to crime occurrence. However, its allocation algorithm is based on risk areas which indirectly are also measured by resident population indicators. There are also a number of papers that use simulation models to teach the police officer how to make an allocation of quality resources \cite{greasley1998,melo2005,furtado2006,reis2006,guedes2015}. However, these works do not consider the new evidence that human mobility is the key to understanding the emergence of property crimes in regions of urban space. Finally, it is worth noting that there are numerous studies that seek to understand phenomena related to human mobility \cite{gonzalez2008,wang2011,caminha2016,andrade2009,ponte2016}, however, works that apply the knowledge obtained from these studies on crime prevention from police allocation is scarce. \section{Datasets} In this paper, data on property crimes was used, this was obtained from \cite{ciops2016}. In total this dataset contains 81,911 geo-referenced crimes occurring between August 2005 and July 2007. Three levels of segmentation were used for the city of Fortaleza-CE, Brazil. The first level was a division by neighborhood, obtained from \cite{bairros2017}, in total, Fortaleza has 116 districts spread over an area of 313 $\mathrm{km^2}$ where more than 2,400,000 people live. The second level, a division by defined census tracts by IBGE (Brazilian Institute of Geography and Statistics) \cite{ibge2016}, which divides the city into 3043 subareas that on average contain 800 residents each. Finally, the third level of segmentation, a division by clusters of floating population, estimated in \cite{caminha2017}. In total, the authors divided Fortaleza into 119 clusters using \textit{City Clustering Algorithm} (CCA) \cite{makse1998, rozenfeld2008, giesen2010, rozenfeld2011, duranton2013, gallos2012, duranton2015, eeckhout2004}. To define the boundaries of this clusters, the CCA algorithm considered the notion of spatial continuity through the aggregation of census tracts that are near one another. 
The CCA constructs the floating population boundaries of an urban area considering two parameters, namely, a population density threshold, $D^*$, and a distance threshold, $\ell$. For the $i\mathrm{-th}$ census tract, the population density $D_i$ is located in its geometric center; if $D_i > D^*$, then the $i\mathrm{-th}$ census tract is considered populated. The length $\ell$ represents a cutoff distance between census tracts to consider them as spatially contiguous, {\it i.e.}, all of the nearest neighboring census tracts that are at distances smaller than $\ell$ are clustered. Hence, a cluster made by the CCA is defined by populated areas within a distance less than $\ell$, as seen schematically in Figure \ref{fig1}. Previous studies \cite{oliveira2014,duranton2013, duranton2015} have demonstrated that the results produced by the CCA can be weakly dependent on $D^*$ and $\ell$ for some range of parameter values. In \cite{caminha2017} $\ell$ was quantified in meters (m) and $D^*$ in people passing by $\mathrm{km^2}$ in one day. \begin{figure}[!h] \includegraphics[width=0.486\textwidth]{Fig1.pdf} \caption{{\bf The scheme of the City Clustering Algorithm (CCA).} Each square represents a clustering unit, specifically in our case, they represent census tracts. Black squares are candidates for clustering $(D_i > D^*)$, in contrast, the gray squares cannot be clustered ($D_i \le D^*$). (a) The red dot represents the geometric center of the $i$-th census tract and the white circle with radius $\ell$ seeks neighbors belonging to the same cluster. (b) The same search operation is made for the other census tracts. (c) The same operation is done until there are no more neighbors within the radius of operation. (d) The algorithm finishes running and the cluster is found.} \label{fig1} \end{figure} Figure \ref{fig2} illustrates the clusters found. The base division used in the cluster was the census tract map. The census tract in light gray color were not grouped because they have low flux density ($D_i \le D^*$), the other colors represent clusters found. In the division reached by the CCA the volume of flow of a cluster is proportional to its area \cite{oliveira2014}. It was estimated $\ell=320$ and $D^*=6000$. \begin{figure}[!h] \includegraphics[width=0.48\textwidth]{Fig2.pdf} \caption{{\bf Agglomerates found by the CCA algorithm.} The regions in light gray color were not grouped because they have low floating population density ($D_i \le D^*$), the other colors represent clusters found. More precisely, each color represents a people flow agglomerate.} \label{fig2} \end{figure} \section{Methods} Two strategies of police allocation will be compared here, these strategies are based on the most popular heterogeneous allocation model, namely by high crime density. The first, called \textit{Resident Population Allocation (RPA) Strategy} is a conventional strategy of police allocation, whose resources are distributed in proportion to the quantity of occurrences in administrative divisions of a territory (what is typically estimated from features from the resident population). In this work the division by neighborhood's boundaries will be adopted, because, despite the division by census tracts being available, it is too segmented, with some of them being less than one block, thus being unfeasible to be used in a real policy of resource allocation. 
The second allocation strategy, called the \textit{Floating Population Allocation (FPA) Strategy}, also distributes police resources proportionally to the number of calls to the police in a spatial division; however, in this strategy the boundaries of the areas follow the clusters of floating population estimated in \cite{caminha2017}. In this way, the share of a police resource, $T_{s_i}$, allocated to a sub-region of urban space (whether a cluster of floating population or a neighborhood), $s_i \in S$, based on the number of crimes occurring in $s_i$, $C_{s_i}$, can be formally defined as $T_{s_i} = \frac{T \, C_{s_i}}{C}$, where $T$ is the total number of police officers available for allocation and $C$ is the total number of crimes that have occurred in all of the urban space available for allocation. A policy of internal allocation was also adopted, precisely at the level of $s_i$. Each cluster of floating population or neighborhood is composed of census tracts, and internally resources are also allocated proportionally to the number of crimes of each census tract within $s_i$; in other words, within each sub-region $s_i$, tracts with more crimes receive more police officers. This sub-allocation policy is justified by the need to compare the two strategies, as will be discussed later on. \section{Results} Applying the \textit{RPA Strategy} and the \textit{FPA Strategy} in Fortaleza to simulate the availability of a total police resource $T=10,000$ yields the heat maps shown in Figure \ref{fig3}, items (a) and (b), respectively. More intense hot spots can be seen for the \textit{FPA Strategy}, mainly in the commercial center of the city, highlighted by the black circle in both figures. This is because the \textit{FPA Strategy} does not allocate police resources in areas that are considered uninhabited $(D_i \le D^*)$, instead concentrating more police in the most critical regions of the city. \begin{figure}[!h] \includegraphics[width=0.48\textwidth]{Fig3.pdf} \caption{{\bf Police allocation using the two strategies studied.} (a) shows the density map of police allocated using the \textit{RPA Strategy}; (b) shows the density map of police allocated using the \textit{FPA Strategy}. Black circles highlight the shopping center of Fortaleza, an area with a large population concentration and, consequently, a large concentration of crimes against property.} \label{fig3} \end{figure} For the purpose of comparison, the number of police allocated per neighborhood under the \textit{FPA Strategy} was calculated by adding up the police officers assigned to the census tracts located within each neighborhood. After this, we calculated the percentage difference between the numbers of policemen allocated per neighborhood by the two strategies. In Figure \ref{fig4}, items (a) and (b) illustrate the neighborhoods where the allocation is most similar and most different, respectively. In general, greater similarity was observed in the allocations for neighborhoods with a greater presence of floating population; these neighborhoods are close to the commercial center of the city or located in regions with a high concentration of residents (normally locations that are the source of floating population).
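As a reference for these comparisons, the proportional rule $T_{s_i} = T\,C_{s_i}/C$ and the internal sub-allocation described in the Methods section can be written compactly as a short function. This is a sketch under our own naming assumptions (\texttt{crimes\_by\_region} maps each sub-region to per-tract crime counts); fractional officer counts are left unrounded. \begin{verbatim}
# Proportional allocation sketch (names are illustrative assumptions).
def allocate(total_officers, crimes_by_region):
    # C: total crimes over all sub-regions available for allocation.
    C = sum(sum(tracts.values()) for tracts in crimes_by_region.values())
    allocation = {}
    for region, tracts in crimes_by_region.items():
        C_si = sum(tracts.values())
        T_si = total_officers * C_si / C      # T_{s_i} = T * C_{s_i} / C
        if C_si > 0:
            # internal sub-allocation: tracts with more crimes get more
            allocation[region] = {t: T_si * c / C_si
                                  for t, c in tracts.items()}
        else:
            allocation[region] = {t: 0.0 for t in tracts}
    return allocation

# Example: two sub-regions, T = 10,000 officers.
regions = {"A": {"tract1": 30, "tract2": 70}, "B": {"tract3": 100}}
print(allocate(10_000, regions))
\end{verbatim}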
It was also observed that the districts presenting a greater percentage difference between the numbers of police officers allocated by the two strategies are those with more non-populated census tracts, that is, tracts with a floating population density below the threshold $D^*$, as estimated in \cite{caminha2017}. \begin{figure}[!h] \includegraphics[width=0.48\textwidth]{Fig4.pdf} \caption{{\bf Differences and similarities among the studied allocations.} (a) highlights in black the neighborhoods that had the most similar allocation under the \textit{RPA and FPA Strategies}. (b) highlights the neighborhoods with the highest difference in the number of police officers allocated. In both figures 24 neighborhoods are highlighted, 20\% of the city total.} \label{fig4} \end{figure} In Figure \ref{fig5} a more detailed comparison between the two allocation strategies can be observed. Panel (a) illustrates interpolation functions \cite{deboor1978} of the number of police officers allocated per neighborhood by the two strategies. Comparing the areas formed by the interpolation curves and the $x$ axis reveals approximately 15\% dissimilarity between the allocations. This dissimilarity can be observed more clearly in Figure \ref{fig5} (b), where the interpolation functions of the histograms generated from the number of police officers allocated per neighborhood according to the two strategies are shown. The blue line represents the interpolation function of the \textit{RPA Strategy} data, and the red line represents the estimated function for the \textit{FPA Strategy}. The regions in light red represent areas where there is no intersection; added together, these regions represent 15\% of the total area. \begin{figure}[!h] \includegraphics[width=0.48\textwidth]{Fig5.pdf} \caption{{\bf \textit{RPA and FPA Strategy} statistical comparison.} (a) illustrates the number of police resources allocated per neighborhood. The blue line represents a \textit{Cubic Spline Interpolation} \cite{deboor1978} applied to the values found with the \textit{RPA Strategy}; the red line is the same interpolation applied to the \textit{FPA Strategy}. (b) shows the histograms of the allocations in the neighborhoods of the city. For better visualization, the histograms were generated with 20 bins \cite{wand1997}.} \label{fig5} \end{figure} This difference quantifies the inefficacy of the {\it RPA Strategy}. While the allocation produced by the {\it FPA Strategy} is strongly correlated with the flow of people, the {\it RPA Strategy} fails to capture the scaling found by Caminha {\it et al.} \cite{caminha2017}. Recall that their study found a superlinear relationship between property crimes and floating population, with exponent $\beta = 1.15 \pm 0.04$. Figure \ref{fig6} shows the correlation between the number of police resources allocated and the floating population under the two strategies. In (a) the correlation between the resources allocated and the floating population under the FPA strategy is shown.
There is a clear superlinear relation, with an exponent of $\beta = 1.18 \pm 0.05$ and a strong coefficient of determination \cite{rawlings2001, montgomery2015} ($R^2 = 0.83$). On the other hand, in (b), although a superlinear relation also appears, the determination coefficient ($R^2 = 0.70$) as well as the standard error of the exponent \cite{rawlings2001, montgomery2015} ($\pm 0.11$) reveal that the {\it RPA Strategy} is less adequate for the city of Fortaleza. Another important feature indicating the inappropriateness of the {\it RPA Strategy} is also observed in Figure \ref{fig6}, specifically in the dispersion of the dots (clusters). In (b), we can see four clusters of floating population with considerable activity (flow of people) but few police resources. This happens because the boundaries between neighborhoods sometimes divide the floating clusters, which makes a precise allocation of resources in those regions difficult. In general, although the {\it RPA Strategy} distributes resources in a way that follows a power law, it is imprecise because it aims at capturing the influence of the floating population only indirectly, via the incidence of crime. That is to say, since crime occurs due to the presence of people, looking at crime is a way of accounting for the floating population. This is not, however, the best approach, because it fails to capture the disproportionate potential for crime created by the existence of clusters of floating population. When the {\it FPA Strategy} is applied, both the cause (the flow of people in a region) and the amount of crime are considered in determining the amount of resources to be allotted. In doing so, it is possible to statistically reproduce (in terms of exponent and standard error) the superlinear relation found by Caminha {\it et al.} \cite{caminha2017}. \begin{figure}[!h] \includegraphics[width=0.48\textwidth]{Fig6.pdf} \caption{{\bf Correlations between police officers and floating population under the \textit{FPA and RPA Strategies}.} (a) and (b) illustrate the correlations achieved for the \textit{FPA and RPA Strategies}, respectively. The $x$-axis represents the floating population and the $y$-axis the number of police officers allocated. The red lines represent simple linear regressions applied to the data, the blue continuous lines represent the Nadaraya-Watson method \cite{nadaraya1964,watson1964}, and the blue dashed lines delimit the 95\% confidence interval (CI) estimated by bootstrap.} \label{fig6} \end{figure} \section{Conclusion} This paper presented a study that investigates new ways of allocating police resources within urban space. In contrast to conventional allocation policies, which allocate resources across the city using administrative units, an allocation strategy was presented that distributes police by clusters of floating population, which have already been shown to be much more precise in explaining the behavior of crimes against property in a city \cite{caminha2017}. This precision is due to the fact that the borders of population flux often go beyond the boundaries of administrative divisions, and clustering algorithms identify the ``islands'' formed by those clusters, which are naturally strategic regions for combating crime. Our study reveals that allocating police resources by clusters of floating population leads to a distribution of resources significantly different from strategies that allocate resources on the basis of administrative regions.
More specifically, we show that an allocation based on clusters of floating population tends to be more adequate for fighting crimes against property, because the distribution of police resources will naturally follow a power law; this is desirable since crime is expected to grow disproportionately in areas with a high density of floating population. The aspects discussed here open new lines of investigation. In particular, it is important to notice that the work by Caminha et al. has also shown that for certain types of crimes (e.g.\ peace disturbance) the superlinear relationship is only captured on the basis of administrative areas that account for features of the resident population, rather than clusters of floating population. This indicates that it is necessary to think of a hybrid strategy, in which different policies and different divisions of the urban space are taken into consideration for each type of crime. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Conclusion and Open Problems} \label{sec:conc} We investigated the tail behavior of the gradient noise in deep neural networks and empirically showed that the gradient noise is highly non-Gaussian. This outcome enabled us to analyze SGD as an SDE driven by a L\'{e}vy motion and establish a bridge between SGD and existing theoretical results, which sheds more light on the behavior of SGD, especially in terms of its preference for wide minima. This study also brings up interesting open questions and future directions: (i) While the current metastability theory applies to continuous-time processes, the behavior of the discretized process and its dependence on the algorithm parameters (e.g., the step-size, minibatch size) are not clear and have yet to be investigated. (ii) We observe that, especially during the first iterations, the tail-index depends on the current state $\mathbf{w}_k$, which suggests analyzing SGD as a stable-like process \cite{bass1988uniqueness} where the tail-index can depend on time. However, the metastability behavior of these processes is not clear at the moment, and the corresponding theory is still in an early phase \cite{kuhwald2016bistable}. (iii) Furthermore, an extension of the current metastability theory that includes minima with zero modes is also missing and appears to be a challenging yet crucial direction for future research. \section{Experimental Setup and Methodology} \label{sec:exps} \textbf{Experimental setup: } We investigate the tail behavior of the stochastic gradient noise in a variety of scenarios. We first consider a fully-connected network (FCN) on the MNIST and CIFAR10 datasets. For this model, we vary the depth (i.e.\ the number of layers) in the set $\{2,3,\dots,10\}$, the width (i.e.\ the number of neurons per layer) in the set $\{2,4,8,\dots,1024\}$, and the minibatch size from $1$ to the full batch. We then consider a convolutional neural network (CNN) architecture (AlexNet) on the CIFAR10 and CIFAR100 datasets, scaling the number of filters in each convolutional layer in the range $\{2,4,\dots,512\}$. We randomly split the MNIST dataset into train and test parts of sizes $60$K and $10$K, and the CIFAR10 and CIFAR100 datasets into train and test parts of sizes $50$K and $10$K, respectively. The total number of parameters $p$ ranges from several thousand to tens of millions. For both fully connected and convolutional settings, we run each configuration with the negative-log-likelihood (i.e.\ cross-entropy) loss and with the linear hinge loss, and we repeat each experiment with three different random seeds.
The training algorithm is vanilla SGD, with no explicit modification such as momentum or weight decay. Training runs until 100\% training accuracy is achieved or until the maximum number of iterations is reached (the latter limit is effective in the under-parametrized models). At every $100$th iteration, we log the full training and test accuracies, and the tail estimate of the gradients that are sampled using the corresponding minibatch size. The codebase is implemented in Python using PyTorch and is provided in the supplementary material. The total runtime is $\sim$3 weeks on 8 relatively modern GPUs. \textbf{Method for tail-index estimation:} Estimating the tail-index of an extreme-value distribution is a long-standing topic. Some of the well-known estimators for this task are \cite{hill1975simple,pickands1975statistical,dekkers1989moment,de1998comparison}. Despite their popularity, these methods are not specifically developed for $\alpha$-stable distributions, and it has been shown that they might fail when estimating the tail-index of $\alpha$-stable distributions \cite{mittnik1996tail,paulauskas2011once}. In this study, we use a relatively recent estimator proposed in \cite{mohammadi2015estimating} for $\alpha$-stable distributions. It is given in the following theorem. \begin{thm}[\cite{mohammadi2015estimating}] Let $\{X_i\}_{i=1}^K$ be a collection of random variables with $X_i \sim {\cal S}\alpha{\cal S}(\sigma)$ and $K = K_1 \times K_2$. Define $Y_i \triangleq \sum_{j=1}^{K_1} X_{j+(i-1)K_1} \>$ for $i \in \llbracket 1, K_2 \rrbracket$. Then, the estimator \begin{align} \label{eqn:alpha_estim} \widehat{\phantom{a}\frac1{\alpha}\phantom{a}} \hspace{-4pt} \triangleq \hspace{-2pt} \frac1{\log K_1} \Bigl(\frac1{K_2 } \sum_{i=1}^{K_2} \log |Y_i| - \frac1{K} \sum_{i=1}^K \log |X_i| \Bigr) \end{align} converges to $1/{\alpha}$ almost surely, as $K_2 \rightarrow \infty$. \end{thm} As shown in Theorem 2.3 of \cite{mohammadi2015estimating}, this estimator admits a provably faster convergence rate and smaller asymptotic variance than all the aforementioned methods. In order to verify the accuracy of this estimator, we conduct a preliminary experiment, where we first generate $K = K_1 \times K_2$ many ${\cal S}\alpha{\cal S}(1)$ distributed random variables with $K_1= 100$, $K_2 =1000$ for $100$ different values of $\alpha$. Then, we estimate $\alpha$ by using $\hat{\alpha} \triangleq (\widehat{\phantom{a}\frac1{\alpha}\phantom{a}})^{-1}$. We repeat this experiment $100$ times for each $\alpha$. As shown in Figure~\ref{fig:exp_synth}, the estimator is very accurate for a large range of $\alpha$. Due to its favorable theoretical properties, such as independence of the scale parameter $\sigma$, combined with its empirical stability, we choose this estimator in our experiments. In order to estimate the tail-index $\alpha$ at iteration $k$, we first partition the set of data points $\mathcal{D} \triangleq \{1,\dots,n\}$ into disjoint sets $\Omega_k^{i} \subset \mathcal{D}$ of size $b$, such that the union of these subsets gives all the data points. Formally, for all $i,j =1,\dots, n/b$, $|\Omega_k^i| = b$, $\cup_{i} \Omega_k^i = \mathcal{D}$, and $\Omega_k^i \cap \Omega_k^j=\emptyset$ for $i \neq j$. This approach is similar to sampling without replacement. We then compute the full gradient $\nabla f(\mathbf{w}_k)$ and the stochastic gradients $\nabla \tilde{f}_{\Omega_k^i}(\mathbf{w}_k)$ for each minibatch $\Omega_k^i$.
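The estimator \eqref{eqn:alpha_estim} itself is only a few lines of code. The sketch below is our own illustrative implementation (function names are ours), together with the synthetic sanity check described above; it uses \texttt{scipy.stats.levy\_stable} to generate ${\cal S}\alpha{\cal S}$ samples. \begin{verbatim}
import numpy as np
from scipy.stats import levy_stable

def inv_alpha_hat(X, K1):
    # Estimator of 1/alpha from eqn. (alpha_estim): the Y_i are sums of
    # K1 consecutive samples; mean log |Y| and mean log |X| are compared.
    X = np.asarray(X)
    K2 = X.size // K1
    X = X[:K1 * K2]
    Y = X.reshape(K2, K1).sum(axis=1)
    return (np.log(np.abs(Y)).mean()
            - np.log(np.abs(X)).mean()) / np.log(K1)

# Sanity check on synthetic SaS(1) data (beta = 0 is the symmetric case).
alpha_true = 1.5
X = levy_stable.rvs(alpha_true, 0.0, size=100 * 1000, random_state=0)
print(1.0 / inv_alpha_hat(X, K1=100))   # close to 1.5
\end{verbatim} In the actual pipeline, $X$ instead holds the concatenated coordinates of the gradient-noise vectors, giving $K = pn/b$, as described next.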
We finally compute the stochastic gradient noises $U^i_k(\mathbf{w}_k) = \nabla \tilde{f}_{\Omega_k^i}(\mathbf{w}_k) - \nabla f(\mathbf{w}_k)$, vectorize each $U^i_k(\mathbf{w}_k)$ and concatenate them to obtain a single vector, and compute the reciprocal of the estimator \eqref{eqn:alpha_estim}. In this case, we have $K=pn/b$ and we set $K_1$ to the divisor of $K$ that is closest to $\sqrt{K}$. \section{Results} In this section we present the most important and representative results. We have observed that, in all configurations, the choice between the two loss functions and among the three different initializations yields no significant difference. Therefore, throughout this section, we will focus on the negative-log-likelihood loss. Unless stated otherwise, we set the minibatch size to $b=500$ and the step-size to $\eta = 0.1$. \begin{figure}[t] \centering \subfigure[MNIST]{ \includegraphics[width=0.39\columnwidth]{mnist_NLL_fc_widths_tr.pdf} \includegraphics[width=0.39\columnwidth]{mnist_NLL_fc_depths_tr.pdf} } % \subfigure[CIFAR10]{\includegraphics[width=0.39\columnwidth]{cifar10_NLL_fc_widths_tr.pdf} \includegraphics[width=0.39\columnwidth]{cifar10_NLL_fc_depths_tr.pdf} } \vspace{-15pt} \caption{Estimation of $\alpha$ for varying widths and depths in the FCN. The curves in the left figures correspond to different depths, and the ones in the right figures correspond to different widths.} \label{fig:exp_fc_widths} \end{figure} \textbf{Effect of varying network size: } In our first set of experiments, we measure the tail-index while varying the widths and depths of the FCN, and the widths (i.e.\ the number of filters) of the CNN. For very small sizes the networks perform poorly; therefore, we only illustrate sufficiently large network sizes, which yield similar accuracies. For these experiments, we compute the average of the tail-index measurements over the last $10$K iterations (i.e.\ when $\hat{\alpha}$ becomes stationary) to focus on the late-stage dynamics. Figure~\ref{fig:exp_fc_widths} shows the results for the FCN. The first striking observation is that in all cases the estimated tail-index is far from $2$ with very high confidence (the variance of the estimates was around $0.001$), meaning that the distribution of the gradient noise is highly non-Gaussian. For the MNIST dataset, we observe that $\alpha$ systematically decreases with increasing network size, and this behavior becomes more prominent with depth. This result shows that, for MNIST, increasing the dimension of the network results in gradient noise with heavier tails and therefore increases the probability of ending up in a wider basin. For the CIFAR10 dataset, we still observe that $\alpha$ is far from $2$; however, in this case, increasing the network size does not have a clear effect on $\alpha$: in all cases, we observe that $\alpha$ lies in the range $1.1$--$1.2$. \begin{figure}[t!] \centering \subfigure[CIFAR10 ]{ \includegraphics[width=0.39\columnwidth]{cifar10_NLL_alexnet_widths.pdf} \includegraphics[width=0.39\columnwidth]{cifar10_NLL_alexnet_widths_acc.pdf}} \subfigure[CIFAR100 ]{ \includegraphics[width=0.39\columnwidth]{cifar100_NLL_alexnet_widths.pdf} \includegraphics[width=0.39\columnwidth]{cifar100_NLL_alexnet_widths_acc.pdf}} \vspace{-15pt} \caption{The accuracy and $\hat{\alpha}$ of the CNN for varying widths. } \label{fig:cnn_widths} \end{figure} Figure~\ref{fig:cnn_widths} shows the results for the CNN.
In this figure, we also depict the train and test accuracies, as well as the tail-index estimated on the test set. These results show that, for both CIFAR10 and CIFAR100, the tail-index is extremely low in the under-parametrized regime (e.g.\ when the width is $2$, $4$, or $8$ for CIFAR10). As we increase the size of the network, the value of $\alpha$ increases until the network performs reasonably well, and then stabilizes in the range $1.0$--$1.1$. We also observe that $\alpha$ behaves similarly on the train and test sets\footnote{We observed a similar behavior in the under-parametrized FCN; however, we did not plot those results to avoid clutter.}. These results show that there is a strong interplay between the network architecture, the dataset, and the algorithm dynamics: (i) the size of the network can strongly influence $\alpha$; (ii) for the exact same network architecture, the choice of the dataset has a significant impact not only on the landscape of the problem, but also on the noise characteristics, and hence on the algorithm dynamics. \textbf{Effect of the minibatch size: } In our second set of experiments, we investigate the effect of the size of the minibatch on $\alpha$. We focus on the FCN and monitor the behavior of $\alpha$ for different network and minibatch sizes $b$. Figure~\ref{fig:exp_fc_mbscale} illustrates the results. These rather remarkable results show that, as opposed to the common belief that the gradient noise behaves similarly to a Gaussian for large $b$, the tail-index does not increase at all with increasing $b$. We observe that $\alpha$ stays almost the same when the depth is $2$ and moves within a small interval when the depth is set to $4$. We note that we obtained the same train and test accuracies for the different minibatch sizes. \begin{figure}[t] \centering \subfigure[Depth = 2]{\includegraphics[width=0.39\columnwidth]{fc_mnist_minibatch_22.pdf}} \subfigure[Depth = 4]{\includegraphics[width=0.39\columnwidth]{fc_mnist_minibatch_42.pdf}} \vspace{-10pt} \caption{Estimation of $\alpha$ for varying minibatch sizes.} \label{fig:exp_fc_mbscale} \end{figure} \textbf{Tail behavior throughout the iterations: } So far, we have focused on the last iterations of SGD, where $\alpha$ is in a stationary regime. In our last set of experiments, we shift our focus to the first iterations and report an interesting behavior that we observed in almost all our experiments. As a representative example, in Figure~\ref{fig:exp_iter_fc} we show the temporal evolution of SGD for the FCN with $9$ layers and $512$ neurons per layer. The results clearly show that there are two distinct phases of SGD (in this configuration, before and after iteration $1000$). In the first phase, the loss decreases very slowly, the accuracy slightly increases, and, more interestingly, $\alpha$ rapidly decreases. When $\alpha$ reaches its lowest level, the process incurs a jump, which causes a sudden decrease in the accuracy. After this point the process recovers again, and we see a stationary behavior in $\alpha$ and an increasing behavior in the accuracy. The fact that the process jumps when $\alpha$ is at its smallest value provides strong support for our assumptions and the metastability theory discussed in the previous section. Furthermore, these results further strengthen the view that SGD crosses barriers at the very initial phase.
On the other hand, our current analysis is not able to determine whether the process jumps into a different basin or into a `better' part of the same basin, and we leave this question for future work. \begin{figure}[t] \centering \vspace{5pt} \subfigure[MNIST]{ \includegraphics[width=0.33\columnwidth]{mnist_NLL_fc_9_9_iters.pdf} \includegraphics[width=0.33\columnwidth]{mnist_NLL_fc_9_9_iters_acc.pdf} \includegraphics[width=0.33\columnwidth]{mnist_NLL_fc_9_9_iters_loss.pdf} } \subfigure[CIFAR10]{ \includegraphics[width=0.33\columnwidth]{cifar10_NLL_fc_9_9_iters.pdf} \includegraphics[width=0.33\columnwidth]{cifar10_NLL_fc_9_9_iters_acc.pdf} \includegraphics[width=0.32\columnwidth]{cifar10_NLL_fc_9_9_iters_loss.pdf} } \vspace{-12pt} \caption{The iteration-wise behavior of $\alpha$ for the FCN.} \vspace{-5pt} \label{fig:exp_iter_fc} \end{figure} \section{Introduction} \label{sec:intro} \textbf{Context and motivation: } Deep neural networks have revolutionized machine learning and are in ubiquitous use across many application domains \cite{hinton-nature,Krizhevsky12,Hinton12}. In full generality, many key tasks in deep learning reduce to solving the following optimization problem: \begin{align} \mathbf{w}^\star = \argmin_{\mathbf{w} \in \mathbb{R}^p} \Bigl\{ f(\mathbf{w}) \triangleq \frac1{n} \sum\nolimits_{i=1}^n f^{(i)}(\mathbf{w}) \Bigr\} \end{align} where $\mathbf{w}\in\mathbb{R}^p$ denotes the weights of the neural network, $f:\mathbb{R}^p\to\mathbb{R}$ denotes the loss function that is typically non-convex in $\mathbf{w}$, each $f^{(i)}$ denotes the (instantaneous) loss function contributed by the \emph{data point} $i \in \{1, \dots, n\}$, and $n$ denotes the total number of data points. Stochastic gradient descent (SGD) is one of the most popular approaches for attacking this problem in practice and is based on the following iterative updates: \begin{align} \mathbf{w}_{k+1} = \mathbf{w}_{k} - \eta \nabla \tilde{f}_{k}(\mathbf{w}_k) \label{eqn:sgd_main} \end{align} where $k \in \{1, \dots, K\}$ denotes the iteration number and $\nabla \tilde{f}_{k}$ denotes the stochastic gradient at iteration $k$, defined as follows: \begin{align} \label{eqn:stoch_grad} \nabla \tilde{f}_{k} (\mathbf{w}) \triangleq \nabla \tilde{f}_{\Omega_k} (\mathbf{w}) \triangleq \frac1{b} \sum\nolimits_{i \in \Omega_k} \nabla f^{(i)}(\mathbf{w}). \end{align} Here, $\Omega_k \subset \{1,\dots,n\}$ is a random subset drawn with or without replacement at iteration $k$, and $b = |\Omega_k|$ denotes the number of elements in $\Omega_k$. SGD is widely used in deep learning, with great success owing to its computational efficiency \cite{bottou2010large,bottou2008tradeoffs}. Beyond efficiency, understanding how SGD performs better than its full-batch counterpart in terms of test accuracy remains a major challenge. Even though SGD seems to find zero-loss solutions on the training landscape (at least in certain regimes \cite{Zhang16, sagun2014explorations, keskar2016large, Geiger18}), it appears that the algorithm finds solutions with different properties depending on how it is tuned \cite{sutskever2013importance, keskar2016large, jastrzkebski2017three, hoffer2017train, masters2018revisiting, smith2017don}. Despite the fact that the impact of SGD on generalization has been studied \cite{advani2017high, wu2018sgd, neyshabur2017exploring}, a satisfactory theory that can explain its success in a way that encompasses such peculiar empirical properties is still lacking.
A popular approach for analyzing SGD is based on considering SGD as a discretization of a continuous-time process \cite{mandt2016variational,jastrzkebski2017three,pmlr-v70-li17f,hu2017diffusion,zhu2018anisotropic,chaudhari2018stochastic}. This approach mainly requires the following assumption\footnote{We note that assumptions more sophisticated than \eqref{eqn:noise_gauss} have been made regarding the covariance matrix of the Gaussian distribution (e.g.\ state-dependent, anisotropic). However, in all these cases the resulting distribution is still Gaussian, so the same criticism holds.} on the stochastic gradient noise $U_k(\mathbf{w}) \triangleq \nabla \tilde{f}_{k} (\mathbf{w}) - \nabla f(\mathbf{w})$: \begin{align} U_k(\mathbf{w}) \sim {\cal N}(\mathbf{0}, \sigma^2 \mathbf{I}), \label{eqn:noise_gauss} \end{align} where ${\cal N}$ denotes the multivariate (Gaussian) normal distribution and $\mathbf{I}$ denotes the identity matrix of appropriate size. The rationale behind this assumption is that, if the size of the minibatch $b$ is large enough, we can invoke the Central Limit Theorem (CLT) and assume that the distribution of $U_k$ is approximately Gaussian. Then, under this assumption, \eqref{eqn:sgd_main} can be written as follows: \begin{align} \mathbf{w}_{k+1} = \mathbf{w}_{k} - \eta \nabla f(\mathbf{w}_k) + \sqrt{\eta} \sqrt{\eta \sigma^2} Z_k, \label{eqn:sgd_gauss} \end{align} where $Z_k$ denotes a standard normal random variable in $\mathbb{R}^p$. If we further assume that the step-size $\eta$ is small enough, then the continuous-time analogue of the discrete-time process \eqref{eqn:sgd_gauss} is the following stochastic differential equation (SDE):\footnote{ In a recent work taking a similarly critical view of recent theories on SGD dynamics, some theoretical concerns have also been raised about the SDE approximation of SGD \cite{yaida2018fluctuationdissipation}. We believe that the SDE representation is sufficiently accurate for small step-sizes and a good, if not the best, proxy for understanding the behavior of SGD. } \begin{align} \mathrm{d} \mathbf{w}_t = - \nabla f(\mathbf{w}_t) \mathrm{d} t + \sqrt{\eta \sigma^2} \mathrm{d} \mathrm{B}_t , \label{eqn:sgd_langevin} \end{align} where $\mathrm{B}_t$ denotes the standard Brownian motion. This SDE is a variant of the well-known \emph{Langevin diffusion}, and under mild regularity assumptions on $f$, one can show that the Markov process $(\mathbf{w}_t)_{t\geq 0}$ is ergodic with a unique invariant measure, whose density is proportional to $\exp(-f(x)/(\eta \sigma^2))$ for any $\eta>0$~\cite{Roberts03}. From this perspective, the SGD recursion in \eqref{eqn:sgd_gauss} can be seen as a first-order Euler-Maruyama discretization of the Langevin dynamics (see also \cite{pmlr-v70-li17f,jastrzkebski2017three,hu2017diffusion}), which is often referred to as the Unadjusted Langevin Algorithm (ULA) \cite{Roberts03,lamberton2003recursive,durmus2015non}. Based on this observation, \cite{jastrzkebski2017three} focused on the relation between this invariant measure and the algorithm parameters, namely the step-size $\eta$ and the mini-batch size, as a function of $\sigma^2$. They concluded that the ratio of the learning rate to the batch size is the control parameter that determines the width of the minima found by SGD. Furthermore, they revisited the famous wide-minima folklore \cite{hochreiter1997flat}: among the minima found by SGD, the wider the minimum, the better it performs on the test set.
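To make the discretization explicit, the following is a minimal sketch of the recursion \eqref{eqn:sgd_gauss}, i.e.\ the Euler--Maruyama (ULA) discretization of \eqref{eqn:sgd_langevin}. The toy quadratic objective and all names are our own illustrative choices. \begin{verbatim}
import numpy as np

def ula(grad_f, w0, eta, sigma2, n_iters, seed=0):
    # w_{k+1} = w_k - eta*grad f(w_k) + sqrt(eta)*sqrt(eta*sigma^2)*Z_k,
    # the Euler-Maruyama step of dw = -grad f dt + sqrt(eta*sigma^2) dB.
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for _ in range(n_iters):
        noise = rng.standard_normal(w.shape)
        w = w - eta * grad_f(w) + np.sqrt(eta) * np.sqrt(eta * sigma2) * noise
    return w

# Toy usage: f(w) = ||w||^2 / 2, so grad f(w) = w.
print(ula(lambda w: w, w0=np.ones(3), eta=0.01, sigma2=1.0, n_iters=10_000))
\end{verbatim}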
However, there are several fundamental issues with this approach, which we explain below. We first illustrate a typical mismatch between the Gaussianity assumption and the empirical behavior of the stochastic gradient noise. In Figure~\ref{fig:noise_norms}, we plot the histogram of the norms of the stochastic gradient noise computed using a convolutional neural network in a real classification problem and compare it to the histogram of the norms of Gaussian random variables. It can be clearly observed that the shape of the real histogram is very different from the Gaussian one and shows a \emph{heavy-tailed} behavior. In addition to these empirical observations, the Gaussianity assumption also raises theoretical issues. The first issue is that the current SDE analyses of SGD are based on the \emph{invariant measure} of the SDE, which implicitly assumes that sufficiently many iterations have been taken to converge to that measure. Recent results on ULA \cite{raginsky17a,xu2018global} have shown that the required number of iterations to reach the invariant measure often grows exponentially with the dimension $p$. This result contradicts current practice: considering the large size of neural networks and the limited computational budget, only a limited number of iterations -- much smaller than $\exp(\mathcal{O}(p))$ -- can be taken. This conflict becomes clearer in the light of recent works that studied the \emph{local} behavior of ULA \cite{tzen2018local,zhang17b}. These studies showed that ULA will get close to the nearest local optimum in polynomial time; however, the required amount of time for escaping from that local optimum increases exponentially with the dimension. Therefore, the phenomenon that SGD prefers wide minima within a considerably small number of iterations cannot be explained by the asymptotic distribution of the SDE given in \eqref{eqn:sgd_langevin}. \begin{figure}[t] \centering \subfigure[Real]{\includegraphics[width=0.32\columnwidth]{cifar10_NLL_alexnet_noisenorm_hist.pdf}} \subfigure[Gaussian]{\includegraphics[width=0.32\columnwidth]{cifar10_NLL_alexnet_noisenorm_gauss.pdf}} \subfigure[$\alpha$-stable]{\includegraphics[width=0.32\columnwidth]{cifar10_NLL_alexnet_noisenorm_sas.pdf} \label{fig:hist_sas}} \vspace{-5pt} \caption{(a) The histogram of the norms of the gradient noise computed with AlexNet on CIFAR10. (b) and (c): the histograms of the norms of (scaled) Gaussian and $\alpha$-stable random variables, respectively. } \vspace{-10pt} \label{fig:noise_norms} \end{figure} The second issue is related to the local behavior of the process and becomes clear when we consider the \emph{metastability} analysis of Brownian motion-driven SDEs. These studies \cite{freidlin1998random,bovier2004metastability,Imkeller2010} consider the case where $\mathbf{w}_0$ is initialized in a quadratic basin and then analyze the minimum time $t$ at which $\mathbf{w}_t$ exits that basin. They show that this so-called \emph{first exit time} depends \emph{exponentially} on the height of the basin; however, this dependency is only \emph{polynomial} in the width of the basin. These theoretical results directly contradict the wide-minima phenomenon: even if the height of a basin is only slightly larger, the exit time from this basin will be dominated by its height, which implies that the process would stay longer in (or, in other words, `prefer') deeper minima as opposed to wider minima.
The reason why the exit time is dominated by the height is the \emph{continuity} of the Brownian motion, which is in fact a direct consequence of the Gaussian noise assumption. A final remark on the issues of this approach is the observation that the landscape is flat at the bottom regardless of the batch size used in SGD \cite{sagun2017empirical}. In particular, the spectrum of the Hessian at a near-critical point with close-to-zero loss value has many near-zero eigenvalues. Therefore, the local curvature measures used as a proxy for the width of a basin correlate with the magnitudes of the few large eigenvalues of the Hessian. Besides, during the dynamics of SGD it has been observed that the algorithm does not cross barriers except perhaps in the very initial phase \cite{xing2018walk,Baity18}. Such dependence of the width on an essentially flat landscape, combined with the lack of explicit barrier crossing during the SGD descent, forces us to rethink the analysis of basin hopping under noisy dynamics. \textbf{Proposed framework: } In this study, we aim at addressing these contradictions and come up with an arguably better-suited hypothesis for the stochastic gradient noise that has more pertinent theoretical implications for the phenomena associated with SGD. In particular, we go back to \eqref{eqn:stoch_grad} and \eqref{eqn:noise_gauss} and reconsider the application of the CLT. The \emph{classical} CLT assumes that $U_k$ is a sum of many independent and identically distributed (i.i.d.)\ random variables whose variance is \emph{finite}, and states that the law of $U_k$ converges to a Gaussian distribution, which then paves the way for \eqref{eqn:sgd_gauss}. Even though the finite-variance assumption seems natural and intuitive at first sight, it turns out that in many domains, such as turbulent motions \cite{weeks1995observation}, oceanic fluid flows \cite{woyczynski2001levy}, finance \cite{mandelbrot2013fractals}, biological evolution \cite{jourdain2012levy}, and audio signals \cite{liutkus2015generalized}, the assumption can fail to hold (see \cite{duan} for more examples). In such cases, the classical CLT along with the Gaussian approximation no longer holds. While this might seem daunting, fortunately one can prove an \emph{extended} CLT and show that the law of the sum of such i.i.d.\ variables with infinite variance still converges to a family of \emph{heavy-tailed} distributions called the $\alpha$-stable distributions \cite{paul1937theorie}. As we will detail in Section~\ref{sec:levy_sde}, these distributions are parametrized by their \emph{tail-index} $\alpha \in (0,2]$ and coincide with the Gaussian distribution when $\alpha =2$. In this study, we relax the finite-variance assumption on the stochastic gradient noise and, by invoking the extended CLT, assume that $U_k$ follows an $\alpha$-stable distribution, as hinted in Figure~\ref{fig:hist_sas}. By following a rationale similar to \eqref{eqn:sgd_gauss} and \eqref{eqn:sgd_langevin}, we reformulate SGD with this new assumption and consider its continuous-time limit for small step-sizes. Since the noise might not be Gaussian anymore (i.e.\ when $\alpha \neq 2$), the use of Brownian motion would not be appropriate in this case, and we need to replace it with the $\alpha$-stable L\'{e}vy motion, whose increments have an $\alpha$-stable distribution \cite{yanovsky2000levy}.
Due to the heavy-tailed nature of the $\alpha$-stable distribution, the L\'{e}vy motion might incur large discontinuous jumps and therefore exhibits a fundamentally different behavior than the Brownian motion, whose paths are, on the contrary, almost surely continuous. As we will describe in detail in Section~\ref{sec:levy_sde}, these discontinuities are also reflected in the metastability properties of L\'{e}vy-driven SDEs, which indicate that, as soon as $\alpha <2$, the first exit time from a basin does \emph{not} depend on its height; on the contrary, it directly depends on its width and the tail-index $\alpha$. Informally, this implies that the process will \emph{escape} from narrow minima -- no matter how deep they are -- and stay longer in wide minima. Besides, as $\alpha$ gets smaller, the probability of the dynamics jumping into a wide basin increases. Therefore, if the $\alpha$-stable assumption on the stochastic gradient noise holds, then the existing metastability results automatically provide strong theoretical insights for illuminating the behavior of SGD. \textbf{Contributions: } The main contributions of this paper are twofold: (i) we perform an extensive empirical analysis of the tail-index of the stochastic gradient noise in deep neural networks, and (ii) based on these empirical results, we bring an alternative perspective to the existing approaches for analyzing SGD and shed more light on the folklore that SGD prefers wide minima by establishing a bridge between SGD and the related theoretical results from statistical physics and stochastic analysis. We conduct experiments on the most common deep learning architectures. In particular, we investigate the tail behavior under fully-connected and convolutional models using the negative log likelihood and linear hinge loss functions on the MNIST, CIFAR10, and CIFAR100 datasets. For each configuration, we scale the size of the network and the batch size used in SGD and monitor the effect of each of these settings on the tail-index $\alpha$. Our experiments reveal several remarkable results: \begin{itemize}[noitemsep,topsep=0pt,leftmargin=*,align=left] \item In all our configurations, the stochastic gradient noise turns out to be highly non-Gaussian and possesses a heavy-tailed behavior. \item Increasing the size of the minibatch has very little impact on the tail-index, and, as opposed to the common belief that larger minibatches result in Gaussian gradient noise, the noise is still far from being Gaussian. \item There is a strong interaction between the network architecture, network size, dataset, and the tail-index, which ultimately determine the dynamics of SGD on the training surface. This observation supports the view that the geometry of the problem and the dynamics induced by the algorithm cannot be separated from each other. \item In almost all configurations, we observe two distinct phases of SGD throughout the iterations. During the first phase, the tail-index rapidly decreases, and SGD exhibits a clear jump when the tail-index is at its lowest value, which causes a sudden jump in the accuracy. This behavior strengthens the view that SGD crosses barriers at the very initial phase. \end{itemize} Our methodology also opens up several interesting future directions and open questions, as we discuss in Section~\ref{sec:conc}. \section{Stable distributions and SGD as a L\'{e}vy-Driven SDE} \label{sec:levy_sde} {The CLT states that the sum of i.i.d.
random variables with a finite second moment converges to a normal distribution as the number of summands grows. However, if the variables are heavy-tailed, the second moment may not exist; for instance, if their density $p(x)$ has a power-law tail decreasing as $1/|x|^{\alpha+1}$ with $0 < \alpha < 2$, then only the moments of order less than $\alpha$ exist. In this case, the generalized central limit theorem (GCLT) says that the sum of such variables converges to a distribution called the \emph{$\alpha$-stable} distribution as the number of summands grows (see e.g.\ \cite{fischer2010history}). In this work, we focus on the centered \emph{symmetric $\alpha$-stable} (${\cal S}\alpha{\cal S}$) distribution, which is the special case of $\alpha$-stable distributions that are symmetric around the origin. } \begin{figure}[t] \centering \includegraphics[width=0.48\columnwidth]{stablepdf.pdf} \includegraphics[width=0.48\columnwidth]{stablemotion.pdf} \caption{Left: ${\cal S}\alpha{\cal S}$ densities; right: $\mathrm{L}_t^\alpha$ for $p=1$. For $\alpha<2$, ${\cal S}\alpha{\cal S}$ becomes heavier-tailed and $\mathrm{L}_t^\alpha$ incurs jumps. } \vspace{-10pt} \label{fig:sas_lm} \end{figure} We can view the ${\cal S}\alpha{\cal S}$ distribution as a heavy-tailed generalization of a centered Gaussian distribution. The ${\cal S}\alpha{\cal S}$ distributions are defined through their characteristic function via $X\sim {\cal S}\alpha{\cal S}(\sigma) \iff \mathbb{E}[\exp(i \omega X)] = \exp(-|\sigma \omega|^\alpha)$. Even though their probability density function does not admit a closed-form formula except in special cases, their density decays with a power-law tail like $1/|x|^{\alpha+1}$, where $\alpha \in (0,2]$ is called the \emph{tail-index}, which determines the behavior of the distribution: as $\alpha$ gets smaller, the distribution has a heavier tail. In fact, the parameter $\alpha$ also determines the moments: $\mathds{E}[|X|^r] < \infty$ if and only if $r<\alpha$, implying that $X$ has infinite variance when $\alpha<2$. The parameter $\sigma \in \mathds{R}_+$ is known as the \emph{scale} parameter and controls the spread of $X$ around $0$. We recover the Gaussian distribution ${\cal N}(0,2\sigma^2)$ as a special case of ${\cal S}\alpha{\cal S}$ when $\alpha=2$. In this study, we make the following assumption on the stochastic gradient noise: \begin{align} [U_k(\mathbf{w})]_i \sim {\cal S}\alpha{\cal S}(\sigma(\mathbf{w})), \quad \forall i =1,\dots,p \label{eqn:noise_levy} \end{align} where $[v]_i$ denotes the $i$'th component of a vector $v$. Informally, we assume that each coordinate of $U_k$ is ${\cal S}\alpha{\cal S}$ distributed with the same $\alpha$, and the scale parameter $\sigma$ depends on the state $\mathbf{w}$. Here, this dependency is not crucial, since we are mainly interested in the tail-index $\alpha$, which can be estimated \emph{independently} of the scale parameter. Therefore, we will simply denote $\sigma(\mathbf{w})$ as $\sigma$ for clarity. By using the assumption \eqref{eqn:noise_levy}, we can rewrite the SGD recursion as follows: \begin{align} \mathbf{w}_{k+1} = \mathbf{w}_{k} - \eta \nabla f(\mathbf{w}_k) + \eta^{1/\alpha} \Bigl(\eta^{\frac{\alpha-1}{\alpha} } \sigma\Bigr) S_k, \label{eqn:sgd_alpha} \end{align} where $S_k \in \mathbb{R}^p$ is a random vector such that $[S_k]_i \sim {\cal S}\alpha{\cal S}(1)$.
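The components $[S_k]_i \sim {\cal S}\alpha{\cal S}(1)$ in \eqref{eqn:sgd_alpha} can be sampled with the standard Chambers--Mallows--Stuck method. The sketch below is our own illustration (names ours); library routines such as \texttt{scipy.stats.levy\_stable} can be used instead. \begin{verbatim}
import numpy as np

def sas_rvs(alpha, size, seed=0):
    # Chambers-Mallows-Stuck sampler for symmetric alpha-stable SaS(1).
    # For alpha = 2 it reduces to 2 sin(V) sqrt(W) ~ N(0, 2), matching the
    # convention SaS(sigma) <-> N(0, 2 sigma^2).
    rng = np.random.default_rng(seed)
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

# One step of eqn. (sgd_alpha) on a toy quadratic (grad f(w) = w), p = 10.
eta, sigma, alpha, p = 0.01, 1.0, 1.5, 10
w = np.ones(p)
w = (w - eta * w
     + eta ** (1 / alpha) * (eta ** ((alpha - 1) / alpha) * sigma)
     * sas_rvs(alpha, p))
\end{verbatim}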
If the step-size $\eta$ is small enough, we can consider the continuous-time limit of this discrete-time process, which is expressed by the following SDE driven by an $\alpha$-stable L\'{e}vy process: \begin{align} \mathrm{d} \mathbf{w}_t = - \nabla f(\mathbf{w}_t) \mathrm{d} t + \eta^{(\alpha-1)/\alpha} \sigma \> \mathrm{d} \mathrm{L}^\alpha_t, \label{eqn:sgd_levy} \end{align} where $\mathrm{L}^\alpha_t$ denotes the $p$-dimensional $\alpha$-stable L\'{e}vy motion with \emph{independent components}; in other words, each component of $\mathrm{L}^\alpha_t$ is an independent $\alpha$-stable L\'{e}vy motion in $\mathbb{R}$. For the scalar case, it is defined as follows for $\alpha \in (0,2]$ \cite{duan}: \begin{enumerate}[label=(\roman*),itemsep=0pt,topsep=0pt,leftmargin=*,align=left] \item $\mathrm{L}^\alpha_0 = 0$ almost surely. \item For $t_0<t_1 < \cdots < t_N$, the increments $ (\mathrm{L}^\alpha_{t_{i}} - \mathrm{L}^\alpha_{t_{i-1}} )$ are independent ($i = 1,\dots, N$). \item The difference $(\mathrm{L}^\alpha_t - \mathrm{L}^\alpha_s)$ and $\mathrm{L}^\alpha_{t-s}$ have the same distribution, ${\cal S}\alpha{\cal S}((t-s)^{1/\alpha})$, for $s<t$. \item $\mathrm{L}^\alpha_t$ is continuous in probability (i.e.\ it has \emph{stochastically continuous} sample paths): for all $\delta >0$ and $s\geq 0$, $p(|\mathrm{L}^\alpha_t - \mathrm{L}^\alpha_s| > \delta) \rightarrow 0$ as $t \rightarrow s$. \end{enumerate} When $\alpha = 2$, $\mathrm{L}^\alpha_t$ coincides with a scaled version of Brownian motion, $\sqrt{2} \mathrm{B}_t$. ${\cal S}\alpha{\cal S}$ and $\mathrm{L}^\alpha_t$ are illustrated in Figure~\ref{fig:sas_lm}. The SDE in \eqref{eqn:sgd_levy} exhibits a fundamentally different behavior than the one in \eqref{eqn:sgd_langevin}. This is mostly because $\mathrm{L}^\alpha_t$ is only stochastically continuous: its sample paths can contain a countable number of discontinuities, which are sometimes called `jumps'. In the rest of this section, we recall important theoretical results about this SDE and discuss their implications for SGD. For clarity of presentation and notational simplicity, we focus on the scalar case and consider the SDE \eqref{eqn:sgd_levy} in $\mathbb{R}$ (i.e.\ $p=1$); multidimensional generalizations of the metastability results presented in this paper can be found in \cite{imkeller2010first}. We rewrite \eqref{eqn:sgd_levy} as follows: \begin{eqnarray} \label{eq-levy-sde} \mathrm{d} w_t^\varepsilon = -\nabla f(w_t^\varepsilon) \mathrm{d} t + \varepsilon \mathrm{d} \mathrm{L}^\alpha_t \end{eqnarray} for $t\geq 0$, started from the initial point $w_0\in\mathbb{R}$, where $\mathrm{L}^\alpha_t$ is the $\alpha$-stable L\'evy process, $\varepsilon \geq 0$ is a parameter, and $f$ is a non-convex objective with $r \geq 2$ local minima. When $\varepsilon=0$, we recover the gradient descent dynamics in continuous time, $\mathrm{d} w_t^0 = -\nabla f(w_t^0) \mathrm{d} t$, whose stable points are the local minima. As soon as $\varepsilon >0$, however, these states become `metastable', meaning that there is a positive probability for $w_t^\varepsilon$ to transition from one basin to another. The time required for transitioning to another basin strongly depends on the characteristics of the injected noise. The two most important cases are $\alpha =2$ and $\alpha < 2$.
When $\alpha =2$ (i.e.\ under the Gaussianity assumption), the process $(w^\varepsilon_t)_{t \geq 0}$ is continuous, so it has to `climb' the basin all the way up in order to transition to another basin. This fact makes the transition time depend on the height of the basin. On the contrary, when $\alpha <2$, the process can incur discontinuities and does not need to cross the boundaries of the basin in order to transition to another one, since it can jump directly. This property is called the `transition phenomenon' \cite{duan} and makes the transition time depend mostly on the \emph{width} of the basin. In the rest of the section, we formalize these explanations. Under some assumptions on the objective $f$, it is known that the process \eqref{eq-levy-sde} admits a stationary density \cite{Samorodnitsky2003}. For a general $f$, an explicit formula for the equilibrium distribution is not known; however, when the noise level $\varepsilon$ is small enough, finer characterizations of the structure of the equilibrium density are known in dimension one. We next summarize known results in this area, which show that the L\'{e}vy-driven dynamics spends more time in `wide valleys' in the sense of \cite{entropy-sgd} as $\varepsilon$ goes to zero. Assume that $f$ is smooth with $r$ local minima $\{m_i\}_{i=1}^r$ separated by $r-1$ local maxima $\{s_i\}_{i=1}^{r-1}$, i.e. \begin{eqnarray*} -\infty := s_0 < m_1 < s_1 < \dots <s_{r-1} < m_r < s_r := \infty. \end{eqnarray*} Furthermore, assume that the local minima and maxima are not degenerate, i.e.\ $f''(m_i)>0$ and $f''(s_i)<0$ for every $i$. We also assume that the objective gradient satisfies the growth condition $|f'(w)| >|w|^{1+c}$ for some constant $c>0$ when $|w|$ is large enough. Each local minimum $m_i$ lies in the (interval) valley $S_i = (s_{i-1},s_i)$ of (width) length $L_i = |s_i-s_{i-1}|$. Consider also a $\delta$-neighborhood $B_i := \{ |x - m_i|\leq \delta \}$ around the local minimum, with $\delta>0$ small enough so that the neighborhood is contained in the valley $S_i$ for every $i$. We are interested in the first exit time from $B_i$ starting from a point $w_0\in B_i$ and the transition time $T_{w_0}^i(\varepsilon):= \inf \{ t\geq 0 : w_t^\varepsilon \in \cup_{j\neq i} B_j \}$ to a neighborhood of another local minimum; we will drop the dependency of the transition time on $w_0$ in our discussions when it is clear from the context. The following result shows that the transition times are asymptotically exponentially distributed in the limit of small noise and scale like $\frac{1}{\varepsilon^\alpha}$ with $\varepsilon$. \begin{thm}[\cite{pavlyukevich2007cooling}]\label{thm-levy-exit} For an initial point $w_0\in B_i$, in the limit $\varepsilon\to0$, the following statements hold regarding the transition time: \begin{eqnarray*} \mathbb{P}_{w_0}(w^\varepsilon_{T^i(\varepsilon)} \in B_j) &\to& q_{ij} q_i^{-1} \quad \mbox{if} \quad i\neq j, \\ \mathbb{P}_{w_0}(\varepsilon^\alpha T^i(\varepsilon) \geq u ) &\leq& e^{-q_i u} \quad \mbox{for any} \quad u\geq 0, \end{eqnarray*} where \begin{eqnarray} q_{ij} &=& \frac{1}{\alpha} \left| \frac{1}{|s_{j-1} - m_i |^\alpha} - \frac{1}{|s_{j} - m_i |^\alpha} \right|, \\ q_{i} &=& \sum_{j\neq i}q_{ij}.
\end{eqnarray} \end{thm} If the SDE \eqref{eq-levy-sde} were driven by Brownian motion instead, an analogous theorem to Theorem \ref{thm-levy-exit} would hold, saying that the transition times are still exponentially distributed, but the scaling $\varepsilon^\alpha$ needs to be replaced by $e^{2H/\varepsilon^2}$, where $H$ is the maximal depth of the basins to be traversed between the two local minima \cite{Day83,bovier2005metastability}. This means that, in the small-noise limit, Brownian-motion-driven gradient descent dynamics need exponential time to transition to another minimum, whereas L\'evy-driven gradient descent dynamics need only polynomial time. We also note from Theorem \ref{thm-levy-exit} that the mean transition time between valleys for the L\'evy SDE does not depend on the depth $H$ of the valleys, which is an advantage over Brownian-motion-driven SDEs in the presence of deep valleys. Informally, this difference is due to the fact that the Brownian-motion-driven SDE typically has to climb up a valley to exit it, whereas the L\'{e}vy-driven SDE can jump out. The following theorem says that, as $\varepsilon \to 0$, up to a normalization in time, the process $w_t^\varepsilon$ behaves like a finite state-space Markov process supported on the set of local minima $\{m_i\}_{i=1}^r$, admitting a stationary distribution $\pi = (\pi_i)_{i=1}^r$ with an infinitesimal generator $Q$. The process jumps between the valleys $S_i$, spending in each valley, in equilibrium, a fraction of time proportional to $\pi_i$, where the probabilities $\pi = (\pi_i)_{i=1}^r$ are given by the solution of the linear system $Q^T\pi = 0$. \begin{thm}[{\cite{pavlyukevich2007cooling}}]\label{thm-levy-markov} Let $w_0 \in S_i$, for some $ 1\leq i \leq r$. For $t\geq 0$, $w_{t\varepsilon^{-\alpha}}^\varepsilon \to Y_{m_i}(t)$, as $\varepsilon\to 0$, in the sense of finite-dimensional distributions, where $Y = (Y_{y}(t))_{t\geq 0}$ is a continuous-time Markov chain on the state space $\{m_1,m_2,\dots,m_r\}$ with the infinitesimal generator $Q = (q_{ij})_{i,j=1}^r$, where \begin{eqnarray} q_{ij} &=& \frac{1}{\alpha} \left| \frac{1}{|s_{j-1} - m_i |^\alpha} - \frac{1}{|s_{j} - m_i |^\alpha} \right|, \\ q_{ii} &=&-\sum \nolimits_{j\neq i} q_{ij}. \end{eqnarray} This process admits a stationary distribution $\pi$ satisfying $Q^T\pi = 0$. \end{thm} A consequence of this theorem is that the equilibrium probabilities $\pi_i$ are typically larger for ``wide valleys''. To see this, consider the special case illustrated in Figure \ref{fig:double_well} with $r=2$ local minima $m_1 < s_1 = 0 < m_2$ separated by a local maximum at $s_1 = 0$. In this example $m_2 > |m_1|$, so the second local minimum lies in the wider valley. A simple computation reveals $$ \pi_1 = \frac{|m_1|^\alpha}{|m_1|^\alpha + |m_2|^\alpha}, \quad \pi_2 = \frac{|m_2|^\alpha}{|m_1|^\alpha + |m_2|^\alpha}. $$ We see that $\pi_2 > \pi_1$; that is, in equilibrium the process spends more time in the wider valley. In particular, the ratio $\frac{\pi_2}{\pi_1} = \left(\frac{m_2}{|m_1|}\right)^\alpha$ grows with exponent $\alpha$ as the ratio $\frac{m_2}{|m_1|}$ of the valley widths grows. Consequently, if the gradient noise is indeed $\alpha$-stable distributed, these results directly provide theoretical evidence for the wide-minima behavior of SGD.
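The two-well computation above is easy to reproduce numerically. The sketch below (our own illustrative code) builds the generator $Q$ of the limiting Markov chain directly from the theorem and solves $Q^T\pi = 0$ for the stationary distribution. \begin{verbatim}
import numpy as np

def generator(minima, maxima, alpha):
    # q_ij = (1/alpha) | |s_{j-1}-m_i|^{-alpha} - |s_j-m_i|^{-alpha} |,
    # with s_0 = -inf and s_r = +inf (inf**-alpha = 0 at the boundaries).
    s = np.concatenate(([-np.inf], np.asarray(maxima, float), [np.inf]))
    m = np.asarray(minima, float)
    r = len(m)
    Q = np.zeros((r, r))
    for i in range(r):
        for j in range(r):
            if i != j:
                Q[i, j] = abs(abs(s[j] - m[i]) ** -alpha
                              - abs(s[j + 1] - m[i]) ** -alpha) / alpha
        Q[i, i] = -Q[i].sum()            # rows of a generator sum to zero
    return Q

def stationary(Q):
    # Solve Q^T pi = 0 with sum(pi) = 1 via least squares.
    r = Q.shape[0]
    A = np.vstack([Q.T, np.ones(r)])
    b = np.concatenate([np.zeros(r), [1.0]])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Two wells: m_1 = -1, m_2 = 2, separated by s_1 = 0, with alpha = 1.5.
pi = stationary(generator([-1.0, 2.0], [0.0], alpha=1.5))
print(pi)  # approx [0.261, 0.739] = |m_i|^alpha / (|m_1|^alpha+|m_2|^alpha)
\end{verbatim}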
\begin{figure}[t] \centering \subfigure[]{\label{fig:double_well} \includegraphics[width=0.49\columnwidth]{double-well-v7.png}} \subfigure[]{\label{fig:exp_synth} \includegraphics[width=0.45\columnwidth]{alphaestim_synth.pdf}} \vspace{-5pt} \caption{ (a) An objective with two local minima $m_1, m_2$ separated by a local maximum at $s_1 = 0$. (b) Illustration of the tail-index estimator $\hat{\alpha}$. } \vspace{-5pt} \end{figure} \section*{Acknowledgments} This work is partly supported by the French National Research Agency (ANR) as a part of the FBIMATRIX (ANR-16-CE23-0014) project. Mert G\"urb\"uzbalaban acknowledges support from the grants NSF DMS-1723085 and NSF CCF-1814888. \section{Making sense of different basins on flat landscapes} \label{sec:width_flat} In this section we attempt to connect the basin-hopping perspective to flat landscapes through: \begin{itemize} \item a toy experiment exhibiting an overparametrized, flat landscape; \item a decomposition of the weight space into two parts; \item connections to related work on redundant overparametrization (citations to be added). \end{itemize} Take $f(w) = w^2$ and its overparametrization $\hat f(w_1, w_2)=(w_1w_2)^2$; plot the graph of $f$ and the level curves of $\hat f$. Take random initial points and run GD, then run GD plus noise whose $\alpha$ ranges from $2$ down to $0.5$; repeat this experiment for many different initial points and collect statistics. Here, the width can be measured by the distance of the final iterate from the origin.
At a solution where, without loss of generality, $w_1=0$, the gradient is zero, one eigenvalue of the Hessian is zero, and the second eigenvalue is $2w_2^2$, which can serve as a proxy for the `width' of the minimum. We then test whether the one-dimensional theory fits into this framework where, in some sense, there is a continuum of minima, each with a different \textit{width}. The problem is convex, but various line segments through the landscape are non-convex. This structure is similar to what happens in deep learning: we have a convex loss function (MSE, NLL, hinge, etc.) composed with a non-linear output function. The landscape does not have to be severely non-convex in the strict sense; it is skewed and ill-conditioned in many ways, but it can be essentially convex once one looks at it from the global weight-space point of view. However, navigating this ill-conditioned landscape is similar to navigating different basins on a landscape formed of many isolated minima.
\section{Motivations} \label{sec:motivations} The top-quark mass is now measured with a remarkable precision of around 0.5~\% both at the Tevatron and at the LHC, using well-developed ``standard'' methods based on templates, matrix elements or ideograms~\cite{velev,brock}. Despite this precision, some questions remain. Indeed, since the top quark is a colored object, it is non-trivial to know which mass is really measured using these standard methods. In all standard methods, Monte Carlo (MC) simulation is used to calibrate the measurements, and the mass implemented in MC generators differs from any well-defined mass scheme in theory. A way to get some hints about these points experimentally is to determine the top-quark mass using alternative methods. Such methods can use fewer inputs from MC or can have different sensitivities to systematic uncertainties than the standard analyses. In this article I will concentrate on the extraction of the top-quark mass from the top-antitop (\ttbar) cross section, on the mass measurement using the so-called endpoint method, and on the top-quark mass determination from the $b$-lifetime. Other alternative methods are described in another article~\cite{fuster}. Before studying methods which rely differently on MC, it is interesting to look at the dependence of the top-quark mass measured using standard methods on the event kinematics, and to compare the data measurements with the predictions from MC. This allows one to test the MC description of the top-quark mass in various phase-space regions and to detect potential large deviations due to the pole-mass definition problem described above. This has been studied by the CMS Collaboration~\cite{cms} in the \ljets\ final state, requiring two $b$-tagged jets, using 5~\ifb\ of LHC data at 7~TeV~\cite{cms:massdep}. For these comparisons, the top-antitop final state is fully reconstructed and the top-quark mass is measured using the ideogram technique, either alone or together with the jet energy scale. The measurements are compared to Madgraph~\cite{madgraph} with different Pythia~\cite{pythia} tunes and to MC@NLO~\cite{mcatnlo}. Differential measurements as a function of several variables have been performed~\cite{cms:massdep} that are sensitive to different physics effects. For instance, the top-quark mass distribution as a function of the opening angle between the two light jets (see Figure~\ref{fig:massdep}) or as a function of the pseudo-rapidity ($\eta$) of the hadronically decaying top quark is sensitive to color reconnection. The influence of initial- and final-state radiation can be investigated by looking at the top-quark mass as a function of the invariant mass of the \ttbar\ pair or as a function of the transverse momentum (\pt) of the \ttbar\ pair. To test the sensitivity to the $b$-quark kinematics, the top-quark mass is measured as a function of the transverse momentum or the pseudo-rapidity of the $b$-jet assigned to the hadronically decaying top quark. The mass distribution as a function of the distance between the $b$- and $\bar{b}$-jets ($\Delta R_{b\bar{b}} = \sqrt{\Delta \eta^2 + \Delta \phi^2}$, computable as sketched below) is also scrutinized (see Figure~\ref{fig:massdep}). Even though the statistical errors on these differential measurements are still large, there is currently no indication of specific biases due to the choice of generators.
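For completeness, the angular separation used above can be computed as follows (a trivial sketch; the inputs are illustrative, and $\Delta\phi$ must be wrapped into $[-\pi,\pi]$):
\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Delta R = sqrt(Delta eta^2 + Delta phi^2), with Delta phi
    # wrapped into [-pi, pi]
    deta = eta1 - eta2
    dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)
    return math.hypot(deta, dphi)

print(delta_r(0.5, 3.0, -0.3, -3.0))   # wraps across +/- pi
\end{verbatim}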
\begin{figure}[hb] \centerline{ \includegraphics[width=0.5\textwidth,height=5.5cm]{deliot_frederic.fig1a.eps} \includegraphics[width=0.5\textwidth,height=5.5cm]{deliot_frederic.fig1b.eps} } \caption{Differential top-quark mass measurements as a function of the separation of the light-quark jets (left) and of the $b$-quark jets (right) performed by CMS~\cite{cms:massdep}, compared to several MC predictions.} \label{fig:massdep} \end{figure} \section{Mass extraction from the \ttbar\ cross section} \label{sec:fromxs} The principle of the mass extraction from the \ttbar\ cross section is to compare the experimentally measured \ttbar\ cross section with the one computed theoretically. Both the experimental and theoretical cross sections depend on the top-quark mass, but the dependence is different in the two cases: in the experimental case it comes from the acceptance cuts, while in the theoretical case it originates from the matrix element. The advantage of this alternative method lies in the fact that it allows one to extract a top-quark mass in a well-defined renormalization scheme (the one used in the theory computation), in contrast to the mass implemented in the MC generators. This method however has the drawback of being less precise than the direct measurements. This determination of the top-quark mass has been performed by the D0 Collaboration using the \ttbar\ cross section measured in the \ljets\ channel with a $b$-tagging requirement and 5.4~\ifb. This measured cross section is the one that exhibits the weakest dependence on the top-quark mass. The variation of the measurement as a function of the MC mass (\mtmc) is parametrized using a third-order polynomial divided by the mass to the fourth power. As theoretical input cross sections, the next-to-leading-order (NLO) computation, the NLO computation including next-to-leading-log (NLL) resummation, and approximations of the next-to-next-to-leading-order (NNLO) calculation are used. The mass is extracted from the maximum of a normalized likelihood distribution defined as: $$L(\mt) = \int f_{\mathrm{exp}} (\sigma | \mt) \, \left[ f_{\rm scale} (\sigma | \mt) \otimes f_{\rm PDF} (\sigma | \mt) \right] \, d\sigma,$$ where $f_{\mathrm{exp}}$ comes from the experimental measurement, whose uncertainties are assumed to be Gaussian distributed, $f_{\rm scale}$ represents the theoretical scale uncertainty, taken to be flat, and $f_{\rm PDF}$ represents the parton density function (PDF) uncertainty, taken to be Gaussian. The mass determination is performed both assuming that \mtmc\ corresponds to the pole mass (\mtpole) and assuming that \mtmc\ corresponds to the \msbar\ mass (\mtmsbar). The experimental and theoretical \ttbar\ cross sections used in the extraction are shown in Figure~\ref{fig:d0fromxs}. With this technique, D0 measures the top-quark pole masses shown in Table~\ref{tab:d0fromxs}~\cite{d0:fromxs}. These values are compatible with, but slightly lower than, the top-quark mass world average~\cite{pdg}. The \msbar\ mass is also extracted~\cite{d0:fromxs}.
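The extraction itself can be sketched numerically as follows (illustrative only: the parametrizations, central values, and uncertainties below are invented stand-ins, not the D0 inputs; only the structure of the likelihood follows the equation above):
\begin{verbatim}
import numpy as np

def sigma_exp(mt):   # measured cross section vs mt (pb), invented
    return 8.0 * (172.5 / mt) ** 4
def sigma_th(mt):    # theory central value vs mt (pb), invented
    return 7.5 * (172.5 / mt) ** 4.5

d_exp, d_pdf, d_scale = 0.7, 0.3, 0.4    # pb, invented

sig = np.linspace(4.0, 12.0, 2001)
dx = sig[1] - sig[0]
def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def likelihood(mt):
    f_exp = gauss(sig, sigma_exp(mt), d_exp)
    # flat scale band convolved with a Gaussian PDF uncertainty
    flat = (np.abs(sig - sigma_th(mt)) <= d_scale) / (2 * d_scale)
    kern = gauss(sig - sig.mean(), 0.0, d_pdf)
    f_th = np.convolve(flat, kern, mode='same') * dx
    return (f_exp * f_th).sum() * dx

mts = np.linspace(160, 185, 251)
L = np.array([likelihood(m) for m in mts])
print('most likely mt:', mts[np.argmax(L)])
\end{verbatim}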
\begin{figure}[!htb] \centerline{ \includegraphics[width=0.45\textwidth,height=5.5cm]{deliot_frederic.fig2.eps} } \caption{Experimental and theoretical \ttbar\ cross sections used by D0 to extract the top-quark mass~\cite{d0:fromxs}.} \label{fig:d0fromxs} \end{figure} \begin{table} \centerline{ \begin{tabular}{c|cc} \hline \hline \\[-9pt] Theoretical prediction & {$m_{t}^{\rm pole}$ (GeV)} & {$\Delta m_{t}^{\rm pole}$ (GeV)} \\[2pt] \hline \\[-8pt] MC mass assumption & $m_{t}^{\rm MC} = m_{t}^{\rm pole}$ & $m_{t}^{\rm MC} = m_{t}^{\overline{\rm MS}}$ \\[2pt] \hline \\[-8pt] NLO & $164.8^{+5.7}_{-5.4}$ & $-3.0$ \\[2pt] NLO+NLL & $166.5^{+5.5}_{-4.8}$ & $-2.7$ \\[2pt] NLO+NNLL & $163.0^{+5.1}_{-4.6}$ & $-3.3$ \\ [2pt] Approximate NNLO & $167.5^{+5.2}_{-4.7}$ & $-2.7$ \\ [2pt] \hline \hline \end{tabular} } \caption{Values of the top-quark pole mass \mtpole, with their 68\% C.L. uncertainties, extracted for different theoretical predictions by D0~\cite{d0:fromxs}.} \label{tab:d0fromxs} \end{table} A similar method has been developed by CMS. In that analysis, CMS uses as experimental input the \ttbar\ cross section measured in the dilepton channel using 2.3~\ifb\ at 7~TeV. This cross section is the most precise one measured by CMS, with a total uncertainty of 4.1~\%. As for D0, it is parametrized using a third-order polynomial divided by the mass to the fourth power. The full NNLO prediction including next-to-next-to-leading-log (NNLL) resummation is employed as theoretical input. The mass is extracted using a probability function similar to the one in the D0 analysis. An additional 1~GeV uncertainty is added to the experimental result to cover the possible difference between \mtmc\ and \mtpole. CMS also studies the interplay of the mass extraction with the value of the strong coupling constant $\alpha_S$ (see~\cite{naumann} for more details). \begin{figure}[!htb] \centerline{ \includegraphics[width=0.5\textwidth]{deliot_frederic.fig3.eps} } \caption{Experimental and theoretical \ttbar\ cross sections used by CMS to extract the top-quark mass~\cite{cms:fromxs}.} \label{fig:cmsfromxs} \end{figure} The experimental and theoretical \ttbar\ cross sections used in the extraction are shown in Figure~\ref{fig:cmsfromxs}. The top-quark pole masses extracted by CMS are shown in Table~\ref{tab:cmsfromxs} for different PDFs~\cite{cms:fromxs}. These values are compatible with, but slightly higher than, the top-quark mass world average~\cite{pdg}. The same kind of extraction has also been performed by the ATLAS Collaboration~\cite{atlas} using the first 35~pb$^{-1}$ of LHC data, leading to a top-quark mass of $\mtpole = 166^{+7.8}_{-7.3}$~GeV~\cite{atlas:fromxs}. \begin{table} \centerline{ \begin{tabular}{l|c|ccc} \hline & Most likely $\mtpole$ & \multicolumn{3}{c}{Uncertainty (GeV)} \\ \cline{3-5} & value (GeV) & Total & From $\delta \alpha_{S}$ & From $\delta E_{\text{LHC}}$ \\ \hline ABM11 & 172.7 & ${}^{+3.8}_{-3.5}$ & ${}^{+1.0}_{-1.0}$ & ${}^{+0.8}_{-0.8}$\\ CT10 & 177.0 & ${}^{+4.3}_{-3.8}$ & ${}^{+0.8}_{-0.8}$ & ${}^{+0.9}_{-0.9}$\\ HERAPDF1.5 & 179.5 & ${}^{+4.3}_{-3.8}$ & ${}^{+1.2}_{-1.1}$ & ${}^{+1.0}_{-1.0}$\\ MSTW2008 & 177.9 & ${}^{+4.0}_{-3.6}$ & ${}^{+0.9}_{-0.9}$ & ${}^{+0.9}_{-0.9}$\\ NNPDF2.3 & 176.7 & ${}^{+3.8}_{-3.4}$ & ${}^{+0.7}_{-0.7}$ & ${}^{+0.9}_{-0.9}$\\ \hline \end{tabular} } \caption{Results obtained by CMS for $\mtpole$ by comparing the measured \ttbar\ cross section to the NNLO+NNLL prediction with different NNLO PDF sets~\cite{cms:fromxs}.
} \label{tab:cmsfromxs} \end{table} \quad \\ To summarize, the top-quark pole mass has been extracted from the \ttbar\ cross section by D0, leading to a precision of 3~\% (where the input experimental cross section has a precision of 12~\% and the input theoretical cross section of 3~\%), by ATLAS with a precision of 4.5~\% (where the experimental input has an uncertainty of 13~\% and the theory input of 5~\%), and by CMS with a precision of 2~\% (where the experimental input has an uncertainty of 4~\% and the theory input of 4~\%). Looking at the current theoretical uncertainty on the \ttbar\ cross section and assuming no experimental errors, one can estimate the ultimate uncertainty on the top-quark mass achievable with this method to be around 3~GeV (1.7~\%). \section{Mass measurement using the endpoint method} \label{sec:endpoint} The endpoint method, employed for the first time by CMS to measure the top-quark mass~\cite{cms:endpoint}, was originally developed to measure the masses of potentially pair-produced new particles with two cascade decays each ending in an invisible particle, like a neutralino. It is thus also applicable to the \ttbar\ dilepton final state, which contains two escaping neutrinos. This method relies on the endpoint of the distribution of the variable \mtt, used as a mass estimator. This $M_{T2}$ variable is a generalization of the usual transverse mass and is defined as: $$\mtt \equiv \min_{\vec{p}^{\text{a}}_{\text{T}}+\vec{p}^{\text{b}}_{\text{T}}=\vPTm}\; \left\{\max(M^{\text{a}}_{\text{T}},M^{\text{b}}_{\text{T}})\right\}.$$ This variable corresponds to the minimum parent mass consistent with the observed kinematics, minimized over the hypothetical invisible momenta $\vec{p}^{\text{a}}_{\text{T}}$ and $\vec{p}^{\text{b}}_{\text{T}}$ (a numerical sketch is given below). To limit the sensitivity to the transverse momentum of the \ttbar\ system ($\pt(\ttbar)$), the variable \mttp\ is used instead; it is computed with the \pt\ components perpendicular to $\pt(\ttbar)$. Three variables are needed to solve the dilepton event kinematics. The chosen variables are: \mttp\ computed at the lepton level ($\mu_{\ell\ell}$) after the $W$-boson decays; \mttp\ computed at the $b$-jet level ($\mu_{bb}$), ignoring the fact that the leptons are actually observed; and the invariant mass of the $b$-jet and the lepton ($M_{lb}$), which is strongly correlated with \mttp\ constructed from the combined $b$-jet+lepton system. In the analysis, the physics background is estimated using MC, while the background with mistagged $b$-jets is evaluated using antitag events. The combinatoric background is suppressed using a dedicated selection algorithm~\cite{cms:endpoint}. The top-quark mass is extracted using a maximum-likelihood fit of the endpoints of the three chosen variables, taking the object resolutions into account. Indeed, in the limit of perfect object measurements, the maximum of the $\mu_{\ell\ell}$ distribution is equal to the $W$-boson mass (assuming zero neutrino mass), the maximum of the $\mu_{bb}$ distribution is equal to the top-quark mass, while the maximum of $M_{lb}$ can be expressed analytically using the energies and momenta of the daughters of $t \to Wb$ in the top-quark rest frame. The fitted distributions are shown in Figure~\ref{fig:cmsendpoint}. Using this technique, CMS measures~\cite{cms:endpoint}: $\mt = 173.9 \pm 0.9 {\rm (stat)} ^{+1.7}_{-2.1} {\rm (syst)}$~GeV. The precision of this result is comparable to that of the standard measurement in the same channel.
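Since the CMS implementation is not reproduced here, the following is only a generic numerical evaluation of the $M_{T2}$ definition above (massless invisible particles assumed; the momenta are invented, and several starting points guard against local minima of the non-smooth objective):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def mt2(pa, pb, met, ma=0.0, mb=0.0):
    # minimise max(MT_a, MT_b) over splittings qa + qb = met
    pa, pb, met = map(np.asarray, (pa, pb, met))
    def mt(p, m, q):
        et_vis = np.hypot(m, np.linalg.norm(p))
        val = m ** 2 + 2.0 * (et_vis * np.linalg.norm(q) - p.dot(q))
        return np.sqrt(max(val, 0.0))
    def cost(qa):
        qa = np.asarray(qa)
        return max(mt(pa, ma, qa), mt(pb, mb, met - qa))
    starts = [met / 2.0, met / 2.0 + [5.0, 0.0],
              met / 2.0 + [0.0, 5.0]]
    best = min((minimize(cost, s, method='Nelder-Mead')
                for s in starts), key=lambda r: r.fun)
    return best.fun

# two visible transverse momenta and the missing pT (GeV), invented
print(mt2([30.0, 10.0], [-25.0, 5.0], [-5.0, -15.0]))
\end{verbatim}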
As can be seen in Table~\ref{tab:cmsendpoint}, the largest systematic uncertainty comes from the uncertainty on the jet energy scale. \begin{figure}[!htb] \centerline{ \includegraphics[width=0.7\textwidth]{deliot_frederic.fig4.eps} } \caption{Results for the endpoint fit in CMS, where the red line represents the full fit while the green and blue curves are for the signal and background shapes, respectively~\cite{cms:endpoint}.} \label{fig:cmsendpoint} \end{figure} \begin{table} \centerline{ \begin{tabular}{lc} \hline ~~~Source~~~ & ~~~ $\delta\mt$ (GeV)~~~ \cr\hline Jet Energy Scale & ${}^{+1.3}_{-1.8}$\cr Jet Energy Resolution & ${\pm}0.5$\cr Lepton Energy Scale & ${}^{+0.3}_{-0.4}$\cr Fit Range & ${\pm}0.6$\cr Background Shape & ${\pm}0.5$\cr Jet and Lepton Efficiencies & ${}^{+0.1}_{-0.2}$\cr Pileup & $<$0.1 \cr QCD effects & $\pm$0.6\cr\hline Total & ${}^{+1.7}_{-2.1}$\cr\hline \end{tabular} } \caption{Summary of the systematic uncertainties affecting the CMS measurement of the top-quark mass using the endpoint method~\cite{cms:endpoint}. } \label{tab:cmsendpoint} \end{table} \section{Mass measurement using the B-hadron lifetime} \label{sec:bhadron} The top-quark mass can also be measured using different observables. For instance, the lifetime and decay length of the B-hadrons from the top-quark decay depend almost linearly on the top-quark mass, as can be seen in Figure~\ref{fig:lxymtop}. Alternatively, the \pt\ of the lepton from the decay of the $W$-boson from the top quark can also be used as a mass estimator. The advantage of such estimators is that they rely only minimally on calorimeter-based uncertainties such as the jet energy scale. However, these methods can potentially be rather sensitive to the modeling of the top-production kinematics, to the calibration of the $b$ decay length, or to the $b$-fragmentation model. \begin{figure}[!htb] \centerline{ \includegraphics[width=0.4\textwidth,height=5cm]{deliot_frederic.fig5.eps} } \caption{Median of the transverse $b$ decay length distribution between the primary and the secondary vertex as a function of the simulated top-quark mass for the three final states studied by CMS~\cite{cms:lxy}.} \label{fig:lxymtop} \end{figure} \subsection{Measurement using the B-hadron lifetime at CDF} These alternative methods were first developed at CDF in the \ljets\ channel with at least one $b$-tagged jet using 1.9~\ifb~\cite{cdf:lxy}. The top-quark mass was simultaneously extracted from the B-hadron lifetime and from the lepton \pt. The main difficulty of this analysis appears to be the calibration of the transverse decay length: corrections for the inaccuracy of the fragmentation simulation in EVTGEN have been necessary, as well as corrections for the tracker modeling in the simulation. These corrections are determined using a sample of $b\bar{b}$ events (with 95\% purity) as a function of the \pt\ of jets reconstructed only in the tracker. These track-based jets are first calibrated using $\gamma$+jets events. The uncertainty on the calibration of the transverse decay length is the dominant systematic uncertainty on the final result. In the case of the measurement using the lepton \pt, the understanding of the lepton \pt\ scale is the largest systematic uncertainty. Constructing a combined likelihood, shown in Figure~\ref{fig:cdflxy}, with the two observables, CDF measures~\cite{cdf:lxy}: $\mt = 170.7 \pm 6.3 {\rm (stat)} \pm 2.6 {\rm (syst)}$~GeV. Details on the systematic uncertainties limiting the measurements are presented in Table~\ref{tab:cdflxy}.
\begin{figure}[!hbt] \centerline{ \includegraphics[width=0.5\textwidth]{deliot_frederic.fig6.eps} } \caption{Likelihood constructed from 23 \mt\ test points using the transverse B-hadron decay length and the lepton \pt\ in the top-quark decay by CDF~\cite{cdf:lxy}.} \label{fig:cdflxy} \end{figure} \begin{table} \centerline{ \begin{tabular}{lccc} Systematic [$\textnormal{GeV}/c^2$]& Lxy & Lepton $p_T$ & Simultaneous\\ \hline Background Shape & 1.0 & 2.3 & 1.7 \\ QCD Radiation & 0.5 & 1.2 & 0.7 \\ PDF & 0.3 & 0.6 & 0.5\\ Generator & 0.7 & 0.9 & 0.3 \\ Lepton $p_T$ Scale & 0 & 2.3 & 1.2 \\ Lxy Calibration & 2.5 & 0 & 1.1 \\ Multiple Interactions & 0.2 & 1.2 & 0.7 \\ Calorimeter JES & 0.4 & 0.4 & 0.3\\ \hline Systematics Total & 2.9 & 3.8 & 2.6 \\ \end{tabular} } \caption{Final systematic uncertainties for the CDF measurement using the transverse B-hadron decay length and the lepton \pt~\cite{cdf:lxy}. } \label{tab:cdflxy} \end{table} \subsection{Measurement using the B-hadron lifetime at CMS} CMS has adapted the CDF method to both the \ljets\ and dilepton final states using 19~\ifb\ of LHC data at 8~TeV. In this analysis, the chosen observable is the median of the distribution of the transverse decay length ($L_{xy}$) of the secondary vertex with the largest $L_{xy}$ in each event. The calibration of $L_{xy}$ is cross-checked using dijet events with one muon-tagged jet, taken to be the tag jet, while the second jet is taken to be the probe. The distribution of the secondary-vertex mass of this probe jet is then compared with the prediction after fitting the light-, $c$- and $b$-jet fractions. The agreement appears to be good, as shown in Figure~\ref{fig:cmslxy}. \begin{figure}[!hbt] \centerline{ \includegraphics[width=0.4\textwidth]{deliot_frederic.fig7.eps} } \caption{Inclusive fit to the flavor content of a dijet sample based on the secondary-vertex mass distribution, used to check the calibration of $L_{xy}$ in the CMS B-hadron lifetime measurement~\cite{cms:lxy}.} \label{fig:cmslxy} \end{figure} The top-quark mass extraction using the median values of $L_{xy}$ after calibration leads to~\cite{cms:lxy}: $\mt = 173.5 \pm 1.5 {\rm (stat)} \pm 1.3 {\rm (syst)} \pm 2.6 {\rm (\pt(t))}$~GeV. As can be seen in Table~\ref{tab:cmslxy}, the modeling of the top-quark \pt, which is mass dependent, has a large influence on the result. A systematic uncertainty based on reweighting the simulation to the unfolded top-quark \pt\ spectrum from data is assigned; this is currently the limiting uncertainty. In the future, the possibility of using an invariant quantity like the lepton-vertex invariant mass could be studied, since it would keep the information on the top-quark mass while being less dependent on the top-quark kinematics. A sketch of the calibration-and-inversion step underlying such measurements is given below.
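A minimal sketch of such a calibration-and-inversion step (all numbers below are invented for illustration and are not the CMS calibration):
\begin{verbatim}
import numpy as np

# invented calibration points: median Lxy (cm) from simulation
# at several MC top-quark masses (GeV)
mt_points  = np.array([166.5, 169.5, 172.5, 175.5, 178.5])
lxy_points = np.array([0.545, 0.553, 0.561, 0.569, 0.577])

# nearly linear dependence: median Lxy = a * mt + b
a, b = np.polyfit(mt_points, lxy_points, 1)

# invert the calibration for a hypothetical measured median
lxy_meas, lxy_err = 0.563, 0.003
mt_hat  = (lxy_meas - b) / a
mt_stat = lxy_err / abs(a)      # propagate through the line
print(f"mt = {mt_hat:.1f} +/- {mt_stat:.1f} GeV (illustrative)")
\end{verbatim}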
\begin{table} \centerline{ \begin{tabular}{llccc} \hline \hline \multicolumn{2}{c}{Source} & \multicolumn{3}{c}{$\Delta$\mt [GeV]} \\ & & $\mu$+jets& $e$+jets & $e\mu$\\\hline \multicolumn{2}{l}{Statistical} & 1.0 & 1.0 & 2.0\\ \hline \multirow{9}{*}{Experimental} & Jet energy scale & $0.30\pm0.01$ & $0.30\pm0.01$ & $0.30\pm0.01$ \\ &Multijet normalization ($\ell$+jets) & $0.50\pm0.01$& $0.67\pm0.01$ & - \\ &W+jets normalization ($\ell$+jets) & $1.42\pm0.01$ & $1.33\pm0.01$ & - \\ &DY normalization ($\ell\ell$) &- & - & $0.38\pm0.06$\\ &Other backgrounds normalization & $0.05\pm0.01$ & $0.05\pm0.01$ & $0.15\pm0.07$ \\ &W+jets background shapes ($\ell$+jets)& $0.40\pm 0.01$ & $0.20\pm 0.01 $ &- \\ &Single top background shapes & $0.20\pm 0.01$ & $0.20\pm 0.01$ & $0.30\pm0.06$\\ &DY background shapes ($\ell\ell$) & - & - & $0.04\pm0.06$\\ & Calibration & $0.42\pm0.01$ & $0.50\pm0.01$ & $0.21\pm0.01$\\ \hline \multirow{9}{*}{Theory} & $Q^{2}$-scale & $0.47 \pm 0.13$ & $0.20 \pm 0.03$ & $0.11\pm0.08$ \\ & ME-PS matching scale & $0.73 \pm 0.01$ & $0.87 \pm 0.03$ & $0.44\pm0.08$ \\ & PDF & $0.26\pm0.15$ & $0.26\pm0.15$ & $0.26\pm0.15$\\ & Hadronization model & $0.95\pm 0.13$ & $0.95 \pm 0.13$ & $0.67 \pm 0.10$\\ & B hadron composition & $0.39\pm 0.01$ & $0.39\pm 0.01$ & $0.39\pm 0.01$\\ & B hadron lifetime & $0.29 \pm 0.18$ & $0.29 \pm 0.18$ & $0.29 \pm 0.18$ \\ & Top quark \pt modeling & $3.27\pm0.48$ & $3.07\pm0.45$ & $2.36 \pm0.35$ \\ & Underlying event & $0.27\pm0.51$ & $0.25\pm0.48$ & $0.19\pm0.37$ \\ & Colour reconnection & $0.36\pm0.51$ & $0.34\pm0.48$ & $0.26\pm0.37$ \\ \hline \hline \end{tabular} } \caption{Statistical, experimental and theoretical systematic uncertainties on the top-quark mass measured by CMS from the median of the transverse B-hadron decay-length distribution between the primary and the secondary vertex~\cite{cms:lxy}. } \label{tab:cmslxy} \end{table} \section{Conclusion} \label{sec:conclusion} Now that the precision of the direct top-quark mass measurements reaches 1~GeV, alternative methods that are less sensitive to MC (and hence less sensitive to the top-quark mass scheme implemented in MC), or that have different sensitivities to systematic uncertainties, need to be developed. Some of these alternative approaches have been described here. For some of them the achieved precision is still modest. However, with the large statistics foreseen, LHC Run 2 will make it possible to improve them, in particular by allowing the systematic limitations to be studied using data.
\section{Introduction} Relative entropy is a powerful tool in quantum information theory \cite{Ohya}. It enjoys a monotonicity property under a certain class of quantum channels, and the condition for equality is an interesting and important subject. It was Petz who first studied the equality condition for the monotonicity of relative entropy \cite{Petz1,Petz2}. Later, Ruskai obtained a similar result via another elegant approach \cite{Ruskai}. The most general equality conditions along this line were recently reviewed in \cite{Hiai}. In this note we make use of the most general equality condition for relative entropy to find the specific form of states which satisfy the zero-discord condition (see details below). Let $\cH$ denote an $N$-dimensional complex Hilbert space. A \emph{state} $\rho$ on $\cH$ is a positive semi-definite operator of trace one. We denote by $\density{\cH}$ the set of all density matrices acting on $\cH$. If $\rho = \sum_k\lambda_k\out{u_k}{u_k}$ is the spectral decomposition of $\rho$, with $\lambda_k$ and $|u_k\rangle$ the eigenvalues and eigenvectors respectively, then the \emph{support} of $\rho$ is defined by $$ \mathrm{supp}(\rho) \stackrel{\smash{\textnormal{\tiny def}}}{=} \mathrm{span}\set{\ket{u_k} : \lambda_k>0}, $$ and the \emph{generalized inverse} $\rho^{-1}$ of $\rho$ is defined by $$ \rho^{-1} = \sum_{k:\lambda_k>0}\lambda^{-1}_k\out{u_k}{u_k}. $$ The \emph{von Neumann entropy} $\rS(\rho)$ of $\rho$ is defined by $$ \rS(\rho) \stackrel{\smash{\textnormal{\tiny def}}}{=} - \Tr{\rho\log\rho}, $$ which quantifies the information encoded in the quantum state $\rho$. If $\sigma$ is also a quantum state on $\cH$, then the \emph{relative entropy} \cite{Ohya} between $\rho$ and $\sigma$ is defined by $$ \rS(\rho||\sigma) \stackrel{\smash{\textnormal{\tiny def}}}{=} \left\{\begin{array}{cl} \Tr{\rho(\log\rho - \log\sigma)}, & \text{if}\ \mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma), \\ +\infty, & \text{otherwise}. \end{array} \right. $$ Let $\lin{\cH}$ be the set of all linear operators on $\cH$. If $X, Y \in \lin{\cH}$, then $\inner{X}{Y} = \Tr{X^{\dagger}Y}$ defines the \emph{Hilbert-Schmidt inner product} on $\lin{\cH}$. Let $\trans{\cH}$ denote the set of all linear super-operators from $\lin{\cH}$ to itself. $\Lambda\in \trans{\cH}$ is said to be a \emph{completely positive super-operator} if, for each $k \in \mathbb{N}$, $$ \Lambda\otimes \mathbb{1}_{\rM_{k}(\mathbb{C})}: \lin{\cH} \otimes \rM_{k}(\mathbb{C}) \to \lin{\cH}\otimes \rM_{k}(\mathbb{C}) $$ is positive, where $\rM_{k}(\mathbb{C})$ is the set of all $k\times k$ complex matrices. It follows from Choi's theorem \cite{Choi} that every completely positive super-operator $\Lambda$ has a Kraus representation $$ \Lambda = \sum_{\mu}\mathrm{Ad}_{M_{\mu}}, $$ that is, for every $X\in \lin{\cH}$, $\Lambda (X) = \sum_{\mu}M_\mu XM_\mu^\dagger$, where $\set{M_\mu}\subseteq \lin{\cH}$ and $M_\mu^\dagger$ is the adjoint operator of $M_\mu$. For every super-operator $\Lambda$ there is an \emph{adjoint super-operator} $\Lambda^{\dagger}\in\trans{\cH}$ such that, for all $A, B\in\lin{\cH}$, $\inner{\Lambda(A)}{B} = \inner{A}{\Lambda^{\dagger}(B)}$. Moreover, $\Lambda$ is a completely positive super-operator if and only if $\Lambda^\dagger$ is. A \emph{quantum channel} is a trace-preserving completely positive super-operator $\Phi$; if $\Phi$ is also unital (identity-preserving), then it is called a \emph{unital quantum channel}.
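As a concrete numerical illustration of these notions (a minimal sketch; the states and the channel are arbitrary choices, and natural logarithms are used), one can evaluate $\rS(\rho||\sigma)$ and check monotonicity under a simple pinching channel:
\begin{verbatim}
import numpy as np
from scipy.linalg import logm

def rel_ent(rho, sigma):
    # S(rho||sigma) = Tr[rho(log rho - log sigma)], base e;
    # full-rank inputs assumed here, so supports are compatible
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

def rand_state(d, rng):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

rng = np.random.default_rng(1)
rho, sigma = rand_state(3, rng), rand_state(3, rng)

def dephase(x):
    # a unital quantum channel: projective measurement in the
    # computational basis, Phi(X) = sum_k P_k X P_k
    return np.diag(np.diag(x))

print(rel_ent(rho, sigma))                    # larger
print(rel_ent(dephase(rho), dephase(sigma)))  # monotonicity: smaller
\end{verbatim}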
The following result has been reviewed in \cite{Hiai}. \begin{lem}\label{lem:Hiai} Let $\rho,\sigma\in\density{\cH}$ and let $\Phi\in\trans{\cH}$ be a quantum channel. If $\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$, then $\rS(\Phi(\rho)||\Phi(\sigma)) \leqslant \rS(\rho||\sigma)$; moreover, \begin{eqnarray*} \rS(\Phi(\rho)||\Phi(\sigma)) = \rS(\rho||\sigma) \quad \mbox{if and only if}\quad \Phi^\dagger_\sigma \circ \Phi(\rho) = \rho, \end{eqnarray*} where $\Phi^\dagger_\sigma = \mathrm{Ad}_{\sigma^{1/2}} \circ \Phi^\dagger \circ \mathrm{Ad}_{\Phi(\sigma)^{-1/2}}$. \end{lem} Moreover, for a tripartite state one has \cite{Linden,Cadney}: \begin{lem}\label{bi-SSA} Let $\rho_{ABC}\in\density{\cH_A\otimes\cH_B\otimes\cH_C}$ be a state for which strong subadditivity is saturated for both triples $ABC$ and $BAC$. Then $\rho_{ABC}$ must have the following form: $$ \rho_{ABC} = \bigoplus_{i,j}p_{ij} \rho_{a_i^L}^{(i)} \otimes \rho_{a_i^Rb_j^L}^{(ij)} \otimes \rho_{b_j^R}^{(j)} \otimes \rho_C^{(k)}, $$ where $k$ is a function only of $i,j$ in the sense that $$ k = k(i,j) = k_1(i) = k_2(j) \quad \mbox{whenever} \quad p_{ij}>0. $$ In particular, $k$ need only be defined where $p_{ij}>0$, so it is not necessarily constant. By collecting the terms with equal $k$ we can write $$ \rho_{ABC} = \bigoplus_k p_k \rho_{AB}^{(k)} \otimes \rho_C^{(k)}, $$ where $$ p_k\rho_{AB}^{(k)} = \sum_{i,j;k(i,j) = k} p_{ij} \rho_{a_i^L}^{(i)} \otimes \rho_{a_i^Rb_j^L}^{(ij)} \otimes \rho_{b_j^R}^{(j)}. $$ \end{lem} \section{Quantum discord} Consider a bipartite system $AB$ composed of subsystems $A$ and $B$. Let $\rho_{AB}$ be the density operator of $AB$, and $\rho_A$ and $\rho_B$ the reduced density operators. The total correlation between the systems $A$ and $B$ is measured by the \emph{quantum mutual information} $$ I\Pa{\rho_{AB}} = \rS\Pa{\rho_A} - \rS\Pa{\rho_A\big|\rho_B}, $$ where $$ \rS\Pa{\rho_A\big|\rho_B} = \rS\Pa{\rho_{AB}} - \rS\Pa{\rho_B} $$ is the entropy of $A$ conditional on $B$. The conditional entropy can also be introduced via a measurement-based approach. Consider a measurement locally performed on $B$, described by a set of projectors $\Pi_B = \Set{\Pi_{B,\mu}}=\Set{\out{b_\mu}{b_\mu}}$. The state of the quantum system, conditioned on the measurement outcome labeled by $\mu$, is $$ \rho_{AB,\mu} = \frac{1}{p_{B,\mu}}\Pa{\mathbb{1}_A \otimes \Pi_{B,\mu}} \rho_{AB} \Pa{\mathbb{1}_A \otimes \Pi_{B,\mu}}, $$ where $$ p_{B,\mu} = \Tr{(\mathbb{1}_A \otimes \Pi_{B,\mu}) \rho_{AB}} = \Tr{\Innerm{b_\mu}{\rho_{AB}}{b_\mu}}>0 $$ denotes the probability of obtaining the outcome $\mu$, and $\mathbb{1}_A$ denotes the identity operator on $A$. The conditional density operator $\rho_{AB,\mu}$ allows for the following alternative definition of the conditional entropy: $$ \rS(\rho_{AB}|\Set{\Pi_{B,\mu}}) = \sum_\mu p_{B,\mu} \rS(\rho_{AB,\mu}) = \sum_\mu p_{B,\mu} \rS(\rho_{A,\mu}), $$ where $\rho_{A,\mu} = \Ptr{B}{\rho_{AB,\mu}} = (1/p_{B,\mu}) \Innerm{b_\mu}{\rho_{AB}}{b_\mu}$. Therefore, the quantum mutual information can also be defined by $$ I(\rho_{AB}|\Set{\Pi_{B,\mu}}) = \rS(\rho_A) - \rS(\rho_{AB}|\Set{\Pi_{B,\mu}}). $$ The quantities $I(\rho_{AB})$ and $I(\rho_{AB}|\Set{\Pi_{B,\mu}})$ are classically equivalent but distinct in the quantum case. The one-sided \emph{quantum discord} is defined by: $$ D_B(\rho_{AB}) = \inf_{\Pi_B} \Set{I(\rho_{AB}) - I(\rho_{AB}|\Pi_B)}.
$$ If we denote the nonselective von Neumann measurement performed on $B$ by $$ \Pi_B(\rho_{AB}) = \sum_\mu (\mathbb{1}_A \otimes \Pi_{B,\mu}) \rho_{AB} (\mathbb{1}_A \otimes \Pi_{B,\mu}) = \sum_\mu p_{B,\mu} \rho_{A,\mu} \otimes \out{b_\mu}{b_\mu}, $$ then the quantum discord can be written alternatively as \begin{eqnarray*} D_B(\rho_{AB}) &=& \inf_{\Pi_B} \Set{\rS(\rho_{AB}||\rho_A \otimes \rho_B) - \rS(\Pi_B(\rho_{AB})||\rho_A \otimes \Pi_B(\rho_B))}\\ &=& \inf_{\Pi_B} \Set{\rS(\rho_{AB}||\Pi_B(\rho_{AB})) - \rS(\rho_B||\Pi_B(\rho_B))}. \end{eqnarray*} By Lemma~\ref{lem:Hiai}, $D_B(\rho_{AB})\geqslant0$. The \emph{symmetric quantum discord} of $\rho_{AB}$ is defined by \cite{Rulli}: \begin{eqnarray*} D(\rho_{AB}) =\inf_{\Pi_A \otimes \Pi_B} \Set{\rS(\rho_{AB}||\Pi_A\otimes\Pi_B(\rho_{AB})) - \rS(\rho_A||\Pi_A(\rho_A)) - \rS(\rho_B||\Pi_B(\rho_B))}. \end{eqnarray*} For the symmetric quantum discord of $\rho_{AB}$, one still has \begin{eqnarray}\label{ast} D(\rho_{AB}) = \inf_{\Pi_A\otimes\Pi_B} \Set{\rS(\rho_{AB}||\rho_A \otimes \rho_B) - \rS(\Pi_A \otimes \Pi_B(\rho_{AB})||\Pi_A\otimes\Pi_B(\rho_A \otimes \rho_B))}. \end{eqnarray} The symmetric quantum discord of an $N$-partite state $\rho_{A_1\ldots A_N}$ is defined by: \begin{eqnarray*} &&D(\rho_{A_1\ldots A_N}) \\ &&= \inf_{\Pi_{A_1} \otimes \cdots \otimes \Pi_{A_N}} \Set{\rS(\rho_{A_1\ldots A_N}||\Pi_{A_1} \otimes \cdots \otimes \Pi_{A_N}(\rho_{A_1\ldots A_N}))-\sum_{i=1}^N\rS(\rho_{A_i}||\Pi_{A_i}(\rho_{A_i}))}\\ &&=\inf_{\Pi_{A_1} \otimes \cdots \otimes \Pi_{A_N}} \{\rS(\rho_{A_1\ldots A_N}||\rho_{A_1}\otimes\cdots\otimes\rho_{A_N})\\ &&- \rS(\Pi_{A_1} \otimes \cdots \otimes \Pi_{A_N}(\rho_{A_1\ldots A_N})||\Pi_{A_1} \otimes \cdots \otimes \Pi_{A_N}(\rho_{A_1}\otimes\cdots\otimes\rho_{A_N}))\}, \end{eqnarray*} which is non-negative: $D(\rho_{A_1\ldots A_N}) \geqslant 0$. The following theorem describes the structure of symmetric zero-discord states: \begin{thrm} $D(\rho_{AB}) = 0$ if and only if $$ \rho_{AB} = \sum_{\mu,\nu} \frac{p_{AB,\mu\nu}}{p_{A,\mu} p_{B,\nu}}\sqrt{\rho_A}\Pi_{A,\mu}\sqrt{\rho_A} \otimes \sqrt{\rho_B}\Pi_{B,\nu}\sqrt{\rho_B} $$ for some von Neumann measurements $\Pi_A = \Set{\Pi_{A,\mu}}$ and $\Pi_B = \Set{\Pi_{B,\nu}}$, where $$ p_{A,\mu} = \Tr{\Pi_{A,\mu}\rho_A},\quad p_{B,\nu} = \Tr{\Pi_{B,\nu}\rho_B},\quad p_{AB,\mu\nu} = \Tr{\Pi_{A,\mu} \otimes \Pi_{B,\nu} \rho_{AB}}. $$ \end{thrm} \begin{proof} Clearly, $\mathrm{supp}\Pa{\rho_{AB}} \subseteq \mathrm{supp}\Pa{\rho_A} \otimes \mathrm{supp}\Pa{\rho_B} = \mathrm{supp}\Pa{\rho_A \otimes \rho_B}$ \cite{Renner}. Take $\sigma = \rho_A \otimes \rho_B$ and $\Phi = \Pi_A \otimes \Pi_B$ in Lemma~\ref{lem:Hiai}. Then, by Eq.~\eqref{ast}, $D(\rho_{AB}) = 0$ if and only if there exist von Neumann measurements $\Pi_A = \Set{\Pi_{A,\mu}}$ and $\Pi_B = \Set{\Pi_{B,\nu}}$ such that $$ \rS(\Pi_A \otimes \Pi_B(\rho_{AB})||\Pi_A\otimes\Pi_B(\rho_A \otimes \rho_B)) = \rS(\rho_{AB}||\rho_A \otimes \rho_B), $$ that is, $$ \rho_{AB} = \Phi^\dagger_\sigma \circ\Phi(\rho_{AB}) = ((\Pi^\dagger_{A,\rho_A}\circ\Pi_A) \otimes (\Pi^\dagger_{B,\rho_B}\circ\Pi_B))(\rho_{AB}). $$ Therefore $$ \rho_{AB} = \sum_{\mu,\nu} \frac{p_{AB,\mu\nu}}{p_{A,\mu} p_{B,\nu}}\sqrt{\rho_A}\Pi_{A,\mu}\sqrt{\rho_A} \otimes \sqrt{\rho_B}\Pi_{B,\nu}\sqrt{\rho_B}.
$$ \end{proof} Accordingly, we have \begin{cor} $D_B(\rho_{AB}) = 0$ if and only if \begin{equation}\label{c1} \rho_{AB} = \sum_\mu \rho_{A,\mu} \otimes \sqrt{\rho_B}\Pi_{B,\mu}\sqrt{\rho_B} \end{equation} for some von Neumann measurement $\Pi_B = \Set{\Pi_{B,\mu}}$, where $$ \rho_{A,\mu} = \frac{1}{p_{B,\mu}}\Ptr{B}{\mathbb{1}_A \otimes \Pi_{B,\mu}\rho_{AB}},\quad p_{B,\mu} = \Tr{\Pi_{B,\mu}\rho_B}. $$ \end{cor} \begin{remark} Suppose that the von Neumann measurement in Eq.~\eqref{c1} is $\Pi_B = \Set{\Pi_{B,\mu}} = \Set{\out{b_\mu}{b_\mu}}$. Then we can assert that the $\Ket{b_\mu}$ are eigenvectors of $\rho_B$. This can be seen as follows. From Eq.~\eqref{c1} it follows that \begin{equation}\label{r1} \Pi_B(\rho_{AB}) = \sum_\mu \rho_{A,\mu} \otimes \Pi_B(\sqrt{\rho_B}\Pi_{B,\mu}\sqrt{\rho_B}). \end{equation} On the other hand, \begin{equation}\label{r2} \Pi_B(\rho_{AB}) = \sum_\mu (\mathbb{1}_A \otimes \Pi_{B,\mu}) \rho_{AB} (\mathbb{1}_A \otimes \Pi_{B,\mu}) = \sum_\mu p_{B,\mu} \rho_{A,\mu} \otimes \Pi_{B,\mu}. \end{equation} From Eq.~\eqref{r1} and Eq.~\eqref{r2}, we have $$ \Pi_B(\sqrt{\rho_B}\Pi_{B,\mu}\sqrt{\rho_B}) = p_{B,\mu}\Pi_{B,\mu}, $$ which implies that \begin{eqnarray} \left\{\begin{array}{cc} \Pi_{B,\mu}\sqrt{\rho_B}\Pi_{B,\nu}\sqrt{\rho_B}\Pi_{B,\mu} = 0, & \mbox{if}\quad \mu\neq \nu, \\[3mm] \Pi_{B,\mu}\sqrt{\rho_B}\Pi_{B,\mu}\sqrt{\rho_B}\Pi_{B,\mu} = p_{B,\mu}\Pi_{B,\mu}, & \mbox{otherwise}. \end{array}\right. \end{eqnarray} That is, \begin{eqnarray*} \left\{\begin{array}{cc} \abs{\Innerm{b_\mu}{\sqrt{\rho_B}}{b_\nu}}^2 = 0, & \mbox{if}\quad \mu\neq \nu, \\ \Innerm{b_\mu}{\sqrt{\rho_B}}{b_\mu} = \sqrt{p_{B,\mu}} = \sqrt{\Innerm{b_\mu}{\rho_B}{b_\mu}}, & \mbox{otherwise}. \end{array}\right. \end{eqnarray*} Thus $\sqrt{\rho_B}$ is diagonal in the basis $\Set{\Ket{b_\mu}}$, and we conclude that the $\Ket{b_\mu}$ are eigenvectors of $\rho_B$. \end{remark} For the general multipartite case we have \begin{cor} $D(\rho_{A_1\ldots A_N}) = 0$ if and only if $$ \rho_{A_1\ldots A_N} = \sum_{\mu_1,\ldots,\mu_N} \frac{p_{A_1\ldots A_N,\mu_1\ldots \mu_N}}{p_{A_1,\mu_1}\cdots p_{A_N,\mu_N}}\sqrt{\rho_{A_1}}\Pi_{A_1,\mu_1}\sqrt{\rho_{A_1}} \otimes \cdots \otimes \sqrt{\rho_{A_N}}\Pi_{A_N,\mu_N}\sqrt{\rho_{A_N}} $$ for $N$ von Neumann measurements $\Pi_{A_i} = \Set{\Pi_{A_i,\mu_i}}$, where $$ p_{A_i,\mu_i} = \Tr{\Pi_{A_i,\mu_i}\rho_{A_i}} \ (i = 1,\ldots,N),\quad p_{A_1\ldots A_N,\mu_1\ldots \mu_N}= \Tr{\Pi_{A_1,\mu_1} \otimes \cdots \otimes \Pi_{A_N,\mu_N} \rho_{A_1\ldots A_N}}. $$ \end{cor} In order to obtain a connection with the strong subadditivity of quantum entropy \cite{Datta}, we associate to each von Neumann measurement $\Pi_B = \Set{\Pi_{B,\mu}}$ a system $C$ as follows: \begin{eqnarray}\label{d1} \sigma_{ABC} = V\rho_{AB}V^\dagger = \sum_{\mu,\nu} (\mathbb{1}_A \otimes \Pi_{B,\mu})\rho_{AB} (\mathbb{1}_A \otimes \Pi_{B,\nu})\otimes \out{\mu}{\nu}_C, \end{eqnarray} where $$ V\ket{\psi_B} \stackrel{\smash{\textnormal{\tiny def}}}{=} \sum_\mu \Pi_{B,\mu}\ket{\psi_B} \otimes \ket{\mu}_C $$ is an isometry from $B$ to $BC$. From Eq.~\eqref{d1} we have \begin{eqnarray*} \sigma_{AB} &=& \Ptr{C}{V\rho_{AB}V^\dagger} = \Pi_B\Pa{\rho_{AB}} = \sum_\mu p_{B,\mu} \rho_{A,\mu} \otimes \Pi_{B,\mu},\\ \sigma_{BC} &=& \Ptr{A}{V\rho_{AB}V^\dagger} = \sum_{\mu,\nu} \Pi_{B,\mu}\rho_B\Pi_{B,\nu}\otimes \out{\mu}{\nu}_C,\\ \sigma_B &=& \sum_\mu p_{B,\mu}\Pi_{B,\mu}, \end{eqnarray*} where $p_{B,\mu} = \Tr{\rho_B\Pi_{B,\mu}}$.
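As a quick numerical sanity check of these expressions (a minimal sketch; the dimensions and the random state are arbitrary choices, and $\Pi_B$ is taken in the computational basis), one can build the isometry $V$ and verify $\sigma_{AB} = \Pi_B(\rho_{AB})$:
\begin{verbatim}
import numpy as np

def ptrace(rho, dims, keep):
    # trace out every subsystem not listed in `keep`
    n = len(dims)
    t = rho.reshape(dims + dims)
    traced = 0
    for ax in range(n):
        if ax in keep:
            continue
        t = np.trace(t, axis1=ax - traced,
                     axis2=ax - traced + n - traced)
        traced += 1
    d = int(np.prod([dims[i] for i in keep]))
    return t.reshape(d, d)

dA, dB = 2, 3
rng = np.random.default_rng(1)
a = rng.normal(size=(dA * dB,) * 2) \
    + 1j * rng.normal(size=(dA * dB,) * 2)
rho_ab = a @ a.conj().T
rho_ab /= np.trace(rho_ab)

# V|mu>_B = |mu>_B (x) |mu>_C for the computational basis
V = np.zeros((dB * dB, dB))
for mu in range(dB):
    V[mu * dB + mu, mu] = 1.0
W = np.kron(np.eye(dA), V)               # acts as 1_A (x) V
sigma = W @ rho_ab @ W.conj().T          # sigma_ABC

# Pi_B(rho_AB): dephasing of B in the measurement basis
rho_meas = np.zeros_like(rho_ab)
for mu in range(dB):
    P = np.zeros((dB, dB)); P[mu, mu] = 1.0
    K = np.kron(np.eye(dA), P)
    rho_meas += K @ rho_ab @ K
print(np.allclose(ptrace(sigma, [dA, dB, dB], [0, 1]), rho_meas))
\end{verbatim}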
From these marginals, the conditional mutual information between $A$ and $C$, conditioned on $B$, is \begin{eqnarray*} I(A;C|B)_\sigma &\stackrel{\smash{\textnormal{\tiny def}}}{=}& \rS(\sigma_{AB}) + \rS(\sigma_{BC}) - \rS(\sigma_{ABC}) - \rS(\sigma_B)\\ &=& \sum_\mu p_{B,\mu}\rS(\rho_{A,\mu}) + \rS(\rho_B) - \rS(\rho_{AB})\\ &=& \rS(\rho_{AB}||\rho_A \otimes \rho_B) - \rS(\Pi_B(\rho_{AB})||\rho_A \otimes \Pi_B(\rho_B)). \end{eqnarray*} Similarly, we have $$ I(A;B|C)_\sigma = \rS\Pa{\rho_{AB}||\rho_A \otimes \rho_B} - \rS(\Pi_B(\rho_{AB})||\rho_A \otimes \Pi_B(\rho_B)). $$ That is, \begin{eqnarray}\label{DoubleSSA} I(A;C|B)_\sigma = I(A;B|C)_\sigma = \rS(\rho_{AB}||\rho_A \otimes \rho_B) - \rS(\Pi_B(\rho_{AB})||\rho_A \otimes \Pi_B(\rho_B)). \end{eqnarray} If Eq.~(\ref{DoubleSSA}) vanishes for some von Neumann measurement $\Pi_B = \Set{\Pi_{B,\mu}}$, i.e. $I(A;C|B)_\sigma = I(A;B|C)_\sigma = 0$, then from Lemma~\ref{bi-SSA}, $$ \sigma_{ABC} = \bigoplus_k p_k \sigma^{(k)}_A \otimes \sigma^{(k)}_{BC}. $$ If the infimum defining the discord is attained, i.e. $D_B(\rho_{AB}) = \rS(\rho_{AB}||\rho_A \otimes \rho_B) - \rS(\Pi_B(\rho_{AB})||\rho_A \otimes \Pi_B(\rho_B))$ for some von Neumann measurement $\Pi_B$, then $$ D_B(\rho_{AB}) = I(A;B|C)_\sigma. $$ There is a famous protocol---state redistribution---which gives an operational interpretation of the conditional mutual information $I(A;B|C)_\sigma$ \cite{Devetak,Yard}; this implicitly provides an operational interpretation of quantum discord \cite{Madhok,Cavalcanti}. \section{A generalization of zero-discord states} Denote \begin{eqnarray*} \Omega^0_A &\stackrel{\smash{\textnormal{\tiny def}}}{=}& \Set{\rho_{AB}\in\density{\cH_A\otimes\cH_B}: D_A(\rho_{AB}) = 0},\\ \Omega^0 &\stackrel{\smash{\textnormal{\tiny def}}}{=}& \Set{\rho_{AB}\in\density{\cH_A\otimes\cH_B}: D(\rho_{AB}) = 0}. \end{eqnarray*} Suppose $\rho_{AB}\in\density{\cH_A\otimes\cH_B}$, with marginal density matrices $\rho_A = \Ptr{B}{\rho_{AB}}$ and $\rho_B = \Ptr{A}{\rho_{AB}}$. A necessary condition for zero-discord states has been derived in \cite{Ferraro}: if $\rho_{AB}\in\Omega^0_A$, then $\Br{\rho_{AB},\rho_A\otimes\mathbb{1}_B}=0$. A characterization of this condition is obtained in \cite{Cesar}: $\Br{\rho_{AB},\rho_A\otimes\mathbb{1}_B}=0$ if and only if $\rho_{AB} = \Pi_A(\rho_{AB})$, where $\Pi_A = \Set{\Pi_{A,\mu}}$ is some projection-valued measurement (PVM) whose projectors $\Pi_{A,\mu}$ may be of any rank. That is, $$ \rho_{AB} = \sum_\mu (\Pi_{A,\mu}\otimes\mathbb{1}_B) \rho_{AB} (\Pi_{A,\mu}\otimes\mathbb{1}_B). $$ States $\rho_{AB}$ such that $\Br{\rho_{AB},\rho_A\otimes\mathbb{1}_B}=0$ are called \emph{lazy states} and admit particular physical interpretations \cite{Cesar}. Consider the general evolution of the state of a finite-dimensional composite system $AB$: $$ \Br{\frac{d}{dt}\rho_{AB,t}}_{t=\tau} = -\mathrm{i}\Br{H_{AB},\rho_{AB,\tau}}, $$ where the total Hamiltonian is $H_{AB}\equiv H_A\otimes\mathbb{1}_B + \mathbb{1}_A\otimes H_B + H_{\mathrm{int}}$, consisting of the system, environment, and interaction Hamiltonians. As usual, it is required that $\Ptr{A}{H_{\mathrm{int}}} = \Ptr{B}{H_{\mathrm{int}}} = 0$. For the system $A$, the rate of change of the subsystem entropy at a time $\tau$ is given by \cite{Ferraro}: \begin{eqnarray} \Br{\frac{d}{dt}\rS(\rho_{A,t})}_{t=\tau} = -\mathrm{i}\Tr{H_{\mathrm{int}}\Br{\rho_{AB,\tau},\log(\rho_{A,\tau})\otimes\mathbb{1}_B}}.
\end{eqnarray} Since the von Neumann entropy $\rS(\rho_X)$ of $\rho_X$ quantifies the degree of decoherence of the system $X(=A,B)$, it follows that the entropy rate of system $A$ is independent of the $AB$ coupling if and only if $$ \Br{\frac{d}{dt}\rS(\rho_{A,t})}_{t=\tau} = 0 $$ for every interaction Hamiltonian, which is equivalent to the following expression: $$ \Br{\rho_{AB,\tau},\log(\rho_{A,\tau})\otimes\mathbb{1}_B} = 0 \Longleftrightarrow \Br{\rho_{AB,\tau},\rho_{A,\tau}\otimes\mathbb{1}_B} = 0. $$ In view of this, the entropy of a quantum system can be protected from decoherence under any coupling between $A$ and $B$ if and only if the composite-system state is a lazy one. By the symmetry with respect to $A$ and $B$, one has \begin{eqnarray} \Br{\frac{d}{dt}\rS(\rho_{B,t})}_{t=\tau} = -\mathrm{i}\Tr{H_{\mathrm{int}}\Br{\rho_{AB,\tau},\mathbb{1}_A\otimes\log(\rho_{B,\tau})}}. \end{eqnarray} Since $$ \Br{\frac{d}{dt}I(\rho_{AB,t})}_{t=\tau} = \Br{\frac{d}{dt}\rS(\rho_{A,t})}_{t=\tau} + \Br{\frac{d}{dt}\rS(\rho_{B,t})}_{t=\tau} - \Br{\frac{d}{dt}\rS(\rho_{AB,t})}_{t=\tau} $$ and $$ \Br{\frac{d}{dt}\rS(\rho_{AB,t})}_{t=\tau} = 0, $$ we further have \begin{eqnarray}\label{eq:mutualentropy} \Br{\frac{d}{dt}I(\rho_{AB,t})}_{t=\tau} = -\mathrm{i}\Tr{H_{\mathrm{int}}\Br{\rho_{AB,\tau},\log(\rho_{A,\tau}\otimes\rho_{B,\tau})}}. \end{eqnarray} We can see from Eq.~(\ref{eq:mutualentropy}) that the total correlation is preserved under any coupling between $A$ and $B$ if and only if the mutual-information rate of the composite system $AB$ is zero: $$ \Br{\frac{d}{dt}I(\rho_{AB,t})}_{t=\tau} = 0, $$ which is equivalent to the following expression: $$ \Br{\rho_{AB,\tau},\log(\rho_{A,\tau}\otimes\rho_{B,\tau})} = 0 \Longleftrightarrow \Br{\rho_{AB,\tau},\rho_{A,\tau}\otimes\rho_{B,\tau}} = 0. $$ Similarly, we have: \begin{prop} If $\rho_{AB}\in\Omega^0$, then $\Br{\rho_{AB},\rho_A\otimes\rho_B}=0$. \end{prop} Moreover, \begin{prop} $\Br{\rho_{AB},\rho_A\otimes\rho_B}=0$ if and only if $\rho_{AB} = \Pi_A\otimes\Pi_B(\rho_{AB})$, where $\Pi_X = \Set{\Pi_{X,\alpha}}$, with $(X,\alpha)= (A,\mu),(B,\nu)$, are PVMs whose projectors $\Pi_{X,\alpha}$ may be of any rank. That is, $$ \rho_{AB} = \sum_{\mu,\nu} (\Pi_{A,\mu}\otimes\Pi_{B,\nu}) \rho_{AB} (\Pi_{A,\mu}\otimes\Pi_{B,\nu}). $$ \end{prop} \begin{proof} Let the spectral decompositions of $\rho_{A}$ and $\rho_{B}$ be $$ \rho_{A} = \sum_\mu p_\mu \Pi_{A,\mu},\quad \rho_{B} = \sum_\nu q_\nu \Pi_{B,\nu}, $$ respectively, where $\Set{\Pi_{A,\mu}}$ and $\Set{\Pi_{B,\nu}}$ are orthogonal projectors of any rank, chosen such that the eigenvalues $\set{p_\mu}$ and $\set{q_\nu}$ are non-degenerate. Then the $\Set{\Pi_{A,\mu}\otimes\Pi_{B,\nu}}$ are orthogonal eigen-projectors of $\rho_{A}\otimes\rho_{B}$. Since $\Br{\rho_{AB},\rho_A\otimes\rho_B}=0$ is equivalent to $\Br{\rho_{AB},\Pi_{A,\mu}\otimes\Pi_{B,\nu}}=0$ for all $\mu,\nu$, it follows from $\sum_{\mu,\nu}\Pi_{A,\mu}\otimes\Pi_{B,\nu} = \mathbb{1}_A\otimes\mathbb{1}_B$ that $$ \rho_{AB} = \sum_{\mu,\nu} (\Pi_{A,\mu}\otimes\Pi_{B,\nu}) \rho_{AB} (\Pi_{A,\mu}\otimes\Pi_{B,\nu}). $$ The converse follows by direct computation. \end{proof} Here the states $\rho_{AB}$ satisfying the condition $\Br{\rho_{AB},\rho_A\otimes\rho_B}=0$ are the generalization of the zero-symmetric-discord states, just as lazy states are the generalization of the zero-discord states.
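As a numerical illustration of the proposition (a minimal sketch; the state is random, and the pinching uses rank-one eigenprojectors, i.e.\ the generic non-degenerate case):
\begin{verbatim}
import numpy as np

dA = dB = 2
rng = np.random.default_rng(2)
a = rng.normal(size=(dA * dB,) * 2) \
    + 1j * rng.normal(size=(dA * dB,) * 2)
rho = a @ a.conj().T
rho /= np.trace(rho)

def marginals(r):
    r4 = r.reshape(dA, dB, dA, dB)
    return np.einsum('abcb->ac', r4), np.einsum('abac->bc', r4)

rho_A, rho_B = marginals(rho)
# pinch rho in the eigenbasis of rho_A (x) rho_B, a rank-one
# instance of the map Pi_A (x) Pi_B from the proposition
Ua = np.linalg.eigh(rho_A)[1]
Ub = np.linalg.eigh(rho_B)[1]
U = np.kron(Ua, Ub)
lazy = U @ np.diag(np.diag(U.conj().T @ rho @ U)) @ U.conj().T

comm = lambda x, y: np.linalg.norm(x @ y - y @ x)
lA, lB = marginals(lazy)
print(comm(rho,  np.kron(rho_A, rho_B)))  # generic state: nonzero
print(comm(lazy, np.kron(lA, lB)))        # pinched state: ~ 0
\end{verbatim}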
\section{Conclusion} We have studied the well-known monotonicity inequality of relative entropy under completely positive trace-preserving maps and used it to derive some properties of the symmetric discord. A new form of the zero-discord states has been derived systematically via Petz's equality condition for the monotonicity of relative entropy, and these results have been generalized beyond the zero-discord states. A more interesting and challenging problem remains for future study: what is a necessary and sufficient condition for the vanishing of the conditional mutual information rate at a time $\tau$, $$ \Br{\frac{d}{dt}I(A:B|E)_\rho}_{t=\tau}=0, $$ where $I(A:B|E)_\rho = \rS(\rho_{AE}) + \rS(\rho_{BE}) - \rS(\rho_{ABE}) - \rS(\rho_E)$? \subsection*{Acknowledgement} We thank F. Brand\~{a}o, M. Mosonyi, M. Piani, J. Rau and A. Winter for valuable comments. This project is supported by the Natural Science Foundation of China (11171301, 10771191 and 10471124) and the Natural Science Foundation of Zhejiang Province of China (Y6090105).
\section{Non-ideal fabrication in fixed-frequency qubits} Lattices of coupled qubits have been proposed to enable error-correction algorithms such as the `surface code' \cite{Gambetta2017Build_s,fowler2012surface_s}. Qubits are arranged into a square grid with alternate qubits serving either data or error-checking functions. Bus couplers provide interaction among adjacent qubits, with up to four qubits attached to each bus. A seven-qubit lattice thereby comprises 12 qubit pairs and a seventeen-qubit lattice comprises 34 pairs. However, single-junction transmon qubits are challenging to fabricate at precisely set frequencies. Among dozens of identically fabricated qubits, the frequencies typically have a spread of $\sigma_f \sim 200$ MHz \cite{privcommsrosenblatt_s}. Such imprecision will inhibit the functioning of qubit lattices. Considering a lattice of transmon qubits of frequency $\sim 5$ GHz and anharmonicity $\delta/2\pi = -340$ MHz, and considering cross-resonance gate operations, we can estimate the number of undesired interactions among these pairs. Studies of the cross-resonance gate \citep{divincenzo2013quantum_s} indicate that these gates will be dominated by undesirable interactions if the frequency separation $|\Delta|$ between adjacent qubits is equal to zero (a degeneracy between the $f_{01}$ transitions of the two qubits); equal to $-\delta/2\pi$ (a degeneracy between $f_{01}$ of one qubit and $f_{12}$ of the next); or if $|\Delta| > -\delta/2\pi$ (a weak interaction leading to very slow gate operation). In a simple Monte Carlo model (sketched in code below), we assign to every point in the lattice a random qubit frequency from a Gaussian distribution around 5 GHz and count the number of degenerate or weak-interaction pairs, taking a range of $\pm|\delta/2\pi|/20$, or $\pm 17$ MHz, around each degeneracy. The results appearing in Table \ref{table:MCModelCollisions} make it evident that the likelihood of frequency collisions increases as the lattice grows. \begin{table}[h] \centering \begin{tabular}{c|c|c} Number & & Mean Number \\ of QBs & $\sigma_f$ & of Collisions \\ \hline 7 & $\frac{1}{2}|\delta/2\pi|$ & 2.3 \\ 7 & $\frac{3}{4}|\delta/2\pi|$ & 3.6 \\ 17 & $\frac{1}{2}|\delta/2\pi|$ & 6.6 \\ 17 & $\frac{3}{4}|\delta/2\pi|$ & 10.6 \end{tabular} \caption{\label{table:MCModelCollisions} Frequency-collision modeling in lattices of transmon qubits employing cross-resonance gates. Predicted number of bad gate pairs (`frequency collisions') for two different lattice sizes. The 7-qubit lattice has 12 pairs and the 17-qubit lattice has 34 pairs. The mean of the distribution is 5 GHz and two different distribution widths $\sigma_f$ are considered.} \end{table}
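A minimal sketch of this collision-counting model follows (the 7-qubit pair list below is a hypothetical connectivity with the stated 12 coupled pairs, not the actual lattice wiring, so the counts only roughly reproduce Table~\ref{table:MCModelCollisions}):
\begin{verbatim}
import numpy as np

def mean_collisions(pairs, sigma_f, f0=5.0, delta=0.340,
                    window=0.017, n_trials=20000, seed=0):
    # a pair 'collides' if |Delta| lies within `window` (GHz) of
    # 0 or of |delta/2pi|, or if |Delta| > |delta/2pi| (weak)
    rng = np.random.default_rng(seed)
    nq = 1 + max(max(p) for p in pairs)
    i, j = np.array(pairs).T
    f = rng.normal(f0, sigma_f, size=(n_trials, nq))
    d = np.abs(f[:, i] - f[:, j])
    bad = (d < window) | (np.abs(d - delta) < window) | (d > delta)
    return bad.sum(axis=1).mean()

# hypothetical 7-qubit connectivity with 12 coupled pairs
pairs7 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 0),
          (0, 3), (1, 4), (2, 5), (3, 6), (0, 5)]
for frac in (0.5, 0.75):
    print(frac, mean_collisions(pairs7, frac * 0.340))
\end{verbatim}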
\section{Device design and fabrication} The device for sample A, shown in Fig. \ref{fig:1}, has all eight qubit/cavity systems capacitively coupled to a common feedline, through which individual qubit readout was achieved via a single microwave drive and output line. Sample B, shown in Fig. \ref{fig:1}, employs a design where all qubits have separate drive and readout microwave lines. As in Refs. \cite{Takita2016Dem_s} and \cite{ibmquantumexp_s}, this sample is designed as a lattice of coupled qubits for use in multi-qubit gate operations, although no such operations are presented in this paper. Coplanar-waveguide buses, half-wave resonant at $\sim$6 GHz, span the space between the qubits. Each bus resonator couples together three adjacent qubits. As compared to Ref. \cite{Takita2016Dem_s}, here the lattice comprises eight qubits and four buses instead of the seven qubits and two buses found there. Both samples were fabricated using standard lithographic processing to pattern the coplanar waveguides, ground plane, and qubit capacitors from a sputtered Nb film on a Si substrate. In sample A the Nb films are 100 nm thick; in sample B they are 200 nm. The qubits were similar in design to Refs. \citep{Sheldon_Procedure_2016_s,chow2014implementing_s,corcoles2015demonstration_s,Takita2016Dem_s}, with large transmon capacitor pads bridged by electron-beam-patterned Al traces used to create the Josephson junctions. Conventional shadow-evaporated double-angle Al-AlOx-Al processing was used to fabricate the junctions. The transmon capacitor pads in samples A and B have different sizes and separations, necessitating different SQUID loop geometries, as shown in Fig. \ref{fig:1}. The SQUID loops for qubits on sample A were created by bridging the transmon capacitor pads with two separate $0.6$-$\mu{\rm m}$-wide Al traces and Josephson junctions, with the asymmetry in the junctions fabricated by increasing the width of one junction with respect to the other while keeping the overlap fixed at $0.2\, \mu{\rm m}$. The sum of the large and small junction areas was designed to be constant, independent of $\alpha$. Qubits on sample A had capacitor pads separated by $20\, \mu{\rm m}$, with the Al electrodes separated such that the SQUID loop area was roughly $400\, \mu{\rm m^{2}}$. In sample B, the Nb capacitor pads were separated by $70\, \mu{\rm m}$. The SQUID comprises a $\sim 20 \times 20\, \mu{\rm m}^2$ Al loop of 2 $\mu$m trace width, placed midway between the capacitor pads and joined to Nb leads extending from the pads. In sample B, the large and small junctions differ in both width and overlap. In this sample, all SQUIDs of a given $\alpha$ were fabricated identically, but SQUIDs of different $\alpha$ had different total junction areas. \begin{figure}[!b] \includegraphics[width=1.0\columnwidth]{Device_Image_Supp2} \caption{(color online) Optical micrographs of the samples, including higher-magnification images of qubits and SQUID loops. The sample B image shows a chip of identical design to the ones used for measurements; labels indicate each qubit and its individual readout resonator, while unlabeled resonators are bus resonators. \label{fig:1}} \end{figure} \section{Measurement setup} Measurements of sample A were completed in a dilution refrigerator (DR) at Syracuse University (SU), while sample B was measured in a DR at the IBM TJ Watson Research Center. Both samples were wire-bonded into holders designed to suppress microwave chip modes. Each sample was mounted to the mixing chamber of its respective DR and placed inside a cryoperm magnetic shield, thermally anchored at the mixing chamber. Both the SU and IBM DRs had room-temperature $\mu$-metal shields. Measurements for both samples were performed using standard cQED readout techniques \citep{Reed_High_2010_s}. For sample A, room-temperature microwave signals were supplied through attenuated coaxial lines, thermalized at each stage of the DR and filtered using 10 GHz low-pass filters (K\&L) thermalized at the mixing chamber. We used a total of 70 dB of attenuation on the drive lines: 20 dB at $4\, {\rm K}$, 20 dB at $0.7\, {\rm K}$ and 30 dB at the mixing chamber, with a base temperature of $30\,{\rm mK}$. Output measurement signals from the sample passed through another 10 GHz low-pass filter, a microwave switch, and two magnetically shielded cryogenic isolators, all thermally anchored to the mixing chamber.
In the case of sample A, the signal was amplified by a low-noise HEMT at $4\, {\rm K}$, passing through a Nb/Nb superconducting coaxial cable between the mixing chamber and the $4\, {\rm K}$ stage. The signal was amplified further at room temperature before being mixed down to 10 MHz and digitized. The eight resonators, coupled to each qubit on sample A, had measured frequencies that ranged from $6.975 - 7.136\, {\rm GHz}$, separated by $20 - 25\, {\rm MHz}$. The $\kappa/{2\pi}$ linewidths for these resonators were on the order of a few hundred kHz. Figure \ref{fig:1} shows the layout of the sample B chip. The $\alpha = 15$ asymmetric-SQUID transmon reported in the paper was located at position $Q_7$. It was read out through a coplanar waveguide resonator of frequency 6.559 GHz and linewidth $\sim$ 300 kHz, and was found to have $f_{01}^{max} = 5.387$ GHz. The fixed-frequency transmon (5.346 GHz) at position $Q_2$ was read out through a 6.418 GHz resonator having linewidth $\sim$ 300 kHz. Sample B qubits were measured via signal wiring similar to that presented in Refs. \cite{Takita2016Dem_s,chow2014implementing_s,corcoles2015demonstration_s,sheldon2016characterizing_s}. Drive wiring included 10 dB of attenuation at 50 K, 10 dB at 4 K, 6 dB at 0.7 K, 10 dB at 100 mK, and, at the mixing-chamber plate, 30 dB of attenuation plus a homemade `Eccosorb' low-pass filter. Drive signals entered a microwave circulator at the mixing plate. On one set of signal wiring, the second port of the circulator passed directly to qubit $Q_7$. On another set of signal wiring, the second port of the circulator passed to several different qubits via a microwave switch. Signals reflected from the device passed back through the circulator to the output and amplifier circuitry. The output circuitry comprised a low-pass Cu powder filter, followed by two cryogenic isolators in series, followed by an additional low-pass filter, followed by superconducting NbTi coaxial cable, followed by a low-noise HEMT amplifier at 4 K and an additional low-noise amplifier at room temperature. The low-pass filters were intended to block signals above $\sim$ 10 GHz. In the case of $Q_7$, additional amplification was afforded by a SLUG amplifier \cite{HoverAPL2014_104_152601_s} mounted at the mixing stage, biased via two bias-tee networks and isolated from the sample by an additional cryogenic isolator. Output signals were mixed down to 5 MHz before being digitized and averaged. The mixing-plate thermometer indicated a temperature of $\sim$ 15 to 20 mK during the measurements. Magnetic flux was supplied to sample A via a 6-mm-inner-diameter superconducting wire coil placed $2\,{\rm mm}$ above the sample. A Stanford SRS SIM928 dc voltage source with a room-temperature $2\,{\rm k}\Omega$ resistor in series supplied the bias current to the coil. The flux-bias current passed through brass coaxial lines that were thermally anchored at each stage of the DR, with an $80~{\rm MHz}$ $\pi$-filter at 4 K and a copper powder filter on the mixing chamber. In sample B, a similar wire-wound superconducting coil was mounted about 3 mm above the qubit chip and likewise driven from a SIM928 voltage source through a room-temperature $5\,{\rm k}\Omega$ bias resistor. DC pair wiring (Cu above 4 K within the fridge, NbTi below) was used to drive the coil. The coil had a self-inductance of 3.9 mH and a mutual inductance to the SQUID loop of $\sim$ 1 pH.
The flux coil applied a dc flux through all qubits, with the flux level set just prior to qubit measurement and maintained at a constant level throughout the measurement. For each qubit, we measured $f_{01}$ as a function of coil current and fit this against Eq. (1) of our paper to set the flux scale in units of $\Phi_0$ and subtract any offset flux, as well as to determine $f_{01}^{max}$ and the asymmetry $d$. We treat the sign of the flux as arbitrary. \section{Qubit Coherence} Coherence data for both samples were collected using an automated measurement algorithm. After applying a prescribed fixed flux, the system determined the qubit frequency from Ramsey-fringe fitting, optimized $\pi$ and $\pi/2$ pulses at this frequency, and measured coherence. $T_{2}^{*}$ measurements were completed with the drive detuned from the qubit frequency, with the level of detuning optimized to provide a reasonable number of fringes for fitting. All raw coherence data were visually checked to confirm that a good-quality measurement was achieved. If the automated tuning routine failed to find the frequency or to properly scale the $\pi$ and $\pi/2$ pulses, the point was omitted from the dataset. For sample A, three $T_1$ measurements were made at each flux point, followed by three $T_2^*$ measurements. At each flux point, the reported $T_1$ and $T_2^*$ values and error bars comprise the mean and standard deviation of the three measurements. The corresponding $\Gamma_{\phi}$ value is found from these mean values, and its error bar is found by propagating the errors in $T_1$ and $T_2^*$ via partial derivatives and combining them in quadrature. For sample B, at each flux point first $T_1$ was measured, then $T_2^*$, three times in succession. For this device the reported $T_1$ and $T_2^*$ values comprise the mean of the three measurements and the error bars are their standard deviation. Here the reported dephasing rate $\Gamma_{\phi}$ comprises the mean of the three values of $\Gamma_{\phi}=1/T_{2}^{*}-1/2T_{1}$ found from the three $T_1$, $T_2^*$ pairs, and the error bar is the standard deviation. \begin{figure} \includegraphics[width=0.75\columnwidth]{T1_vs_Freq_SYR_IBM2} \caption{$T_{1}$ vs. frequency measured for all qubits discussed in the main paper. Single points are included for the $T_{1}$ values measured for the fixed-frequency qubits. \label{fig:2}} \end{figure} Figure \ref{fig:2} shows $T_{1}$ plotted versus qubit frequency, measured for the qubits discussed in our paper. We observe a trend of increasing $T_{1}$ with decreasing qubit frequency. In sample A, each qubit's quality factor $\omega T_{1}$ is roughly constant, consistent with dielectric loss and a frequency-independent loss tangent, as observed in other tunable superconducting qubits \citep{barends2013coherent_s}. On sample B, $T_{1}$ decreases by about 10 $\mu$s from the low to the high end of the frequency range, consistent with Purcell loss to the readout resonator. In addition, fine structure is occasionally observed in Fig. \ref{fig:2}, where $T_{1}$ drops sharply at specific frequencies. Such localized features in the $T_{1}$ frequency dependence are observed for all tunable qubits that we have measured. These features, similar to those observed in Ref. \citep{barends2013coherent_s}, are attributed to frequencies where a qubit transition is resonant with a two-level-system defect on or near the qubit. Additionally, on sample B, at a few frequency points inter-qubit coupling affects relaxation.
Where the $Q_7$ qubit is nearly degenerate with $Q_6$ (at $\sim$5.33 GHz) and with $Q_8$ (at $\sim$5.22 GHz), coupling via the adjacent buses produces an avoided crossing in the energy spectrum. This effect is barely noticeable in both the frequency curve of Fig. 2 of our paper and the relaxation data in Fig. \ref{fig:2} here. \begin{figure} \includegraphics[width=0.75\columnwidth]{Ramsey_vs_Phi_All2} \caption{$T_{2}^{*}$ vs. flux measured for the qubits discussed in the main paper. $T_{2}^{*}$ measured for the fixed-frequency qubits on both samples is included with dashed lines to help guide the eye. \label{fig:3}} \end{figure} Figure \ref{fig:3} shows $T_{2}^{*}$ plotted versus flux, measured for the qubits discussed in our paper. For the tunable qubits on sample A, $T_{2}^{*}$ is greatest at the qubit sweet spots and decreases away from these sweet spots as $D_{\Phi}$ increases. In the $\alpha = 15$ tunable qubit on sample B, $T_{2}^{*}$ is nearly constant over the measured half flux quantum range. The small frequency dependence observed in $T_{2}^{*}$ in sample B is consistent with the observed variation of $T_{1}$ with frequency, leading to the frequency-independent dephasing rate observed for this qubit in Fig. 3 of our paper. \section{Relaxation Due to Coupling to the Flux Bias Line} While using two Josephson junctions to form a dc SQUID as the inductive element of a transmon allows its frequency to be tuned via magnetic flux, this opens up an additional channel for energy relaxation: emission into the dissipative environment across the bias coil that is coupled to the qubit through a mutual inductance. This was first discussed by Koch et al. \citep{koch2007charge_s} for a nearly symmetric split-junction transmon. We apply the same analysis here to study the effect of increasing junction asymmetry on the qubit $T_{1}$ through this loss mechanism. For an asymmetric transmon, Koch et al. show in Eq. (2.17) of Ref. \citep{koch2007charge_s} that the Josephson portion of the qubit Hamiltonian can be written in terms of a single phase variable with a shifted minimum that depends upon the qubit's asymmetry and the applied flux bias. By linearizing this Hamiltonian about the static flux bias point for small noise amplitudes, Koch et al. compute the relaxation rate for a given current noise power from the bias impedance coupled to the SQUID loop through a mutual inductance $M$. We followed this same analysis for our qubit parameters, assuming harmonic oscillator wavefunctions for the qubit ground and excited states, and obtained the dependence of $T_{1}$ due to this mechanism as a function of bias flux. Using our typical device parameters ($E_{J} = 20\,{\rm GHz}$, $E_{c} = 350\,{\rm MHz}$, $M = 2\,{\rm pH}$, $R = 50~\Omega$) we obtain the intrinsic loss for the asymmetries discussed in our paper, shown in Fig. \ref{fig:4}. This analysis agrees with the results described in Ref. \citep{koch2007charge_s}. For a 10\% junction asymmetry, this contribution results in a $T_{1}$ that varies between $25\,{\rm ms}$ and a few seconds. As the junction asymmetry is increased, the minimum $T_{1}$ value, obtained at odd half-integer multiples of $\Phi_{0}$, decreases slightly. However, even for our $\alpha = 15$ qubit, the calculated value of $T_{1}$ due to this mechanism never falls below $10\,{\rm ms}$. 
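A schematic numerical version of this golden-rule estimate is sketched below. It assumes zero-temperature Johnson noise of the $R = 50~\Omega$ bias impedance, $S_I(\omega) = 2\hbar\omega/R$, and harmonic-oscillator matrix elements about the shifted potential minimum; spectral-density and prefactor conventions differ between references, so the absolute scale should be taken as indicative only, while the flux and asymmetry dependence is the point of interest.

\begin{verbatim}
import numpy as np

h, hbar = 6.626e-34, 1.0546e-34
PHI0 = 2.068e-15                  # flux quantum [Wb]
EJS, EC = 20e9 * h, 0.35e9 * h    # E_J and E_c quoted above [J]
M, R = 2e-12, 50.0                # mutual [H], bias impedance [Ohm]

def t1_bias_line(flux, d):
    """Golden-rule T1 from bias-line current noise coupled through M.
    The matrix element follows from linearizing the single-phase
    Hamiltonian about its flux-dependent minimum."""
    x = np.pi * flux / PHI0
    N2 = np.cos(x)**2 + d**2 * np.sin(x)**2
    EJ = EJS * np.sqrt(N2)                      # effective Josephson energy
    w01 = (np.sqrt(8.0 * EJ * EC) - EC) / hbar  # 0-1 transition frequency
    phi_zpf = (2.0 * EC / EJ) ** 0.25           # harmonic matrix element
    dHdPhi = EJS * (d / np.sqrt(N2)) * phi_zpf * np.pi / PHI0
    gamma1 = (M * dHdPhi / hbar) ** 2 * (2.0 * hbar * w01 / R)
    return 1.0 / gamma1

# e.g. the minimum-T1 point (half-integer flux) for the alpha = 15 device:
print(t1_bias_line(0.5 * PHI0, d=(15.0 - 1.0) / (15.0 + 1.0)))
\end{verbatim}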
Therefore, although increasing junction asymmetry does place an upper bound on the $T_{1}$ of an asymmetric transmon, this level is two orders of magnitude larger than the $T_{1}$ measured in current state-of-the-art superconducting qubits, which is set by other mechanisms. \begin{figure} \includegraphics[width=0.75\columnwidth]{T1max_vs_phi} \caption{Dependence of $T_{1}$ on flux for asymmetric transmons, calculated for the asymmetries discussed in the main paper, due to coupling to an external flux bias following the analysis of Koch et al. \citep{koch2007charge_s}. Though in the main paper our symmetric qubit was an $\alpha = 1$, in this calculation we used $\alpha = 1.1$ so that $T_{1}$ did not diverge at $\Phi = 0$. \label{fig:4}} \end{figure} Also in Ref. \citep{koch2007charge_s}, Koch et al. described a second loss channel for a transmon related to coupling to the flux-bias line. In this case, the relaxation occurs due to the oscillatory current through the inductive element of the qubit -- independent of the presence of a SQUID loop -- coupling to the flux-bias line, described by an effective mutual inductance $M'$. This mutual inductance vanishes when the Josephson element of the qubit and the bias line are arranged symmetrically. With a moderate coupling asymmetry for an on-chip bias line, Koch et al. estimate that the $T_{1}$ corresponding to this loss mechanism would be of the order of 70 ms. Because this mechanism does not directly involve the presence or absence of a SQUID loop as the inductive element, the asymmetry between junctions that we employ in our asymmetric transmons plays no role here, and this particular limit on $T_{1}$ should be no different from that of a conventional transmon. An additional potential relaxation channel may arise due to capacitive coupling to the flux-bias line, as discussed in Ref. \cite{JohnsonDisserationYale2011_s}. However, this is expected to be negligible when a bobbin coil is used, as in our experiments. \section{Ramsey Decay Fitting} As described in the main paper, our analysis of qubit dephasing rates used a purely exponential fit to all of the measured Ramsey decays. Here we discuss why this fitting approach is appropriate for all asymmetric qubits and for a large portion of the coherence data measured for the symmetric qubit. Of all the qubits measured in this study, the symmetric $\alpha = 1$ qubit was the most impacted by flux noise away from the qubit sweet spot because of its large energy-band gradient. Therefore, to illustrate the impact that flux noise has upon the Ramsey decay envelope, we consider the Ramsey measurements for this qubit on and off the sweet spot. Example measurements at flux values of $0$ and $0.3\,\Phi_0$ are shown in Fig. \ref{fig:5}a and b, respectively. At each flux point, we fit the Ramsey decay with both a purely exponential (Fig. \ref{fig:5}a I) and a purely Gaussian form (Fig. \ref{fig:5}a II); the residuals of each fit are included to compare the quality of fit in each case. As discussed in the main paper, at the upper sweet spot, where $D_{\Phi} = 0$, non-flux-dependent background dephasing should dominate and the Ramsey decay should be more readily fit using an exponential. Figure \ref{fig:5}a shows that this is indeed the case: the purely exponential form provides a closer fit to the Ramsey decay, with the residuals of this fit being smaller over the entire range compared with those of the Gaussian fit. 
The Ramsey decay shown in Fig. \ref{fig:5}b was measured at the point where $D_{\Phi}$ was the maximum measured for the $\alpha = 1$ qubit. Here, it is clear that a purely Gaussian form results in a better fit, with smaller residuals, than an exponential envelope. This indicates that, at this flux point, the $\alpha = 1$ qubit is heavily impacted by low-frequency flux noise, as a purely $1/f$ dephasing source would result in a Gaussian envelope for the decay \citep{ithier2005decoherence_s}. Although a purely Gaussian fit form is useful for illustrating the impact that flux noise has upon the Ramsey decay form, it is not an optimal quantitative approach for investigating dephasing in these qubits. This is because tunable transmons dephase not only due to flux noise with a roughly $1/f$ power spectrum, but also due to other noise sources with different, non-$1/f$ power spectra \citep{sears2012photon_s,schuster2005ac_s,gambetta2006qubit_s}. These other noise sources generally result in an exponential dephasing envelope. Also, the Ramsey decay includes an intrinsic loss ($T_1$) contribution that is always exponential in nature. Therefore, to accurately fit the decay due to dephasing in these qubits, we must account for these exponential decay envelopes in any fitting approach that is not purely exponential. \begin{figure} \includegraphics[width=1\columnwidth]{Ramsey_Decay_Fit} \caption{Ramsey decay envelopes measured for the $\alpha = 1$ qubit at a) the sweet spot $\Phi=0$ and b) $\Phi=0.3 \Phi_{0}$, where $D_{\Phi}$ was the largest value measured for this qubit. At each flux point, the Ramsey decay envelopes are fit with both a purely exponential (I) and a Gaussian (II) fit form. Functions fitted to the measured data (blue open circles) are plotted as solid red lines. \label{fig:5}} \end{figure} To account for the $T_{1}$ contribution to the Ramsey decay envelope in our non-exponential fitting, we take the average $T_{1}$ measured at each flux point and separate it from $T_{2}^{*}$ in the Ramsey fit function using $1/T_{2}^{*} = 1/T_{\phi} + 1/(2T_{1})$. Therefore, instead of fitting a $T_{2}^{*}$ time, we fit $T_{\phi}$ directly. To fit the Ramsey decay using a Gaussian form, we square the dephasing exponent within the fitting function [Eq. (\ref{eq:1})]. We can go one step further by not forcing an explicit form on the dephasing exponent, but instead adding another fit parameter $\gamma$ [Eq. (\ref{eq:2})], which would be 1 for a pure exponential and 2 for a pure Gaussian. Although a fit that is not explicitly exponential or Gaussian is not motivated directly by a particular theoretical model, by fitting Ramsey decays with this free exponent $\gamma$ we gain insight into the transition from flux-noise-dominated dephasing at large $D_{\Phi}$ to background dephasing near the sweet spots. The two fit forms described above are given by the following decay functions: \begin{equation} f_{Ramsey}(t)=A+B\{\cos{(\omega t+\delta)}\exp{(-\Gamma_{1}t/2)}\exp{[-(\Gamma_{\phi}t)^2]}\}\label{eq:1}, \end{equation} \begin{equation} f_{Ramsey}(t)=A+B\{\cos{(\omega t+\delta)}\exp{(-\Gamma_{1}t/2)}\exp{[-(\Gamma_{\phi}t)^\gamma]}\}\label{eq:2}, \end{equation} where $A$ and $B$ are offset and amplitude constants that scale the arbitrary measured signal, $\omega$ is the detuning from the qubit frequency with a phase offset $\delta$, $\Gamma_{1}$ is the intrinsic loss rate ($1/T_{1}$) and $\Gamma_{\phi}$ is the dephasing rate. Here, $A$, $B$, $\omega$, $\delta$, $\Gamma_{\phi}$, and $\gamma$ are fit parameters. 
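As an illustration, the free-exponent form of Eq. (\ref{eq:2}) can be fit with a few lines of Python, with $\Gamma_1$ held fixed from the measured $T_1$; the synthetic trace and parameter values below are placeholders, not measured data.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def ramsey(t, A, B, omega, delta, gamma_phi, gamma, gamma_1):
    """Eq. (2); gamma = 1 is exponential, gamma = 2 Gaussian.
    gamma_1 = 1/T1 is fixed, not fitted; np.abs guards the
    fractional power against transient negative trial values."""
    return A + B * (np.cos(omega * t + delta)
                    * np.exp(-gamma_1 * t / 2.0)
                    * np.exp(-np.abs(gamma_phi * t) ** gamma))

gamma_1 = 1.0 / 40.0                  # 1/T1 for T1 = 40 us (placeholder)
model = lambda t, A, B, w, d, gp, g: ramsey(t, A, B, w, d, gp, g, gamma_1)

t = np.linspace(0.0, 20.0, 401)       # Ramsey delay [us]
rng = np.random.default_rng(0)
data = model(t, 0.5, 0.4, 2 * np.pi, 0.0, 0.08, 1.6)
data += 0.01 * rng.standard_normal(t.size)

popt, pcov = curve_fit(model, t, data,
                       p0=[0.5, 0.5, 2 * np.pi * 1.05, 0.1, 0.05, 1.5])
# popt[4] is Gamma_phi [1/us]; popt[5] is the free exponent gamma
\end{verbatim}

Fixing $\gamma=1$ or $\gamma=2$ in the same routine reproduces the purely exponential and purely Gaussian fits as special cases.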
All other components are fixed at values determined using the methods discussed above. \begin{figure} \includegraphics[width=0.75\columnwidth]{1to1_gamma_vs_Phi} \caption{$\gamma$ vs. flux extracted from fits to the Ramsey measurements on the $\alpha = 1$ qubit using Eq. \ref{eq:2}. \label{fig:6}} \end{figure} This behavior is illustrated in Fig. \ref{fig:6}, where we plot $\gamma$ vs. flux extracted from fits to the Ramsey measurements on the $\alpha = 1$ qubit using Eq. (\ref{eq:2}). In the flux region between $\pm 0.1\,\Phi_{0}$, $\gamma \approx 1$, indicating that the dephasing envelope is primarily exponential, and thus the dominant dephasing noise affecting the qubits here does not have a $1/f$ spectrum. At flux bias points further away from the sweet spot, $\gamma$ shifts towards 2 as $D_{\Phi}$ increases, and appears to level off close to this value at flux biases above $\sim 0.2\,\Phi_{0}$. Thus, in this bias regime, the dephasing envelope is primarily Gaussian and the dephasing noise influencing the qubits is predominantly low-frequency in nature, with a $1/f$-like spectrum \citep{ithier2005decoherence_s,yoshihara2006decoherence_s}. We can also visualize this variable-exponent fit by plotting $\gamma$ vs. $D_{\Phi}$ rather than $\Phi$, again for the $\alpha = 1$ qubit (Fig. \ref{fig:7}). In this plot, $\gamma$ approaches 2 for $D_{\Phi}$ values around $6~\rm GHz/\Phi_{0}$. We have also included vertical dashed lines in Fig. \ref{fig:7} indicating the maximum $D_{\Phi}$ values reached by the less tunable $\alpha = 4$ and 7 qubits on sample A. Below these $D_{\Phi}$ levels, $\gamma$ is close to 1, implying that the decay envelope is nearly exponential, and thus justifying our use of an exponential decay for fitting the asymmetric qubits in the main paper. \begin{figure} \includegraphics[width=0.75\columnwidth]{1to1_gamma_vs_Grad} \caption{$\gamma$ vs $D_{\Phi}$ extracted from fits to the Ramsey measurements on the $\alpha = 1$ qubit using Eq. \ref{eq:2}. Dashed lines are included to indicate the maximum $D_{\Phi}$ reached by the $\alpha = 7$ (black dashed line) and $\alpha = 4$ (blue dot-dashed line) qubits measured on sample A. \label{fig:7}} \end{figure} As yet another approach to fitting the Ramsey decay envelopes, we can employ a function that separates the exponential decay due to background dephasing from the Gaussian decay due to noise with a low-frequency tail. For this fit, along with separating out the $T_{1}$ contribution to the Ramsey decay envelope, we also determine the non-flux-dependent background dephasing rate at the sweet spot, and then use this rate as a fixed parameter in the fitting of our Ramsey measurements at any given flux point. We now have a composite Ramsey fit form with three components: a $T_{1}$ contribution and a background dephasing component, both purely exponential and fixed by the fitting of separate measurements, plus a Gaussian component to capture the dephasing due to noise with a $1/f$ spectrum. 
This leads to a composite fitting function of the form: \begin{equation} f_{Ramsey}(t)=A+B\{\cos{(\omega t+\delta)}\exp{(-\Gamma_{1}t/2)}\exp{(-\Gamma_{\phi ,bkg}t)}\exp{[-(\Gamma_{\phi}t)^2]}\}\label{eq:3}, \end{equation} where $A$ and $B$ are offset and amplitude constants that scale the arbitrary measured signal, $\omega$ is the detuning from the qubit frequency with a phase offset $\delta$, $\Gamma_{1}$ is the intrinsic loss rate ($1/T_{1}$), $\Gamma_{\phi ,bkg}$ is the background dephasing rate measured at $D_{\Phi}=0$ and $\Gamma_{\phi}$ is the fitted dephasing rate. Here, $A$, $B$, $\omega$, $\delta$, and $\Gamma_{\phi}$ are fit parameters. All other components are fixed at values determined using the methods discussed above. Though this fit form cleanly separates the different components of the dephasing decay, it has one key deficiency: it assumes that the background dephasing rate is frequency independent, which is not necessarily justified, as the background dephasing mechanism may also vary with frequency. To calculate the total dephasing rate using this fit form, we add the constant background dephasing rate to the fitted $\Gamma_{\phi}$. \begin{figure} \includegraphics[width=0.75\columnwidth]{1to1_RatevsGrad_4type_MAIN} \caption{$\Gamma_{\phi}$ vs. $D_{\Phi}$ calculated for the $\alpha = 1$ qubit using the exponential, Gaussian [Eq. (\ref{eq:1})], $\gamma$-exponent [Eq. (\ref{eq:2})], and composite [Eq. (\ref{eq:3})] fitting forms. \label{fig:8}} \end{figure} To understand how the explicit fitting form impacts the dephasing rate, in Fig. \ref{fig:8} we plot $\Gamma_{\phi}$ vs. $D_{\Phi}$ calculated for the $\alpha = 1$ qubit using the four different fitting forms: exponential, Gaussian [Eq. (\ref{eq:1})], $\gamma$-exponent [Eq. (\ref{eq:2})], and composite [Eq. (\ref{eq:3})]. We first note that any differences in the dephasing rates calculated at each point using the various fit methods are subtle, and the fits are reasonably consistent with one another within the fit error bars and scatter. We do observe, though, that a purely exponential fit results in a dephasing rate that is slightly higher than the values from the Gaussian fits at all flux points, resulting in the largest slope and thus the highest effective flux-noise level. Therefore, we conclude that forcing a purely exponential fit to the Ramsey decay envelopes measured for qubits that are strongly influenced by $1/f$ flux noise simply puts an upper bound on the absolute flux noise strength. The $\gamma$-exponent fitting approach provides a dephasing rate that agrees well with that extracted from the exponential fit form at low $D_{\Phi}$ values, where background dephasing processes dominate. However, at higher $D_{\Phi}$ values, where the qubit is heavily impacted by $1/f$ flux noise, the $\gamma$-exponent fit provides better agreement with the Gaussian-fitted dephasing rate. The composite fit is rigidly fixed along the $\Gamma_{\phi}$ axis by the value chosen for the background dephasing rate, in this case chosen to match the rate observed at the lowest $D_{\Phi}$ for the pure exponential fit. For this reason, direct comparisons between this fit and the others at individual flux points are more difficult. Despite all of these potential issues, the slope of $\Gamma_{\phi}$ vs. $D_{\Phi}$ is independent of the chosen background dephasing rate. 
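Because only this slope enters the flux-noise estimate, extracting $A_{\Phi}$ is a short calculation; the slope-to-$A_{\Phi}$ relation used below is the one quoted in the next paragraph, and the data arrays, infrared cutoff $f_{IR}$, and time scale $t$ are illustrative assumptions.

\begin{verbatim}
import numpy as np

# Gamma_phi [1/s] vs D_Phi [Hz/Phi0], e.g. from the composite fit
# (placeholder arrays, not the measured values)
D_phi     = np.array([1e9, 2e9, 4e9, 6e9])       # [Hz/Phi0]
Gamma_phi = np.array([3e4, 6e4, 1.1e5, 1.7e5])   # [1/s]

# Linear fit; slope = 2*pi*sqrt(A_Phi*|ln(2*pi*f_IR*t)|)
slope, intercept = np.polyfit(D_phi, Gamma_phi, 1)

f_IR, t_meas = 1.0, 1e-6   # assumed IR cutoff [Hz] and time scale [s]
log_factor = abs(np.log(2 * np.pi * f_IR * t_meas))
A_phi_sqrt = slope / (2 * np.pi * np.sqrt(log_factor))  # in units of Phi0
print(f"A_Phi^(1/2) ~ {A_phi_sqrt / 1e-6:.2f} micro-Phi0")
\end{verbatim}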
Therefore, this composite fit can be used to calculate a flux-noise level for this $\alpha = 1$ qubit that takes into account both the exponential nature of non-flux-dependent dephasing and the Gaussian nature of $1/f$ flux-noise decay. Using the same methods outlined in our paper, where we specified $\Gamma_{\phi}=2\pi\sqrt{A_{\Phi}|\ln{(2\pi f_{IR}t)}|}D_{\Phi}$, following the approach described in Ref. \citep{ithier2005decoherence_s}, we use the slope of this composite fit to extract a $1/f$ flux-noise level of $A_{\Phi}^{1/2}=1.3~\pm~0.2~\mu \Phi_0$. This $\sim10\%$ reduction in the extracted flux-noise level for the $\alpha = 1$ qubit compared to the purely exponential fit ($A_{\Phi}^{1/2}~=~1.4~\pm~0.2~\mu \Phi_0$) brings it closer to the flux-noise levels extracted from the fits to the measurements on the $\alpha = 7$ and 4 qubits: $1.3~\pm~0.2~\mu \Phi_0$ and $1.2~\pm~0.2~\mu \Phi_0$, respectively. The Ramsey measurements for these qubits were fit using a purely exponential form. It is important to note, though, that the $\sim10\%$ reduction in the flux-noise level extracted with the composite fit for the $\alpha = 1$ qubit is within the errors associated with our flux-noise calculations. To conclude this fitting study, we have shown that: \begin{enumerate} \item The $\alpha = 1$ qubit in this study has a Ramsey decay envelope that is more Gaussian in nature at high $D_{\Phi}$ values, where the dephasing of this qubit is strongly influenced by low-frequency flux noise. \item Though we have discussed different fitting approaches that better model the Ramsey decay envelope of qubits influenced by $1/f$ flux noise, using a purely exponential decay form for the Ramsey decay simply puts an upper bound on the extracted flux-noise strength. Moreover, the resulting flux-noise level and dephasing rates are comparable to those we obtained with the various other fitting approaches. \item Using a Ramsey fit function that takes into account both the exponential nature of the $T_{1}$ contribution to the decay envelope and of non-flux-dependent dephasing, as well as the Gaussian nature of dephasing due to $1/f$ flux noise, allows us to calculate a flux-noise level for the $\alpha = 1$ qubit that agrees well with the other, asymmetric qubits on the same sample. This is expected, as qubits of the same geometry on the same chip should experience similar flux noise \citep{sendelbach2008magnetism_s}. \end{enumerate} \section{Dephasing Rate Discussion} In Fig. \ref{fig:9} we present dephasing rates for several additional qubits, plotted against $D_{\Phi}$. These qubits were similar to those in our paper, but were prepared on additional chips and measured during additional cooldowns of our cryostats. These data are not included in our paper for reasons of clarity and consistency. However, they are presented here to support the observations of this study across all qubits measured in both of our labs. \begin{figure}[!b] \includegraphics[width=1.0\columnwidth]{RatevsGrad_SUP2} \caption{$\Gamma_{\phi}$ vs $D_{\Phi}$ for qubits measured during this study that were not included in the main paper. $\Gamma_{\phi}$ for fixed-frequency qubits is included as dashed lines. Type A/B qubits were similar in design to those on samples A/B and were measured using methods and device designs similar to those described for the corresponding sample type. \label{fig:9}} \end{figure} The first observation we make from Fig. 
\ref{fig:9} is that a spread in background dephasing rates is observed among both fixed-frequency and tunable qubits. As discussed in our paper, these subtle variations in qubit dephasing rate are not unexpected and are commonly observed in multi-qubit devices \citep{corcoles2015demonstration_s,chow2014implementing_s,Takita2016Dem_s}. While these variations in dephasing rate make the figure somewhat challenging to interpret, we can still draw the same conclusions from these data as from our main paper. We still observe that the dephasing rate due to flux noise increases linearly with $D_{\Phi}$ for the lower-asymmetry qubits. Again, at lower $D_{\Phi}$ values, below $\sim 1\,{\rm GHz}/\Phi_0$, the rate of dephasing is constant within the experimental spread for all qubits. Here, it is important to note that, for several of the qubits shown here and those discussed in our paper, there are specific flux bias points for each qubit where the dephasing rate is anomalously high. These points almost always coincide with places where $T_{1}$ drops sharply at specific frequencies, presumably due to localized coupling to defects in these qubits. Again, this sharp frequency dependence in $T_{1}$ is not unusual for tunable superconducting qubits and is consistent with what others have observed \citep{barends2013coherent_s}. The relatively flux-independent dephasing rate at low $D_{\Phi}$ is particularly apparent in the 9:1 qubits we measured. Several of these qubits exhibited the lowest background dephasing rates we observed in our study, between 20 and 40~kHz. These dephasing rates are comparable to current state-of-the-art superconducting qubits \cite{Takita2016Dem_s}. No fixed-frequency qubits were included on the same chips as these 9:1 asymmetric transmons, which prevents us from making a direct comparison with non-flux-noise-driven background dephasing rates as is done in the main paper. Nonetheless, for these 9:1 qubits, we can clearly see that the dephasing rate is essentially flux independent below $\sim 1\,{\rm GHz}/\Phi_0$, even at these low background dephasing levels. This reinforces our statement that asymmetric qubits with a useful level of tunability can be incorporated into future fault-tolerant superconducting qubit devices, significantly aiding scalability in these systems.
\section{Introduction} In human-robot interaction, a robot relies on its \ac{SSL} mechanism to direct its attention. Traditionally, \ac{SSL} approaches use only audio signals and treat localization as a signal processing problem \cite{knapp1976generalized,brandstein1997robust,schmidt1986multiple}. However, those approaches are adversely affected by acoustically challenging conditions, such as noise and reverberation \cite{he2018deep}. To address that, several \ac{NN}-based approaches were explored \cite{chakrabarty2019multi,adavanne2018direction,he2018deep,pan2020multitones}, assuming a sufficient amount of data is available. Specifically, location-related \ac{STFT} cues are mapped to sound \ac{DoA} information in \cite{chakrabarty2019multi,adavanne2018direction}, while \ac{GCC-PHAT} cues are used in \cite{he2018deep,pan2020multitones}. Despite the progress, many research problems remain. One of them is multi-speaker localization in real multi-party human-robot interaction scenarios under acoustically challenging conditions \cite{he2018deep}. Considering that seeing and hearing are the two most essential human cognitive abilities, studies have observed that audio and video convey complementary information and may help to overcome uni-modal limitations under degraded conditions for scene analysis \cite{katsaggelos2015audiovisual,atrey2010multimodal,shivappa2010audiovisual}. There is a very broad literature on audio-visual approaches to speaker localization spanning the past decades \cite{beal2003a,qian2019multi,ban2019variational}. However, it was not until recently that deep learning-based approaches attracted more attention, thanks to increasing computational power and the rapid development of \ac{NN} techniques. Nevertheless, most of these methods aim at locating sound sources in visual scenes \cite{senocak2018learning,tsiami2020stavis,tian2018audio, ramaswamy2020see}. Specifically, an attention mechanism is incorporated into the individual sound and vision networks to model the audio-visual image correspondence \cite{senocak2018learning}. A visual saliency network is employed in \cite{tsiami2020stavis}, together with an audio representation network, to feed a \ac{SSL} module producing an audio-visual saliency map. An attention network is proposed in \cite{tian2018audio} to learn the visual regions of a sounding event. By fusing audio and visual features using LSTM and bilinear pooling, audio-assisted visual feature extraction is described in \cite{ramaswamy2020see}. All these research studies use audio as a supplementary modality for visual localization and require the sound sources to be both audible and visible. Unlike the prior studies, we aim to perform audio-visual speaker localization in the spatial \ac{DoA} domain, where targets can appear either inside (visible) or outside (invisible) the camera's \ac{FoV}. We propose two neural network architectures and make the following contributions in this paper: (1) we propose a novel video simulation method to deal with the lack of video data; (2) for the first time, we design a deep learning network for audio-visual multi-speaker \ac{DoA} estimation; and (3) we adopt an adaptive weighting mechanism in a simple feedforward network to estimate the multi-modal reliability under different conditions. 
\section{Proposed Method} \label{sec:proposed_method} Given a sequence of frame-synchronized audio and video signals captured by a microphone array and a calibrated camera, we aim to estimate the \ac{DoA} $\theta \in [-180^\circ,180^\circ)$ of each sound source at each frame. Next, we describe the way we characterize the audio and video signals, the video simulation method, and the proposed neural networks. \subsection{Audio features} The \ac{GCC-PHAT} is widely used to calculate the time difference of arrival (TDOA) between any two microphones in a microphone array~\cite{he2018deep,pan2020multitones}. We adopt it as the audio feature~\cite{knapp1976generalized} due to its robustness in noisy and reverberant environments \cite{florencio2008does} and its fewer tunable parameters compared with other counterparts, \text{e.g. } the \ac{STFT} \cite{chakrabarty2019multi}. Let $S_l$ and $S_p$ be the Fourier transforms of the audio sequences at the $l^{th}$ and $p^{th}$ channels of the microphone array, respectively. We compute the GCC-PHAT features at different delay lags $\tau$ as: \begin{equation}\label{eq:gccphat} \text{GCC-PHAT}_{lp}(\tau) = \sum_{k}\mathcal{R}\left(\frac{S_l[k](S_p[k])^{*}}{|S_l[k](S_p[k])^{*}|} e^{j \frac{2\pi k}{N} \tau}\right) \end{equation} where $*$ denotes the complex conjugate operation, $\mathcal{R}$ denotes the real part of a complex number and $N$ denotes the FFT length. Here, the delay lag $\tau$ between the two arriving signals is reflected in the steering vector $e^{j \frac{2\pi k}{N} \tau}$ in \eq~\ref{eq:gccphat}. \vspace{-0.1cm} \subsection{Visual features and simulation}\label{ssec:videofeature} With the advent of deep learning, accurate face detection at low computational cost has become widely available~\cite{zou2019object}. Let us define ${\bf b}_{d}=(u,v,w,h)^\intercal_{d}$ as face detection bounding box $d \ (d\leq D)$, where $^\intercal$ denotes transpose, $(u,v)$ are the horizontal and vertical positions of the top-left point, $(w, h)$ are the width and height, and $D$ is the number of detected faces. The central point of the detection is thus computed as: \begin{equation}\label{eq:bboxcentral} {\boldsymbol{\mu}_{d}}=(u+\frac{1}{2}w, v+\frac{1}{2}h)^\intercal_{d} \end{equation} The visual feature is encoded as the exponential part of a multivariate Gaussian distribution (in the $u$ and $v$ directions), with standard deviations specified by the detection width and height; it achieves its maximum at the central point: \begin{equation}\label{eq:visualfeature} \mathcal{V}({\bf x})= \left\{ \begin{matrix} \max_{d} \ e^{ -\frac{1}{2}\left({\bf x}-{\boldsymbol{\mu}_{d}}\right)^\intercal \Sigma^{-1}_{d} \left({\bf x}- {\boldsymbol{\mu}_{d}} \right)} & D>0, \\ \mathcal{U}({\bf x}) & otherwise \end{matrix}\right. \end{equation} where ${\bf x}$ ranges over the image positions, $\Sigma_{d}=diag(w^2_d, h^2_d)$ is a diagonal covariance matrix, and $\mathcal{U}(\bf x)$ denotes the uniform distribution. The components of $\mathcal{V}({\bf x})$ are re-sampled to the same length as the GCC-PHAT features. \begin{figure}[!t] \begin{center} \subfigure[face detections]{\label{subfig:pic} \includegraphics[width=0.485\columnwidth]{figures/facedetection2.pdf}} \subfigure[visual feature encoding]{\label{subfig:encoding} \includegraphics[width=0.485\columnwidth]{figures/faceencoding.pdf}} \end{center} \caption{Visual feature encoding from face detection bounding boxes. 
The feature resembles the horizontal (top) and vertical (bottom) axes of the image.} \label{fig:encoding} \end{figure} \begin{figure}[!htb] \begin{center} \subfigure[]{ \label{subfig:videogeneration} \includegraphics[width=.95\columnwidth]{figures/AVDoA-video_feature.pdf}} \subfigure[]{ \label{subfig:camprojection} \includegraphics[width=.95\columnwidth]{figures/AVDoA-cameramodel.pdf}} \end{center} \caption{(a) Pipeline to generate face bounding boxes and visual features and (b) 3D-to-image bounding box projection. $(x,y,z)$: world coordinates; $(x_c, y_c, z_c)$: camera coordinates; $(u, v)$: image coordinates. } \label{fig:cameraprojection} \end{figure} Audio-visual parallel data are not abundantly available. However, it is possible to obtain the camera's extrinsic and intrinsic calibration parameters $\mathbf{\zeta}^e$ and $\mathbf{\zeta}^i$ and the 3D location ${\bf p}=(x,y,z)^\intercal$ of a sound source. We propose a novel method to synthesize visual features in synchrony with the audio features via \eq~\ref{eq:visualfeature}. The overall pipeline of visual feature generation is illustrated in \fig~\ref{subfig:videogeneration} and the process is formulated next. We first add trivariate Gaussian-distributed spatial noise to the target 3D location ${\bf p}$ to account for possible face detection error, and transform the resulting point to camera coordinates given the extrinsic parameters: \begin{equation}\label{eq:3Dtransfer} \Tilde{{\bf p}}_c = \Phi (\mathcal{N}({\bf p},\Sigma_{p}) \ | \ \mathbf{\zeta}^e) \end{equation} with noise covariance matrix $\Sigma_{p}=diag(\sigma_x^2, \sigma_y^2, \sigma_z^2)$, assuming that the additive noises in $(x,y,z)$ are independent, where $\Phi$ is the transformation using the pin-hole camera model \cite{hartley2003multiple}. Then, we geometrically create a 3D face bounding box whose plane is perpendicular to the camera's optical axis ($z_c$ in \fig~\ref{subfig:camprojection}) and project it onto the image plane: \begin{equation}\label{eq:imageprojection} \chi = \Psi(\Tilde{\bf p}_c + \mathbf{v} \ | \ \mathbf{\zeta}^i) \end{equation} where $\Psi$ is the 3D-to-image projection and $\mathbf{v}$ is the translation vector, which equals $ (-\frac{W}{2},-\frac{H}{2},0)^\intercal$ for the top-left point $\chi^{tl}$ and $(\frac{W}{2},\frac{H}{2},0)^\intercal$ for the bottom-right point $\chi^{br}$, respectively. $W$ and $H$ are the assumed width and height of a real human face. Finally, the simulated face detection bounding box ${\bf b}$ is computed as $ {\bf b} = cat(\chi^{tl}, \chi^{br}-\chi^{tl}) $, where $ cat$ denotes a concatenation operation forming a column vector. \subsection{Neural network architecture} We propose two \ac{NN} architectures for audio-visual speaker \ac{DoA} estimation based on the \ac{MLP}, namely \ac{MLPAVC} and \ac{MLPAVAW}, which adopt different audio-visual feature fusion strategies and classifier designs, as illustrated in \fig~\ref{fig:NNs}. \ac{MLPAVC} consists of three hidden layers, denoted as MLP3 in \fig~\ref{subfig:MLP-AVC} by a dotted blue box; each is a fully-connected layer with ReLU activation \cite{nair2010rectified} and batch normalization~\cite{ioffe2015batchnorm}. It takes the flattened and concatenated GCC-PHAT and visual features as the input vector. The network is trained to predict the probability of the \ac{DoA} labels, as in \cite{he2018deep}, using a sigmoid output layer. \ac{MLPAVC} adopts an early fusion strategy by concatenating audio and visual features. 
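A minimal PyTorch sketch of this early-fusion network is given below; the hidden width and the $360$-point \ac{DoA} grid are our assumptions, since these hyper-parameters are not specified here.

\begin{verbatim}
import torch
import torch.nn as nn

class MLPAVC(nn.Module):
    """Early fusion: flatten and concatenate the GCC-PHAT (6 x 51) and
    visual (2 x 51) features, then pass them through MLP3 and a sigmoid
    output layer over the DoA grid (trained with an MSE loss)."""
    def __init__(self, n_doa=360, hidden=500):   # both widths are guesses
        super().__init__()
        layers, dim = [], 6 * 51 + 2 * 51
        for _ in range(3):                       # the MLP3 block
            layers += [nn.Linear(dim, hidden), nn.ReLU(),
                       nn.BatchNorm1d(hidden)]
            dim = hidden
        self.mlp = nn.Sequential(*layers)
        self.out = nn.Sequential(nn.Linear(hidden, n_doa), nn.Sigmoid())

    def forward(self, gcc, visual):
        x = torch.cat([gcc.flatten(1), visual.flatten(1)], dim=1)
        return self.out(self.mlp(x))             # per-DoA probabilities

# probs = MLPAVC()(torch.rand(8, 6, 51), torch.rand(8, 2, 51))
\end{verbatim}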
We hypothesize that such early fusion does not learn to pay selective attention to the uni-modal features, which is crucial in the face of missing or noisy data. \ac{MLPAVAW} therefore introduces an adaptive weighting mechanism, which uses a tiny NN with two fully-connected layers (colored in purple in \fig~\ref{subfig:MLP-AVAW}) to learn three adaptive weights for the audio GCC-PHAT feature and the video image horizontal and vertical features, respectively. A softmax activation function is applied for weight normalization. We call this an `adaptive weighting' mechanism because the weights are adapted to the live input during inference. Finally, the weighted multi-modal features are concatenated for MLP3 to compute the DoA. \begin{figure}[!tb] \begin{center} \subfigure[MLP-AVC]{ \label{subfig:MLP-AVC} \includegraphics[width=0.46\columnwidth]{figures/AVDoA-MLPconcate.pdf}} \subfigure[MLP-AVAW]{\label{subfig:MLP-AVAW} \includegraphics[width=0.485\columnwidth]{figures/AVDoA-MLPsoftmax3.pdf} } \end{center} \caption{Proposed \ac{NN} architectures for $360^\circ$ DoA estimation (red: audio block; green: video block; blue: standard \ac{MLP} network; purple: adaptive weighting block; orange: feature reformatting block). The input dimension $(6,51)$ represents 51 GCC-PHAT coefficients for each of the 6 microphone pairs and $(2, 51)$ represents 51 visual feature encodings for the image horizontal and vertical directions.} \label{fig:NNs} \end{figure} \begin{table}[!tb] \centering \caption{MAE ($^\circ$) and ACC (\%) resulting from the noisy target 3D locations ($\mathcal{N}({\bf p},\Sigma_{p})$ in \eq~\ref{eq:3Dtransfer}) used for visual feature generation in the loudspeaker cases. Results are measured on the frames counted in the \ac{DR}.} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{\textbf{Train (loudspeaker)}} & \multicolumn{3}{c|}{\textbf{Test-loudspeaker} } \\ \hline \hline DR & MAE & ACC & DR & MAE & ACC \\ \hline 11.3 \% & 6.67 & 46.0\% & 9.2 \% & 6.28 & 48.9 \% \\ \hline \end{tabular} \label{tab:facesimulation} \end{table} \begin{table*}[!tb] \centering \caption{A summary of MAE ($^\circ$) and ACC (\%) of speaker \ac{DoA} estimation on the SSLR test set ($N$ indicates the number of speakers; the number of audio frames for each subset is given in brackets). 
We reproduce the results of \cite{he2018deep} for comparison.} \begin{tabular}{cl||c|c|c|c|c|c|c|c||c|c|c|c|} \cline{3-12} & \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{\textbf{Loudspeaker}} & \multicolumn{4}{c||}{\textbf{Human}} & \multicolumn{2}{c|}{\textbf{Overall}} \\ \cline{3-10} & \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{\textbf{N=1 (178k)}} & \multicolumn{2}{c|}{\textbf{N=2 (29k)}} & \multicolumn{2}{c|}{\textbf{N=1 (788)}} & \multicolumn{2}{c||}{\textbf{N=2 (141)}} & \multicolumn{2}{c|}{} \\ \cline{3-12} & \multicolumn{1}{c|}{\multirow{-3}{*}{ }} & \multicolumn{1}{l|}{MAE} & \multicolumn{1}{l|}{ACC} & \multicolumn{1}{l|}{MAE} & \multicolumn{1}{l|}{ACC} & \multicolumn{1}{l|}{MAE} & \multicolumn{1}{l|}{ACC} & \multicolumn{1}{l|}{MAE} & \multicolumn{1}{l||}{ACC} & \multicolumn{1}{l|}{MAE} & \multicolumn{1}{l|}{ACC} \\ \hline \hline \multicolumn{1}{|l}{} & \multicolumn{1}{|c||}{SRP-PHAT \cite{brandstein1997robust}} & {19.00} & {82.0} & {36.95} & {50.0} & {2.62} & {93.0} & {20.90} & {56.0} & {21.44} & {78.0} \\ \cline{2-12} \multicolumn{1}{|c}{\multirow{-2}{*}{\textbf{audio}}} & \multicolumn{1}{|l||}{MLP-GCC \cite{he2018deep}} & {4.06} & {94.9} & {8.10} & {71.5} & {4.75} & {95.1} & {5.98} & {75.5} & {4.63} & 91.6 \\ \hline \multicolumn{1}{|l}{} & \multicolumn{1}{|l||}{MLP-AVC} & {3.87} & {94.8} & {7.80} & {71.9} & \textbf{1.84} & {97.1} & {3.89} & {81.9} & {4.42} & 91.7 \\ \cline{2-12} \multicolumn{1}{|l}{\multirow{-2}{*}{\textbf{audio-visual}}} & \multicolumn{1}{|l||}{MLP-AVAW} & {\textbf{3.73}} & {\textbf{95.0}} & {\textbf{7.28}} & {\textbf{73.6}} & {2.04} & {\textbf{98.0}} & \textbf{3.49} & {\textbf{86.5}} & {\textbf{4.22}} & {\textbf{92.0}} \\ \hline \end{tabular} \label{tab:results} \end{table*} \begin{table*}[!tb] \centering \caption{A summary of MAE ($^\circ$) and ACC (\%) of speaker \ac{DoA} estimation on the SSLR test set under different \ac{SNR}s and face detection swap percentages (FDSP). The results are obtained using the \ac{MLPAVAW} network architecture. 
} \begin{tabular}{cc|c|c||c|c|c|c|c|c|c|c|c|c|} \cline{3-14} \multicolumn{1}{l}{} & \multicolumn{1}{l|}{} & \multicolumn{2}{c||}{\multirow{2}{*}{\textbf{MLP-GCC \cite{he2018deep}}}} & \multicolumn{10}{c|}{\textbf{Face Detection Swap Percentage}} \\ \cline{5-14} & & \multicolumn{2}{c||}{} & \multicolumn{2}{c|}{\textbf{0\%}} & \multicolumn{2}{c|}{\textbf{10\%}} & \multicolumn{2}{c|}{\textbf{30\%}} & \multicolumn{2}{c|}{\textbf{50\%}} & \multicolumn{2}{c|}{\textbf{70\%}} \\ \cline{3-14} & & \textbf{MAE} & \textbf{ACC} & \textbf{MAE} & \textbf{ACC} & \textbf{MAE} & \textbf{ACC} & \textbf{MAE} & \textbf{ACC} & \textbf{MAE} & \textbf{ACC} & \textbf{MAE} & \textbf{ACC} \\ \hline\hline \multicolumn{1}{|c|}{\multirow{5}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}audio \\ \\ SNR ($dB$)\end{tabular}}}} & \textbf{-10} & 52.74 & 24.2 & 49.69 & 26.59 & 49.98 & 26.5 & 50.51 & 26.2 & 50.87 & 26.0 & 51.21 & 25.8 \\ \cline{2-14} \multicolumn{1}{|c|}{} & \textbf{0} & 26.19 & 54.8 & 22.19 & 57.3 & 22.37 & 57.2 & 22.73 & 57.0 & 23.01 & 56.7 & 23.22 & 56.6 \\ \cline{2-14} \multicolumn{1}{|c|}{} & \textbf{10} & 10.64 & 78.8 & 9.34 & 79.2 & 9.41 & 79.1 & 9.52 & 79.0 & 9.61 & 78.9 & 9.67 & 78.8 \\ \cline{2-14} \multicolumn{1}{|c|}{} & \textbf{20} & 6.02 & 89.0 & 5.68 & 89.0 & 5.70 & 89.0 & 5.75 & 89.0 & 5.79 & 88.9 & 5.82 & 88.9 \\ \cline{2-14} \multicolumn{1}{|c|}{} & \textbf{Clean} & 4.63 & 91.6 & 4.22 & 92.0 & 4.24 & 92.0 & 4.28 & 91.9 & 4.31 & 91.9 & 4.32 & 91.9 \\ \hline \end{tabular} \label{tab:results_MAE} \end{table*} \section{Experiments} \subsection{Dataset and performance metrics} The existing audio-visual datasets, such as AV16.3 \cite{lathoud2004av16}, CAV3D \cite{qian2019multi}, and AVASM \cite{deleforge2015co}, are either of limited size or do not provide the spatial ground truth. We therefore simulate synchronized visual features for a \ac{SSL} dataset covering the loudspeaker cases. We choose the recently released SSLR dataset\footnote{SSLR dataset: \url{https://www.idiap.ch/dataset/sslr/}} \cite{he2018deep}, which was recorded in a physical setup with one or two concurrent speakers and provides adequate target 3D annotations. It consists of 4-channel audio recordings at a $48 \ kHz$ sampling rate, organized into three subsets, namely train (loudspeaker), test-human, and test-loudspeaker. \begin{figure}[!tb] \begin{center} \subfigure[camera and target 3D locations]{\label{subfig:SSLcameradata} \includegraphics[width=0.9\columnwidth]{figures/AVDoA-camera2.pdf}} \subfigure[train]{\label{subfig:deGCF1} \includegraphics[width=0.31\columnwidth]{figures/train.pdf}} \subfigure[test-loudspeaker]{\label{subfig:deGCF2} \includegraphics[width=0.31\columnwidth]{figures/testloudspeaker.pdf}} \subfigure[test-human]{\label{subfig:deGCF3} \includegraphics[width=0.31\columnwidth]{figures/testhuman.pdf}} \end{center} \caption{(a) Camera and target 3D locations (the gray section indicates the camera's \ac{FoV}); (b-c) The distribution of the projected face detection bounding boxes (from points in the gray region in (a)) on the image plane for different SSLR subsets (blue: train; green: test-loudspeaker); (d) RetinaFace detections \cite{Deng_2020_CVPR} on test-human.} \label{fig:detection} \end{figure} We evaluate the performance of the \ac{DoA} estimates using the same metrics as \cite{he2018deep}, \text{i.e. } the \ac{MAE} and \ac{ACC}, where the \ac{MAE} is defined as the mean absolute error between the actual and estimated DoA, while the \ac{ACC} counts a classification prediction as correct within an allowance of $5^\circ$. 
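In code, the two metrics reduce to a few lines; the wrap-around of angular differences at $\pm 180^\circ$ is handled explicitly, and the assignment of predictions to ground-truth sources (relevant for $N=2$) is simplified here to already-paired arrays.

\begin{verbatim}
import numpy as np

def angular_error(pred_deg, true_deg):
    """Absolute angular difference, wrapped to [0, 180] degrees."""
    d = np.abs(np.asarray(pred_deg) - np.asarray(true_deg)) % 360.0
    return np.minimum(d, 360.0 - d)

def mae_acc(pred_deg, true_deg, allowance=5.0):
    """MAE in degrees and ACC in percent with a 5-degree allowance."""
    err = angular_error(pred_deg, true_deg)
    return err.mean(), 100.0 * (err <= allowance).mean()

mae, acc = mae_acc([10.0, -170.0], [12.0, 175.0])
# -> MAE = 8.5 degrees (errors 2 and 15), ACC = 50.0 %
\end{verbatim}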
For the test-human subset, we apply the RetinaFace detector \cite{Deng_2020_CVPR} to obtain the face bounding boxes. For the train and test-loudspeaker subsets, the visual features are simulated with the method proposed in \Sec~\ref{ssec:videofeature} with a noise covariance matrix $\Sigma_{p}=diag(0.2,0.2,0.2)$. \fig~\ref{subfig:SSLcameradata} illustrates the ground-truth camera (magenta) and target 3D locations for the train (blue), test-loudspeaker (green) and test-human (red) subsets for all frames. Targets in the gray region are inside the camera's \ac{FoV} and therefore visible to the camera. We only generate face bounding boxes for visible targets, as visualized in \fig~\ref{fig:detection}(b-c) and formulated in \eq~\ref{eq:3Dtransfer}-\ref{eq:imageprojection} with the simulated bounding box ${\bf b}$. \fig~\ref{fig:detection} shows that the face bounding boxes spread well across the FoV with a balanced distribution. We do not generate bounding boxes for speakers that are outside the FoV. As a result, the visual features for the invisible speakers become missing data (handled by the uniform-distribution branch of \eq~\ref{eq:visualfeature}) in the audio-visual dataset. The statistics of the simulated visual features are summarized in \tab~\ref{tab:facesimulation}, where the \ac{DR} represents the percentage of video frames having targets inside the FoV. A low DR means a high percentage of missing visual features. We also report in \tab~\ref{tab:facesimulation} the \ac{DoA} \ac{MAE} and \ac{ACC} of the simulated visual features, indicating that the simulated data are sufficiently challenging to represent real scenarios. \subsection{Parameter settings} The \ac{GCC-PHAT} is computed for every $170 \ ms$ segment with delay lags $\tau \in [-25,25]$, resulting in 51 coefficients for each microphone pair as in \cite{he2018deep}. With 6 microphone pairs, each contributing 51 GCC-PHAT coefficients, we obtain 306 GCC-PHAT coefficients. For the visual features, the human face width and height are assumed to be $W=0.14\ m$ and $H=0.18\ m$, respectively, as in \cite{qian2019multi}. We adjust the size of the horizontal and vertical visual feature encodings to 51 to match that of the GCC-PHAT coefficients. We use the Adam optimizer \cite{kingma2014adam}. All models are trained for 10 epochs with a batch size of 256 samples and a learning rate of 0.001. Since multi-speaker localization is not a single-label classification problem, we use the \ac{MSE} instead of cross-entropy as the loss function. \subsection{Results} \tab~\ref{tab:results} provides the experimental results on the SSLR test set. Results are reported separately for the different subsets and speaker numbers (assumed to be known). The best result in each column is in bold font. We compare the results of MLP-AVC and MLP-AVAW with two audio baseline methods: the traditional \ac{SRP-PHAT} method \cite{brandstein1997robust} and the state-of-the-art MLP-GCC method \cite{he2018deep}. As speakers are not always visible, we do not provide a video-only baseline, to avoid an unfair comparison. Furthermore, \tab~\ref{tab:facesimulation} suggests that visual features alone can hardly be expected to outperform the audio \ac{DoA} estimation. \tab~\ref{tab:results} shows that both proposed networks benefit from the fusion of audio-visual features. In particular, \ac{MLPAVC} reduces the overall \ac{MAE} from $4.63^\circ$ (MLP-GCC) to $4.42^\circ$, which confirms the benefit of audio-visual fusion. 
For the test-human subset, speakers are mostly inside the camera's FoV (the red points lie in the gray region in \fig~\ref{subfig:SSLcameradata}) and the \ac{DR} of the RetinaFace detector \cite{Deng_2020_CVPR} reaches 100\%, which is much higher than the \ac{DR} in test-loudspeaker (9.2\%). Thus, the \ac{MAE} improvement in test-human (from $4.75^\circ$ to $1.84^\circ$ and from $5.98^\circ$ to $3.89^\circ$) is more significant than in test-loudspeaker (from $4.06^\circ$ to $3.87^\circ$ and from $8.10^\circ$ to $7.80^\circ$). Besides, further improvements are introduced by the adaptive weighting mechanism in \ac{MLPAVAW}, which achieves the best results in most cases, with an overall \ac{MAE} of $4.22^\circ$ and ACC of $92.0$\%. Next, we further evaluate the noise robustness of the proposed networks. For audio, we apply additive white Gaussian noise with \ac{SNR}s varying from $-10$ $dB$ to $20$ $dB$ to the original SSLR audio signals. For video, we randomly swap up to $70\%$ of the face detections with those of other frames to generate false positives and false negatives. \tab~\ref{tab:results_MAE} lists the overall \ac{MAE} and \ac{ACC} of \ac{MLPAVAW} in comparison with those under the clean audio condition. We also provide the MLP-GCC results in the first two columns, indicating the audio-only performance without face detection swapping. From the results, we can see that fusing visual features always brings benefits. Additionally, audio carries more importance than video: with the degradation of the \ac{SNR}, both \ac{MAE} and \ac{ACC} become significantly worse, while as the \ac{FDSP} increases the performance degradation is noticeable but less significant. Even at \ac{FDSP}=$70\%$, the proposed network still outperforms MLP-GCC. The performance gains of \ac{MLPAVAW} suggest that visual features provide additional information in degraded acoustic conditions. \vspace{-.2cm} \section{Conclusions} This paper presented two neural network architectures for multi-speaker DoA estimation using audio-visual signals. The comprehensive evaluation results confirm the benefits of audio-visual fusion and of the adaptive weighting mechanism. Besides, we proposed a technique to synthesize visual features from geometric information about the sound sources to deal with the lack of annotated audio-visual data. Future work will include exploring network models that can generalize with limited training data. \bibliographystyle{IEEEbib}
\section{Introduction} The inclusive description of minimum bias LHC events is not as spectacular as, e.g., Higgs hunting, but it is essential for other very important scientific endeavours. One of them is the Ultra High-Energy Cosmic Ray (UHECR) problem and the answer to the question of the existence of the Greisen-Zatsepin-Kuzmin (GZK) cut-off \cite{gzk}. The origin and nature of cosmic rays have been studied for almost exactly 100 years. A great experimental effort has been made recently by two groups: the Pierre Auger Observatory \cite{pao} and the HiRes experiment \cite{hires}. Progress is being made, but the answers are still not decisive. Cosmic rays with energies of about $10^{20}$~eV, if they are protons, should not reach us from cosmological distances. On the other hand, anisotropy measurements suggest that they probably do. Our knowledge about the nature of UHECR is based on observations of giant Extensive Air Showers (EAS) -- cascades of secondary particles created in the atmosphere when a single atomic nucleus (a proton in the simplest case) enters from above. It is expected that EAS initiated by protons and by iron nuclei should differ. This difference is determined by the rate of energy dissipation; thus it depends strongly on the distribution of secondaries produced in the forward direction and on the nature of the primary particle: its atomic mass. The long-lasting discussions on the primary cosmic ray mass composition at the very end of the cosmic ray energy spectrum, in the so-called ``ankle'' region ($E_{\rm lab} > 10^{18}$~eV), could not be conclusive partly because of the lack of more exact knowledge of very high energy interaction physics, which makes high-energy proton fragmentation all the more important for cosmic ray physicists, astronomers and cosmologists. The search for regularities and for a phenomenological description of multiparticle production is as old as modeling in high-energy physics itself. From the simple Fermi thermodynamical model to the first parton (quark) model propositions by Feynman, the extrapolation of models to much higher, cosmic ray energies has been one of the most important and most wanted model predictions. It usually takes the form of a kind of scaling. The idea of limiting fragmentation \cite{limi-fra} applied to quark-jet hadronization led to the introduction of the Feynman scaling variable $x_F$ and the universal fragmentation function $f(x_F,\;s)=f_F(x_F)$ \cite{feynman}. This brilliant idea worked well for the first collider experiments, up to $\sqrt{s}\sim 60$~GeV. However, when applied to cosmic ray EAS development, it was questioned already at the ``knee'' energies of $E_{lab}\sim 10^{15}$~eV. The SPS ($\sqrt{s}\sim 200 - 900$~GeV) experiments allowed one to quantify the scaling violation. The scale-breaking model of Wdowczyk and Wolfendale was proposed to describe the CR data at the beginning of the 1970s \cite{ww}. It is, in a sense, a generalization of the Feynman scaling idea, introducing one scaling violation parameter. In Ref.~\cite{twphlww} we have shown that the light composition suggested by the studies of the anisotropy and of the average depth of the shower maximum ($x_{\rm max}$) does not contradict other results, mainly the width of the $x_{\rm max}$ distribution, only if one assumes a strong Feynman scaling violation. 
The rapidity (pseudorapidity) distributions were measured by the LHC experiments ALICE \cite{alice}, CMS \cite{cms900,cms7} and ATLAS \cite{atlas} (the last for $p_\bot >0.5$~GeV only) in the central rapidity region $| \eta | \lesssim 2.5$ at c.m.s. energies of 900 GeV, 2.36 TeV and 7 TeV. At first sight, the narrow rapidity (pseudorapidity) range does not allow one to study important characteristics of very forward particle production. To study the fragmentation region, new measurements, especially by far-forward detectors (LHCf), are welcome. But, as will be shown below, the existing data can be used to test the scaling violation picture found in the UHECR physics domain. \section{Rapidity distribution} Rapidity distributions measured in the LHC experiments cover the central region, where the produced particles are dynamically separated from the valence quarks of the colliding hadrons. The central rapidity density $\rho(0) = (1/\sigma) \left.\left({\rm d}\sigma/{\rm d}y\right)\right|_{y=0}$ is the variable describing particle production there. The original Feynman scaling preserves the value of the central rapidity density. The plateau in rapidity is a characteristic feature of independent jet fragmentation models as well as of statistical models with a limited transverse momentum phase space. Unfortunately, it has long been known that such a simple picture does not work. The phenomenological fit of the $\rho(0)$ rise made more than twenty years ago in Ref.~\cite{alner} is still valid. The 900 GeV LHC measurements match the SPS UA5 result well. The systematic discrepancy seen by the CMS detector \cite{cms900} does not change this general opinion. \subsection{Feynman scaling} Feynman scaling \cite{feynman} can be expressed by introducing one universal function $f_F$ of the variable $x = p_\| / p_{\rm max}$ which describes the invariant momentum (longitudinal $p_\|$) distribution of particles created in high-energy inelastic (and non-single-diffractive) interactions \begin{equation} {E \over {\sqrt{s}/2}}~ {1 \over \sigma }~{{d^3 \sigma} \over {d x \: d^2 p_\bot} } ~=~f(x,\:p_\bot,\: s)~=~f_F(x,\:p_\bot) \label{xf} \end{equation} \noindent where $\sqrt{s}$ is the interaction c.m.s. energy, and $E$, $p_\|$ and $p_\bot$ are the energy and the longitudinal and transverse momenta of the outgoing particles ($p_{\rm max}\approx \sqrt{s}/2$). A change of variable from Feynman $x$ to rapidity $y$ gives \begin{equation} {1 \over \sigma }~{{d^3 \sigma} \over {d y \: d^2 p_\bot} }~=~f_F \left( x(y),\: p_\bot \right) \label{ysc} \end{equation} where $x(y)\:=\: \sqrt{p_\bot^2+m^2} / (\sqrt{s}/2) \sinh (y)$. Using the approximate relation $\sqrt{p_\bot^2+m^2} \sinh (y) \approx p_\bot \sinh (\eta)$ and introducing the very convenient pseudorapidity variable $\eta= - \ln \tan ({\Theta/2})$, we have \begin{equation} {1 \over \sigma }~{{d^3 \sigma} \over {d \eta \; d^2 p_\bot}}~=~f_F \left( {{2 p_\bot}\over {\sqrt{s} }} \sinh (\eta)\;,p_\bot \right)~ ~. \label{etasc} \end{equation} The integration over all $p_\bot$ is straightforward for uncorrelated $p_\bot$ and $p_\|$ and a universal $p_\bot$ distribution \begin{equation} {1 \over \sigma }~{{d \sigma} \over {d \eta} }~=~ F_F \left( {{2 \langle p_\bot \rangle} \over {\sqrt{s} }} \sinh (\eta)\:\right) ~~. \label{fsc} \end{equation} The factor ${\langle p_\bot \rangle}$ is a constant related to the transverse momentum scale. We are interested in the extremely forward part of the (pseudo)rapidity distribution -- the projectile fragmentation region. 
It is convenient to move the longitudinal momentum distribution to the anti-laboratory frame ($\eta \rightarrow \eta '$), where the projectile is at rest prior to the collision. This is done by shifting the c.m.s. (pseudo)rapidity distribution by $\Delta y = \ln \:(\sqrt{s}/m)$ \begin{eqnarray} ~ \sinh ( \eta' ) ~= ~ \sinh \left( \eta \:-\: \Delta y \right) ~= ~ \sinh \left( \eta \:-\: \ln (\sqrt{s}/m)\right) ~\approx \nonumber\\ ~{\rm e}^{\eta \:-\: \ln (\sqrt{s}/m)} /2 ~=~ {{\rm e}^{\eta} \over 2} \: {m \over {\sqrt{s}}} ~\approx~ { m \over \sqrt s} \: \sinh ( \eta )~~. \end{eqnarray} \noindent After such a transformation, a direct comparison of particle production at different interaction c.m.s. energies is possible \begin{equation} {1 \over \sigma }~{{d \sigma} \over {d \eta '}}~ \approx~ F_F \left({{2 \langle p_\bot \rangle} \over m }\:\sinh (\eta ') \right)~=~ F_\eta \left(\eta '\right)~~. \label{fscprim} \end{equation} This form of Feynman scaling was tested, e.g., in Ref.~\cite{alner}, and it was found to be valid only very approximately. We can see this in Fig.~\ref{f1}a, where data from the previous millennium are plotted as a function of the anti-laboratory pseudorapidity. The recent data from CMS \cite{cms900,cms7} and ALICE \cite{alice} are shown in Fig.~\ref{f1}b. \begin{figure} \centerline{ \includegraphics[width=7.2cm]{fig1a} \includegraphics[width=7.2cm]{fig1b}} \caption{Pseudorapidity distributions shifted by $\Delta y=\ln (\sqrt{s} / m )$ for ISR, SPS and Tevatron measurements (a), and distributions measured by LHC experiments at energies from 900 GeV to 7 TeV compared with the SPS $\sqrt{s} = 546$ GeV UA5 result (b).} \label{f1} \end{figure} Feynman scaling is known to be violated at least by the continuous increase of the central rapidity density, which is easily seen in Fig.~\ref{f1}. \subsection{Feynman scaling violation} The original Feynman scaling implies that the inelasticity of the proton-proton interaction, defined as the fraction of the incoming energy carried by the newly created particles, is universal, the same for all interaction energies. The first observations suggested the attractive value of 0.5. The rise of some characteristics of the interactions (like, e.g., the average $p_\bot$ or the central rapidity density mentioned above) makes the assumption of a constant inelasticity not well justified. By introducing a multiplicative factor proportional to the observed rise of the rapidity plateau on the right-hand side of Eq.(\ref{fscprim}), we can try to recover a form of scaling. With this procedure the simplicity of the original Feynman idea is lost, and a further correction for the rise of the average transverse momentum can be introduced as well. In the present work we have used the average transverse momentum rise of the form $\langle p_\bot \rangle\:=\:0.413\:-\:0.017\:\ln (s)\:+\:0.00143\:\ln^2(s)$ shown in Fig.~4 of Ref.~\cite{cms7}. The additional inelasticity control parameter is the index of a power-law multiplicative factor. These two modifications lead, according to Eq.(\ref{fsc}), to the only slightly more complicated scaling formula \begin{equation} {1 \over \sigma }~ {{d \sigma} \over {d \eta} }~ = ~ \left( {s \over s_0} \right)^{\alpha_F} F_F \left( {{2 \langle p_\bot \rangle} \over {\sqrt{s} }}\: \sinh (\eta) \:\right)~~. \label{fscprimptinel} \end{equation} We have used the UA5 data measured at the $\sqrt{s_0} = 546$ GeV c.m.s. energy \cite{alner} as the datum. 
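In practice, testing Eq.~(\ref{fscprimptinel}) amounts to building $F_F$ from the datum distribution, rescaling it to another energy, and adjusting $\alpha_F$; a schematic sketch follows, in which the toy datum shape, the error model and the interpolation scheme are our assumptions, not the measured UA5 data.

\begin{verbatim}
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize_scalar

def mean_pt(s):
    """<p_T>(s) parametrization quoted above (s in GeV^2)."""
    L = np.log(s)
    return 0.413 - 0.017 * L + 0.00143 * L**2

def scaling_arg(eta, s):
    return 2.0 * mean_pt(s) / np.sqrt(s) * np.sinh(eta)

# Datum: UA5 546 GeV NSD pseudorapidity density (toy shape, not the data)
s0 = 546.0**2
eta0 = np.linspace(0.0, 4.5, 19)
rho0 = 3.5 * np.exp(-scaling_arg(eta0, s0)**2)
F_F = interp1d(scaling_arg(eta0, s0), rho0, bounds_error=False)

def chi2(alpha_F, eta, rho, s, sigma=0.05):
    pred = (s / s0)**alpha_F * F_F(scaling_arg(eta, s))
    return np.nansum(((rho - pred) / (sigma * rho))**2)

# Toy 7 TeV 'data' generated with alpha_F = 0.11, then refitted
s7, eta7 = 7000.0**2, np.linspace(0.0, 2.4, 13)
rho7 = (s7 / s0)**0.11 * F_F(scaling_arg(eta7, s7))
best = minimize_scalar(lambda a: chi2(a, eta7, rho7, s7),
                       bounds=(0.0, 0.3), method="bounded")
print(f"fitted alpha_F ~ {best.x:.3f}")   # recovers ~0.11 by construction
\end{verbatim}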
The very accurately measured NSD pseudorapidity distribution has been used as the definition of the universal $F_F$ function. We adjusted the $\alpha_F$ parameter value to minimize the discrepancy between the Eq.(\ref{fscprimptinel}) scaling prediction and the pseudorapidity distributions measured at different energies: from the ISR to 7 TeV at the LHC. The results are given in Fig.~\ref{f2}. \begin{figure} \centerline{ \includegraphics[width=7.2cm]{fig2a} \includegraphics[width=7.2cm]{fig2b}} \caption{Pseudorapidity distributions shifted and transformed accordingly, with $\alpha_F$ adjusted, for ISR, SPS and Tevatron measurements (a), and distributions measured by LHC experiments at energies from 900 GeV to 7 TeV compared with the SPS $\sqrt{s}=546$~GeV UA5 result (b).} \label{f2} \end{figure} The values of $\alpha_F$ increase from $\sim 0.05$ found for the ISR at 53 GeV to $\sim 0.11$ at the LHC at 7 TeV. The increase is statistically not very significant, at least for the overall inelasticity, which will be discussed later. The accuracy of the data scaling according to Eq.(\ref{fscprimptinel}) can be estimated with the help of statistical tests. The $\chi^2$ values for the ISR and SPS are about $\chi^2/NDF \approx 40/20$. The systematic uncertainties of the Tevatron and LHC results make the $\chi^2/NDF$ smaller, but the overall tendency seen in Fig.~\ref{f2} strongly suggests that the proposed modification of Feynman scaling is not the right solution for the extrapolation of interaction properties to very high interaction energies. \subsection{Wdowczyk and Wolfendale scaling} It was shown in Ref.~\cite{twphlww} that the almost forty-year-old modification known as Wdowczyk and Wolfendale (WW) scaling \cite{ww} can still be used satisfactorily to scale the interaction properties to ultra high ($> 10^{19}$~eV) cosmic ray energies. The original WW scaling \begin{equation} f \left(x,\:p_\bot,\:s \right)~= (s/s_0)^\alpha \: f_{WW}\left( {x \: (s/s_0)^\alpha,\:p_\bot }\right) \label{wwsc} \end{equation} is an extension of the Feynman fragmentation formula of Eq.~(\ref{xf}) (recovered in the limit $\alpha=0$), with the possibility of reaching the `thermodynamical limit' of $n \sim s^{1/4}$ for $\alpha = 0.25$. The WW model in its mid-'80s version was successfully used for the EAS studies around `the knee'. Its extension, introducing partial inelasticities (the energy fractions carried by specific types of particles) and the rise of the transverse momentum with interaction energy, as discussed above, gave a better description of the production of different kinds of secondaries. As a result of these improvements, the index of the first power-law factor was released, giving an extra model parameter. This more flexible formula was applied, e.g., in Ref.~\cite{alner}, where the agreement of the WW model predictions with the UA5 measured rapidity distributions was shown. It should be mentioned that the original Wdowczyk and Wolfendale model gave a complete description of the multiparticle production process, to be used mainly in EAS studies, so it contains such details as partial inelasticities, transverse momenta, semi-inclusive properties, etc. The fit shown in Ref.~\cite{alner} is an effective, average description of the inclusive rapidity (pseudorapidity) data only. 
In the present work we explore the WW scaling of the form \begin{equation} {1 \over \sigma }~{{d \sigma} \over {d \eta} }~=~ {\left( s \over s_0 \right) ^{\alpha '} }~ F_{WW} \left( {\langle p_\bot \rangle \over \langle p_\bot^0 \rangle }\: \sinh (\eta)\: {\left( s \over s_0 \right) ^{\alpha - 1/2} }\:\right)~~, \label{wwfin} \end{equation} \noindent where $\langle p_\bot^0 \rangle$ is the average transverse momentum at the datum interaction energy ($\sqrt{s_0}=546$ GeV). \begin{figure} \centerline{ \includegraphics[width=7.2cm]{fig3a} \includegraphics[width=7.2cm]{fig3b}} \caption{Wdowczyk and Wolfendale scaling with both parameters $\alpha$ and $\alpha '$ adjusted to each experimental data set. \label{f3}} \end{figure} We first adjusted both the $\alpha$ and $\alpha '$ parameters independently to get the best scaling performance. The results are given in Fig.~\ref{f3}. \begin{figure} \centerline{ \includegraphics[width=7.2cm]{fig4a} \includegraphics[width=7.2cm]{fig4b}} \caption{Fitted W{\&}W scaling parameters: $\alpha$ (solid symbols and solid lines) and $\alpha '$ (open symbols and dashed line) adjusted to the data (a), and values of $\alpha$ taken from the UHECR analysis \cite{twphlww} with only $\alpha '$ used as a free parameter of the fit (b). \label{f4}} \end{figure} The obtained values of $\alpha$ and $\alpha '$ are shown in Fig.~\ref{f4}a. Horizontal lines show the results from Ref.~\cite{alner} (solid for $\alpha$ and dashed for $\alpha '$, respectively). The thick solid broken line is the result for $\alpha$ from our UHECR analysis \cite{twphlww}. It is seen that the predictions from Ref.~\cite{twphlww} and the LHC data are consistent, although the large uncertainties, which result from the limited rapidity range as well as from possible systematics, do not allow for any stronger conclusions. We can, however, use the values of $\alpha$ predicted by the UHECR data analysis and test whether the results of the fit, with such a reduced free parameter space, remain in agreement with the WW scaling. This can be seen in Fig.~\ref{f5}. \begin{figure} \centerline{ \includegraphics[width=7.2cm]{fig5a} \includegraphics[width=7.2cm]{fig5b}} \caption{Wdowczyk and Wolfendale scaling results with $\alpha$ set to the UHECR analysis value and $\alpha '$ adjusted to each experimental data set, shown as in Fig.~\ref{f3}.} \label{f5} \end{figure} The data description is not much worse than the one presented in Fig.~\ref{f3}. The constancy of $\alpha '$, suggested by the original WW papers and seen in Fig.~\ref{f4}a, still holds, as presented in Fig.~\ref{f4}b. \section{Inelasticity} In Ref.~\cite{twphlww} a quite unexpected high-energy behaviour of the interaction inelasticity coefficient was found. It was obtained as a result of the experimental suggestion that the composition of the UHECR is quite light, containing a significant proton fraction. The WW model with strong Feynman scaling violation leads to a continuous decrease of the energy fraction released to the secondaries produced in very high energy interactions. Eq.(\ref{wwfin}) gives the inelasticity energy dependence \begin{equation} K(s)~=~K_0\:\left(s \over s_0 \right) ^{(\alpha ' - \alpha)}~, \label{kinelww} \end{equation} \noindent while for the modified Feynman scaling formula, Eq.(\ref{fscprimptinel}), it is \begin{equation} K(s)~=~K_0\:\left(s \over s_0 \right) ^{\alpha_F}~.
\label{kinelf} \end{equation} \begin{figure} \centerline{ \includegraphics[width=9cm]{fig6}} \caption{Inelasticity calculated with the WW scaling assumption (filled symbols: circles for both $\alpha$ and $\alpha '$ adjusted, Fig.~\ref{f3}, and squares for the UHECR-inspired $\alpha$, Fig.~\ref{f5}), and with the modified Feynman scaling (open symbols); the UHECR prediction of Ref.~\cite{twphlww} (solid line), the SPS fit of Ref.~\cite{alner} (dashed line) and the 'canonical' value 0.5 (short-dashed line) are also shown. \label{f6}} \end{figure} \noindent In Fig.~\ref{f6} we show the results of our analysis. The open symbols show the fast rise of the inelasticity for the modified Feynman scaling formula with the $\alpha_F$ parameter; even if $\alpha_F$ follows the lower-energy, smaller value, saturation is expected in the UHECR domain. The filled symbols were obtained for the WW scaling. The solid line gives the prediction from Ref.~\cite{twphlww} obtained using UHECR data, the dashed line is the fit of the WW scaling parameters to the SPS data from Ref.~\cite{alner}, and the 'canonical' value of 0.5 is shown by the short-dashed line. \section{Summary} We have shown that the minimum bias pseudorapidity distributions measured by the LHC experiments can be very well described by the scale-breaking Wdowczyk and Wolfendale formula. The scaling violation observed for energies up to $\sqrt{s}=900$~GeV at the SPS and 1800 GeV at the Tevatron was recently upheld by the analysis of new UHECR data. The phenomenological model of Wdowczyk and Wolfendale introduces two model parameters. The value of one of them, $\alpha$, was originally found to be equal to 0.13 using an interpolation of the \mbox{$x_F=p_\| /p_{\rm max}$} distributions between $\sqrt{s} \approx 10$ GeV and ISR energies. Later interpolations including SPS data gave the value of 0.18, and finally the effective value of 0.25 was found in Ref.~\cite{alner}. The increase of the central rapidity density reported also in Ref.~\cite{alner} suggests $\alpha = 2 \times 0.105 = 0.21$. This value gives an Extensive Air Shower development maximum position $x_{\rm max}$ for proton-initiated showers not far from the measured one \cite{pao,hires}, as shown in Ref.~\cite{twphlww}. The UHECR data suggest a further smooth rise of the scale-breaking parameter. The first measurements at the LHC, up to the c.m.s. energy of 7 TeV, agree with the trend observed at lower energies and seem to smoothly bridge the accelerator results and those on very high energy interactions of cosmic ray protons. The limited range of measured pseudorapidities does not allow for a stronger statement; more forward particle production data are highly welcome. The rising inelasticity for the (modified) Feynman scaling is obviously contrary to the Wdowczyk and Wolfendale scaling and to the cosmic ray data. Comparing the pseudorapidity distributions in Figs.~\ref{f3}b and \ref{f5}b we can say that the LHC pseudorapidity data analysis favours the second possibility. \bibliographystyle{elsarticle-num}
\section{Introduction} There are two kinds of stable (time persistent) macroscopic organizations found in nature. The first are called ``equilibrium structures'' and result from the minimization of free energy subject to the conservation laws of physics arising from the symmetries in nature. These organizations have little interaction with their environment and are in states of near maximum entropy with zero entropy production. Examples are the crystalline structure of condensed material or the planetary orbits of our solar system. The second kind of organizations are called ``dissipative structures''; they are asymptotically stable (attractive), are much richer in complexity and variety, have important interaction with their environment, and are the result of the dissipation of a thermodynamic potential, themselves not subject to the conservation laws although their overall interaction with their environment is. The generally non-linear nature of these systems means that they can be in any of a multitude of macroscopically different locally stable states, dependent on their particular history, and that they tend to evolve towards states of greater entropy production. Examples are ecosystems, social systems, climate systems, and global elemental cycle systems, including the atmosphere and ocean systems. Building on the non-equilibrium thermodynamic foundation established by Th{\'e}ophile de Donder (1936) and Lars Onsager (1931), Ilya Prigogine (1967) and co-workers have developed a comprehensive framework for dealing with this second kind of organization. Their framework, known as {\it Classical Irreversible Thermodynamics} (CIT), employs the same thermodynamic variables as does equilibrium thermodynamics but allows the variables to vary locally in both space and time. The whole domain of validity of classical irreversible thermodynamics is thus restricted to situations in which the existence of a local equilibrium can be supposed, i.e. equilibrium attained within a small, but still macroscopic, region (of the order of 10$^{23}$ particles) of the system under study. This restriction is required in order to retain the usual meanings of the thermodynamic variables and thus the validity of, for example, the Gibbs equation relating them. It turns out, however, that this domain is surprisingly large, and the framework of classical irreversible thermodynamics can be used to treat most macroscopic dissipative phenomena common to our everyday experience. Employing the CIT formalism, this paper aims to demonstrate that the spread of organic pigments and water over the Earth's surface arises as a result of the thermodynamic imperative of dissipating high energy solar photons into numerous low energy ones. The affinity for this process will be shown to be related to the photon chemical potential corresponding to the solar photon spectrum at the Earth's surface, and to that corresponding to the emitted and reflected Earth spectrum at the black-body temperature of the cloud tops which radiate into space. It is argued that the co-evolution of the biotic with the abiotic on Earth has promoted the spread of organic pigments and water over Earth's surface, allowing Earth, in interaction with its solar environment, to evolve towards states of greater global entropy production. What is the thermodynamic, dissipation-related reason for pigment proliferation? How is present life related to past life from the perspective of dissipation? What is the importance of water to dissipation?
Should dissipating life forms be common throughout the universe? Is life as we know it on Earth inevitable? These are some of the questions to be addressed in this paper. First, however, we discuss the relation between life, water, and the physical and chemical characteristics of the biosphere and its evolution, much of which has been uncovered while assessing the theory of Gaia by James Lovelock (1988) and collaborators. In section \ref{sec:photon} the physics of photon absorption and dissipation by organic molecules in water is discussed. In section \ref{sec:autocatalysis} we employ the formalism of non-linear irreversible thermodynamics to show how organic pigments are the products of an auto-catalytic cycle which augments the solar photon dissipation, leading to the expectation of the proliferation of these organic pigments and water throughout Earth's surface, and even over other solar system bodies. In section \ref{sec:entropyprod} we show that Earth dissipates about 70\% more than either of its neighboring planets and that most of the dissipation occurs on Earth's surface, building a case for the importance of life in the global dissipation process. Section \ref{sec:originlife} demonstrates, from within this thermodynamic perspective, how present life could be associated with life at its beginnings in the Archean, and reviews a recent theory on the dissipative origin of life (Michaelian, 2009b, 2011b). Section \ref{sec:lifeuniverse} discusses the possibility of finding similar forms of dissipating life throughout the universe. Conclusions are presented in section \ref{sec:conclusions}. \subsection{Life, water and the biosphere} In 2005, NASA crash landed a deep impactor onto the surface of comet Tempel 1 in order to study the gas and dust released, using instruments on board the mother ship (A'Hearn et al., 2005). Over 200 organic molecules were found in the dust and gas released. Some precursors of amino acids, principally hydrogen cyanide and methyl cyanide, were clearly identified (Moulton, 2005). Other more complex organics were also found, but the resolution of the instruments was insufficient to unequivocally identify many of these. The surface of the comet was found to be very dark, having an albedo of only 0.04 (visible wavelength), due, in no small part, to the absorption of light by these organic molecules. The most common amino acid of life, glycine, was found in samples returned to Earth by NASA's Stardust mission, which flew by Comet Wild 2 in 2004. Nucleobases, the aromatic ring components of the nucleic acids RNA and DNA, have been found in meteorites (Callahan et al., 2011). Comets consist mostly of ice (50\% for Tempel 1) and dust, and it is probable that during the Archean their delivery contributed a substantial portion of Earth's water. Comets and meteorites could, therefore, have been the providers of both of the vital ingredients of the primordial soup: organic molecules and water (Hoyle and Wickramasinghe, 1978). Alternatively, organic molecules could also have been created in the Earth's somewhat reducing early atmosphere subjected to intense UV light and lightning (Miller and Urey, 1959). Life has kept the amount of water on Earth's surface relatively constant since its beginnings 3.8 billion years ago. There are physical mechanisms active on all planets that dissociate water into its oxygen and hydrogen components, for example through lightning or UV light. Hydrogen, being a light element, can be carried along by the solar wind and lost into space.
Earth's magnetic field helps to shield the Earth from the solar wind, and so less hydrogen has been lost as compared with Venus, for example, which has been very dry since losing its magnetic shield about 2 billion years ago. However, perhaps more important in retaining water on Earth is photosynthetic life's ability to release oxygen from carbon dioxide, thereby providing the potential for its recombination with hydrogen. For example, aerobic chemoautotrophic bacteria oxidize hydrogen sulfide to produce elemental sulfur and water as waste products (Lovelock, 1988). Also, the oxygen released by life can undergo photochemical reactions in the upper atmosphere and be converted into ozone, which then protects water and methane in the lower atmosphere from UV dissociation. By controlling the amount of greenhouse gases in the atmosphere, principally water, carbon dioxide, and methane, life has controlled the temperature of Earth's surface, keeping it within the narrow range required for water to be present in its liquid phase. This control is surprising considering that, according to the standard solar model, the integrated solar output has probably increased by as much as 30\% since the beginnings of life on Earth (Sagan and Chyba, 1997). By studying Sun-like proxy stars of different ages, the Sun in Time Program (Ribas, 2005) has determined that the amount of far ultraviolet light reaching Earth's atmosphere at the beginning of life could have been up to 30 times greater than today, extreme ultraviolet and soft x-ray perhaps 100 times greater, and x-ray perhaps 2000 times greater (Ribas, 2005). The reason for this has to do with the higher rotation rates of younger stars, which give them turbulent eddies that mix different layers, bringing the hotter inner layers to the surface. Utilizing the vapor pressure deficit caused by evaporation at the leaves, the roots of plants draw up water through the inter-cellular cavities and into the leaves, from where it enters the water cycle. Plants on land increase the amount of water in the water cycle by a factor of almost two (Kleidon, 2008). Almost all of the free energy arriving in the solar photon flux at the leaves is used in the evaporation of water, while less than 0.2\% is used in photosynthesis (Gates, 1980), i.e. converted into covalent chemical bonds of organic material. Over water, cyanobacteria play the same role as plants over land, heating the upper surface of the water body and thereby increasing evaporation rates (Jones et al., 2005). The water cycle is a second dissipative process, coupled to photon dissipation by organic pigments, which dissipates the temperature gradient between the warm surface of the Earth and the cold upper atmosphere. The water cycle also helps move water over land masses, where it can be used by organic pigments to dissipate solar photons. Life has gradually adjusted the gases of Earth's atmosphere in such a manner as to obtain transparency for the most intense (entropically speaking) part of the solar spectrum, such that it can be intercepted by organic pigments on the surface, which convert it into infrared photons that can be readily absorbed by water. Evolution has continually invented new pigments, absorbing ever more completely the solar spectrum (Michaelian, 2011a). Cnossen et al.
(2007), following Sagan (1973), have shown that, using the best present knowledge of the gases of Earth's Archean atmosphere at the beginning of life on Earth, principally CO$_2$, N$_2$, H, and some CH$_4$, the atmosphere would have had a window of transparency between 200 and 300 nm (although perhaps reduced to between approximately 240 nm and 290 nm taking into account the aldehydes formed by UV photochemical reactions on these gases (Sagan, 1973)). It is then probable that life on the surface obtained its thermodynamic reason for being by dissipating photons within this wavelength region. DNA and RNA, in fact, absorb very strongly at 260 nm and dissipate the excitation energy rapidly into heat. A theory for the origin of life based on the extraordinary dissipation properties of the nucleic acids in the UV region has been given by Michaelian (2009b, 2011b) and will be reviewed in section \ref{sec:originlife}. Photons, life, and water thus share an intimate relation, probably since the beginnings of life, based on entropy production. Over land or over water, plants or cyanobacteria dissipate high energy photons into heat. The details of this relation will be reviewed in the following sections. \subsection{Organic pigments and photon dissipation} \label{sec:photon} Organic molecules differ from inorganic molecules in the nature of their chemical bonding. Atoms in organic molecules are bound by covalent bonds, which are both strong and directional. The directionality of these bonds allows for an almost infinite set of stable configurations with distinct physical, chemical, and electronic properties. Inorganic bonding, on the other hand, is usually of the ionic, metallic, or van der Waals type. These central interactions have no directionality characteristics, and therefore configurations of numerous atoms tend to be constrained to compact spherical clusters, with only slight variation of their properties with the size of the cluster. Organic pigments absorb strongly over the visible and ultraviolet regions. They can be classified into three groups according to their wavelength absorption characteristics. The first group, absorbing strongly in the far ultraviolet (below approximately 300 nm), consists of the aromatic rings, in which the promotion of an electron from a $\pi$-bonding molecular orbital to a $\pi$-anti-bonding orbital, referred to as a $\pi,\pi^*$ transition, has a high oscillator strength and a large extinction coefficient (absorptivity). Examples are the nucleic acid bases adenine, guanine, cytosine and thymine, and the amino acids phenylalanine, tryptophan, histidine and tyrosine. The second group consists of the mycosporines and mycosporine-like amino acids (MAAs), characterized by a cyclohexenone or cyclohexenimine chromophore conjugated with the nitrogen substituent of an amino acid; these have UV-absorption maxima in the range 310 to 360 nm. The third group, absorbing in the visible region, comprises the porphyrins, which are organic rings coordinated to a metallic ion, such as magnesium in chlorophyll, the carotenoids, flavonoids, etc., and the phycobilins found in cyanobacteria. Due to their covalent bonding, organic pigments are thus predisposed to form the foundations of a photon dissipative system relevant over a large region of the solar spectrum.
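To attach rough numbers to this dissipation (our estimate, using only standard constants), the energy of a photon near the 260 nm absorption maximum of the nucleic acid bases is \[ E = \frac{hc}{\lambda} \approx \frac{1240\ {\rm eV\,nm}}{260\ {\rm nm}} \approx 4.8\ {\rm eV}, \] roughly ten times the quantum of the O--H stretching vibration of water ($\sim 3600$ cm$^{-1} \approx 0.45$ eV), so the dissipation of a single absorbed UV photon corresponds to the generation of on the order of ten lower energy vibrational quanta in the surrounding water.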
The first two groups, absorbing in the ultraviolet, have ultrafast (picosecond) non-radiative de-excitation rates compatible with internal conversion through a conical intersection of the excited state with the ground state (Pecourt et al., 2000; Conde et al., 2000). A conical intersection results when the vibrational states superimposed on the electronic first excited state overlap in energy with the vibrational states superimposed on the electronic ground state because of a slight deformation of the molecule. The de-excitation time through vibrational dissipation is significantly reduced if the molecules are in water, since the vibrational energy can be dissipated rapidly through resonances with the high frequency vibrational modes of the water molecule. From this dissipative perspective, it is clearly this fact, more than any other, that has defined life's association with water. Such rapid dissipation of the excitation energy makes these molecules resistant to destruction by UV light, since there simply is not enough time for photochemical reactions to occur. These molecules are thus very robust and efficient entropy producers: they dissipate the strongly absorbed photon energy into heat that can be absorbed by water, and are ready to receive a new photon on the order of picoseconds later. It is interesting to note that, for example, the non-natural tautomers of the nucleic acid bases have orders of magnitude longer excited-state lifetimes and are affected by decay modes that are not radiationless, including fluorescence and photochemical destruction (Serrano-Andr\'es and Merch\'an, 2009). \section{Non-linear irreversible thermodynamic model} \subsection{Proliferation of organic pigments due to their catalytic properties in dissipating the solar photon flux} \label{sec:autocatalysis} An auto-catalytic reaction is one in which at least one of the products acts as a catalyst for the reaction. Most reactions performed by life today are auto-catalytic. Examples are protein folding, in which the protein FKBP catalyses its own folding (Gadgil and Kulkarni, 2009), and glycolysis, which produces ATP from glucose and is auto-catalyzed by the enzyme phosphofructokinase (PFK). It is generally believed that the first chemical reactions of life must also have been auto-catalytic. Auto-catalytic reactions, like all chemical reactions, arise to dissipate a chemical potential. We now consider the production of organic pigments (e.g. nucleic acid bases, mycosporine-like amino acids, porphyrins, etc.) as an auto-catalytic photochemical reaction promoting the dissipation of the solar photon flux. It will be shown that if organic pigments are good catalysts for the irreversible process of photon dissipation, then their concentration on the Earth's surface will increase, over and above that which would be expected under equilibrium conditions. This proliferation of organic pigments over the surface of Earth is basically what has defined biological evolution. The derivation will be similar to that given by Prigogine (1967) for a purely chemical auto-catalytic reaction. However, instead of affinities derived from chemical potentials dependent on the material concentrations of the chemical constituents, we use an affinity derived from the photon chemical potentials, which are dependent on the photon gas pressure (Herrmann and W\"urfel, 2005) and which go as the fourth power of the temperature of the gas for an equilibrium distribution of the photons (black-body spectrum).
The generalized flows corresponding to these generalized forces, defined by the affinities, are the flows of energy from one spectrum to another. We assume that the rate of energy conversion is proportional to the difference of the photon pressures of the different spectra. There are three relevant photon pressures: 1) $P_S$, corresponding to the photon spectrum arriving at Earth, approximately black-body with a temperature equal to that of the surface of the Sun; 2) $P_E$, corresponding to the photon spectrum emitted by Earth, an approximate black-body spectrum with a temperature equal to that of Earth's surface; and 3) $P_C$, corresponding to the temperature of the cloud tops, at which Earth emits radiation into space. As a cautionary note, it is emphasized that using a black-body approximation is fraught with error, since the radiation arriving at the Earth's surface from the Sun is very directional and only approximates a black-body spectrum. However, the purpose of the calculation is to show qualitatively how the proliferation of organic pigments on Earth can be explained from a non-linear irreversible thermodynamic analysis of the photon dissipation process. The formalism of classical irreversible thermodynamics (Prigogine, 1967) gives the entropy production as a sum of products of generalized forces, $X$, times generalized flows, $J$, \begin{equation} {\cal P} = {\frac{d_iS }{dt}} = \sum_k J_k X_k \ge 0. \end{equation} For chemical reactions, the generalized force is the affinity over the temperature, $A/T$, where the affinity $A$ of the reaction is the sum of the chemical potentials of the reactants and products weighted by their stoichiometric coefficients (with the usual sign convention), and the generalized flow is the rate of the reaction. For the photon dissipation reaction to be considered here, the generalized forces are the photon spectrum affinities determined from their pressures, and the generalized flows are the flows of energy from one spectrum to another. The time change of the entropy production, $d{\cal P}$, can be decomposed into two parts, one related to the change of the forces and the other to the change of the flows, \begin{equation} d{\cal P} = d_X{\cal P} + d_J{\cal P} = \sum_k J_k dX_k + \sum_k X_kdJ_k. \end{equation} In the whole domain of validity of the thermodynamics of irreversible processes, and under constant external constraints, the contribution of the time change of the forces to the entropy production is negative or zero, \begin{equation} d_{X}{\cal P}\le 0. \end{equation} This is known as the {\em general evolutionary criterion} and was established by Prigogine (1967). For systems with constant external constraints, the system will eventually come to a steady state, in which case (Prigogine, 1967) \begin{equation} d_{X}{\cal P}= 0. \label{eq:genev} \end{equation} With this brief background in classical irreversible thermodynamics, consider now the following irreversible processes, consisting of the conversion of energy through different photon spectra. First, the flow of energy of the photon spectrum coming from the surface of the Sun, $I_S(\lambda)$, which can be assumed to be black-body for convenience, $I(T_S)$, with $T_S$=5760 K, is converted, by absorption and dissipation by organic pigments in water, into the emitted photon spectrum of the Earth's surface (also assumed to be black-body), $I(T_E)$, with $T_E$= 287 K, which is then converted into the emitted photon spectrum of the cloud tops, $I(T_C)$, with $T_C$= 259 K.
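Since the pressure of an equilibrium photon gas goes as the fourth power of its temperature ($P = aT^4/3$, with $a$ the radiation constant), these three temperatures already fix the relevant pressure hierarchy; as a rough numerical orientation (our aside, not part of the derivation), \[ \frac{P_S}{P_C}=\left(\frac{T_S}{T_C}\right)^4=\left(\frac{5760}{259}\right)^4\approx 2.4\times10^{5}, \] so the photon system is driven very far from equilibrium, a fact that will enter below through the parameter $\gamma$.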
We assume that a photochemical reaction takes place as part of the first dissipation process, which creates organic pigments of concentration $C$. These pigments themselves act as catalysts for the first dissipation process, $I(T_S) \rightarrow I(T_E)$, making the process auto-catalytic. A schematic diagram for this auto-catalytic process can be given as follows, \begin{eqnarray} I(T_S)\stackrel{1}{\rightleftharpoons } &I(T_E)&\stackrel{2}{\rightleftharpoons }I(T_C) \nonumber \\ &3\updownarrow& \nonumber\\ &C& \label{eq:reaction} \end{eqnarray} Since the system is far from equilibrium, the backward rate constants can be considered to be essentially zero. The conversion of the energy through the different spectra and the energy of formation of the pigments can be characterized, in a first approximation, by the pressures corresponding to the assumed black-body spectra. We can therefore write the above schematic process alternatively in terms of pressures, \begin{eqnarray} P_S\stackrel{1}{\rightleftharpoons } &P_E&\stackrel{2}{\rightleftharpoons }P_C \nonumber \\ &3\updownarrow& \nonumber\\ &P_P& \label{eq:reactionP} \end{eqnarray} where $P_P$, the pressure of the organic pigments, is related to their concentration $C$. In terms of the affinities and flows of the three different processes, we can write the general evolutionary criterion, Eqn. (\ref{eq:genev}), once the stationary state is reached, in the following form \begin{equation} d_{X}{\cal P}=d{\cal P}-d_{J}{\cal P}=d\left(\sum_{\rho=1}^3\frac{A_{\rho }}{T_{\rho }} v_{\rho }\right)-\sum_{\rho=1}^3 \frac{A_{\rho }}{T_{\rho }} dv_{\rho }= 0 \label{eq:evcrit} \end{equation} where $A_1$ is the affinity for the conversion of energy of the solar photon spectrum into energy of the Earth surface photon spectrum, $A_2$ is the affinity for the conversion of energy of the Earth surface photon spectrum into energy of the cloud top photon spectrum, and $A_3$ is the affinity for the photochemical reaction producing the organic pigment. For the case of equilibrium photon distributions (black-body spectra), the affinities go as the logarithm of the ratio of the photon pressures (Herrmann and W\"urfel, 2005), which are proportional to the temperatures to the fourth power, \begin{equation} A_1=kT_E\log{P_S \over P_E}\ \ \ \ \ A_2=kT_C\log{P_E \over P_C}\ \ \ \ \ A_3=kT_P\log{P_E \over P_P}. \label{eq:forces} \end{equation} The $v_\rho$ in Eqn. (\ref{eq:evcrit}) are the rates of the corresponding energy conversion (dissipation) processes which, of course, are related to the amount of photon-material interaction, which we assume to be related to the differences of the photon pressures attributed to the different spectra. A more analytic justification for this comes from the continuity equation for the momentum $p$ in continuous media (a kind of Navier-Stokes equation for photons, without external forces and without viscosity) \begin{equation} {dp \over dt} \propto -{dP \over dx}, \end{equation} and, since for photons $E=pc$, the energy conversion rate between spectra goes like \begin{equation} v={dE \over dt} \propto -{dP \over dx}. \end{equation} Since the organic pigments are assumed to act as catalysts for the conversion of energy from the solar spectrum to energy of the Earth surface spectrum, the rate of the first dissipation process, $I(T_S) \rightarrow I(T_E)$, is multiplied by a factor $(1+\alpha P_P)$, where $\alpha$ represents the effectiveness of the organic pigment as a catalyst for energy conversion (i.e.
$\alpha \rightarrow \infty$ for an excellent catalyst, and $\alpha \rightarrow 0$ for a completely ineffective catalyst). Therefore, the rates of conversion, taking all constants of proportionality equal to one for convenience (again, we are only interested in showing qualitatively the dynamics of pigment proliferation), are given by \begin{equation} v_1= (1+\alpha P_P)(P_S - P_E)\ \ \ \ \ v_2=P_E - P_C\ \ \ \ \ v_3=P_E-P_P \label{eq:rates} \end{equation} Note the non-linear relation between the forces, Eq. (\ref{eq:forces}), and the flows, Eq. (\ref{eq:rates}). Such non-linearity is what gives rise to multiple solutions for the steady state when the system is far from equilibrium. Using Eq. (\ref{eq:evcrit}) for the steady state together with Eqs. (\ref{eq:rates}) and (\ref{eq:forces}), taking the Boltzmann constant $k=1$ for convenience, and observing that the free forces can be characterized in terms of the two free pressures, $P_E$ and $P_P$ (since $P_S$ and $P_C$ are fixed, given by the fourth powers of the Sun surface temperature and of the cloud top temperature, respectively), gives \begin{eqnarray} \frac{\partial }{\partial P_E}\left[(1+\alpha P_P)(P_S-P_E)\log\frac{P_S}{P_E} + (P_E-P_C)\log\frac{P_E}{P_C}+(P_E-P_P)\log\frac{P_E}{P_P} \right] &\nonumber\\ +(1+\alpha P_P)\log\frac{P_S}{P_E}-\log\frac{P_E}{P_C}-\log\frac{P_E}{P_P}=0& \end{eqnarray} \begin{eqnarray} \frac{\partial }{\partial P_P}\left[(1+\alpha P_P)(P_S-P_E)\log\frac{P_S}{P_E} + (P_E-P_C)\log\frac{P_E}{P_C}+(P_E-P_P)\log\frac{P_E}{P_P} \right] &\nonumber\\ -\alpha (P_S-P_E)\log\frac{P_S}{P_E}+\log\frac{P_E}{P_P}=0& \end{eqnarray} which, after some algebra, gives for the steady state \begin{equation} v_{1}=v_{2},\ \ \ v_{3}=0, \end{equation} and (for one of the solutions of the steady state) \begin{eqnarray} P_P=P_E &=&{\frac{1}{2\alpha }}[\alpha P_S-2+[4+4\alpha P_S(1-\gamma )+\alpha ^{2}P_S^2]^{\frac{1}{2}}] \nonumber \\ &\rightarrow &{\frac{1}{2}}(P_S+P_C)\ \ \ for\ \ \ \alpha \rightarrow 0 \nonumber \\ &\rightarrow &P_S\ \ \ for\ \ \alpha \rightarrow \infty \label{eq:limits} \end{eqnarray} with $1-\gamma\equiv P_C/P_S$ ($\gamma$ is, therefore, a measure of the ``distance" from equilibrium of the system). Therefore, since $P_S$ is much greater than $P_C$ (the pressures go as the temperature to the fourth power for black-body spectra), equation (\ref{eq:limits}) indicates that the pressure of the organic pigments, $P_P$, or in other words their concentration $C$, increases due to their catalytic activity in dissipating the solar photon spectrum into the Earth-emitted spectrum. The entropy production of the energy conversion processes, including the catalytic activity of the organic pigment, is given by \begin{equation} {\frac{d_{i}S}{dt}}=\sum_{\rho} v_{\rho}\frac{A_{\rho}}{T_{\rho}} =(P_S-P_E)(1+\alpha P_P)\log {\frac{P_S}{P_E}}+(P_E-P_C)\log {\frac{P_E}{P_C}}+(P_E-P_P)\log {\frac{P_E}{P_P}}. \end{equation} Although it will not be demonstrated here, it can also be shown that the entropy production at the stationary state shifts to larger values as a result of the catalytic activity (see Prigogine, 1967, for the corresponding case of purely chemical reactions). These results give a non-linear irreversible thermodynamic explanation for the proliferation of organic pigments over Earth's surface. Pigment concentrations can therefore attain values much greater than those expected in equilibrium, depending on the ratio $P_S/P_C$.
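As a quick numerical illustration of Eq. (\ref{eq:limits}) (a sketch added here for clarity; the pressure values and units are arbitrary), one can evaluate the steady-state pigment pressure for increasing catalytic effectiveness $\alpha$ and watch it move from the near-equilibrium average $(P_S+P_C)/2$ towards the driving pressure $P_S$:
\begin{verbatim}
import numpy as np

def steady_state_Pp(alpha, Ps, Pc):
    # Steady-state pigment pressure, Eq. (eq:limits), using
    # 1 - gamma = Pc/Ps, so 4*alpha*Ps*(1-gamma) = 4*alpha*Pc.
    root = np.sqrt(4.0 + 4.0 * alpha * Pc + (alpha * Ps) ** 2)
    return (alpha * Ps - 2.0 + root) / (2.0 * alpha)

Ps, Pc = 1.0, 4e-6            # illustrative: Pc/Ps = (259/5760)^4 ~ 4e-6
for alpha in [1e-4, 1e-2, 1.0, 1e2, 1e4]:
    print(alpha, steady_state_Pp(alpha, Ps, Pc))
# the output climbs from ~0.5 (= (Ps+Pc)/2) toward 1.0 (= Ps) as alpha grows
\end{verbatim}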
Given the above derivation, and now imagining a much more complex system with many coupled irreversible processes operating, it is not difficult to visualize an associated biotic-abiotic co-evolution of Earth's physical and chemical characteristics towards non-equilibrium thermodynamic stationary states with ever greater global entropy production for Earth in its interaction with its solar environment. This thermodynamic explanation of organic pigment proliferation has hitherto been characterized as biological evolution acting through natural selection. Referring to purely chemical reactions, Prigogine (1967) in fact noticed that such a result may shed light on the problem of the occurrence of complicated biological molecules in steady state concentrations which are orders of magnitude larger than the equilibrium concentrations. In his 1967 book ``Thermodynamics of Irreversible Processes" (Prigogine, 1967), Prigogine states: ``Thus, for systems sufficiently far from equilibrium, kinetic factors (like catalytic activity) may compensate for thermodynamic improbability and thus lead to an enormous amplification of the steady state concentrations. Note that this is a strictly non-equilibrium effect. Near equilibrium, catalytic action would not be able to shift in an appreciable way the position of the steady state." An example of a present day auto-catalytic photochemical reaction tied to solar photon dissipation is that of UV-light-induced mycosporine production (Sinha et al., 2002). The mycosporine pigments can be considered as catalysts that promote the dissipation of UV light into heat. Biologists see this UV-induced mycosporine production as an evolved response of a plant or cyanobacterium to protect itself from its harsh environment. However, it is more likely that the production of mycosporine under UV light, or of chlorophyll under visible light, has nothing to do with imaginary ``vital" forces underlying a metaphysical ``will to survive", but rather with non-linear irreversible thermodynamic imperatives founded on the well known and well characterized forces and symmetries of nature. \section{Results} \subsection{Global entropy production} \label{sec:entropyprod} Planck's formula (Planck, 1913) for the entropy flow of an arbitrary beam of photons is (Wu et al., 2011) \begin{eqnarray} L (\lambda) & = &{n_0 k c \over \lambda^4} \left[\left(1+{\lambda^5I(\lambda) \over n_0 h c^2}\right)\ln\left(1+{\lambda^5I(\lambda) \over n_0 h c^2}\right) - \left({\lambda^5I(\lambda) \over n_0 h c^2}\right)\ln\left({\lambda^5I(\lambda) \over n_0 h c^2}\right)\right], \label{eq:entropyflux} \end{eqnarray} where $n_0=2$ for an unpolarized photon beam and $n_0=1$ for a polarized beam, and the units are [J/(m$^3$$\cdot$K$\cdot$s)]. Using this equation, and approximating the incoming solar spectrum at the top of Earth's atmosphere as a black-body spectrum at the temperature of the Sun's surface (5760 K) and Earth's outgoing spectrum as a gray-body spectrum at an equivalent temperature of 254.3 K, the total entropy production of Earth can be calculated to be about 1.196 [W/(m$^2 \cdot K$)], averaged over day and night and over Earth's entire surface (Michaelian, 2012). Earth's entropy production per square meter is, in fact, found to be roughly 70\% larger than that of either Venus or Mars (Michaelian, 2012).
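As a minimal numerical check of Eq. (\ref{eq:entropyflux}) (a sketch we add for illustration; it reproduces only the textbook hemispherical black-body entropy flux, $\frac{4}{3}\sigma T^3$, and not the full day/night, albedo and geometry averaging behind the figures quoted above):
\begin{verbatim}
import numpy as np

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI constants
sigma_SB = 5.670e-8                       # Stefan-Boltzmann constant

def entropy_radiance(lam, T, n0=2):
    # Spectral entropy radiance of an unpolarized black-body beam,
    # Eq. (eq:entropyflux), where x = lam^5 I(lam)/(n0 h c^2) = 1/(e^u - 1).
    u = h * c / (lam * k * T)
    x = 1.0 / np.expm1(u)
    return (n0 * k * c / lam**4) * ((1 + x) * np.log1p(x) - x * np.log(x))

T = 5760.0                                # solar surface temperature [K]
lam = np.logspace(-7.5, -4.0, 20000)      # ~32 nm .. 100 micron
flux = np.pi * np.trapz(entropy_radiance(lam, T), lam)  # hemispherical flux
print(flux, 4.0 / 3.0 * sigma_SB * T**3)  # both ~ 1.44e4 W/(m^2 K)
\end{verbatim}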
It is tempting to assign this ``additional" entropy production to the process of life. But if life is indeed responsible for this additional entropy production, then a significant portion of Earth's total entropy production would have to occur at its surface, where the organic pigments are located. A rough estimate can be made of the relative contribution of the entropy production at the surface of Earth to the total by assuming a heat flow approximation for the entropy production; \begin{equation} \sigma=Q\left({1\over T_2} - {1\over T_1} \right). \label{eq:entropyprod} \end{equation} Of the solar energy incident on Earth, with a global average of about 238 [W/m$^2$] impinging on Earth's upper atmosphere, about $Q=170$ [W/m$^2$] makes it to the Earth's surface, while the remaining 68 [W/m$^2$] is absorbed in the atmosphere. Using equation (\ref{eq:entropyprod}) with $T_1=5760$~K, the temperature of the Sun's surface, and with $T_2=288$~K, the temperature of the Earth's surface, for the surface contribution, and $T_2=252$~K, the temperature of the Earth's atmosphere, for the atmospheric contribution, one can easily show (Michaelian, 2012) that surface dissipation makes up 71\% of the entropy production and atmospheric dissipation the remaining 29\%. Since organic pigments are densely spread over Earth's surface wherever there is water, it follows that life, through organic pigments in water, is probably very important to the entropy production of Earth. It might be asked what the function of animals is, if the main thermodynamic function of life is the dissipation of solar photons through the spread of organic pigments over Earth's surface. In terms of biomass or number, animals are negligible compared with photosynthetic organisms and thus, at first sight, may appear as a mere curiosity. However, their existence appears to be crucial in allowing plants and cyanobacteria to spread over the entire surface of Earth. For example, most of the ocean surface would be as barren of nutrients as a desert, unable to support surface cyanobacteria, were it not for the mobility of marine animals, which spread nutrients over vast distances of ocean surface through their movement, excrement and death. Over land, animals play this same role of the faithful, but unwitting, gardener. Indeed, it has been argued that animals brought nutrients to the barren land surface perhaps 700 million years ago, when life first left the ocean and organic pigments began to proliferate on land (Pisani et al., 2004). \subsection{The dissipative origin of life} \label{sec:originlife} If the reason for life is the dissipation of solar photons through the spread of organic pigments over the Earth's surface, then it would of course be interesting to look for the primordial organic pigments which could have represented life's initiation. Fortunately, the search appears trivial, since the presently presumed first molecules of life, RNA and DNA, both absorb very strongly at 255 nm, just where Sagan (1973) and Cnossen et al. (2007) have predicted a peak in the transparency of Earth's atmosphere during the Archean. As mentioned in section \ref{sec:photon}, the excitation energy due to a photon absorbed by RNA or DNA is dissipated into vibrational energy of the surrounding water very rapidly, making these molecules excellent photon dissipaters in the UV. The thermodynamic reason for the proliferation of these molecules under an intense UV flux was given in section \ref{sec:autocatalysis}. However, knowing the particular mechanism of reproduction would also be useful, and this mechanism must also be related to dissipation.
It is also important to explain how and why these molecules would have transformed into the information-carrying molecules that they are today. A possible mechanism for the replication of RNA and DNA without the need for enzymes (and thus without information content and reproductive fidelity), called Ultraviolet and Temperature Assisted Reproduction (UVTAR), has been suggested (Michaelian, 2009b, 2011b); it employs the physical characteristics present at Archean Earth's surface: a high surface temperature of $\approx$ 80$^\circ$C (Lowe and Tice, 2004), intense UV light, and moderate, $\approx$ 5$^\circ$C, day/night temperature cycling of the ocean surface (Michaelian, 2009a, 2012). It is probable that nucleotides would have proliferated on the ocean surface due to their propensity to act as catalysts for UV dissipation (see section \ref{sec:autocatalysis}) and due to their resistance to UV destruction, and that some phosphate bonding among these would have occurred under the action of UV light (Strigunkova et al., 1986). As the Earth's surface temperature cooled to just below the denaturing temperature of certain RNA or DNA segments, during the day the absorption of infrared and visible light by water, and of UV by the nucleotides, would have been sufficient to raise the local surface water temperature above the denaturing temperature of RNA or DNA, separating the double strands. During the cool periods overnight, these single strands could have acted as templates for the production of new double strands. This proposed mechanism bears resemblance to the polymerase chain reaction (PCR), which is today used routinely in the laboratory to amplify a particular segment of DNA or RNA (Mullis, 1990). It is probable that, as the Earth's surface cooled further, the daytime increase in surface temperature became insufficient to permit the denaturation of RNA or DNA by itself, and so those strands which happened to code for an amino acid that could act as a simple denaturing enzyme would have become more prevalent (Michaelian, 2011b). These amino acids could have been those with an aromatic ring: tyrosine, tryptophan, phenylalanine, or histidine, which would have acted as antenna molecules to attract more UV light for more local heating. Information content and replication fidelity, leading to evolution through natural selection, would therefore have arisen as a thermodynamic response to retaining entropy production through the multiplication of these UV dissipating pigments as the seas cooled further. The details of the mechanism have been presented elsewhere (Michaelian, 2009b, 2011b); suffice it to say here that this proposed mechanism enjoys a number of advantages over previous theories for the origin of life. First and foremost, life's origin is clearly recognized as a dissipative process and, furthermore, is tied directly to the dissipation of the most intense free energy source available at the Earth's surface. Second, since the mechanism of replication relies on the physical characteristics of the Archean environment, and not on specialized enzymes, there is no requirement of initial reproduction fidelity to protect the information content of RNA or DNA. Under the UVTAR mechanism, information content was not required for replication.
Third, since there would be a slight denaturation advantage for right-handed RNA or DNA during the late afternoon, when the ocean surface temperature is highest and the submarine light at the surface is slightly right-circularly polarized, the present theory could explain the chirality of life (Michaelian, 2010). \subsection{Dissipation through organic molecules throughout the universe} \label{sec:lifeuniverse} On Earth, organic molecules are found only in association with water. As described above, this is most likely related to the efficiency with which organic pigments dissipate solar photons, using the high frequency vibrational modes of water to facilitate their de-excitation. Without water they are poor photon dissipaters and are easily destroyed by photochemical reactions. This is probably the primordial reason for the association of life with water. Organic molecules have been found in significant quantities on the surfaces of inner orbit comets. It is probable that these molecules were not formed in the atmospheres of red giant stars and later collected by these comets, as suggested by Hoyle and Wickramasinghe (1978), but rather that they formed on the comets themselves as the comets passed sufficiently close to the Sun for regions of liquid water to form on their surfaces. As on Earth, on the comet surface the proliferation of these molecules beyond their expected equilibrium concentrations would be a thermodynamic imperative, a result of their catalytic activity in dissipating high energy photons from the Sun. However, it is entirely possible that a solvent other than water, with high frequency vibrational modes that can couple to the vibrational modes of the same or a similar UV absorbing pigment, might exist on another planet. In this case, the temperature range for analogous dissipating ``life" in our universe might not be confined to that required for liquid water. This would greatly increase the variety of possible life and the expectation of finding evolved dissipating life spread over the surface of another planet. However, the evolution of our particular form of life, based on RNA and DNA and water, assuming the correctness of the above proposed mechanism for its origin, would be very dependent on the particular initial conditions of the planet. For example, an intense UV photon flux, and liquid water temperatures that gradually dropped below the denaturing temperature of RNA and DNA, could only occur on a planet at a similar distance from a similar star as our Sun. Finally, this delicate dependence of Earth's biological evolution on its initial conditions, and thus probably also on external perturbations, suggests that it was probably not inevitable that life as we know it appeared on Earth. For example, the elastic dispersion into a greater volume of an initially collimated photon beam also produces a significant amount of entropy, contributing, in fact, almost one half of the entropy production on Venus (Michaelian, 2012). It may be that during ice ages the majority of the entropy production was shifted from photon dissipation by photosynthetic organisms to elastic photon dispersion from ice, or, more probably, to a combination of the two forms of dissipation, which becomes competitive with photon dissipation by organic pigments alone under the prevailing conditions. \section{Conclusions} \label{sec:conclusions} As for all irreversible processes, life is directly dependent on the dissipation of a generalized thermodynamic potential.
This potential is today, and always has been, the high energy solar photon flux, and this dissipation is the largest source of entropy production on Earth. There is evidence that life has co-evolved with its abiotic environment, adjusting the physical characteristics of Earth such that the most intense part of the solar photon spectrum can arrive at the Earth's surface, where it can be dissipated by life. About 71\% of the entropy production due to the dissipation of solar photons occurs at Earth's surface, and Earth produces approximately 70\% more entropy per unit surface area than either of its neighbors (Michaelian, 2012). Both of these facts are probably due to life on the surface of Earth. Classical irreversible thermodynamics indicates that if organic pigments act to catalyze the dissipation of the solar photon potential, then their quantity on Earth's surface can be expected to greatly exceed the expected equilibrium values, and the entropy production of the dissipation process will increase accordingly. This is undoubtedly the reason for the proliferation of organic pigments over the Earth's surface. For a system with many more coupled irreversible processes operating than considered here, this thermodynamic result may explain the motor behind all biological and biotic-abiotic evolution. From this thermodynamic dissipative perspective, animals would have thermodynamic relevance only in how they assist in the growth and spread of plants over land and of cyanobacteria over the water. Structuring due to dissipation is a universal phenomenon, and the forms of dissipation are varied. Life as we know it is only one form, based on the direct dissipation of visible or UV light, and organic pigments are the most efficient and robust dissipaters in this wavelength region, given Earth's physical and chemical characteristics. In the solar systems of different stars very different dissipaters may exist, some based on different organic molecules dissipating different photon chemical potentials (spectra), others based on a different type of solvent molecule that couples to the vibrational modes of the excited organic molecule; such a solvent could be operating in different physical (temperature, pressure, etc.) regimes. It is well known, for example, that organic molecules in the clouds of Venus are absorbing UV and visible light and dissipating this energy into the heat that drives the great southern vortex (ESA, 2006). It would be interesting to show that these dissipating molecules have proliferated beyond their expected equilibrium concentrations, in which case we would have to acknowledge a different type of living ecosystem operating on Venus. \ack The financial assistance of DGAPA-UNAM, grant numbers IN112809 and IN103113, is greatly appreciated. \section*{References} \begin{thereferences} \item A'Hearn M F et al. 2005 Deep Impact: excavating comet Tempel 1 {\em Science} {\bf 310} 258 doi:10.1126/science.1118923 \item Callahan M P et al. 2011 Carbonaceous meteorites contain a wide range of extraterrestrial nucleobases {\em PNAS} Early Edition, 13995 \item Cnossen I, Sanz-Forcada J, Favata F, Witasse O, Zegers T and Arnold N F 2007 The habitat of early life: Solar X-ray and UV radiation at Earth's surface 4--3.5\,billion years ago {\em J. Geophys. Res.} {\bf 112} E02008 doi:10.1029/2006JE002784 \item Conde F R, Churio M S and Previtali C M 2000 The photoprotector mechanism of mycosporine-like amino acids.
Excited-state properties and photostability of porphyra-334 in aqueous solution {\em Journal of Photochemistry and Photobiology B: Biology} {\bf 56} 139--144 \item de Donder T 1936 Thermodynamic Theory of Affinity: A Book of Principles. Oxford, England: Oxford University Press \item European Space Agency 2006, June 27 Double vortex at Venus south pole unveiled {\em ScienceDaily} Retrieved January 10, 2012, from http://www.sciencedaily.com/releases/2006/06/060627104232.htm \item Gadgil C J and Kulkarni D B 2009 Autocatalysis in biological systems {\em AIChE Journal} {\bf 55} No. 3, 556--562 \item Gates D M 1980 Biophysical Ecology ISBN~0-387-90414-X, Springer-Verlag, New York \item Herrmann F and W\"urfel P 2005 Light with nonzero chemical potential {\em Am. J. Phys.} {\bf 73} 717--721 \item Hoyle F and Wickramasinghe N C 1978 Lifecloud -- The Origin of Life in the Universe, ISBN~0-460-04335-8, J.~M.~Dent and Sons, London \item Jones I, George G and Reynolds C 2005 Quantifying effects of phytoplankton on the heat budgets of two large limnetic enclosures {\em Freshwater Biology} {\bf 50} 1239--1247 \item Kleidon A 2008 Entropy production by evapotranspiration and its geographic variation {\em Soil \& Water Res.} {\bf 3} S89--S94 \item Lovelock J E 1988 The Ages of Gaia: A Biography of Our Living Earth W. W. Norton \& Company, New York \item Lowe D R and Tice M M 2004 Geologic evidence for Archean atmospheric and climatic evolution: Fluctuating levels of CO2, CH4, and O2 with an overriding tectonic control {\em Geology} {\bf 32} 493--496 \item Michaelian K 2009a Thermodynamic function of life, arXiv, http://arxiv.org/abs/0907.0040 \item Michaelian K 2009b Thermodynamic origin of life, arXiv, http://arxiv.org/abs/0907.0042 \item Michaelian K 2010 Homochirality through photon-induced melting of RNA/DNA: the thermodynamic dissipation theory of the origin of life, {\em Nature Precedings}, http://hdl.handle.net/10101/npre.2010.5177.1 \item Michaelian K and Manuel O 2011a Origin and evolution of life constraints on the solar model {\em Journal of Modern Physics} {\bf 2} No. 6A, 587--594 doi:10.4236/jmp.2011.226068 \item Michaelian K 2011b Thermodynamic dissipation theory for the origin of life {\em Earth Syst. Dynam.} {\bf 2} 37--51 doi:10.5194/esd-2-37-2011 \item Michaelian K 2012 Biological catalysis of the hydrological cycle: life's thermodynamic function {\em Hydrol. Earth Syst. Sci.} {\bf 16} 2629--2645 doi:10.5194/hess-16-2629-2012 \item Miller S L and Urey H C 1959 Organic compound synthesis on the primitive earth {\em Science} {\bf 130} 245--251 \item Mullis K 1990 The unusual origin of the Polymerase Chain Reaction {\em Scientific American} April, 56--65 \item Onsager L 1931 Reciprocal relations in irreversible processes, I. {\em Phys. Rev.} {\bf 37} 405--426 \item Pecourt J-M L, Peon J and Kohler B 2000 Ultrafast internal conversion of electronically excited RNA and DNA nucleosides in water {\em J. Am. Chem. Soc.} {\bf 122} 9348--9349 \item Pisani D, Poling L L, Lyons-Weiler M and Hedges S B 2004 The colonization of land by animals: molecular phylogeny and divergence times among arthropods {\em BMC Biology} {\bf 2:1} doi:10.1186/1741-7007-2-1 \item Prigogine I 1967 Thermodynamics of Irreversible Processes, Wiley, New York \item Ribas I, Guinan E F, G\"udel M and Audard M 2005 Evolution of the solar activity over time and effects on planetary atmospheres. I.
High-energy irradiances (1 to 1700 \AA) {\em ApJ.} {\bf 622} 680--694 \item Sagan C 1973 Ultraviolet selection pressure on the earliest organisms {\em J. Theor. Biol.} {\bf 39} 195--200 \item Sagan C and Chyba C 1997 The early faint Sun paradox: organic shielding of ultraviolet-labile greenhouse gases {\em Science} {\bf 276} 1217--1221 \item Serrano-Andr\'es L and Merch\'an M 2009 Are the five natural DNA/RNA base monomers a good choice from natural selection? A photochemical perspective {\em Journal of Photochemistry and Photobiology C: Photochemistry Reviews} {\bf 10} 21--32 \item Sinha R P, Sinha J P, Gr\"oniger A and H\"ader D-P 2002 Polychromatic action spectrum for the induction of a mycosporine-like amino acid in a rice-field cyanobacterium, Anabaena sp. {\em Journal of Photochemistry and Photobiology B: Biology} {\bf 66} 47--53 \item Strigunkova T F, Lavrentiev G A and Otroshchenko V A 1986 Abiogenic synthesis of oligonucleotides on kaolinite under the action of ultraviolet radiation {\em J. Mol. Evol.} {\bf 23} 290--293 \item Wu W, Liu Y and Wen G 2011 Spectral solar irradiance and its entropic effect on Earth's climate {\em Earth Syst. Dynam. Discuss.} {\bf 2} 45--70 doi:10.5194/esdd-2-45-2011 \end{thereferences} \end{document}
\section*{Abstract} Dropout Regularization, serving to reduce variance, is nearly ubiquitous in Deep Learning models. We explore the relationship between the dropout rate and model complexity by training 2,000 neural networks configured with random combinations of the dropout rate and the number of hidden units in each dense layer, on each of the three data sets we selected. The generated figures, with binary cross-entropy loss and binary accuracy on the z-axis, question the common assumption that adding hidden units to the dense layers while increasing the dropout rate will certainly enhance performance. We also discover a complex correlation between the two hyperparameters, which we proceed to quantify by building additional machine learning and Deep Learning models that predict the optimal dropout rate given the number of hidden units in each dense layer. Linear regression and polynomial logistic regression require the use of arbitrary thresholds to select the cost data points included in the regression and to assign the cost data points a binary classification, respectively. These machine learning models have mediocre performance because their naive nature prevents the modeling of complex decision boundaries. Turning to Deep Learning models, we build neural networks that predict the optimal dropout rate given the number of hidden units in each dense layer, the desired cost, and the desired accuracy of the model. However, this attempt encounters a mathematical obstacle that can be attributed to the failure of the vertical line test. The ultimate Deep Learning model is a neural network whose decision boundary represents the 2,000 previously generated data points. This final model leads us to devise a promising method for tuning hyperparameters that minimizes computational expense yet maximizes performance. The strategy can be applied to any model hyperparameters, with the prospect of more efficient tuning of industrial models. \clearpage \section*{Introduction} Dropout Regularization is a technique used in Deep Learning and other branches of Machine Learning to combat overfitting on the training data with respect to the validation set \cite{dropout}. Another hyperparameter, the architecture of a neural network, determines the complexity of the functions the model will be able to learn, with a deeper model that contains more hidden units able to reduce bias in the network \cite{DNN}. \\\\ On the surface, the implementation of dropout regularization counteracts the effect of building a model with more hidden units. While increased model complexity mitigates bias, higher dropout rates serve to restore bias and thereby prevent overfitting. Thus, it is crucial for the Machine Learning practitioner to balance these two hyperparameters to find the tuning that induces the best performance. The common approach is to build a complex model accompanied by substantial regularization. However, the problem with this attempted one-size-fits-all approach is that training the model can require extensive computational resources and prolonged training times, not to mention the greater issue of suboptimal convergence. \\\\ Thus, we seek to identify the relationship between the dropout rate and the number of hidden units in a neural network. We aim to demonstrate brief hyperparameter tuning with respect to cost and accuracy, specifically intending to illustrate that adding complexity to a neural network is not always the steadfast solution for unlocking a better-performing model.
Rather, a simpler model, which can be trained in much less time, can offer similar performance. We experiment with the relationship between variance and bias by studying model complexity and dropout regularization, showing that multiple model configurations can result in similar performance. In addition, we explore a method to find the best dropout rate to use in a Deep Learning model given the model complexity of each dense hidden layer. Finally, we share a generalized version of this insightful procedure that Machine Learning practitioners can utilize to more quickly perform hyperparameter tuning to balance bias and variance while optimizing validation cost and accuracy. \subsection*{Data Availability Statement} For experimentation, we used three publicly available cardiovascular disease data sets. Each of the training examples in these three data sets represents individual patients. Medical information about the patients and a binary diagnosis label for cardiovascular disease are included for each data example. We selected these data sets because they are all relatively small, allowing for faster hyperparameter tuning. The following describes each of the data sets used (\textit{data sets will henceforth be referred to by the names below in bold}): \paragraph{High-Level Data Set} The High-Level Data Set is available on the Kaggle \cite{kaggle_dataset} database and contains 70,000 examples and 11 high-level features, most binary and ternary. Prior to experimentation, this data set underwent slight preprocessing to remove statistical outliers using the 1.5 x IQR Rule with a modified coefficient of 2.5, which we believed better represented the spread of the data set. Training examples with features beyond the range permitted by the IQR Rule were excluded from the data set. \paragraph{Cleveland Data Set} The Cleveland Data Set is a famous source of cardiovascular disease data containing 303 examples and 75 features available on the UCI Machine Learning Repository \cite{uci}. Since only 14 of the 75 features are used by Machine Learning researchers when studying cardiovascular disease, we also experimented with this subset of features. \paragraph{Combined Data Set} We assembled a data set that is a fusion of cardiovascular disease data sourced from hospitals in Cleveland, Hungary, Switzerland, and Long Beach, California. The individual data sets are available on the UCI Machine Learning Repository \cite{uci}. The Combined Data Set contains 1,025 examples with the same 14 features included in the Cleveland Data Set. \section*{Methods} \subsection*{Measuring Performance of Deep Learning Models Configured With Unique Combinations of Dropout Rate and Layer Size} To investigate the relationship between dropout regularization and model complexity, we explored hyperparameter tuning. We randomly chose combinations of model complexity and regularization, then trained 2,000 different neural networks \footnote{All Deep Learning models constructed during our research were trained using the highly optimized TensorFlow Keras library \cite{keras}.} using these randomly chosen hyperparameters to predict the cardiovascular disease diagnosis label given the medical information of the patients. Finally, we determined the binary cross entropy cost and binary accuracy of each model after 150 iterations of training. The architecture of the networks comprised six dense hidden layers each with the same number of hidden units and the same dropout rate. 
However, each model trained used a unique number of hidden units and a unique dropout rate. \begin{adjustwidth}{0cm}{} \LinedItem{Let \textit{\textbf{x}} be the number of hidden units in each of the six hidden layers of a constructed Deep Learning model.} \LinedItem{Let \textit{\textbf{y}} be the dropout rate for this Deep Learning model.}\\ \end{adjustwidth} The value of \textit{\textbf{x}} ranged from $2^3=8$ to $2^{10}=1024$. To sample as many models with $2^3=8$ to $2^4=16$ hidden units as models with $2^9=512$ to $2^{10}=1024$ hidden units, we chose \textit{\textbf{x}} based on a logarithmic scale. Specifically, $\textit{\textbf{x}}=\floor{2^c}$, where $c$ was a uniformly random number chosen from the interval $(3,10)$. (The number \textit{\textbf{x}} must be an integer, so the floor function was applied.) The value of \textit{\textbf{y}} was a uniformly random number chosen from the interval $(0,1)$. \\ \\ \noindent All other hyperparameters remained constant during experimentation: each model's weights were initialized with Xavier initialization using the same random seed, and each model was optimized with Adam Optimization \cite{optimization} using a mini-batch \cite{minibatch} size of 128 examples, undergoing 150 epochs of training. \[\includegraphics[scale=0.35]{Network_Visual.png}\] \begin{align*} &\textbf{Figure 1: }\text{Generalized structure of each of the 2,000 neural networks built, where \textbf{X} is the number of }\\ &\text{hidden units in each hidden layer and \textbf{Y} is the dropout rate.} \end{align*} After training 2,000 such models on each of the three data sets and recording the performance metrics of each model, we proceeded to plot all generated points on three-dimensional graphs: the number of hidden units on the $x$ axis, the dropout rate on the $y$ axis, and the attained cost/accuracy on the $z$ axis. \[\hspace{-5mm}\includegraphics[scale=0.35]{Cost_Figures.png}\] \begin{align*} &\textbf{Figure 2: }\text{Cost plots generated by training 2,000 models on each of the three data sets, where the color map is }\\ &\text{determined by the cost, the z-axis parameter. Notice that all plots have distinct contours that are able to be modeled.} \end{align*} \[\includegraphics[scale=0.34]{Accuracy_Figures.png}\] \begin{align*} &\textbf{Figure 3: }\text{Accuracy plots generated by training 2,000 models on each of the three data sets, where the color map is }\\ &\text{determined by the binary accuracy, the z-axis parameter. } \end{align*} \subsection*{Modeling a Generalized Relation Between Dropout Rate and Layer Size} Analyzing the figures above, we identified respective regions of lower cost and higher accuracy. For example, we observed a trough spanning the length of the cost plot of the High-Level data set that looked linear in nature. Considered together, our observations compelled us to answer the question: \textit{What is the correlation between dropout rate and model complexity that unlocks superior performance?} We hypothesized the optimal dropout rate to be a function of the number of hidden units in a hidden layer (assuming the number of hidden layers, among other variables, stayed constant), and thus built a series of predictive models to fit the collected 2,000 points of data and quantify the hypothesized relationship. 
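The data these predictive models consume were produced by the random search described in the previous subsection. The following is a minimal sketch of that loop, assuming TensorFlow Keras; the arrays \texttt{X\_train}, \texttt{y\_train}, \texttt{X\_val} and \texttt{y\_val} are hypothetical stand-ins for the preprocessed features and diagnosis labels, and the exact layer options are illustrative rather than a reproduction of our research code.

\begin{verbatim}
# Minimal sketch of the random-search data generation (TensorFlow 2.x assumed).
import numpy as np
import tensorflow as tf

def build_model(n_features, hidden_units, dropout_rate):
    """Six dense hidden layers, each followed by dropout, and a sigmoid output."""
    layers = [tf.keras.Input(shape=(n_features,))]
    for _ in range(6):
        layers.append(tf.keras.layers.Dense(
            hidden_units, activation="relu",
            kernel_initializer=tf.keras.initializers.GlorotUniform(seed=0)))
        layers.append(tf.keras.layers.Dropout(dropout_rate))
    layers.append(tf.keras.layers.Dense(1, activation="sigmoid"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

rng = np.random.default_rng(0)
records = []
for _ in range(2000):
    x = int(np.floor(2.0 ** rng.uniform(3, 10)))  # hidden units, log scale
    y = rng.uniform(0.0, 1.0)                     # dropout rate, linear scale
    model = build_model(X_train.shape[1], x, y)
    model.fit(X_train, y_train, batch_size=128, epochs=150, verbose=0)
    loss, acc = model.evaluate(X_val, y_val, verbose=0)
    records.append((x, y, loss, acc))
\end{verbatim}

Each tuple in \texttt{records} corresponds to one point in Figures 2 and 3; the sketches that follow reuse these arrays.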
\paragraph{Linear Regression} Linear Regression was the first technique utilized to fit points in the ``trough'' regions of all graphs. Initially, the boundaries of these troughs needed to be specified, requiring the manual setting of numerical and percentile thresholds on the 2,000 cost data points. Combinations of dropout rate and model complexity that achieved a cost lower than the numerical threshold or within the percentile threshold were included in the regression, while other inferior combinations were excluded. We experimented with numerical thresholds incrementing in steps of 0.001 from 0.54 to 0.55 and percentile thresholds incrementing in steps of 5\% from 5\% to 25\%. Clearly, the selection of thresholds was inherently subjective, not to mention that the procedure's selective nature discarded abundant amounts of usable data. \paragraph{Polynomial Logistic Regression} To utilize all data points collected, we turned to Logistic Regression to assign models a binary classification based on their cost. Because cost is on a continuous scale, we transformed cost into a binary feature by labeling costs under a hand-chosen percentile threshold as 1 (superior) and all other costs as 0 (inferior). We tested 10\% and 25\% as these percentile thresholds, after which we built Logistic Regression models to classify combinations of dropout rate and number of hidden units depending on their achieved cost. Feeding second-order and third-order polynomial features into the models allowed for the learning of nonlinear decision boundaries. \\\\ Though the Logistic Regression models employed all 2,000 generated data points, the arbitrary selection of a percentile for the classification of costs was still required. In addition, the selection of the degree of polynomial features was not methodical. Furthermore, the Logistic Regression models were quite naive in the sense that they assigned the trained Deep Learning models just two elementary labels, which meant they were incapable of differentiating between a very poor model and a below-average model or between a sufficient model and an exceptional model. 
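For concreteness, a minimal sketch of this threshold-and-classify step is shown below, assuming scikit-learn and the \texttt{records} list from the earlier search sketch; the scaling step and solver settings are illustrative choices on our part.

\begin{verbatim}
# Sketch of polynomial logistic regression on the 2,000 generated points.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

units, rate, cost, acc = np.asarray(records, dtype=float).T

# Label the best 25% of runs (lowest cost) as 1 ("superior"), the rest as 0.
labels = (cost <= np.percentile(cost, 25)).astype(int)

X = np.column_stack([np.log2(units), rate])  # log scale matches the sampling
clf = make_pipeline(StandardScaler(),
                    PolynomialFeatures(degree=3, include_bias=False),
                    LogisticRegression(max_iter=5000))
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
\end{verbatim}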
\subsubsection*{Neural Networks} \paragraph{1. Neural Networks for Optimal Dropout Rate Prediction} Deficiencies of the previous machine learning models led us to utilize the complex hypothesis functions of neural networks \cite{DNN}. With these models, we wished to rigorously quantify the performance of a model using the exact cost rather than using discrete labels. Accordingly, we constructed three neural networks \footnote{These neural networks are not to be confused with the networks trained on the cardiovascular disease data or the network that outputs a continuous dropout rate.} that received the number of hidden units, the desired cost, and the desired accuracy as inputs. At test time, the desired cost was interpreted as the minimum cost attained in our generated data set, and the desired accuracy was interpreted as the maximum accuracy attained in our generated data set. Thus, we were able to feed an array of hidden units ranging from $2^{3}$ to $2^{10}$ into these neural networks to obtain the optimal dropout rate for each value. Instead of yielding a binary label, the network yielded a continuous output between 0 and 1, representing the optimal dropout rate given the inputs. \[\includegraphics[scale=0.25]{Network_rate_output_Visual.png}\] \begin{align*} &\textbf{Figure 4: }\text{Generalized architecture of the neural network trained on the 2,000 generated points to predict optimal }\\ &\text{dropout rate given a number of hidden units in each dense layer, the desired cost, and the desired accuracy. } \end{align*} \noindent The suboptimal performance of these neural networks gave us insight on how to implement the same type of model more effectively. \paragraph{2. Neural Networks for Surface Plots} \noindent We turned to a more tractable approach, training neural networks to predict the cost of a Deep Learning model given the number of hidden units in the hidden layers and the dropout rate of the Deep Learning model. \\\\ Two neural networks were trained for each of the three data sets to model cost and accuracy, each network containing six hidden layers with 16 hidden units, regularized with a dropout rate of 0.1. We created three-dimensional surface plots which were representative of the continuous cost predictions given hyperparameter combinations. These decision boundaries were juxtaposed with the generated data points to visualize the models' performance. (These plots are shown in the Results section.) \section*{Results} \subsection*{Performance of Deep Learning Models With Unique Hyperparameter Combinations} Analyzing the generated data led to various findings about the nature of the interactions between the dropout rate, the number of hidden units in a hidden layer, and the cost and accuracy achieved. Some figures, such as the accuracy plot of the Cleveland Data Set, depicted distinct regions in which the cost or accuracy was determined entirely by only a single hyperparameter of the two we analyzed. Other figures, such as the cost plot of the High-Level Data Set, showed that using more hidden units with a higher dropout rate yielded the same performance as using fewer hidden units with a lower dropout rate. In contrast, still other figures, such as the accuracy plot of the Combined Data Set, favored the use of a lower dropout rate without regard for the number of hidden units. \\\\ Such findings compelled us to rethink commonly presupposed relationships between these two hyperparameters, challenging the frequent assumption that a larger number of hidden units in a dense layer simply requires a larger dropout rate to enhance performance. In addition, the plots discredited the notion that increasing the number of hidden units results in better performance. In fact, many spatial regions of the highest cost were achieved by models that had a relatively large number of hidden units. We urge network designers to consider that an increase in model complexity, though theoretically reducing bias, can actually lower the performance of a model. The intricate balance created by regularization parameters such as the dropout rate cannot be readily oversimplified into the questionable clich\'e that adding hidden nodes unlocks superior performance. \subsection*{Generalized Relation Between Dropout Rate and Layer Size} \paragraph{Linear Regression} When examining the performance of the linear models which were fitted on a subset of the generated data to predict the optimal dropout rate, we noticed that while effective for the High-Level data set, linear models were unable to perform well on the Combined data set and the Cleveland data set, particularly due to the non-linearity of the problem and the disarray of the selected points. In addition, with the threshold percentile set at the relatively low level of 25\%, 75\% of the data (1,500 points in our case) went unused, amounting to a waste of computational resources. 
Arbitrary selection of the threshold percentile was another undesirable byproduct of this model, as it was unclear which threshold percentile would lead to better performance. \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|p{2.8cm}||p{3.3cm}|p{3.3cm}|p{3.3cm}|} \hline \multicolumn{4}{|c|}{\textbf{Mean Absolute Error of Linear Regression}}\\ \hline & High-Level Data Set &Cleveland Data Set&Combined Data Set\\ \hline 10\% Threshold & 0.0609 & 0.0976 & 0.0768\\ \hline 25\% Threshold & 0.0942 & 0.1084 & 0.1137\\ \hline \end{tabular} \end{center} \[\hspace{-5mm}\includegraphics[scale=0.5]{LR_Figures.png}\] \begin{align*} &\textbf{Figure 5: }\text{Visualizations of Linear Regression on all three data sets, with different percentile thresholds }\\ &\text{being used to select points included in the regression. Notice that this approach had different levels of }\\ &\text{effectiveness across data sets and performed especially poorly on the Combined Data Set.} \end{align*} \paragraph{Polynomial Logistic Regression} The Logistic Regression models greatly improved upon the performance of the Linear Regression due to the extra data employed. When using Logistic Regression, the non-linearity of separating optimal models from non-optimal models became apparent, and thus we employed higher-order features. When utilizing these higher-order features, we found that including third-order features greatly improved performance without overfitting the data. Logistic Regression with higher order features yielded promising results for distinguishing ``superior'' models from ``inferior'' models. Nonetheless, Logistic Regression still fell short in that it offered a binary classification of the data. Due to the absence of clear definitions of optimal and non-optimal models, arbitrary thresholds had to be selected. The more desirable solution would be to utilize the continuous cost values. \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|p{3.9cm}||p{3.3cm}|p{3.3cm}|p{3.3cm}|} \hline \multicolumn{4}{|c|}{\textbf{Binary Accuracy of Polynomial Logistic Regression (25\% Threshold)}}\\ \hline & High-Level Data Set &Cleveland Data Set&Combined Data Set\\ \hline Second Degree Features & 0.8490 & 0.8695 & 0.9825\\ \hline Third Degree Features & 0.9005 & 0.8870 & 0.9820\\ \hline \end{tabular} \end{center} \[\hspace{-5mm}\includegraphics[scale=0.5]{Logistic_Polyreg_Figures.png}\] \begin{align*} &\textbf{Figure 6: }\text{Visualizations of the Polynomial Logistic Regression decision boundaries on all three data sets, using a }\\ &\text{percentile threshold of 25\%. Red areas are regions that the models classify as having a superior cost (``good'', 1), }\\ &\text{while the background color represents the models' classification of inferior cost (``bad'', 0). Blue points symbolize } \\ &\text{combinations of hyperparameters that achieve costs within the 25\% threshold, while green points }\\ &\text{achieve costs that exceed the threshold.} \end{align*} \paragraph{Neural Networks for Optimal Dropout Rate Prediction} The relationship between the number of hidden units in each dense layer and the optimal dropout rate, as modeled by the first type of neural network introduced in the Methods section, is plotted below. 
\renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|p{3.4cm}|p{3.4cm}|p{3.4cm}|} \hline \multicolumn{3}{|c|}{\textbf{Mean Absolute Error of Neural Network Used to Predict Dropout Rate}}\\ \hline High-Level Data Set &Cleveland Data Set&Combined Data Set\\ \hline 0.1384 & 0.0995 & 0.1640\\ \hline \end{tabular} \end{center} \[\includegraphics[scale=0.5]{Dropout_Pred_NN.png}\] \begin{align*} &\textbf{Figure 7: }\text{These figures plot the number of hidden units in each dense layer on the $x$ axis against the predicted }\\ &\text{optimal dropout rate on the $y$ axis.} \end{align*} \noindent When we plotted the decision boundaries of the neural networks in three dimensions, we saw that the models performed poorly, which led us to suspect that this was due to the fundamental nature of the idea itself. We realized that, theoretically, more than one dropout rate can be configured with the same number of hidden units to achieve the same cost and accuracy. Thus, we ventured to discover why the dropout rate is not always a function of the number of hidden units, cost, and accuracy. To explain this mathematically: \begin{adjustwidth}{0cm}{} \LinedItem{Let $n$ be the number of hidden units in each hidden layer for some Deep Learning model.} \LinedItem{Let $p$ be the optimal dropout rate given $n$.} \LinedItem{Let $\epsilon_1$ and $\epsilon_2$ both be some positive numbers.}\\ \end{adjustwidth} By definition, the configuration of $p$ dropout rate achieves the lowest possible cost for all possible models with $n$ hidden units in each hidden layer. (Side Note: This configuration \textbf{is not} guaranteed to achieve the highest possible accuracy, only the lowest possible cost.) \\\\ Using $p+\epsilon_1$ or $p-\epsilon_2$ as the dropout rate will result in a higher cost than using $p$ as the dropout rate. The cost function graph, though not appearing completely continuous, strongly suggests that there exist values for $\epsilon_1$ and $\epsilon_2$ such that using $p+\epsilon_1$ or $p-\epsilon_2$ as the dropout rate will result in the \textit{same} cost. (If not exactly the same, then our data indicate that $\epsilon_1$ and $\epsilon_2$ can be found such that the resulting costs have a negligible difference.) This observation explains why it is impossible to perfectly predict the optimal dropout rate given the number of hidden units in each of the hidden layers, the cost of the model, and the accuracy of the model: the mapping from inputs to dropout rate fails the vertical line test, since a single input can correspond to two distinct dropout rates. 
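For reference, a minimal sketch of how such a dropout-rate regressor could be assembled is given below, assuming TensorFlow Keras and reusing the \texttt{units}, \texttt{rate}, \texttt{cost} and \texttt{acc} arrays from the earlier sketches; the layer sizes and epoch count are illustrative. As the argument above predicts, the target is ill-posed, so a strong fit should not be expected.

\begin{verbatim}
# Sketch of the first network type:
# (hidden units, desired cost, desired accuracy) -> dropout rate.
import numpy as np
import tensorflow as tf

X = np.column_stack([np.log2(units), cost, acc])
y = rate  # dropout rate as the regression target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # rate lies in (0, 1)
])
model.compile(optimizer="adam", loss="mean_absolute_error")
model.fit(X, y, epochs=200, verbose=0)

# Query: sweep hidden-unit counts with the best cost/accuracy seen so far.
sweep = np.column_stack([np.arange(3, 11, dtype=float),
                         np.full(8, cost.min()),
                         np.full(8, acc.max())])
predicted_rates = model.predict(sweep, verbose=0).ravel()
\end{verbatim}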
\paragraph{Neural Networks for Surface Plots} These neural networks were significant because they enabled us to test thousands of configurations of hyperparameters through forward propagation in a matter of seconds, demonstrating an immense reduction in the time required for hyperparameter tuning.\\ \renewcommand{\arraystretch}{1.5} \begin{center} \begin{tabular}{|p{4.1cm}||p{3.3cm}|p{3.3cm}|p{3.3cm}|} \hline \multicolumn{4}{|c|}{\textbf{Mean Absolute Error of Neural Networks for Surface Plots}}\\ \hline & High-Level Data Set &Cleveland Data Set&Combined Data Set\\ \hline Cost Points & 0.0115 &0.0897& 0.1156\\ \hline Accuracy Points& 0.0100 & 0.0459 &0.0479\\ \hline \end{tabular} \end{center} \[\includegraphics[scale=0.4]{Surface_Plots.png}\] \begin{align*} &\textbf{Figure 8: }\text{Visualization of the decision boundaries of the neural networks used to model the 2,000 generated }\\ &\text{points, creating surface plots.} \end{align*} \section*{Discussion} Allowing more flexibility in designing network architectures opens up many new avenues for the network designer. Our findings regarding the effects of regularization and hyperparameter tuning should motivate network designers to prioritize the conservation of valuable computational resources by designing more efficient architectures. \\ \\ Rapid prototyping in industrial applications can also be made more accessible when designers are not as inclined to tediously tune bigger models, but instead implement a larger volume of smaller models. While each neural network used in our research was small and was trained in roughly 10 minutes on GPU runtime, most industrial Deep Learning models utilize massive data sets with numerous features to tackle difficult problems (e.g. image recognition, sentiment analysis, self-supervised learning tasks). In these cases, training many models in a short period of time and iterating through the development process is essential to creating deployable products. Hence, the results of our research call for a more insightful approach to tuning hyperparameters in neural networks. \subsection*{A Rapid and Effective Hyperparameter Tuning Method} The following is an outline of a new method to accelerate the meticulous process of hyperparameter adjustment in Deep Learning models (a code sketch follows the outline). \begin{adjustwidth}{0.4cm}{} \LinedItem{Let \textit{\textbf{n}} be the initial number of models trained on random hyperparameter settings chosen according to their respective range of values (in our case \textit{\textbf{n}} = 2,000; the number of hidden units in each dense layer was chosen on a logarithmic scale from $2^3$ to $2^{10}$ and the dropout rate was chosen on a linear scale from 0 to 1).} \LinedItem{The \textit{\textbf{n}} models can then be trained, and the random hyperparameter settings, the cost, and the final accuracy after training can be recorded.} \LinedItem{Using this generated collection of points, a network designer can then train a separate neural network (or some other Machine Learning model as they see fit) to fit either the cost or the accuracy, in essence to predict the cost/accuracy of the model given the value of each hyperparameter. } \LinedItem{The network designer can use this previous model to sample thousands of hyperparameter configurations and generate a surface plot. } \LinedItem{Using this surface plot, the designer can ``zoom'' into areas which are predicted by the model to have relatively lower costs. 
Note that using this surface plot is theoretically equivalent to testing out thousands of configurations using forward propagation in a matter of seconds. } \LinedItem{The network designer could then repeat this process by training another \textit{\textbf{n}} models initialized with random hyperparameter settings \textit{sampled from this smaller value range}, iterating through the process until a model with the lowest cost is reached. } \end{adjustwidth}
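The following is a minimal sketch of one iteration of this loop, assuming TensorFlow Keras and the \texttt{units}, \texttt{rate} and \texttt{cost} arrays from the earlier search sketch; the surrogate architecture mirrors the surface-plot networks described in the Methods section, while the grid size and the 5\% zoom cut are illustrative choices.

\begin{verbatim}
# Sketch of the surface-plot tuning loop: fit a surrogate, grid-evaluate, zoom.
import numpy as np
import tensorflow as tf

# 1. Surrogate network: (log2 hidden units, dropout rate) -> predicted cost.
X = np.column_stack([np.log2(units), rate])
layers = [tf.keras.Input(shape=(2,))]
for _ in range(6):
    layers += [tf.keras.layers.Dense(16, activation="relu"),
               tf.keras.layers.Dropout(0.1)]
layers.append(tf.keras.layers.Dense(1))
surrogate = tf.keras.Sequential(layers)
surrogate.compile(optimizer="adam", loss="mean_absolute_error")
surrogate.fit(X, cost, epochs=300, verbose=0)

# 2. Forward-propagate a dense grid of configurations (the "surface plot").
uu, rr = np.meshgrid(np.linspace(3, 10, 100), np.linspace(0, 1, 100))
grid = np.column_stack([uu.ravel(), rr.ravel()])
pred = surrogate.predict(grid, verbose=0).ravel()

# 3. Zoom: keep the best-predicted 5% of the grid, shrink the search box,
#    then rerun the random search of n models inside these narrower ranges.
best = grid[pred <= np.percentile(pred, 5)]
lo, hi = best.min(axis=0), best.max(axis=0)
print("next ranges: log2(units)", (lo[0], hi[0]), "dropout", (lo[1], hi[1]))
\end{verbatim}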
\subsection*{Misconceptions} \paragraph{Misconception 1:} Some may suppose that even with the proposed method, the designer still needs to sample many configurations of hyperparameters. However, the method ensures that the designer samples fewer points than would be sampled with traditional hyperparameter tuning, yet still achieves the same result. \paragraph{Misconception 2:} Some may suppose that training the neural network to generate the surface plot will take more time than simply trying new configurations. However, even as \textit{\textbf{n}} grows very large, the neural network can be easily trained because there are only two features and one label. \subsection*{Generalizations and Implementation Details} As one might imagine, this hyperparameter tuning process is not restricted to only the hyperparameters we discussed in this example. Rather, the process can be extrapolated to a larger scale containing different hyperparameters or more hyperparameters, allowing designers to more quickly determine the configurations of their models. \\\\ We remind developers using this technique to consider the trade-off between time and accuracy that is intrinsic to this process: the generation of more points expends more computational resources but ultimately results in a better network, while training fewer points, though quicker, may result in a flawed hyperparameter selection. Put simply, if \textit{\textbf{n}} is too small, then the surface plot will not generalize to untrained points and the neural network will fail to do its job. On the other hand, if \textit{\textbf{n}} is too large, then this defeats the purpose of using the proposed method altogether. One potential solution to calibrate the number of models trained is to incrementally decrease the number of configurations tested. For example, a designer could test 100 configurations, zoom in, test another 10 configurations, zoom in, test another 5 configurations, and so on. \section*{Need For Further Research} Further experiments can be conducted to determine if the phenomena we observed exist in larger data sets and in diverse data sets not limited to cardiovascular disease. The Deep Learning models we trained with 150 epochs can instead be trained for a larger number of iterations to verify the usability of our results before extrapolation to industrial models. Different variations of hyperparameters, such as varying the number of hidden units and the dropout rate between each dense layer of a model, can also be tested. This is crucial because in more sophisticated applications of Deep Learning (e.g. convolutional neural networks), different layers of the network learn either ``specific'' or ``general'' features, making it practically impossible for each layer to have the same architecture \cite{architecture}. Thus, more experimentation needs to be done to gauge the usefulness of the procedures when applied to different types of Deep Learning models. \\\\ In addition, research can be done to gauge the accuracy and reliability of the surface plot neural networks by training many models with random hyperparameters and comparing the results to those predicted by the surface plot neural networks. The significant trade-off between surface plot generalizability and computational expense should also be explored to validate or nullify our proposed method of hyperparameter tuning, and to ultimately determine the optimal number of models to train before generating a surface plot.
\section{Introduction}\label{sec:introduction} Diffuse gas is expected to permeate the large-scale structure (LSS) of the Universe away from galaxy groups and clusters. Detecting and characterising this intergalactic gas is challenging due to the expected low particle number density ($\sim$$10^{-5}$ to $10^{-6}$~\cc) and temperature ($10^5$ to $10^7$~K). Although diffuse, this warm-hot intergalactic medium \citep[WHIM;][]{dave2001,cenostriker2006} potentially contains half the total baryon content of the local Universe \citep{bregman2007,nicastro2018}. In addition, accretion shocks along these LSS filaments are predicted to accelerate particles to relativistic energies and to amplify magnetic fields. Thus, detecting this filamentary structure in synchrotron emission using radio telescopes is a promising avenue for studying the WHIM \citep[e.g.][]{vazza2015a}. Recent statistical studies based on the cross-correlation of diffuse radio synchrotron emission and the underlying galaxy distribution have derived upper limits on the magnetisation of filaments of the order of $0.1$~$\mu$G \citep{vernstrom2017,brown2017}. Furthermore, \cite{vacca2018} found a faint population of sources which might be the tip of the iceberg of a class of diffuse large-scale synchrotron sources associated with the WHIM and connected to large-scale filaments of the cosmic web. An alternative approach is to measure the Faraday rotation properties of the magnetised WHIM using many bright, polarised, background radio sources \citep[e.g.][]{stasyszyn2010,akahori2014,vacca2016}. From simulations, the field strength of the intergalactic magnetic field (IGMF) is expected to be in the range of 1 to 100 nG \citep[e.g.][]{dolag1999, brueggen2005, ryu2008,vazza2017}. It is important to constrain the magnetic field in the WHIM in order to determine the unknown origin of the large-scale magnetic field in the Universe \citep{zweibel2006}. While large-scale fields are commonly detected in galaxies and galaxy clusters, the strong modification of these fields erases the signature of their origin \citep[e.g.][]{vazza2015b}. This may not be the case in the WHIM, as the amplification of primordial magnetic fields in these filamentary regions is likely primarily due to compressive and shearing gas motions, in addition to small-scale shocks, such that the observed level of magnetisation could be connected to the seeding process \citep[e.g.][]{ryu2008,vazza2014}. The AGN and star formation activity in galaxies can also drive powerful outflows that may significantly magnetise the intergalactic medium on large scales \citep[e.g.][]{furlanettoloeb2001,donnert2009,beck2013}. Therefore, distinguishing between a primordial origin and a later injection of magnetic field that was initially generated on smaller scales by galaxies and stars is a key goal for studies of the IGMF \citep[see][and references therein]{akahori2018}. It has also been proposed to study the WHIM using large or `giant' radio galaxies (GRGs), whose linear size can extend beyond 1 Mpc, with the largest such example being 4.7 Mpc in extent \citep{machalski2008}. GRGs, which extend well beyond their host galaxy and local environment into the surrounding intergalactic medium, are usually FRII-type radio galaxies \citep[e.g.][]{dabhade2017}, although some giant FRIs also exist \citep[e.g.][]{heesen2018,horellou2018}. 
Asymmetries in the GRG morphology can be used as a probe of the ambient gas density \citep{subrahmanyan2008,safouris2009,pirya2012,malarecki2015}, and the Faraday rotation properties of the polarised emission from the lobes can be used to study the magnetic field properties of the surrounding gas on Mpc scales \citep{xu2006,osullivanlenc2018}. Another potential approach to studying the magnetised WHIM in cluster outskirts is by using Faraday rotation observations of the highly polarised emission from radio relics \citep[e.g.][]{kierdorf2017, loi2017}. The effect of Faraday rotation is measured through its influence on the linear polarisation vector as a function of wavelength-squared. The observed Faraday rotation measure, RM [rad~m$^{-2}$], depends on the line-of-sight magnetic field, $B_{||}$~[$\mu$G], threading a region of ionised gas with electron density, $n_{\rm e}$~[cm$^{-3}$], along a path length, $l$ [pc], following \begin{equation} \label{eqn:rm} \mathrm{RM} = 0.812\int_{\rm source}^{\rm telescope} n_{\rm e} \, B_\parallel \, \mathrm{d}l \,\,\,\,\, {\rm rad~m}^{-2}. \end{equation} \noindent In this paper, we present an analysis of the linear polarisation and Faraday rotation properties of an FRII radio galaxy (\grg) with a linear size of 3.4~Mpc. The observations were done with the Low Frequency Array \citep[LOFAR;][]{vanhaarlem2013}, which provides excellent sensitivity to diffuse extended structures due to the presence of numerous short baselines, and exceptional Faraday rotation measure (RM) accuracy, which depends on the total coverage in wavelength-squared. While low frequency radio telescopes provide the best RM accuracy, sources at these frequencies are most strongly affected by Faraday depolarisation \citep[e.g.][]{burn1966}, which decreases the degree of linear polarisation below the detection limit for many sources \citep[][]{farnsworth2011}. Despite this, a growing number of polarised sources are being found at low frequencies \citep[e.g.][]{Bernardi:2013, Mulcahy:2014, Jelic:2015, orru2015, Lenc:2016, vaneck2018, osullivanlenc2018,neld2018,riseley2018}. \grg~was discovered to be polarised at 144~MHz by \cite{vaneck2018}, in LOFAR data imaged at an angular resolution of 4.3\arcmin. The source was first reported by \cite{schoenmakers2001}, and the first optical identification (SDSS J123458.46$+$531851.3) was proposed by \cite{banfield2015}. However, our new observations show that the previously assumed host galaxy is coincidentally located close to the geometric centre between the two lobes and that the real host galaxy is actually connected to the south-east (SE) lobe by a faint jet. The radio core is coincident with the galaxy SDSS~J123501.52$+$531755.0, which is identified as PSO~J123501.519$+$531754.911 \citep{flewelling2016} for the radio source ILT~J123459.82$+$531851.0 in \cite{williams2018}. Estimates of the photometric redshift of this galaxy are 0.349 \citep{bilicki2016}, 0.41 \citep{beckdobos2016} and 0.44 \citep{brescia2014,duncan2018}. The host galaxy is identified in \cite{hao2010} as a red-sequence galaxy and a cluster candidate, GMBCG~J188.75636+53.29864. This is intriguing as GRGs are often thought to evolve in underdense galaxy environments \citep[e.g.][]{mack1998}; however, recent work indicates that they are most likely the oldest sources in the general population of powerful radio galaxies \citep{hardcastle2018}. 
In addition, \cite{hao2010} estimate a total of $\sim$9 galaxies within 0.5~Mpc with luminosities $L > 0.4 L^*$, using a weak-lensing scaling relation, which suggests a poor cluster environment. There is also no evidence for a massive cluster at this location in the sky in the Planck thermal Sunyaev-Zeldovich map \citep{planckymap}. This paper presents a follow-up study using the same LOFAR data as \cite{vaneck2018}, but imaging at higher angular resolution. We also confirm the new optical host identification and determine its spectroscopic redshift as $z\sim0.34$, giving the projected linear size of 3.4~Mpc. In Section~\ref{sec:obs}, we describe the radio polarisation and optical spectroscopic observations. Section~\ref{sec:results} presents the physical properties of \grg, the inference on the properties of its environment based on dynamical modelling of the jets, and the RM and depolarisation behaviour. In Section~\ref{sec:discuss} we discuss the results in the context of the study of the intergalactic medium and its magnetisation. The conclusions are listed in Section~\ref{sec:conclusion}. Throughout this paper, we assume a $\Lambda$CDM cosmology with H$_0 = 67.8$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.308$ and $\Omega_{\Lambda}=0.692$ \citep{planck2016xiii}. At the redshift of the source, 1\arcsec~corresponds to a linear size of 5.04~kpc. We define the total intensity spectral index, $\alpha$, such that the observed total intensity ($I$) at frequency $\nu$ follows the relation $I_{\nu}\propto\nu^{\rm{+}\alpha}$. \section{Observations \& Data Analysis} \label{sec:obs} \subsection{Radio observations} The target source \grg~was observed as part of the LOFAR Two-Metre Sky Survey \citep[LoTSS;][]{shimwell2017,shimwell2018}, which is observing the whole northern sky with the LOFAR High-Band Antenna (HBA) from 120 to 168 MHz. The data relevant to our target were observed in full polarisation for 8 hours on 26 June 2014, as part of the observing program LC2\_038 and with a pointing centre of J2000 12$^{\rm{h}}$38$^{\rm{m}}$06$\fs$7, $+52$\degr07$\arcmin$19$\arcsec$. This gives a distance of $\sim$1.26\degr~of the target \grg~from the pointing centre (the FWHM of the primary beam is $\sim$4\degr). Direction-independent calibration was performed using the {\sc prefactor} pipeline\footnote{https://github.com/lofar-astron/prefactor}, as described in detail in \cite{shimwell2017} and \cite{degasperin2018}, which includes the ionospheric RM correction using {\sc rmextract}\footnote{https://github.com/lofar-astron/RMextract}. Residual ionospheric RM correction errors of $\sim$0.05\rad~are estimated between observations \citep{vaneck2018}, while slightly larger errors of $\sim$0.1 to 0.3\rad~are estimated across a single 8-hour observation \citep{sotomayor2013}. The resulting measurement set, after the {\sc prefactor} pipeline, has a time resolution of 8~s and a frequency resolution of 97.6~kHz. The direction-independent calibrated data are used throughout for the polarisation and rotation measure analysis, while the direction-dependent calibrated total intensity image \citep{shimwell2018} is used to determine the source morphological properties with high precision and for the identification of the host galaxy location. Analysis of polarisation and rotation measure data products after direction-dependent calibration will be presented in future work. 
\subsection{Polarisation and Faraday rotation imaging} \label{sec:rmspecs} To analyse the polarisation and Faraday rotation properties of the target, we phase-shifted the calibrated uv-data to the coordinates of the host galaxy (12$^{\rm{h}}$35$^{\rm{m}}$01$\fs$5, $+53$\degr 17$\arcmin$55$\arcsec$), which lies almost at the centre of the extended emission. We calibrated the data for short-timescale phase variations caused by the ionosphere, then averaged to 32 s to reduce the data size and to help speed up the subsequent imaging, while avoiding any significant time smearing \citep[e.g.][]{neld2018}. Both the phase-shifting and time-averaging were done using NDPPP \citep{2018ascl.soft04003V}\footnote{https://support.astron.nl/LOFARImagingCookbook/}. The imaging software {\sc wsclean} \citep{offringa-wsclean-2014}\footnote{https://sourceforge.net/projects/wsclean} was used to create $I$, $Q$, $U$, $V$ channel images at 97.6~kHz resolution, for a 25\arcmin~field of view ($\sim$twice the linear size of \grg). A minimum uv-range of 150~$\lambda$ was used to avoid sensitivity to Galactic polarised emission on scales of $\gtrsim25\arcmin$. The maximum uv-range was set to 18~k$\lambda$, and combined with a Briggs weighting of 0, resulted in a beam size of $26\arcsec \times 18\arcsec$, sampled with $3\arcsec \times 3\arcsec$~pixels. The differential beam correction per channel was applied using {\sc wsclean}, as the correction for the LOFAR beam gain at the pointing centre was already applied during the initial calibration of the data. All channel images with $Q$ or $U$ noise higher than five times the average noise level were removed from subsequent analysis, leaving a total of 404 images covering 120 to 167 MHz (with a central frequency of 143.5 MHz). RM synthesis and {\sc rmclean} \citep{bdb2005,heald2009} were then applied to the $Q$ and $U$ images using {\sc pyrmsynth}\footnote{https://github.com/mrbell/pyrmsynth}. The data have an RM resolution of 1.16\rad, are sensitive to polarised emission from Faraday thick regions up to $\sim$0.98\rad, and $|$RM$|$ values for Faraday thin regions as high as 450\rad~can be detected. An RM cube with a Faraday depth ($\phi$) axis covering $\pm500$\rad~and sampled at 0.5\rad~intervals was constructed for initial inspection of the data. The concept of Faraday depth \citep{burn1966} is useful to introduce here for describing regions with complicated distributions of Faraday rotation along the line of sight, such as multiple distinct regions of polarised emission experiencing different amounts of Faraday rotation, which could be identified through multiple peaks in a Faraday depth spectrum or Faraday dispersion function (FDF). As no significant emission was found at large Faraday depths, the final RM and polarisation images were constructed from FDFs with a range of $\pm150$\rad, sampled at 0.15\rad. To identify peaks in the FDF, a threshold of 8$\sigma_{QU}$ was used, where $\sigma_{QU}$ is calculated from the outer 20\% of the Faraday depth range in the {\sc rmclean} $Q$ and $U$ spectra. The mean $\sigma_{QU}$ across the field was $\sim$90~$\mu$Jy~beam$^{-1}$. Since no correction was made for the instrumental polarisation, peaks in the Faraday dispersion function appear near $\phi\sim0$\rad~at a typical level of $\sim$1.5\% of the Stokes $I$ emission. This instrumental polarisation signal is also smeared out by the ionospheric RM correction, making it difficult to identify real polarised emission at low Faraday depths ($\lesssim\pm3$\rad). 
Thus, when identifying real polarised emission peaks in the FDF, the range $\pm3$\rad~is excluded. RM and polarised intensity images are created from the brightest, real polarised peak above 8$\sigma_{QU}$ at each pixel, after fitting a parabola around the peak to obtain the best-fitting RM and polarised intensity. In the case of the polarised intensity image, a correction for the polarisation bias was also made following \cite{george2012}. The error in the RM at each pixel was calculated in the standard way as the RM resolution divided by twice the signal-to-noise ratio of the detection \citep{bdb2005}. A full-band Stokes $I$ image was made using the same image parameters as the channel images specified above, with multi-scale cleaning applied for an automatic threshold of 3$\sigma$ and deeper cleaning (to 0.3$\sigma$) within an automatic masked region created from the clean components. The degree-of-polarisation image was created by dividing the band-averaged polarised intensity image from RM synthesis (with a cutoff at 8$\sigma_{QU}$) by the full-band Stokes $I$ image (with a cutoff at 3 times the local noise level). \begin{figure} \begin{center} \includegraphics[width=0.98\linewidth]{SDSS1235.eps} \caption{Optical spectrum of the host galaxy SDSS~J123501.52$+$531755.0 taken with the AlFOSC instrument on the Nordic Optical Telescope, which shows emission lines H$\alpha$, [O{\sc ii}] and [O{\sc iii}] at a redshift of 0.34. } \label{fig:redshift} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1\columnwidth,clip=true,trim=0.0cm 0.0cm 1.0cm 0.0cm]{GRG_I_pink.eps} \caption{LoTSS total intensity image at 144 MHz at 6\arcsec~resolution (after direction-dependent calibration). The contours start at 300~$\mu$Jy~beam$^{-1}$ and increase by factors of 2 (with one negative contour at $-300$~$\mu$Jy~beam$^{-1}$). The greyscale image is tuned to show the noise variation across the image ($\sim$70~$\mu$Jy~beam$^{-1}$ away from bright sources and $\sim$100~$\mu$Jy~beam$^{-1}$ near the hotspots), as well as a faint hint of the south-east jet. The radio galaxy core coincident with the host galaxy SDSS~J123501.52$+$531755.0 is indicated by the horizontal arrow. The synthesised beam size is shown in the bottom left hand corner of image.} \label{fig:IDDF} \end{center} \end{figure} \subsection{Optical spectroscopic observations} SDSS~J123501.52$+$531755.0 was observed with the Nordic Optical Telescope on March 25 and March 26 2018 for a total integration time of 5400 sec. We used the Andalucia Faint Object Spectrograph and Camera (AlFOSC) and a 1.3 arcsec wide longslit and grism 4 with 300 rules per millimetre, providing a spectral resolution of 280 and a useful spectral range of 3800 to 9100~\AA. The slit was placed at a parallactic angle of 60 degrees east of north on both nights at the onset of integration. The airmass ranged from 1.20 to 1.15. The observing conditions were poor with a variable seeing above 2 arcsec and with passing clouds. Despite this, we clearly detected several emission lines (Figure~\ref{fig:redshift}) consistent with a mean redshift of $0.3448\pm0.0003$ (1-sigma error). The [O{\sc ii}] and [O{\sc iii}] images have a peculiar morphology extending away from the continuum source to the northern side of the galaxy. In particular, [O{\sc iii}]\,$\lambda$5008\,\AA~can be traced over 4 arcseconds below the continuum trace (20 kpc at $z=0.34$). This indicates the presence of an extended emission line region. 
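To make the RM synthesis step concrete, the following is a minimal single-pixel sketch of computing a Faraday dispersion function, assuming the uniform-weight form of \cite{bdb2005}; the arrays \texttt{q}, \texttt{u} (per-channel Stokes values for one pixel), \texttt{freq} (channel frequencies in Hz) and the noise estimate \texttt{sigma\_qu} are assumed inputs, and this is an illustration rather than the {\sc pyrmsynth} implementation itself.

\begin{verbatim}
# Minimal RM synthesis sketch for a single pixel (uniform channel weights).
import numpy as np

c = 2.99792458e8                       # speed of light [m/s]
lam2 = (c / freq) ** 2                 # wavelength-squared per channel [m^2]
lam2_0 = lam2.mean()                   # reference wavelength-squared
p = q + 1j * u                         # complex polarisation per channel

phi = np.arange(-150.0, 150.0 + 0.15, 0.15)   # Faraday depth axis [rad/m^2]
fdf = np.array([np.mean(p * np.exp(-2j * ph * (lam2 - lam2_0)))
                for ph in phi])        # F(phi), the Faraday dispersion function

peak = np.argmax(np.abs(fdf))
rm = phi[peak]                         # RM of a Faraday-thin component
snr = np.abs(fdf[peak]) / sigma_qu     # sigma_qu from the outer 20% of phi
rm_err = 1.16 / (2.0 * snr)            # RM resolution / (2 x S/N)
\end{verbatim}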
\begin{figure*} \begin{center} \includegraphics[width=0.49\linewidth]{GRG_RM_insets_plasma_r.eps} \includegraphics[width=0.49\linewidth]{GRG_pol_insets_viridis_inferno.eps} \caption{Left image: Main image: Faraday rotation measure distribution (colour scale) of the north-west (NW) and south-east (SE) lobe regions that are detected above the threshold of $8\sigma_{QU}$, overlaid by the total intensity contours starting at 5~mJy~beam$^{-1}$ and increasing in factors of two. Insets: The absolute value of the {\sc rmclean} Faraday dispersion function for the brightest polarised pixel in the NW lobe (top) and SE lobe (bottom). Right image: Main image: polarized intensity greyscale, in mJy~beam$^{-1}$, overlaid by the total intensity contours. Insets: degree of polarisation colourscale (in per cent) from zoomed in regions of the NW and SE lobes.} \label{fig:RMfpol} \end{center} \end{figure*} \section{Results} \label{sec:results} \subsection{Radio morphology of \grg}\label{sec:stokesi} Figure~\ref{fig:IDDF} shows the total intensity image at 6\arcsec~resolution from the LoTSS direction-dependent calibrated data \citep{shimwell2018}. This provides the best radio image to date for this source, enabling an unambiguous host galaxy identification with SDSS~J123501.52$+$531755.0. The noise level in the image ranges from $\sim$70~$\mu$Jy~beam$^{-1}$ in areas away from bright sources to $\sim$100~$\mu$Jy~beam$^{-1}$ near the hotspots/lobes. The core of this FRII radio galaxy, located at J2000 12$^{\rm{h}}$35$^{\rm{m}}$01$\fs$5, $+53$\degr 17$\arcmin$55$\arcsec$, has an integrated flux density of $\sim$1.1~mJy at 144 MHz and 1.4 GHz \citep[FIRST;][]{becker1995} suggesting a flat spectrum. However, the core is also detected in the VLASS\footnote{https://archive-new.nrao.edu/vlass/} Quick-Look (QL) image at 3~GHz ($\sim$2.9~mJy) and the 9C catalogue \citep{waldram2010} at 15 GHz ($\sim$4~mJy) indicating an inverted spectral index of $\alpha_{\rm core}\sim+0.3$ when combined with the LoTSS core flux density. As the LoTSS, VLASS and 9C observations are closest in time, we consider the core to have an inverted spectral index, with time variability explaining the lower than expected flux density from FIRST at 1.4~GHz. There is also a faint hint of a jet connecting the host with the south-east (SE) lobe. If this is real, then it suggests that the SE jet and lobe are orientated slightly towards us on the sky. Using the $3\sigma$ contour to define the lobe edges, we find the lobes have a width of $\sim$83\arcsec~and $\sim$94\arcsec, giving an axial ratio of $\sim$4.4 for the north-west (NW) lobe and $\sim$3.3 for the SE lobe, respectively. This is consistent with the typical axial ratios from 2 to 7 for the lobes of most (smaller) GRGs \citep[e.g.][]{machalski2006}. In Table~\ref{tab:fluxes}, we compile the integrated flux densities of the NW and SE lobes and hotspots from both current and archival data. The integrated flux densities of the NW lobe and hotspot are slightly higher than the SE lobe and hotspot at 144~MHz, with both having spectral index values of $\alpha_{\rm lobe}\sim-0.8$. The NW hotspot is resolved into primary and secondary hotspot regions in the VLASS at 3 GHz ($2.4\arcsec\times2.1\arcsec$ beam), while the SE hotspot maintains a single component. The straight-line distance from the core to the NW hotspot is $\sim$365\arcsec~(1.84~Mpc), compared to $\sim$311\arcsec~(1.56~Mpc) from the core to the SE hotspot, giving a lobe length ratio of 1.17. 
The inferred jet-misalignment (from co-linearity) of $\sim13.6\degr$ is most likely due to bending of the NW and/or SE jets on large scales, as is sometimes observed in other FRII radio sources \citep{black1992}. We expect that the lobe-length asymmetry and jet-misalignment are caused by interactions between the jet and the external environment on large scales, as opposed to light travel time effects \citep{longair1979}. Asymmetries in the jet and lobe lengths of GRGs are often attributed to interactions with the large scale structure environment \citep{pirya2012,malarecki2015}. The advancing NW jet may be influenced by a nearby filament (see Section~\ref{sec:filaments} and the filament in the $z\sim0.335$ slice), although deeper optical spectroscopic observations would be required to determine whether or not this filament is indeed close enough in redshift to that of the host galaxy to have an influence. \subsection{Faraday rotation measure distribution}\label{sec:RM} Figure~\ref{fig:RMfpol} shows the RM distribution for \grg, using an $8\sigma_{QU}$ threshold, overlaid by Stokes $I$ contours at the same angular resolution. The Faraday dispersion functions for the brightest pixel in polarised intensity in each lobe are also shown, with a red cross marking the peak polarisation at which the RM was found. Other peaks in the spectrum are either noise peaks or related to the instrumental polarisation near RM~$\sim0$\rad. The RM distributions of each lobe are shown in Figure~\ref{fig:RMhist}. The mean and standard deviation of the RM are $+7.42$\rad~and 0.07\rad~for the NW lobe, and $+9.92$\rad~and 0.11\rad~for the SE lobe, respectively. The median RM errors for the NW and SE lobe regions are 0.04\rad~and 0.06\rad. The mean RM difference between the lobes of $2.5\pm0.1$\rad~is thus highly significant. At the angular separation of the lobes (11\arcmin), systematic errors in the ionospheric RM correction would affect both lobes equally and thus do not contribute to the RM difference between the lobes. We can estimate the significance of the small RM variations within each lobe accounting for the number of pixels in each synthesised beam following \cite{leahy1986}, where a reduced-chi-squared of $\sim$1 is expected if noise errors dominate the RM fluctuations. We find no evidence for the detection of significant RM variations across the NW lobe, with a reduced-chi-squared of 1.1. However, a reduced-chi-squared of 1.8 provides evidence, at a level of $\sim$1.35$\sigma$, for RM variations across the SE lobe of $\sim$0.1\rad. \begin{figure} \begin{center} \includegraphics[width=0.93\linewidth,clip=true,trim=0.5cm 0.0cm 1.0cm 1.0cm]{RMhist_N_gauss.eps} \includegraphics[width=0.93\linewidth,clip=true,trim=0.5cm 0.0cm 1.0cm 1.0cm]{RMhist_S_gauss.eps} \caption{Histograms of the RM distribution from the north-west lobe (top) and south-east lobe (bottom) regions of \grg. The red dashed line shows a Gaussian distribution with the same mean and standard deviation as the observed data. } \label{fig:RMhist} \end{center} \end{figure} \subsection{Faraday depolarisation}\label{sec:depol} The polarised intensity and degree of polarisation distributions are shown in Figure~\ref{fig:RMfpol}. The NW lobe is much brighter with a peak polarised intensity of 6.5~mJy~beam$^{-1}$ (coincident with the hotspot) and a degree of polarisation of 4.9\% at that location (ranging from 1.2\% to 5.1\% across the detected emission). The SE lobe is fainter with a peak polarised intensity of 1.1~mJy~beam$^{-1}$. 
The degree of polarisation at that location is 2.8\%, and it ranges from 1.1 to 3.3\% across the lobe. The non-detection of polarised emission from the SE hotspot is likely due to intrinsic non-uniform field structures and Faraday depolarisation on scales smaller than the resolution of our observations. The fainter, extended lobe emission would have to be $\gtrsim10\%$ polarised to be detected in these observations. In order to estimate the amount of depolarisation between 1.4 GHz and 144 MHz, the LoTSS data were compared with those of the NRAO VLA Sky Survey \citep[NVSS;][]{condon1998}. To determine the degree of polarisation at the same angular resolution as the NVSS survey, the RM pipeline was re-applied to the LoTSS data imaged at a lower angular resolution of $\sim$45\arcsec. At the peak polarised intensity location in the NW lobe of the LOFAR image, matched to the NVSS resolution, the degree of polarisation is $4.0\pm0.3$\%. At the same location in the NVSS image at 1.4 GHz, the degree of polarisation is $6.4\pm1.4$\%. This gives a depolarisation factor of ${\rm DP}_{1400}^{144}\sim0.6$, where ${\rm DP}_{1400}^{144}$ is the degree of polarisation at 144 MHz divided by the degree of polarisation at 1.4 GHz. Assuming the commonly used external Faraday dispersion model for depolarisation, $p(\lambda)\propto{\rm e}^{-2\sigma_{\rm RM}^2\lambda^4}$ \citep{burn1966}, provides a value of $\sigmaRM\sim0.1$\rad. For the SE lobe, the degree of polarisation at the peak polarised intensity at 144~MHz is $1.8\pm0.7$\% (at 45\arcsec~resolution) and $10.1\pm2.1$\% at the same location at 1.4~GHz. This gives ${\rm DP}_{1400}^{144}\sim0.2$, corresponding to larger amounts of depolarisation than in the NW lobe. In the case of external Faraday dispersion, this corresponds to $\sigmaRM\sim0.2$\rad. The observed difference in depolarisation between the NW and SE lobes may be due to the different location within each lobe from which the polarised emission arises. In the case of the NW lobe, the peak polarised emission is coincident with the hotspot location, whereas in the SE lobe, the peak polarised emission is significantly offset from the hotspot ($\sim$40\arcsec~away, in the bridge emission, with the offset also present in the NVSS images). Furthermore, from the non-detection of polarisation in the SE hotspot at 144~MHz, with a degree of polarisation $<0.35$\%, we can place a lower limit on the Faraday depolarisation at this location of $\sigmaRM\sim0.25$\rad, based on comparison with the NVSS degree of polarisation of $\sim$5\% at this location. From inspection of the VLASS QL image at 3~GHz, the physical extent of the NW hotspot ($\sim$2.4\arcsec) is smaller compared to the SE lobe region (of order 20\arcsec~in size) and thus less affected by depolarisation caused by RM variations within the synthesised beam at 144~MHz. Since the amount of depolarisation scales roughly as the square-root of the number of Faraday rotation cells, this could reasonably explain the difference in the observed depolarisation between the lobes. However, the enhanced depolarisation at the location of the SE hotspot is more difficult to explain and may indicate a significant interaction between the hotspot/lobe magnetic field and the ambient medium. This warrants further investigation with more sensitive observations at low frequencies. 
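The $\sigmaRM$ values quoted in this subsection follow directly from inverting the external Faraday dispersion law between the two observing bands; as a check, the minimal sketch below reproduces the $\sim$0.1 and $\sim$0.2\rad~figures from the measured depolarisation factors.

\begin{verbatim}
# Sketch: invert p(lam) ~ exp(-2 sigma_RM^2 lam^4) for the two bands.
import numpy as np

c = 2.99792458e8
lam_144, lam_1400 = c / 144e6, c / 1.4e9     # wavelengths [m]

def sigma_rm(dp):
    """DP = p(144 MHz) / p(1.4 GHz); returns sigma_RM in rad/m^2."""
    return np.sqrt(np.log(1.0 / dp) / (2.0 * (lam_144**4 - lam_1400**4)))

print(sigma_rm(0.6))   # NW lobe: ~0.1 rad/m^2
print(sigma_rm(0.2))   # SE lobe: ~0.2 rad/m^2
\end{verbatim}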
Overall, given the small amount of observed Faraday depolarisation, it is important to consider the accuracy of the correction for Faraday rotation from the ionosphere. \cite{vaneck2018} estimate a residual error in the ionosphere RM correction between observations of 0.05\rad. As the ionosphere RM corrections across an observation (i.e.~8 hours) are linearly interpolated in time between direct estimates every 2 hours, a rough estimate can be made for the residual error within the observation of $\sim$$0.05\sqrt{4}\sim0.1$\rad. This means that most (or all) of the observed depolarisation in the NW hotspot is possibly due to residual errors in the ionospheric RM correction. However, the difference in depolarisation between the NW hotspot and SE lobe cannot be explained by ionosphere RM errors. Therefore, a $\sigmaRM$ of at least $\sim$0.1\rad~in the SE lobe can be considered astrophysically meaningful. This is comparable to the RM variations across the SE lobe of $\sim0.1$\rad~found in Section~\ref{sec:RM}. \begin{table} \scriptsize{ \caption{Archival and measured flux densities, as well as the best-fit flux densities (in the self-consistent, s.c., fits) for the north-west and south-east lobes of \grg.} \begin{tabular*}{91mm}{lrclrcl} \hline \hline &$<$----------& N-lobe &-------$>$&$<$----------& S-lobe& -------$>$ \\ Freq. & Entire Lobe & Hotspots & s.c. fit & Entire Lobe & Hotspots & s.c. fit \\ (MHz) & [mJy] & [mJy] & [mJy] & [mJy] & [mJy] & [mJy] \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline \\ 143.6$^{(9)}$ & 403$\pm$40 &151$\pm$21 & 356.6 & 378$\pm$40 & 132$\pm$25 & 345.3 \\ 151$^{(1)}$ & 350$\pm$52 & & 344.4 & 320$\pm$52 & & 333.3 \\ 151$^{(2)}$ & 375$\pm$32 & & 344.4 & 302$\pm$31 & & 333.3 \\ 325$^{(3)}$ & 177$\pm$36 & & 193.0 & 149$\pm$36 & & 185.1 \\ 325$^{(9)}$ & 154$\pm$58 & & 193.0 & 153$\pm$58 & & 185.1 \\ 408$^{(4)}$ & 160$\pm$40 & & 160.6 & 145$\pm$34 & & 153.2 \\ 1400$^{(5)}$ & 59$\pm$4 & & 55.9 & 50$\pm$2 & & 51.0 \\ 1400$^{(9)}$ & 55$\pm$19 & 36$\pm$4 & 55.9 & 47$\pm$19 & 33$\pm$5 & 51.0 \\ 2980$^{(7)}$ & & 21$\pm$3 & & & 20$\pm$3 \\ 4850$^{(6)}$ & 21$\pm$4 & & 18.2 & 18.4$\pm$4 & & 15.6 \\ 15200$^{(8)}$ & (5.2$\pm$2) & 5.2$\pm$1 & 6.3 & (6.6$\pm$2) & 6.6$\pm$1 & 5.1 \\ \\ \hline \label{tab:fluxes} \end{tabular*}\\ {\bf References.} (1) 6C3 \citep{hales1990}; (2) 7Cn \citep{riley1999}; (3) WENSS \citep{rengelink1997}; (4) B3.3 \citep{pedani1999}; (5) NVSS \citep{condon1998}; (6) GB6 \citep{gregory1996}; (7) VLASS (Lacy et al.~in prep.); (8) 9Cc \citep{waldram2010}; (9) this paper. } \end{table} \begin{figure} \begin{center} \includegraphics[width=1\columnwidth,clip=true,trim=0.5cm 3.0cm 2.0cm 1.5cm]{J1235rev.eps} \caption{DYNAGE fits (solid lines) to the total intensity spectra of the north-west and south-east lobes (open circles), and the spectral points of the hotspot regions (filled dots; not used in the fits). Note that the north-west lobe flux density scale is shifted one decade up in relation to the given ordinate scale. } \label{fig:dynage} \end{center} \end{figure} \subsection{Dynamical modelling}\label{sec:dyno} In order to decouple the properties of the electron density and magnetic field along the line of sight in the measured Faraday rotation and depolarisation, additional information is required on the physical characteristics of \grg~(i.e.~the magnetic field strength of the emission region) and the properties of its surrounding environment (i.e.~the ambient gas density). 
These properties can be estimated through dynamical modelling of the radio lobes, while simultaneously accounting for energy losses of relativistic particles (electrons and positrons) injected into the expanding lobes by the relativistic jets \citep[e.g.][and references therein]{machalski2011,machalski2016}. This is important because we lack X-ray data that could constrain the properties of the external medium \citep[e.g.][]{ineson2017} and/or the magnetic field strength of the hotspots and lobes without the need to assume equipartition between the radiating particles and the magnetic field \citep[e.g.][]{mingo2017}. Therefore, here we apply the evolutionary DYNAGE code of \cite{machalski2007} to the radio lobes of \grg, primarily to obtain an estimate of the external gas density, as well as estimates for the magnetic field strength of the lobes. The fitting procedure is performed separately for each lobe using the observational data given in Section~\ref{sec:stokesi}, together with the radio luminosities calculated from the flux densities listed in Table~\ref{tab:fluxes}. The input model parameters that are assumed are given in Table~\ref{tab:input}. Characteristic of almost all FRII sources is a modest asymmetry in the length and radio luminosity of the lobes. Therefore, as might be expected, the DYNAGE results for the jet power $Q_{\rm j}$, the central density of the external medium $\rho_{0}$, and other physical parameters can appear different for the two lobes of the same source. This aspect has been analysed by \cite{machalski2009} and \cite{machalski2011} for a sample of thirty GRGs. While some of the differences were within the uncertainties of the fitted values for the model parameters, significant differences were possible in cases where the evolution of the magnetic field and/or the various energy losses and acceleration processes of the relativistic particles differ between the hotspots of the opposite lobes. Alternatively, such differences, especially in GRGs, may reflect different external conditions well beyond the host galaxy and cluster/group environment. Following \cite{machalski2009}, we averaged the values of $Q_{\rm j}$ and $\rho_{0}$ initially found in the `independent solution' and treated them as fixed parameters in the `self-consistent' model, denoted $\langle Q_{\rm j}\rangle$ and $\langle\rho_{0}\rangle$, respectively. The new values of the slope of the ambient density distribution ($\beta$) and the age ($t$) for the NW and SE lobes are denoted as $\beta_{\rm s.c.}$ and $t_{\rm s.c.}$ (Table~\ref{tab:output}). The DYNAGE fits to the observed data points are shown with solid lines in Figure~\ref{fig:dynage}. Table~\ref{tab:output} presents the derived physical properties of the lobes, including a minimum-energy magnetic field strength in the lobes of $B_{\rm me}\sim1$~$\mu$G and an external density of $\sim2\times10^{-31}$~g~\cc~(i.e.~$n_{\rm e}\sim10^{-7}$\cc). This density is similar to the mean density of the Universe assuming half the baryons are in the WHIM \citep{machalski2011}, and implies that the radio lobes are likely propagating into a low-density region of the Universe. We also used the synchrotron minimum energy (equipartition) magnetic field formulation of \cite{worrallbirkinshaw2006} to estimate the lobe magnetic field strength. From this we find an equipartition magnetic field strength that is 2.6 times higher than the 1~$\mu$G derived from the dynamical modelling (for $\gamma_{\rm min}=10$).
When calculated in this manner, the lobe equipartition field strength is usually found to be overestimated, by a factor of 2 to 3, compared to that found from X-ray Inverse Compton observations of lobes \citep[e.g.][]{ineson2017,mingo2017}. This highlights some of the uncertainties in the calculation of equipartition magnetic field strengths in radio galaxies \citep[e.g.][]{beckkrause2005, konar2008}. Here we adopt the lobe magnetic field strength obtained from the dynamical modelling as it takes into account more physical effects, such as the jet power, adiabatic expansion and the age of the lobes.
\begin{table}
\scriptsize{
\caption{Dynamical modelling input model parameters}
\begin{tabular*}{75mm}{lcc}
\hline
\hline
Parameter & Symbol & Value \\
\hspace{5mm}(1)&\hspace{-2mm} (2) & (3) \\
\hline \\
{\bf Set:} \\
Adiabatic index of the lobes' material & $\Gamma_{\rm lb}$ & 4/3 \\
Adiabatic index of the ambient medium & $\Gamma_{\rm x}$ & 5/3 \\
Adiabatic index of the lobes' magnetic field & $\Gamma_{\rm B}$ & 4/3 \\
Minimum electron Lorentz factor (injected) & $\gamma_{\rm min}$ & 1 \\
Maximum electron Lorentz factor (injected) & $\gamma_{\rm max}$ & 10$^{7}$ \\
Core radius of power-law \\
\hspace{15mm}ambient density distribution & $a_{0}$ & 10\,kpc \\
Initial slope of power-law \\
\hspace{15mm}ambient density distribution & $\beta$ & 1.5 \\
Thermal particles within the lobes & $k$ & 0 \\
Jet viewing angle & $\theta$ & 90$\degr$ \\
\\
{\bf Free:} \\
Jet power & $Q_{\rm j}$ [erg\,s$^{-1}$] \\
External density at core radius & $\rho_{0}$ [g\,cm$^{-3}$] \\
Exponent of initial power-law energy \\
\hspace{10mm}distribution of relativistic particles & $p=1+2\alpha_{\rm inj}$ \\
Source (lobe) age & $t$ [Myr] \\
\hline
\label{tab:input}
\end{tabular*}
}
\end{table}
\begin{table}
\scriptsize{
\caption{Fitted values of the model free-parameters in the `self-consistent' dynamical modelling solution}
\begin{tabular*}{90mm}{lccc}
\hline
\hline
Parameter & Symbol & Value & Value \\
 & & for N-lobe & for S-lobe \\
\hspace{5mm}(1) & (2) & (3) & (4) \\
\hline \\
Initial effective spectral index & $\alpha_{\rm inj}$ & $-0.45\pm0.05$ & $-0.52\pm0.03$ \\
Source (lobe) age [Myr] & $t_{\rm s.c.}$ & 95$\pm$23 & 80$\pm$16 \\
Jet power [$\times 10^{45}$erg\,s$^{-1}$] & $\langle Q_{\rm j}\rangle$ & 1.1$\pm$0.1 & 1.1$\pm$0.1 \\
Core density [$\times 10^{-28}$g\,cm$^{-3}$] & $\langle\rho_{0}\rangle$ & 4.7$\pm$0.4 & 4.7$\pm$0.4 \\
Slope of ambient density distribution & $\beta_{\rm s.c.}$ & 1.431 & 1.613 \\
External density [$\times 10^{-31}$g\,cm$^{-3}$] & $\rho(D)$ & 2.8$\pm$1.1 & 1.4$\pm$0.7 \\
Lobe pressure [$\times 10^{-14}$dyn\,cm$^{-2}$] & $p_{\rm lb}$ & 3.0$\pm$0.1 & 3.1$\pm$0.1 \\
Minimum energy magnetic field [$\mu$G] & $B_{\rm me}$ & 1.0$\pm$0.2 & 1.0$\pm$0.2 \\
Longitudinal expansion speed & $v_{\rm h}/c$ & 0.05$\pm$0.02 & 0.06$\pm$0.02 \\
\\
\hline
\label{tab:output}
\end{tabular*}
}
\end{table}
\section{Interpretation}
\label{sec:discuss}
The difference in the mean RM between the NW and SE lobes is $2.5\pm0.1$\rad. This may be due to variations in the Galactic RM (GRM) on scales of $\sim$11\arcmin, differences in the magnetoionic material of the intergalactic medium on large scales, and/or line-of-sight path length differences towards either lobe. The observed Faraday depolarisation of $\sigmaRM\sim0.1$\rad~associated with the SE lobe could be due to small-scale fluctuations of the magnetic field in the local external medium and/or to Faraday rotation internal to the source.
Constraining the likelihood of these possibilities requires some consideration of the expected variations in the GRM, knowledge of the geometry and physical properties of the radio lobes, and details of the environment surrounding the radio galaxy and in the foreground.
\subsection{Galactic RM variations}
\label{sec:GRM}
The reconstruction of the GRM by \cite{oppermann2012,oppermann2015} gives $+14.8\pm4.5$\rad~across both the NW and SE lobes (the Galactic coordinates of \grg~are $l=128.46\degr$, $b=63.65\degr$). This is higher than the mean RMs of $+7.4$ and $+9.9$\rad~found for the NW and SE lobes, respectively. However, it should be kept in mind that the LoTSS RM values have been corrected for the time-variable ionosphere RM ($+1.6$ to $+1.9$\rad), while the catalogue from which the GRM map is mainly made \citep{taylor2009} does not have this correction applied. Thus, the RMs of the NW and SE lobes are within the 2-sigma and 1-sigma errors of the GRM, respectively. The variation in the GRM map for three adjacent pixels (in the direction of the largest gradient) across the source is $\sim2.2$\rad~(on a scale of $\sim$1~deg). As the GRM map has a resolution of $\sim$1 degree, which is the typical spacing of extragalactic sources in the \cite{taylor2009} catalogue, it cannot be used to probe RM variations on smaller scales. The true GRM variation on smaller scales at this location is unknown, but RM structure function analyses for GRM variations at high Galactic latitudes have probed scales smaller than 1 degree in both observations \citep[e.g.][]{mao2010, stil2011} and simulations \citep[e.g.][]{sunreich2009}. In particular, using the results from \cite{stil2011}, we find that GRM variations ranging from approximately 3\rad~to 13\rad~are possible on angular scales of $\sim$11\arcmin, depending on the highly uncertain slope of the RM structure function on angular scales less than 1 degree. Better estimates of the GRM are required to reliably remove the GRM and its variation across the extent of \grg.
\subsection{Local environment RM contribution}
\label{sec:localRM}
The hot gas in rich groups and clusters is known to be magnetised from observations of synchrotron radio halos and relics, as well as Faraday rotation observations of embedded and background radio sources \citep[see][and references therein]{carillitaylor2002}. For radio galaxy lobes that have not expanded significantly beyond their host galaxy or cluster/group environment, the Laing-Garrington effect is often present \citep{laing1988,garrington1988,garringtonconway1991}, whereby the polarised emission from the counter-lobe travels through a greater amount of magnetoionic material and thus incurs a larger amount of Faraday depolarisation. However, as the lobes of \grg~are expected to be orientated close to the plane of the sky and extend well outside the influence of the group/cluster environment, the Laing-Garrington effect is not expected to be strong \citep[e.g.][]{laingbridle2014}. Additionally, if the faint collimated emission SE of the host is indeed a jet, then the larger amount of depolarisation towards the SE lobe is opposite to that expected for the Laing-Garrington effect. Models of the variations in RM across radio galaxies in groups and clusters are typically constructed assuming turbulent magnetic field fluctuations over a range of scales embedded in a spherically-symmetric gas halo whose radial density profile is derived from X-ray observations \citep[e.g.][]{guidetti2008}.
For \grg~we do not have X-ray data to constrain the properties of the hot gas environment, although it is likely that the red-sequence host galaxy is close to the centre of a poor cluster \citep{hao2010}. Therefore, we attempt to estimate the density and field strength required to self-consistently explain the mean RM and depolarisation \citep[e.g.][]{murgia2004}, for a single-scale model of a randomly orientated field structure \citep{felten1996}. In reality, the magnetic field will fluctuate on a range of scales, from an inner scale to an outer scale \citep{ensslinvogt2003}, but a single-scale model can provide a reasonable approximation to the RM variations if the scale length is interpreted as the correlation length of the magnetic field \citep[see][section 4.4 for details]{murgia2004}. An appropriate gas density profile, $n(r)$, for a galaxy group or cluster is a ``beta-profile'', where $n(r)=n_0(1+r^2/r_c^2)^{-3\beta/2}$. We assume that the magnetic field strength scales linearly with the gas density, $B(r)=B_0 n(r)/n_0$, where $B_0$ is the central magnetic field strength \citep[e.g.][]{dolag2001,laing2006,vacca2012, govoni2017}. Values of $n_0\sim10^{-3}$\cc, $r_c\sim100$~kpc and $\beta\sim0.5$ are not unreasonable for a poor cluster \citep[e.g.][]{laing2008, bonafede2010, guidetti2012}. The choice of these parameters is arbitrary given our limited information about the environment of the host galaxy (Section~\ref{sec:introduction}), but we use them simply as a plausible example. Following \citet[][eqn.~15]{murgia2004}, we find that a Faraday dispersion of $\sigmaRM\sim0.1$\rad~at $r\sim1.5$~Mpc requires $B_0\sim5$~$\mu$G with a magnetic field correlation length of $\sim$25~kpc. This implies an ambient density of $\sim$$1.7\times10^{-5}$\cc~and a field strength of $B\sim0.09$~$\mu$G at the location of the hotspots.\footnote{For comparison, using a simple model with a constant electron number density of $n_e\sim10^{-5}$\cc~and constant magnetic field strength of $B_{||}\sim0.1$~$\mu$G, with a magnetic field reversal scale of $l\sim20$~kpc over a total path length of $L\sim1$~Mpc gives $\sigmaRM\sim0.81n_e\,B_{||}\,\sqrt{l\,L} \sim 0.1$\rad.} Using these values and a large outer scale for the magnetic field fluctuations of 500~kpc \citep{vacca2010} gives a mean $|$RM$|$ of $\sim$0.4\rad. Therefore, while we can reasonably explain $\sigmaRM\sim0.1$\rad~at $r\sim1.5$~Mpc, we cannot self-consistently explain the large mean RM excess of $\sim$2.5\rad, even for a large outer scale of turbulence in the magnetic field power spectrum \citep[][]{ensslinvogt2003,murgia2004}. Note that the outer scale is mainly responsible for the observed mean RM and the inner scale for the value of $\sigmaRM$; we used a large outer scale here to show that this model cannot self-consistently explain both $\sigmaRM$ and the mean RM. Draping of the ambient field, in addition to compression of the ambient magnetoionic gas, could enhance the mean RM near the surface of the lobes \citep{guidetti2011, guidetti2012}, and may also help explain the higher depolarisation of $\sigmaRM\gtrsim0.15$\rad~at the location of the SE hotspot. Enhancements in the field strength and gas density by factors of 4 over a path length of $\sim$50~kpc outside the lobes could produce an additional $|$RM$|$ of $\sim$0.5\rad. More sensitive observations at high angular resolution are required to determine if such ordered field structures are indeed present.
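The quoted numbers can be checked with a short calculation. The following sketch (function names ours, fiducial parameter values as above) evaluates the beta-profile density and the linearly scaled field at $r=1.5$~Mpc, together with the constant-density estimate of the footnote, $\sigmaRM\sim0.81\,n_e\,B_{||}\,\sqrt{l\,L}$:
\begin{verbatim}
import numpy as np

def beta_profile(r_kpc, n0=1e-3, rc_kpc=100.0, beta=0.5):
    # n(r) = n0 (1 + r^2/rc^2)^(-3*beta/2)  [cm^-3]
    return n0 * (1.0 + (r_kpc / rc_kpc) ** 2) ** (-1.5 * beta)

def sigma_rm(n_e, b_par_uG, l_kpc, L_kpc):
    # sigma_RM ~ 0.81 n_e B_par sqrt(l L)  [rad/m^2],
    # with n_e in cm^-3, B in microgauss, and l, L converted to pc
    return 0.81 * n_e * b_par_uG * np.sqrt((l_kpc * 1e3) * (L_kpc * 1e3))

n_e = beta_profile(1500.0)   # ~1.7e-5 cm^-3 at r = 1.5 Mpc
B = 5.0 * n_e / 1e-3         # B(r) = B0 n(r)/n0 with B0 = 5 uG -> ~0.09 uG
print(n_e, B)
print(sigma_rm(1e-5, 0.1, 20.0, 1000.0))  # footnote values -> ~0.1 rad/m^2
\end{verbatim}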
We note that the external gas density used here is two orders of magnitude higher than estimated from the dynamical modelling. This means that either the observed depolarisation does not occur in the external medium local to the source or that the dynamical modelling is severely underestimating the external density. Such low density gas may be challenging to detect in X-rays, but extrapolation of an X-ray profile from the inner region would be very instructive. In general, comparison with simulations of the propagation of large scale jets within a realistic cosmological environment may provide the best avenue for progress in this area \citep[e.g.][]{huarte2011, hardcastlekrause2014, turnershabala2015, english2016, vazza2017}. \subsection{Internal Faraday depolarisation} Our observations are insensitive to polarised emission from RM structures broader than $\sim1$\rad~(Section~\ref{sec:rmspecs}). Therefore, the large amounts of internal Faraday rotation required to explain the mean RM excess are ruled out. However, it is worth considering if the small amount of Faraday depolarisation ($\sigmaRM\sim0.1$\rad) can be explained by Faraday rotating material mixed with the synchrotron emitting material in the lobes. One of the most commonly used magnetic field models for the lobes of extragalactic sources is one where the field is highly tangled on small scales, with the observed appreciable degrees of polarisation produced due to stretching and compression \citep{laing1980}. Given the equipartition magnetic field strength of $\sim1$~$\mu$G within the lobes (Section~\ref{sec:dyno}), and as an illustrative example, we choose a thermal gas density internal to the lobes of $n_{\rm e}\sim 10^{-5}$\cc, with 500 field reversals through a lobe depth of $\sim$500~kpc, to produce $\sigmaRM\sim0.1$\rad~(using Eqn.~\ref{eqn:rm} and assuming $B_{||}=B/\sqrt{3}$). Observations at even lower frequencies would be required to resolve a Faraday depth width of 0.1\rad~in the Faraday spectrum (e.g.~using LOFAR observations down to at least 30~MHz, in combination with the data in this paper). In addition, broadband polarisation modelling would be needed to distinguish between internal and external Faraday depolarisation scenarios \citep[e.g.][]{anderson2018, osullivanlenc2018}. Using the LOFAR international baselines to obtain sub-arcsecond resolution would further enhance the ability to isolate different contributions by resolving the external RM variations across the emission region. For now, we can assess the likelihood of this scenario in terms of the implied energetics. For expected internal thermal gas temperatures of $\gtrsim$10~keV \citep{gitti2007}, the lobe thermal gas pressure is $p_{\rm th}\sim 2n_{\rm e}kT\sim 3\times10^{-13}$~dyn~cm$^{-2}$, which is an order of magnitude larger than the pressure from the synchrotron-emitting plasma in the lobes ($p_{\rm lb}$ in Table~\ref{tab:output}). This is inconsistent with expectations from studies of other FRII lobes \citep{croston2005,ineson2017}, and thus unlikely, unless the internal thermal gas is much cooler than assumed here. \subsection{RM contribution from large-scale structure} Significant asymmetries in the magnetoionic material in the foreground IGM, far from the local source environment, could also contribute to the observed mean RM difference between the lobes. 
Such variations could be caused by the magnetised component of the large scale structure (LSS) at low redshift, as \cite{ryu2008}, \cite{choryu2009} and \cite{akahoriryu2010} predict a root-mean-square RM ($\rm RM_{\rm rms}$) through LSS filaments of order 1\rad. In our case, the polarised emission of one lobe needs to pass through more foreground filaments than the other to explain the observed RM difference of 2.5\rad. Therefore, information is required on the location of LSS filaments with respect to the lines of sight probed by the polarised emission from the lobes of \grg. \subsubsection{Location of large scale structure filaments} \label{sec:filaments} The catalogue of \cite{chen2015,chen2016} provides a cosmic filament reconstruction from the SDSS data for 130 redshift slices in the range $0.05 < z < 0.7$. In Figure~\ref{fig:filaments}, we plot the location of the filaments that are in the foreground of \grg~(i.e.~at $z<0.34$). There are five filaments identified in different foreground redshift slices that pass through the field. We assign a thickness of 1~Mpc to each filament \citep{vazza2015} to determine which filaments most likely intersect lines of sight towards the polarized lobes (Figure~\ref{fig:filaments}). For a thickness of 1~Mpc, there are four filaments that cover the NW lobe and one filament that covers the SE lobe. Therefore, we estimate that there is an excess of three filaments covering the NW lobe. Considering different filament thicknesses results in different numbers of filaments covering each lobe, with an excess of filaments covering the NW lobe remaining for filaments up to a thickness of $\sim$3.8~Mpc (i.e.~the thickness above which the same number of filaments cover both lobes). In light of this result, we consider if the RM difference between the lobes can be explained by magnetised gas in these filaments. We note that there is no evidence of an individual intervening galaxy in the SDSS images that could explain the RM difference. \begin{figure} \begin{center} \includegraphics[width=1\columnwidth,clip=true,trim=0.0cm 0.0cm 0.4cm 0.0cm]{contours_GRGfilaments_dash.pdf} \caption{Location of foreground large-scale-structure filaments (lines) in relation to the background radio galaxy (contours) and its Faraday rotation measure (colour scale), as described in Fig.~\ref{fig:RMfpol}. The width of the lines corresponds to $\sim$1~Mpc at the redshift of the filament. } \label{fig:filaments} \end{center} \end{figure} \subsubsection{Magnetic field stength in filaments}\label{sec:meanbfield} To explain the RM difference between the lobes, an RM excess of $-2.5$~\rad~must be provided by the three extra filaments covering the NW lobe. Simulations suggest that the electron number density of LSS filaments can vary from $10^{-6}$ to $10^{-4}$\cc~ \citep{cenostriker2006,ryu2008,choryu2009,akahoriryu2010,vazza2015}, thus we adopt a mean electron density of $10^{-5}$\cc. \cite{akahoriryu2011} found a peak in the RM power spectrum, due to their simulated IGMF in filaments, on scales corresponding to a proper length of $\sim$3~Mpc, which they expect to correspond to the typical line-of-sight path through LSS filaments. 
Therefore, using a path length ($L$) of 3~Mpc through each filament and a coherence length ($l$) of 300~kpc \citep{choryu2009} leads to a magnetic field strength in the filaments ($B_{\rm LSS}$) of approximately
\begin{equation}\label{eqn:Brms}
B_{\rm LSS} \sim 0.3 \left( \frac{n_{\rm e}}{10^{-5}\,{\rm cm^{-3}}} \right)^{-1} \left( \frac{L}{3\,(3\,{\rm Mpc})} \, \frac{l}{300\,{\rm kpc}} \right)^{-1/2} \mu{\rm G},
\end{equation}
for $B_{\rm ||}=B_{\rm LSS}/\sqrt{3}$, where the factor of 3 in the denominator accounts for the three excess filaments along the line of sight. This estimate of the density-weighted IGMF strength of $\sim0.3$~$\mu$G has significant uncertainty given our limited knowledge of the particle number density of the gas in these filaments, as well as the observationally unconstrained coherence length of the field and the path length through each filament. Furthermore, this estimate cannot be treated as an upper limit, as a large Galactic RM variation across the source (Section~\ref{sec:GRM}) could make the difference in RM between the lobes even larger (since the RM can be positive or negative). Moreover, much larger RM variations are observed across radio relics which cannot be explained by Galactic RM variations, indicating the presence of large-scale ordered fields in the outskirts of galaxy clusters \citep[e.g.][]{kierdorf2017,loi2017}. Therefore, a better approach may be to compare directly with cosmological simulations of the RM contribution from such LSS filaments. These simulations suggest that the magnetic field strength in filaments could range from $\sim$1 to 100~nG \citep[e.g.][]{vazza2015}. Early hydrodynamic simulations by \cite{ryu2008} used a prescription to produce magnetic fields from the kinetic energy of turbulent gas flows (guided by expectations from small-scale magnetic dynamo simulations), which produced average IGMF strengths of $\sim10$~nG. Subsequent work by \cite{choryu2009} and \cite{akahoriryu2010,akahoriryu2011}, using the results of these simulations, provided estimates of the ``typical'' RM contribution from LSS filaments. The most relevant quantity for Faraday rotation is the gas density ($\rho$) weighted average of the strength of the magnetic field through the filaments, i.e.~$\langle (\rho B)^2 \rangle^{1/2} / \langle \rho^2 \rangle^{1/2}$, which gave a few $\times$~$0.1$~$\mu$G in the above simulations. From this, it was found that the root-mean-square RM (RM$_{\rm rms}$) through the filaments scales with the number of filaments ($N_{\rm f}$) as RM$_{\rm rms}\sim1.5N_{\rm f}^{1/2}$\rad, up to a saturation point that corresponds to $\sim$25 filaments for $z>1$. In the case of three filaments, the predicted RM$_{\rm rms}\sim2.6$\rad~is consistent with our observations (where we have an RM difference of 2.5\rad~between only two lines of sight, of which one passes through three additional filaments). Therefore, it can be argued that our results are consistent with the expected Faraday rotation signature from an average magnetic field strength in LSS filaments of $\sim10$~nG.
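As an illustration, the following sketch (assuming the fiducial values adopted above; function name ours) recovers the field strength of equation~(\ref{eqn:Brms}) from the observed RM difference, together with the predicted RM$_{\rm rms}$ for three filaments:
\begin{verbatim}
import numpy as np

def b_lss_uG(rm_rad_m2, n_e=1e-5, n_f=3, L_Mpc=3.0, l_kpc=300.0):
    # Random-walk estimate: |RM| = 0.81 n_e B_par sqrt(n_f L l),
    # with B_LSS = sqrt(3) B_par and lengths converted to pc.
    path_pc = n_f * L_Mpc * 1e6
    b_par = rm_rad_m2 / (0.81 * n_e * np.sqrt(path_pc * l_kpc * 1e3))
    return np.sqrt(3.0) * b_par

print(b_lss_uG(2.5))       # ~0.3 uG, as in the equation above
print(1.5 * np.sqrt(3.0))  # RM_rms ~ 1.5 sqrt(N_f) -> ~2.6 rad/m^2
\end{verbatim}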
We further investigated the above findings by direct comparison with recent MHD cosmological simulations, as described in \cite{va14mhd}. In particular, we analysed the RM distribution in the warm-hot gas simulated in a cosmic volume of $50^3\rm ~Mpc^3$, at a spatial resolution of $20~\rm kpc$ (comoving). To better compare with our observations, we generated a long integration cone for this volume, stacking several randomly oriented, mirrored replicas of the volume, covering the comoving distance out to $z=0.34$. In this way, we could measure the probability of obtaining a contribution as large as 2.5\rad~from LSS filaments for the \grg~observations at $z=0.34$. We found that this occurred in only 5\% of cases, for typical magnetisation values of $\sim$10 to 50~nG, amplified from an initial magnetic field strength of 1~nG, which was seeded at an early cosmological epoch and is in line with the upper limits given by the Planck satellite \citep[][]{PLANCK2015}. The probability was negligible for a significantly smaller seed field of 0.1~nG. Lower limits on the primordial field strength of $\sim$$10^{-16}$~G \citep{neronov2010} and $\sim$$10^{-20}$~G \citep{takahashi2013} imply that the true value may indeed be much lower. However, this is not the only possible scenario, as the LSS can be magnetised by a more ``astrophysical'' mechanism, such as galaxy feedback \citep[e.g.][for a recent review]{va17cqg}, or by a more efficient dynamo amplification of primordial fields \citep[][]{ryu2008} than is found in current MHD simulations. Therefore, from comparison with the MHD simulations, we consider it unlikely that the true RM contribution from the IGMF is as large as 2.5\rad; the observed RM excess is possibly dominated by other contributions along the line of sight, such as small-scale GRM variations (Section~\ref{sec:GRM}).
\section{Conclusions}
\label{sec:conclusion}
We have presented a linear polarisation and Faraday rotation study of a giant FRII radio galaxy, \grg, using data from the LOFAR Two-Metre Sky Survey \citep{shimwell2018}. After obtaining the spectroscopic redshift of the host galaxy (SDSS~J123501.52$+$531755.0, $z=0.3448\pm0.003$), we find that the radio galaxy has a projected linear extent of 3.4~Mpc. Both lobes are detected in polarisation, with a mean RM difference between the lobes of $2.5\pm0.1$\rad. Small amounts of Faraday depolarisation ($\sim0.1$\rad) are also detected. In the absence of direct tracers of the gas density on large scales, we employ dynamical modelling of the advancing hotspots to infer a particle number density of the ambient gas of $n_{\rm e}\sim10^{-7}$\cc. This implies that the radio galaxy is expanding into an underdense region of the Universe. However, explaining the observed Faraday depolarisation (which most likely occurs in the environment local to the source) requires $n_{\rm e}\sim10^{-5}$\cc~in combination with a turbulent magnetic field strength of $\sim$0.09~$\mu$G at a distance of $\sim$1.5~Mpc from the host galaxy. Therefore, either the dynamical modelling is underestimating the density of the external medium or the depolarisation does not occur in the local source environment. Simulations of the propagation of FRII jets to large scales within a realistic cosmological environment may help distinguish between these scenarios. In general, the estimated magnetic field strength is unable to account for the observed mean Faraday rotation difference of 2.5\rad~between the two lobes. Using a catalogue of large scale structure (LSS) filaments in the local universe derived from optical spectroscopic observations, we find an excess of filaments intersecting lines of sight towards the polarised emission of the NW lobe. Attributing the RM difference between the lobes to magnetised gas in these LSS filaments gives a density-weighted magnetic field strength of 0.3~$\mu$G (assuming $n_{\rm e}\sim10^{-5}$\cc, a line-of-sight path length through each filament of 3~Mpc, and a magnetic field coherence length of 300~kpc).
However, we find that predictions from cosmological simulations of the RM contribution from LSS filaments give a low probability ($\sim$5\%) for an RM contribution as large as 2.5\rad. This probability applies to the case of magnetic field strengths in the LSS filaments of 10 to 50 nG, which are amplified from primordial magnetic fields close to current upper limits from the CMB of $\sim$1~nG (the probability decreases to $\sim$0\% for weaker fields). Extrapolation of the observed variations in the Milky Way RM to 11\arcmin~scales (i.e.~the angular size of \grg) indicates that this likely contributes significantly to the mean RM difference; however, further observations are required to obtain better constraints. In the near future, large samples of RMs from radio galaxies with known redshifts will allow more advanced statistical analysis techniques to be used, such as RM structure function analyses \citep[e.g.][]{akahori2014} and cross-correlation with other tracers of LSS \citep[e.g.][]{stasyszyn2010, vernstrom2017, brown2017}. This will enable a better separation of the Faraday rotation due to our Galaxy \citep[e.g.][]{haverkorn2004,sunreich2009,mao2010,stil2011} from that due to the cosmic web, and put stronger constraints on the strength and structure of the intergalactic magnetic field.
\begin{acknowledgements}
This paper is based (in part) on data obtained with the International LOFAR Telescope (ILT) under project codes LC2\_038 and LC3\_008. LOFAR \citep{vanhaarlem2013} is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefitted from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Universit\'e d'Orl\'eans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland. SPO and MB acknowledge financial support from the Deutsche Forschungsgemeinschaft (DFG) under grant BR2026/23. Part of this work was carried out on the Dutch national e-infrastructure with the support of the SURF Cooperative through grants e-infra 160022 \& 160152. The LOFAR software and dedicated reduction packages on https://github.com/apmechev/GRID\_LRT were deployed on the e-infrastructure by the LOFAR e-infragroup, consisting of J. B. R. Oonk (ASTRON \& Leiden Observatory), A. P. Mechev (Leiden Observatory) and T. Shimwell (ASTRON) with support from N. Danezi (SURFsara) and C. Schrijvers (SURFsara). This research has made use of data analysed using the University of Hertfordshire high-performance computing facility (\url{http://uhhpc.herts.ac.uk/}) and the LOFAR-UK computing facility located at the University of Hertfordshire and supported by STFC [ST/P000096/1]. This research made use of Astropy, a community-developed core Python package for astronomy \citep{astropy2013} hosted at http://www.astropy.org/, of Matplotlib \citep{hunter2007}, of APLpy \citep{aplpy2012}, an open-source astronomical plotting package for Python hosted at http://aplpy.github.com/, and of TOPCAT, an interactive graphical viewer and editor for tabular data \citep{taylor2005}.
FV acknowledges financial support from the ERC Starting Grant ``MAGCOW'', no.~714196, and the usage of computational resources on the Piz-Daint supercluster at CSCS-ETHZ (Lugano, Switzerland) under projects s701 and s805. Based on observations made with the Nordic Optical Telescope, operated by the Nordic Optical Telescope Scientific Association at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias. KEH and JPUF acknowledge support by a Project Grant (162948-051) from The Icelandic Research Fund. The Cosmic Dawn Center is funded by the DNRF. RJvW acknowledges support from the ERC Advanced Investigator programme NewClusters 321271 and the VIDI research programme with project number 639.042.729, which is financed by the Netherlands Organisation for Scientific Research (NWO). HA benefited from grant DAIP \#66/2018 of Universidad de Guanajuato. KT is partially supported by JSPS KAKENHI Grant Numbers 16H05999 and 17H01110, MEXT KAKENHI Grant Number 15H05896, and Bilateral Joint Research Projects of JSPS. LKM acknowledges support from the Oxford Hintze Centre for Astrophysical Surveys, which is funded through generous support from the Hintze Family Charitable Foundation. This publication arises from research partly funded by the John Fell Oxford University Press (OUP) Research Fund. SPO thanks A.~G.~de Bruyn for stimulating discussions on the topic of this paper, and the referee for their helpful comments.
\end{acknowledgements}
\bibliographystyle{aa}
\section*{Significance statement}
Positive natural selection or local adaptation is the driving force behind the adaptation of individuals to their environment. To identify genomic regions responsible for local adaptation, we propose to consider the genetic markers that are the most strongly related to population structure. To uncover genetic structure, we consider principal component analysis, which identifies the primary axes of variation in the data. Our approach generalizes common approaches for genome scans based on measures of population differentiation. To validate our approach, we consider the human 1000 Genomes data and find well-known targets for positive selection as well as new candidate regions. We also find evidence of polygenic adaptation for two biological pathways related to the innate immune system and to lipid metabolism.
\section*{Introduction}
Because of the flood of genomic data, the ability to understand the genetic architecture of natural selection has dramatically increased. Of particular interest is the study of local positive selection, which explains why individuals are adapted to their local environment. In humans, the availability of genomic data fostered the identification of loci involved in positive selection \cite[]{sabeti07,barreiro08,pickrell09,grossman13}. Local positive selection tends to increase genetic differentiation, which can be measured by differences of allele frequencies between populations \cite[]{sabeti06,nielsen05,colonna14}. For instance, a mutation in the DARC gene that confers resistance to malaria is fixed in Sub-Saharan African populations whereas it is absent elsewhere \cite[]{hamblin02}. In addition to the variants that confer resistance to pathogens, genome scans also identify other genetic variants, many of which are involved in human metabolic phenotypes and morphological traits \cite[]{barreiro08,hancock10}. In order to provide a list of variants potentially involved in natural selection, genome scans compute measures of genetic differentiation between populations and consider that extreme values correspond to candidate regions \cite[]{luikart03}. The most widely used index of genetic differentiation is the $F_{ST}$ index, which measures the amount of genetic variation that is explained by variation between populations \cite[]{excoffier92}. However, the $F_{ST}$ statistic requires grouping individuals into populations, which can be problematic when ascertainment of population structure does not show well-separated clusters of individuals \cite[e.g.][]{novembre08b}. Other statistics related to $F_{ST}$ have been derived to reduce the false discovery rate obtained with $F_{ST}$, but they also work at the scale of populations \cite[]{bonhomme10,fariello13,gunther13}. Grouping individuals into populations can be subjective, and important signals of selection may be missed with an inadequate choice of populations \cite[]{yang12}. We have previously developed an individual-based approach for selection scans based on a Bayesian factor model, but the MCMC algorithm required for model fitting does not scale well to large data sets containing a million variants or more \cite[]{duforet14}. We propose to detect candidates for natural selection using principal component analysis (PCA). PCA is a technique of multivariate analysis used to ascertain population structure \cite[]{patterson06}. PCA decomposes the total genetic variation into $K$ axes of genetic variation called principal components.
In population genomics, the principal components can correspond to evolutionary processes such as evolutionary divergence between populations \cite[]{mcvean09}. Using simulations of an island model and of a model of population fission followed by isolation, we show that the common $F_{ST}$ statistic corresponds to the proportion of variation explained by the first $K$ principal components when $K$ has been properly chosen. With this point of view, the $F_{ST}$ of a given variant is obtained by summing the squared correlations of the first $K$ principal components, opening the door to new statistics for genome scans. At a genome-wide level, it is known that there is a relationship between $F_{ST}$ and PCA \cite[]{mcvean09}, and our simulations show that the relationship also applies at the level of a single variant. The advantages of performing a genome scan based on PCA are multiple: it does not require grouping individuals into populations, the computational burden is considerably reduced compared to genome scan approaches based on MCMC algorithms \cite[]{foll08,riebler08,gunther13,duforet14}, and candidate SNPs can be related to different evolutionary events that correspond to the different PCs. Using simulations and the 1000 Genomes data, we show that PCA can provide useful insights for genome scans. Looking at the correlations between SNPs and principal components provides a novel conceptual framework to detect genomic regions that are candidates for local adaptation.
\section*{New method}
\subsection*{New statistics for genome scan}
We denote by ${\bf Y}$ the $(n\times p)$ centered and scaled genotype matrix, where $n$ is the number of individuals and $p$ is the number of loci. The new statistics for genome scans are based on principal component analysis. The objective of PCA is to find a new set of orthogonal variables called the principal components, which are linear combinations of (centered and standardized) allele counts, such that the projections of the data onto these axes lead to an optimal summary of the data. To present the method, we introduce the truncated singular value decomposition (SVD) that approximates the data matrix ${\bf Y}$ by a matrix of smaller rank
\begin{equation}
\label{eq:svd}
{\bf Y}\approx{\bf U}{\bf \Sigma} {\bf V}^T,
\end{equation}
where ${\bf U}$ is a $(n\times K)$ orthonormal matrix, ${\bf V}$ is a $(p\times K)$ orthonormal matrix, ${\bf \Sigma}$ is a diagonal $(K\times K)$ matrix, and $K$ corresponds to the rank of the approximation. The solution of PCA with $K$ components can be obtained using the truncated SVD of equation (\ref{eq:svd}): the $K$ columns of ${\bf V}$ contain the coefficients of the new orthogonal variables, the $K$ columns of ${\bf U}$ contain the projections (called {\it scores}) of the individuals onto the principal components and capture population structure (Fig. S1), and the squares of the elements of ${\bf \Sigma}$ are proportional to the proportion of variance explained by each principal component \cite[]{jolliffe05}. We denote the diagonal elements of ${\bf \Sigma}$ by $\sqrt{\lambda_k}$, $k=1,\dots,K$, where the $\lambda_k$'s are the ranked eigenvalues of the matrix ${\bf Y} {\bf Y}^{T}$. Denoting by $V_{jk}$ the entry of ${\bf V}$ at the $j^{\rm th}$ line and $k^{\rm th}$ column, the correlation $\rho_{jk}$ between the $j^{\rm th}$ SNP and the $k^{\rm th}$ principal component is given by $\rho_{jk}= \sqrt{\lambda_k}\, V_{jk}/\sqrt{n-1}$ \cite[]{cadima95}.
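These relations are straightforward to verify numerically. The following minimal sketch (in Python, with toy data and variable names of our choosing) checks that $\rho_{jk}=\sqrt{\lambda_k}V_{jk}/\sqrt{n-1}$ equals the Pearson correlation between the $j^{\rm th}$ SNP and the $k^{\rm th}$ score:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, p, K = 100, 500, 2
Y = rng.integers(0, 3, size=(n, p)).astype(float)  # toy allele counts
Y = (Y - Y.mean(axis=0)) / Y.std(axis=0, ddof=1)   # centre and scale

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
lam = s[:K] ** 2                   # eigenvalues of Y Y^T
scores = U[:, :K] * s[:K]          # projections onto the first K PCs

rho = Vt[:K, :].T * np.sqrt(lam) / np.sqrt(n - 1)  # loadings rho_jk

# check against the direct Pearson correlation (first SNP, first PC)
print(np.isclose(rho[0, 0], np.corrcoef(Y[:, 0], scores[:, 0])[0, 1]))
\end{verbatim}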
In the following, the statistics $\rho_{jk}$ are referred to as {\it loadings} and will be used for detecting selection. The second statistic we consider for genome scans corresponds to the proportion of variance of a SNP that is explained by the first $K$ PCs. It is called the communality in exploratory factor analysis because it is the variance of the observed variables accounted for by the common factors, which correspond to the first $K$ PCs \cite[]{suhr2009}. Because the principal components are orthogonal to each other, the proportion of variance explained by the first $K$ principal components is equal to the sum of the squared correlations with the first $K$ principal components. Denoting by $h_j^2$ the communality of the $j^{th}$ SNP, we have
\begin{equation}
\label{eq:h}
h_j^2=\sum_{k=1}^K \rho_{jk}^2.
\end{equation}
The last statistic we consider for genome scans sums the squares of the normalized loadings. It is defined as $h_j^{\prime2}=\sum_{k=1}^K V_{jk}^2$. Compared to the communality $h^2$, the statistic $h^{\prime2}$ should theoretically give the same importance to each PC because the normalized loadings are on the same scale, as we have $\sum_{j=1}^p V_{jk}^2 =1$, for $k=1,\dots,K$.
\subsection*{Numerical computations}
The method of selection scan should be able to handle a large number $p$ of genetic variants. In order to compute the truncated SVD for large values of $p$, we compute the $n\times n$ covariance matrix ${\bf \Omega}={{\bf YY}^T}/(p-1)$. The covariance matrix ${\bf \Omega}$ is typically of much smaller dimension than the $p\times p$ covariance matrix, and considering it speeds up matrix operations. Computation of the covariance matrix is the most costly operation, requiring a number of arithmetic operations proportional to $p n^2$. After computing the covariance matrix ${\bf \Omega}$, we compute its first $K$ eigenvalues and eigenvectors to find ${\bf \Sigma}^2/(p-1)$ and ${\bf U}$. Eigenanalysis is performed with the {\it dsyevr} routine of the linear algebra package LAPACK \cite[]{lapack}. The matrix ${\bf V}$, which captures the relationship between each SNP and population structure, is obtained by the matrix operation ${\bf V}^T= {\bf \Sigma}^{-1}{\bf U}^T {\bf Y}$, which arises from equation (\ref{eq:svd}). In the software {\it PCAdapt}, data are processed as a stream and never stored, so that memory requirements remain low whatever the size of the data.
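A minimal in-memory version of this computation can be sketched as follows (the routine below is illustrative and not the streaming implementation of {\it PCAdapt}); it uses the $n\times n$ covariance trick and the relation ${\bf V}^T={\bf \Sigma}^{-1}{\bf U}^T {\bf Y}$ to return the loadings and the statistics $h^2$ and $h^{\prime2}$ defined above:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def pca_scan_stats(Y, K):
    # Y: (n x p) centred and scaled genotype matrix.
    # Returns loadings rho (p x K), communality h2 and h'^2 (length p).
    n, p = Y.shape
    omega = Y @ Y.T / (p - 1)          # n x n covariance matrix
    # largest K eigenpairs (LAPACK dsyevr under the hood)
    vals, U = eigh(omega, subset_by_index=[n - K, n - 1])
    vals, U = vals[::-1], U[:, ::-1]   # sort in decreasing order
    sigma = np.sqrt(vals * (p - 1))    # singular values of Y
    V = (Y.T @ U) / sigma              # from V^T = Sigma^-1 U^T Y
    rho = V * sigma / np.sqrt(n - 1)   # rho_jk = sqrt(lam_k) V_jk / sqrt(n-1)
    h2 = (rho ** 2).sum(axis=1)        # communality
    h2p = (V ** 2).sum(axis=1)         # h'^2, squared normalized loadings
    return rho, h2, h2p
\end{verbatim}
Outlier SNPs are then those with the largest $h^2$, or with the largest squared loadings for a given PC.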
\section*{Results}
\subsection*{Island model}
To investigate the relationship between the communality $h^2$ and $F_{ST}$, we consider an island model with three islands. We use $K=2$ when performing PCA because there are 3 islands. We choose a value of the migration rate that generates a mean $F_{ST}$ value (across the $1,400$ neutral SNPs) of $4\%$. We consider five different simulations with varying strengths of selection for the $100$ adaptive SNPs. In all simulations, the squared correlation coefficient $R^2$ between $h^2$ and $F_{ST}$ is larger than $98\%$. Considering as candidate SNPs the one percent of the SNPs with largest values of $F_{ST}$ or of $h^2$, we find that the overlap coefficient between the two sets of SNPs lies between $88\%$ and $99\%$. When varying the strength of selection for adaptive SNPs, we find that the relative difference of false discovery rates (FDR) obtained with $F_{ST}$ (top $1\%$) and with $h^2$ (top $1\%$) is smaller than $5\%$. The similar values of FDR obtained with $h^2$ and with $F_{ST}$ decrease for increasing strength of selection (Fig. S2).
\subsection*{Divergence model}
To compare the performance of different PCA-based summary statistics, we simulate genetic variation in models of population divergence. The divergence models assume that there are three populations, $A$, $B_1$ and $B_2$, with $B_1$ and $B_2$ being the most related populations (Figs. \ref{fig:stacked_bar} and \ref{fig:stacked_bar2}). The first simulation scheme assumes that local adaptation took place in the lineages corresponding to the environments of populations $A$ and $B_1$ (Fig. \ref{fig:stacked_bar}). The SNPs, which are assumed to be independent, are divided into 3 groups: 9,500 SNPs evolve neutrally, 250 SNPs confer a selective advantage in the environment of $A$, and 250 other SNPs confer a selective advantage in the environment of $B_1$. Genetic differentiation, measured by pairwise $F_{ST}$, is equal to $14\%$ when comparing population $A$ to the other ones and is equal to $5\%$ when comparing populations $B_1$ and $B_2$. Performing principal component analysis with $K=2$ shows that the first component separates population $A$ from $B_1$ and $B_2$ whereas the second component separates $B_1$ from $B_2$ (Fig. S1). The choice of $K=2$ is evident when looking at the scree plot because the eigenvalues, which are proportional to the proportion of variance explained by each PC, drop beyond $K=2$ and stay almost constant as $K$ further increases (Fig. S3). We investigate the relationship between the communality statistic $h^2$, which measures the proportion of variance explained by the first two PCs, and the $F_{ST}$ statistic. We find a squared Pearson correlation coefficient between the two statistics larger than $98.8\%$ in the simulations corresponding to Figs. \ref{fig:stacked_bar} and \ref{fig:stacked_bar2} (Fig. S4). For these two simulations, we look at the SNPs in the top $1\%$ (respectively $5\%$) of the ranked lists based on $h^2$ and $F_{ST}$, and we find an overlap coefficient always larger than $93\%$ (respectively $95\%$) for the lists provided by the two different statistics. Providing a ranking of the SNPs almost identical to the ranking provided by $F_{ST}$ is therefore possible without considering that individuals originate from predefined populations. We then compare the performance of the different statistics based on PCA by investigating if the top-ranked SNPs (top $1\%$) manage to pick SNPs involved in local adaptation (Fig. \ref{fig:stacked_bar}). The squared loadings $\rho^2_{j1}$ with the first PC pick SNPs involved in selection in population $A$ ($39\%$ of the top $1\%$), a few SNPs involved in selection in $B_1$ ($9\%$), and many false positive SNPs (FDR of $53\%$). The squared loadings $\rho^2_{j2}$ with the second PC pick fewer false positives (FDR of $12\%$), and most of the SNPs they pick are involved in selection in $B_1$ ($88\%$), with just a few involved in selection in $A$ ($1\%$). When adaptation took place in two different evolutionary lineages of a divergence tree between populations, a genome scan based on PCA has the nice property that outlier loci correlated with PC1 or with PC2 correspond to adaptive constraints that occurred in different parts of the tree. Because the communality $h^2$ gives more importance to the first PC (equation (\ref{eq:h})), it preferentially picks the SNPs that are the most correlated with PC1. There is a large overlap of $72\%$ between the $1\%$ top-ranked lists provided by $h^2$ and $\rho^2_{j1}$.
Therefore, the communality statistic $h^2$ is more sensitive to ancient adaptation events that occurred in the environment of population $A$. By contrast, the alternative statistic $h^{\prime2}$ is more sensitive to recent adaptation events that occurred in the environment of population $B_1$. When considering the top-ranked $1\%$ of the SNPs, $h^{\prime2}$ captures only one SNP involved in selection in $A$ ($1\%$ of the top $1\%$) and 88 SNPs related to adaptation in $B_1$ ($88\%$ of the top $1\%$). The overlap between the $1\%$ top-ranked lists provided by $h^{\prime2}$ and by $\rho^2_{j2}$ is of $86\%$. The $h^{\prime2}$ statistic is mostly influenced by the second principal component because the distribution of squared loadings corresponding to the second PC has a heavier tail, and this result holds for the two divergence models and for the 1000 Genomes data (Fig. S5). To summarize, the $h^2$ and $h^{\prime2}$ statistics give too much importance to PC1 and PC2 respectively, and they fail to capture in an equal manner both types of adaptive events occurring in the environments of populations $A$ and $B_1$. We also investigate a more complex simulation in which adaptation occurs in the four branches of the divergence tree (Fig. \ref{fig:stacked_bar2}). Among the $10,000$ simulated SNPs, we assume that there are four sets of $125$ adaptive SNPs, with each set being related to adaptation in one of the four branches of the divergence tree. Compared to the simulation of Fig. \ref{fig:stacked_bar}, we find the same pattern of population structure (Fig. S1). The squared loadings $\rho^2_{j1}$ with the first PC mostly pick SNPs involved in selection in the branch that predates the split between $B_1$ and $B_2$ ($51\%$ of the top $1\%$), SNPs involved in selection in the environment of population $A$ ($9\%$), and false positive SNPs (FDR of $38\%$). Except for false positives (FDR of $14\%$), the squared loadings $\rho^2_{j2}$ with the second PC mostly pick SNPs involved in selection in $B_1$ and $B_2$ ($42\%$ for $B_1$ and $44\%$ for $B_2$). Once again, there is a large overlap between the SNPs picked by the communality $h^2$ and by $\rho^2_{1}$ ($92\%$ of overlap) and between the SNPs picked by $h^{\prime2}$ and $\rho^2_{2}$ ($93\%$ of overlap). Because the first PC discriminates population $A$ from $B_1$ and $B_2$ (Fig. S1), the SNPs most correlated with PC1 correspond to SNPs related to adaptation in the (red and green) branches that separate $A$ from populations $B_1$ and $B_2$. By contrast, the SNPs that are most correlated with PC2 correspond to SNPs related to adaptation in the two (blue and yellow) branches that separate population $B_1$ from $B_2$ (Fig. \ref{fig:stacked_bar2}). We additionally evaluate to what extent the results are robust with respect to some parameter settings. When considering the $5\%$ of the SNPs with most extreme values of the statistics instead of the top $1\%$, we also find that the summary statistics pick SNPs related to different evolutionary events (Fig. S6). The main difference is that the FDR increases considerably when considering the top $5\%$ instead of the top $1\%$ (Fig. S6). We also consider variation of the selection coefficient, ranging from $s=1.01$ to $s=1.1$ ($s=1.025$ corresponds to the simulations of Figures \ref{fig:stacked_bar} and \ref{fig:stacked_bar2}). As expected, the false discovery rate of the different statistics based on PCA is considerably reduced when the selection coefficient increases (Fig. S7).
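For clarity, the FDR and overlap summaries reported in this section reduce to simple list operations on ranked statistics; a sketch of this bookkeeping (array and function names ours) is:
\begin{verbatim}
import numpy as np

def top_fraction(stat, frac=0.01):
    # indices of the top `frac` fraction of SNPs ranked by `stat`
    k = max(1, int(round(frac * stat.size)))
    return np.argsort(stat)[::-1][:k]

def fdr(candidates, adaptive_idx):
    # fraction of candidate SNPs that are not truly adaptive
    return 1.0 - np.isin(candidates, adaptive_idx).mean()

def overlap(list1, list2):
    # overlap coefficient between two equally sized candidate lists
    return np.isin(list1, list2).mean()

# e.g. fdr(top_fraction(h2), adaptive_idx) and
#      overlap(top_fraction(h2), top_fraction(fst))
\end{verbatim}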
In the divergence model of Fig.~\ref{fig:stacked_bar}, we also compare the false discovery rates obtained with the statistics $h^2$ and $h^{\prime2}$ and with a Bayesian factor model implemented in the software {\it PCAdapt} \cite[]{duforet14}. For the optimal choice of $K=2$, the statistic $h^{\prime2}$ and the Bayesian factor model provide the smallest FDR (Fig. S8). However, when varying the value of $K$ from $K=1$ to $K=6$, we find that the communality $h^2$ and the Bayesian approach are robust to over-specification of $K$ ($K>3$), whereas the false discovery rate obtained with $h^{\prime2}$ increases substantially as $K$ increases beyond $K=2$ (Fig. S8). We also consider a more general isolation-with-migration model. In the divergence model where adaptation occurs in two different lineages of the population tree (Figure \ref{fig:stacked_bar}), we add constant migration between all pairs of populations. We assume that migration occurred after the split between $B_1$ and $B_2$. We consider different values of migration rates, generating a mean $F_{ST}$ of $7.5\%$ for the smallest migration rate down to a mean $F_{ST}$ of $0\%$ for the largest migration rate. We find that the $R^2$ correlation between $F_{ST}$ and $h^2$ decreases as a function of the migration rate (Fig. S9). For $F_{ST}$ values larger than $0.5\%$, $R^2$ is larger than $97\%$. The squared correlation $R^2$ decreases to $47\%$ for the largest migration rate. Beyond a certain level of migration, population structure, as ascertained by principal components, is no longer described by well-separated clusters of individuals (Fig. S10) but by a more clinal or continuous pattern (Fig. S10), explaining the difference between $F_{ST}$ and $h^2$. However, the false discovery rates obtained with the different statistics based on PCA and with $F_{ST}$ evolve similarly as a function of the migration rate. For both types of approaches, the false discovery rate increases for larger migration, with almost no true discovery (only 1 true discovery in the top $1\%$ lists) for the largest migration rate. The main results obtained under the divergence models can be described as follows. The principal components correspond to different evolutionary lineages of the divergence tree. The communality statistic $h^2$ provides a list of candidate SNPs similar to that of $F_{ST}$, and it is mostly influenced by the first principal component, which can be problematic if other PCs also capture adaptive events. To counteract this limitation, which can potentially lead to the loss of important signals of selection, we show that looking at the squared loadings with each of the principal components provides adaptive SNPs that are related to different evolutionary events. When adding migration between lineages, we find that the main results are unchanged up to a certain level of migration. Above this level of migration, the relationship between $F_{ST}$ and $h^2$ no longer holds, and genome scans based on either PCA or $F_{ST}$ produce a majority of false positives.
\subsection*{1000 Genomes data}
Since we are interested in selective pressures that occurred during the human diaspora out of Africa, we decide to exclude individuals whose genetic makeup is the result of recent admixture events (African Americans, Colombians, Puerto Ricans and Mexicans). The first three principal components capture population structure whereas the following components separate individuals within populations (Figs. \ref{fig:PC} and S11).
The first and second PCs ascertain population structure between Africa, Asia and Europe (Fig. \ref{fig:PC}), and the third principal component separates the Yoruba from the Luhya population (Fig. S11). The decay of eigenvalues suggests using $K=2$ because the eigenvalues drop between $K=2$ and $K=3$, where a plateau of eigenvalues is reached (Fig. S3). When performing a genome scan with PCA, there are different choices of statistics. The first choice is the $h^2$ communality statistic. Using the three continents as labels, there is a squared correlation between $h^2$ and $F_{ST}$ of $R^2=0.989$. To investigate if $h^2$ is mostly influenced by the first PC, we determine if the outliers for the $h^2$ statistic are related to PC1 or to PC2. Among the top $0.1\%$ of SNPs with the largest values of $h^2$, we find that $74\%$ are in the top $0.1\%$ of the squared loadings $\rho_{j1}^2$ corresponding to PC1 and $20\%$ are in the top $0.1\%$ of the squared loadings $\rho_{j2}^2$ corresponding to PC2. The second possible choice of summary statistic is the $h^{\prime2}$ statistic. Investigating the distribution of the $0.1\%$ outliers for $h^{\prime2}$, we find that $0.005\%$ are in the top $0.1\%$ of the squared loadings $\rho_{j1}^2$ corresponding to PC1 and $85\%$ are in the top $0.1\%$ of the squared loadings $\rho_{j2}^2$ corresponding to PC2. The $h^{\prime2}$ statistic is mostly influenced by the second PC because the distribution of the normalized squared loadings $V_{j2}^2$ has a longer tail than the corresponding distribution for PC1 (Fig. S5). Because the $h^2$ statistic is mostly influenced by PC1 and $h^{\prime2}$ is mostly influenced by PC2, confirming the results obtained under the divergence models, we instead perform two separate genome scans based on the squared loadings $\rho_{j1}^2$ and $\rho_{j2}^2$. The two Manhattan plots based on the squared loadings for PC1 and PC2 are displayed in Figs. \ref{fig:scan_PC1} and \ref{fig:scan_PC2} (Table S1 contains the loadings for all variants). Because of linkage disequilibrium, Manhattan plots generally produce clustered outliers. To investigate if the top $0.1\%$ outliers are clustered in the genome, we count, for various window sizes, the proportion of contiguous windows containing at least one outlier. We find that outlier SNPs correlated with PC1 or with PC2 are more clustered than expected if they were uniformly distributed among the $36,536,154$ variants (Fig. S12). Additionally, the clustering is larger for the outliers related to the second PC, as they cluster in fewer windows (Fig. S12). As the genome scan for PC2 captures more recent adaptive events, it reveals larger genomic windows that experienced fewer recombination events. The 1000 Genomes data contain many low-frequency SNPs; $82\%$ of the SNPs have a minor allele frequency smaller than $5\%$. However, these low-frequency variants are not found among outlier SNPs: there are no SNPs with a minor allele frequency smaller than $5\%$ among the $0.1\%$ of the SNPs most correlated with PC1 or with PC2. The 100 SNPs that are the most correlated with the first PC are located in 24 genomic regions (Table S2). Most of the regions contain just one or a few SNPs, except a peak in the gene APPBP2 that contains 33 of the 100 top SNPs, a peak encompassing the RTTN and CD226 genes containing 17 SNPs, and a peak in the ATP1A1 gene containing 7 SNPs (Fig. \ref{fig:scan_PC1}).
Confirming the larger clustering of PC2 outliers, the 100 SNPs that are the most correlated with PC2 cluster in fewer genomic regions (Table S3). They are located in 14 genomic regions, including a region overlapping with EDAR that contains 44 top hits, two regions containing 8 SNPs each, located in the pigmentation genes SLC24A5 and SLC45A2, and two regions with 7 top-hit SNPs, one in the gene KCNMA1 and another encompassing the RGLA/MYO5C genes (Fig. \ref{fig:scan_PC2}). We perform Gene Ontology enrichment analyses using {\it Gowinda} for the SNPs that are the most correlated with PC1 and PC2. For PC1, we find, among others, enrichment (${\rm FDR} \leq 5\%$) for ontologies related to the regulation of arterial blood pressure, the endocrine system, and the immune response (interleukin production, response to viruses) (Table S4). For PC2, we find enrichment (${\rm FDR} \leq 5\%$) related to olfactory receptors, keratinocyte and epidermal cell differentiation, and ethanol metabolism (Table S5). We also search for polygenic adaptation by looking for biological pathways enriched with outlier genes \cite[]{daub13}. For PC1, we find one enriched pathway (${\rm FDR} \leq 5\%$), the beta-defensin pathway (Table S6). The beta-defensin pathway mainly contains genes involved in the innate immune system, consisting of 36 defensin genes and 2 Toll-like receptors (TLR1 and TLR2). There are additionally 2 chemokine receptors (CCR2 and CCR6) involved in the beta-defensin pathway. For PC2, we also find one enriched pathway, fatty acid omega oxidation (${\rm FDR} \leq 5\%$, Table S7). This pathway consists of genes involved in alcohol oxidation (CYP, ALD and ALDH genes). Performing a less stringent enrichment analysis, which can find pathways containing overlapping genes, we find more enriched pathways: the beta-defensin and the defensin pathways for PC1, and ethanol oxidation, glycolysis/gluconeogenesis and fatty acid omega oxidation for PC2 (Table S8). To further validate the proposed list of candidate SNPs involved in local adaptation, we test for an enrichment of genic or non-synonymous SNPs among the SNPs that are the most correlated with the PCs. We measure the enrichment among outliers by computing odds ratios \cite[]{kudaravalli09,fagny14}. For PC1, we do not find significant enrichment (Table \ref{tab:1}), except when measuring the enrichment of genic regions compared to non-genic regions ($OR=10.18$ for the 100 most correlated SNPs, $P<5\%$ using a permutation procedure). For PC2, we find an enrichment of genic regions among outliers, as well as an enrichment of non-synonymous SNPs (Table \ref{tab:1}). In contrast to the enrichment of genic regions for SNPs extremely correlated with the first PC, the enrichment for variants extremely correlated with PC2 is significant for the different thresholds used to define outliers (Table \ref{tab:1}). \section*{Discussion} The promise of a fine characterization of natural selection in humans fostered the development of new analytical methods for detecting candidate genomic regions \cite[]{vitti13}. Population-differentiation based methods such as genome scans based on $F_{ST}$ look for marked differences in allele frequencies between populations \cite[]{holsinger09}. Here, we show that the communality statistic $h^2$, which measures the proportion of variance of a SNP that is explained by the first $K$ principal components, provides a list of outliers similar to the one obtained with the $F_{ST}$ statistic when there are $K+1$ clusters of populations.
In addition, the communality statistic $h^2$ based on PCA can be viewed as an extension of $F_{ST}$, because it does not require populations to be defined in advance and can even be applied in the absence of well-defined populations. To provide an example of genome scans based on PCA when there are no clusters of populations, we additionally consider the POPRES data, consisting of 447,245 SNPs typed for 1,385 European individuals \cite[]{nelson08}. The scree plot indicates that $K=2$ principal components should be retained (Fig. S3). The first principal component corresponds to a Southeast-Northwest gradient and the second one discriminates individuals from Southern Europe along an East-West gradient \cite[]{novembre08b,jay13} (Figure \ref{fig:pca_popres}). Considering the 100 SNPs most correlated with the first PC, we find that 75 SNPs are in the lactase region, 18 SNPs are in the HLA region, 5 SNPs are in the ADH1C gene, 1 SNP is in HERC2, and another is close to the LOC283177 gene (Figure \ref{fig:scan_popres}). When considering the 100 SNPs most correlated with the second PC, we find less clustering than for PC1, with more distinct peaks (Fig. S13). The regions that contain the largest number of SNPs in the top 100 are the HLA region (41 SNPs) and a region close to the NEK10 gene (10 SNPs), a gene potentially involved in breast cancer \cite[]{ahmed09}. The genome scan retrieves well-known signals of adaptation in humans related to lactase persistence (LCT) \cite[]{bersaglieri04}, immunity (HLA), alcohol metabolism (ADH1C) \cite[]{han07} and pigmentation (HERC2) \cite[]{wilde14}. The analysis of the POPRES data shows that genome scans based on PCA can be applied when there is a clinal or continuous pattern of population structure without well-defined clusters of individuals. When there are clusters of populations, we have shown with simulations that genome scans based on $F_{ST}$ can be reproduced with PCA. Genome scans based on PCA have the additional advantage that a particular axis of genetic variation related to adaptation can be pinpointed. Bearing some similarity to PCA, a spectral decomposition of the kinship matrix has been proposed to pinpoint populations where adaptation took place \cite[]{fariello13}. However, despite these advantages, the statistical problems related to genome scans with $F_{ST}$ remain. The drawbacks of $F_{ST}$ arise when there is hierarchical population structure or range expansion, because $F_{ST}$ does not account for correlations of allele frequencies among subpopulations \cite[]{bierne13,lotterhos14}. An alternative presentation of the issues arising with $F_{ST}$ is that it implicitly assumes either a model of instantaneous divergence between populations or an island model \cite[]{bonhomme10}. Deviations from these models severely impact false discovery rates \cite[]{duforet14}. Viewing $F_{ST}$ from the point of view of PCA provides a new explanation of why $F_{ST}$ does not provide an optimal ranking of SNPs for detecting selection. Both the $F_{ST}$ statistic and the proposed $h^2$ communality statistic are mostly influenced by the first principal component, and the relative importance of the first PC increases with the difference between the first and second eigenvalues of the covariance matrix of the data. Because the first PC can represent ancient adaptive events, especially under population divergence models \cite[]{mcvean09}, this explains why $F_{ST}$ and the communality $h^2$ are biased toward ancient evolutionary events.
Following recent developments of $F_{ST}$-related statistics that account for hierarchical population structure \cite[]{bonhomme10,gunther13,foll14}, we proposed an alternative statistic $h^{\prime2}$, which should give equal weights to the different PCs. However, analyzing simulations and the 1000 Genomes data shows that $h^{\prime2}$ does not properly account for hierarchical population structure, because the outliers identified by $h^{\prime2}$ are almost always related to the last PC kept in the analysis. To avoid biasing the analysis in favor of one principal component, it is possible to perform a genome scan for each principal component. In addition to ranking the SNPs when performing a genome scan, a threshold should be chosen to extract a list of outlier SNPs. We have not addressed the question of how to choose the threshold and instead used empirical thresholds such as the $99\%$ quantile of the distribution of the test statistic (top $1\%$). If interested in controlling the false discovery rate, we can assume that the loadings $\rho_{kj}$ are Gaussian with zero mean \cite[]{galinsky15}. Because of the constraints imposed on the loadings when performing PCA, the variance of the $\rho_{kj}$'s is equal to the proportion of variance explained by the $k^{\rm th}$ PC, which is given by $\lambda_k/(p\times (n-1))$, where $\lambda_k$ is the $k^{\rm th}$ eigenvalue of the matrix $Y Y^T$. Assuming a Gaussian distribution for the loadings, the communality (equation (\ref{eq:h})) can then be approximated by a weighted sum of chi-square distributions. Approximating the weighted sum of chi-square distributions by a single chi-square distribution, we have \cite[]{yuan10} \begin{equation} \label{eq:chi2} h^2 \times K/c \leadsto \chi^2_K, \end{equation} where $c=\sum_{i=1}^K \lambda_i/(p\times(n-1))$ is the proportion of variance explained by the first $K$ PCs. The chi-square approximation of equation (\ref{eq:chi2}) bears similarity with the approximation of \citet{lewontin73}, which states that $F_{ST} \times ({\rm pop}-1)/ \bar{F}_{ST}$ approximately follows a chi-square distribution with ${\rm pop}-1$ degrees of freedom, where $\bar{F}_{ST}$ is the mean $F_{ST}$ over loci and ${\rm pop}$ is the number of populations. In the simulations of an island model and of a divergence model, quantile-quantile plots indicate a good fit to the theoretical chi-square distribution of expression (\ref{eq:chi2}) (Figure S14). When using the chi-square approximation to compute P-values, we evaluate whether the FDR can be controlled using the Benjamini-Hochberg correction \cite[]{benjamini95}. We find that the actual proportion of false discoveries corresponds to the target FDR for the island model, but the procedure is too conservative for the divergence model (Figure S15). For instance, when controlling the FDR at a level of $25\%$, the actual proportion of false discoveries is $15\%$. A recent test based on $F_{ST}$ and a chi-square approximation was also found to be conservative \cite[]{Lotterhos15} (a small numerical sketch of this calibration is given below). Analyzing the phase 1 release of the 1000 Genomes data demonstrates the suitability of a genome scan based on PCA to detect signals of positive selection. We search for variants extremely correlated with the first PC, which corresponds to differentiation between Africa and Eurasia, and with the second PC, which corresponds to differentiation between Europe and Asia. For variants most correlated with the second PC, there is a significant enrichment of genic and non-synonymous SNPs, whereas the enrichment is less detectable for variants related to the first PC.
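The calibration of equation (\ref{eq:chi2}) mentioned above can be written in a few lines. The following sketch is an illustration only (the function name {\tt h2\_fdr} is ours and is not part of {\it PCAdapt}): it converts communalities into P-values via the chi-square approximation and applies the Benjamini-Hochberg step-up procedure.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def h2_fdr(h2, c, K, fdr=0.25):
    # h2: communality per SNP; c: proportion of variance explained by
    # the first K PCs. Returns P-values and the Benjamini-Hochberg
    # rejection set at the target FDR level.
    pvals = chi2.sf(h2 * K / c, df=K)   # chi-square approximation above
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= fdr * np.arange(1, m + 1) / m
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True          # reject the k smallest P-values
    return pvals, rejected
\end{verbatim}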
The enrichment analysis confirms that positive selection may favor local adaptation of human populations by increasing differentiation in genic regions, especially in non-synonymous variants \cite[]{barreiro08}. Consistent with LD, we find that candidate variants are clustered along the genome, with a larger clustering for variants correlated with the Europe-Asia axis of differentiation (PC2). The difference in clustering illustrates that statistical methods based on LD for detecting selection will perform differently depending on the time frame under which adaptation had the opportunity to occur \cite[]{sabeti06}. The fact that population divergence, and its concomitant adaptive events, between Europe and Asia is more recent than the out-of-Africa event is a putative explanation of the difference in clustering between PC1 and PC2 outliers. Explaining the difference in enrichment between PC1 and PC2 outliers is more difficult. The weaker enrichment for PC1 outliers can be attributed either to a larger number of false discoveries or to a larger importance of other forms of natural selection, such as background selection \cite[]{hernandez11}. When looking at the 100 SNPs most correlated with PC1 or PC2, we find genes for which selection in humans has already been documented (9/24 for PC1 and 5/14 for PC2, Table S9). Known targets of selection include genes involved in pigmentation (MATP, OCA2 for PC1 and SLC45A2, SLC24A5, and MYO5C for PC2), in the regulation of sweating (EDAR for PC2), and in adaptation to pathogens (DARC, SLC39A4, and VAV2 for PC1). A 100 kb region in the vicinity of the APPBP2 gene contains one third of the 100 SNPs most correlated with PC1. This APPBP2 region is a known candidate for selection and has been identified by looking for miRNA binding sites with extreme population differentiation \cite[]{li12}. APPBP2 is a nervous system gene that has been associated with Alzheimer disease, and it may have experienced a selective sweep \cite[]{williamson07}. For some SNPs in APPBP2, the differences in allele frequencies between Eurasian populations and Sub-Saharan African populations are of the order of $90\%$ (\url{http://www.popgen.uchicago.edu/ggv}), calling for further functional analysis. Moreover, looking at the 100 SNPs most correlated with PC1 and PC2 confirms the importance of non-coding RNA (FAM230B, D21S2088E, LOC100133461, LINC00290, LINC01347, LINC00681), such as miRNA (MIR429), as a substrate for human adaptation \cite[]{li12,grossman13}. Among the other regions with a large number of candidate SNPs, we also find the RTTN/CD226 region, which contains many SNPs correlated with PC1. The RTTN gene has been detected in different selection scans \cite[]{carlson05,barreiro08}, and it is involved in the development of the human skeletal system \cite[]{wu10}. Another region with many SNPs correlated with PC1 contains the ATP1A1 gene, involved in osmoregulation and associated with hypertension \cite[]{gurdasani15}. The regions containing the largest number of SNPs correlated with PC2 are well-documented instances of adaptation in humans and include the EDAR, SLC24A5 and SLC45A2 genes. The KCNMA1 gene contains 7 SNPs correlated with PC2 and is involved in breast cancer and obesity \cite[]{oeggerli12,jiao11}. As for KCNMA1, the MYO5C gene has already been reported in selection scans, although no mechanism of biological adaptation has been proposed yet \cite[]{chen10,fumagalli10}.
To summarize, the list of SNPs most correlated with the PCs identifies well-known genes related to biological adaptation in humans (EDAR, SLC24A5, SLC45A2, DARC), but also provides candidate genes that deserve further study, such as the APPBP2, ATP1A1, RTTN, KCNMA1 and MYO5C genes, as well as the ncRNAs listed above. We also show that a scan based on PCA can be used to detect more subtle footprints of positive selection. We conduct an enrichment analysis that detects polygenic adaptation at the level of biological pathways \cite[]{daub13}. We find that genes in the beta-defensin pathway are enriched in SNPs correlated with PC1. The beta-defensin genes are key components of the innate immune system and have evolved through positive selection in the catarrhine primate lineages \cite[]{hollox08}. As for the HLA complex, some beta-defensin genes (DEFB1, DEFB127) show evidence of long-term balancing selection, with major haplotypic clades coexisting for millions of years \cite[]{cagliani08,hollox08}. We also find that genes in the omega fatty acid oxidation pathway are enriched in SNPs correlated with PC2. This pathway was also found when investigating polygenic adaptation to altitude in humans \cite[]{foll14}. The proposed explanation was that omega oxidation becomes a more important metabolic pathway when beta oxidation is defective, which can occur in case of hypoxia \cite[]{foll14}. However, this explanation is not valid in the context of the 1000 Genomes data, in which no sampled population lives in a hypoxic environment. Proposing phenotypes on which selection operates is complicated by the fact that the omega fatty acid oxidation pathway strongly overlaps with two other pathways: ethanol oxidation and glycolysis. Evidence of selection on the alcohol dehydrogenase locus has already been provided \cite[]{han07}, with some authors proposing that a lower risk for alcoholism might have been beneficial after rice domestication in Asia \cite[]{peng10}. This hypothesis is speculative, and we lack a confirmed biological mechanism explaining the enrichment of the fatty acid oxidation pathway. More generally, the enrichment of the beta-defensin and of the omega fatty acid oxidation pathways confirms the importance of pathogenic pressure and of metabolism in human adaptation to different environments \cite[]{hancock08,barreiro09,fumagalli11,daub13}. In conclusion, we propose a new approach to scan genomes for local adaptation that works with individual genotype data. Because the method is efficiently implemented in the software {\it PCAdapt}, analyzing $36,536,154$ SNPs took only $502$ minutes using a single core of an Intel(R) Xeon(R) CPU (E5-2650, 2.00GHz, 64 bits). Even with low-coverage sequence data (3x), PCA-based statistics retrieve well-known examples of biological adaptation, which is encouraging for future whole-genome sequencing projects, especially those for non-model species that aim at sampling many individuals at limited cost. \section*{Materials and Methods} \subsection*{Simulations of an island model} Simulations were performed with {\it ms} \cite[]{hudson02}. We assume that there are 3 islands with $100$ sampled individuals in each of them. There is a total of $1,400$ neutral SNPs and $100$ adaptive SNPs. SNPs are assumed to be unlinked. To mimic adaptation, we consider that adaptive SNPs have a migration rate smaller than the migration rate of neutral SNPs ($4N_0m=4$ for neutral SNPs) \cite[]{bazin10}.
The strength of selection is equal to the ratio of the migration rates of neutral and adaptive SNPs. Adaptation is assumed to occur in one population only. The {\it ms} command lines for neutral and adaptive SNPs are given below (assuming an effective migration rate of $4 N_0 m = 0.1$ for adaptive SNPs).
\begin{verbatim}
./ms 300 1400 -s 1 -I 3 100 100 100 -ma x 4 4 4 x 4 4 4 x #neutral
./ms 300 100 -s 1 -I 3 100 100 100 -ma x 0.1 0.1 0.1 x 4 0.1 4 x #outlier
\end{verbatim}
The values of migration rates we consider for adaptive SNPs are $4 N_0 m = 0.04, 0.1, 0.4, 1, 2$. \subsection*{Simulations of divergence models} We assume that each population has a constant effective population size of $N_0= 1,000$ diploid individuals, with 50 individuals sampled in each population. The genotypes consist of 10,000 independent SNPs. The simulations were performed in two steps. In the first step, we used the software {\it ms} \cite[]{hudson02} to simulate genetic diversity in the ancestral population. We kept only variants with a minor allele frequency larger than $5\%$ at the end of the first step. The second step was performed with {\it SimuPOP} \cite[]{peng05}, and simulations were started using the allele frequencies generated with {\it ms} in the ancestral population. Looking forward in time, we consider that there are $100$ generations between the initial split and the following split between the two $B$ subpopulations, and $200$ generations following the split between the two $B$ subpopulations. We assume no migration between populations. In the simulation of Fig. \ref{fig:stacked_bar}, we assume that 250 SNPs confer a selective advantage in the branch leading to population $A$ and 250 other SNPs confer a selective advantage in the branch leading to population $B_1$. We consider an additive model for selection with a selection coefficient of $s=1.025$ for heterozygotes. For the simulation of Fig. \ref{fig:stacked_bar2}, we assume that there are four non-overlapping sets of $125$ adaptive SNPs, with each set being related to adaptation in one of the four branches of the divergence tree. A SNP can confer a selective advantage in a single branch only. When including migration, we consider that there are $200$ generations between the initial split and the following split between the two $B$ subpopulations, and $100$ generations following the split between the two $B$ subpopulations. We consider migration rates ranging from $0.2\%$ to $5\%$ per generation. Migration is assumed to occur only after the split between $B_1$ and $B_2$. The migration rate is the same for the three pairs of populations. To estimate the $F_{ST}$ statistic, we consider the estimator of Weir and Cockerham \cite[]{weir84}. \subsection*{1000 Genomes data} We downloaded the 1000 Genomes data (phase 1 v3) at \url{ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/phase1/analysis_results/integrated_call_sets/} \cite[]{Altshuler12}. We kept low-coverage genome data and excluded exome and trio data to minimize variation in read depth. Filtering the data resulted in a total of $36,536,154$ SNPs that have been typed on $1,092$ individuals. Because the analysis focuses on biological adaptation that took place during the human diaspora out of Africa, we removed recently admixed populations (Mexican, Colombian, Puerto Rican, and African American individuals from the Southwest of the USA).
The resulting dataset contains 850 individuals coming from Asia (two Han Chinese and one Japanese populations), Africa (Yoruba and Luhya) and Europe (Finnish, British in England and Scotland, Iberian, Tuscan, and Utah residents with Northern and Western European ancestry). \subsection*{Enrichment analyses} We used {\it Gowinda} \cite[]{kofler12} to test for enrichment of Gene Ontology (GO) categories. A gene is considered as a candidate if at least one of the most correlated SNPs (top $1\%$) is mapped to the gene (within an interval of 50Kb upstream and downstream of the gene). Enrichment was computed as the proportion of genes containing at least one outlier SNP among the genes of the given GO category that are present in the dataset. In order to sample a null distribution for enrichment, {\it Gowinda} performs resampling without replacement of the SNPs. We used the {\it --gene} option of {\it Gowinda}, which assumes complete linkage within genes. We performed a second enrichment analysis to determine if outlier SNPs are enriched for genic regions. We computed the odds ratio \cite[]{kudaravalli09} $$ {\rm OR}= \frac{{\rm Pr}({\rm genic} | {\rm outlier})}{{\rm Pr}({\rm not \, genic} | {\rm outlier})} \frac{{\rm Pr}({\rm not \, genic} | {\rm not \, outlier})}{{\rm Pr}({\rm genic} | {\rm not \, outlier})}. $$ We implemented a permutation procedure to test if an odds ratio is significantly larger than 1 \cite[]{fagny14}. The same procedure was applied when testing for enrichment of UTR regions and of non-synonymous SNPs. \subsection*{Polygenic adaptation} To test for polygenic adaptation, we determined whether genes in a given biological pathway show a shift in the distribution of the loadings \cite[]{daub13}. We computed the SUMSTAT statistic to test if there is an excess of selection signal in each pathway \cite[]{daub13}. We applied the same pruning method to take into account the redundancy of genes within pathways. The test statistic is the squared loading standardized into a z-score \cite[]{daub13}. SUMSTAT is computed for each gene as the sum of the test statistics of the SNPs belonging to the gene. Intergenic SNPs are assigned to a gene provided they are situated 50kb up- or downstream. We downloaded 63,693 known genes from the UCSC website and mapped SNPs to a gene if the SNP is located within a gene transcript or within 50kb of a gene. A total of 18,267 genes were mapped with this approach. We downloaded 2,681 gene sets from the NCBI Biosystems database. After discarding genes that were not part of the aforementioned gene list, removing gene sets with fewer than 10 genes, and pooling nearly identical gene sets, we kept 1,532 sets for which we tested whether there was a shift in the distribution of loadings. \subsection*{Acknowledgments} This work has been supported by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025-01) and the ANR AGRHUM project (ANR-14-CE02-0003-01). POPRES data were obtained from dbGaP (accession number phs000145.v1.p1). \bibliographystyle{mbe} \renewcommand\refname{References}
\author{Siu-Wing Cheng \thanks{Supported by Research Grants Council, Hong Kong, China (project no.~16200317). Department of Computer Science and Engineering, HKUST, Clear Water Bay, Hong Kong. Email: {\tt [email protected]}} \and Otfried Cheong \thanks{Supported by ICT R\&D program of MSIP/IITP~[R0126-15-1108]. School of Computing, KAIST, Daejeon, South Korea. Email: {\tt [email protected], [email protected]}} \and Taegyoung Lee\footnotemark[2]} \begin{document} \maketitle \begin{abstract} Given $n$ data points in $\mathbb{R}^d$, an appropriate edge-weighted graph connecting the data points finds application in solving clustering, classification, and regression problems. The graph proposed by Daitch, Kelner and Spielman~(ICML~2009) can be computed by quadratic programming and hence in polynomial time. While in practice a more efficient algorithm would be preferable, replacing quadratic programming is challenging even for the special case of points in one dimension. We develop a dynamic programming algorithm for this case that runs in $O(n^2)$ time. Its practical efficiency is also confirmed in our experimental results. \end{abstract} \section{Introduction} Many interesting data sets can be interpreted as point sets in~$\Reals^{d}$, where the dimension~$d$ is the number of features of interest of each data point, and the coordinates are the values of each feature. To model the similarity between discrete samples, one can introduce appropriate undirected weighted edges connecting proximal points. Such a graph is useful in applications such as classification, regression, and clustering~(see, for instance,~\cite{ng01,zhou03}). For example, let $w_{ij}$ denote the weight of the edge that connects two points $p_i$ and $p_j$; regression can then be performed to predict function values $f_i$ at the points $p_i$ by minimizing $\sum_{i,j} w_{ij} (f_i - f_j)^2$, subject to fixing the subset of known $f_i$'s~\cite{daitch09}. As another example, for any given integer $k$, one can obtain a partition of the weighted graph into $k$ clusters based on spectral analysis of the eigenvectors of the Laplacian of the weighted graph~\cite{daitch09,ng01}. Note that the weighted graph may actually be connected. To allow efficient data analysis, it is important that the weighted graph is sparse. Different proximity graphs have been suggested for this purpose. The \emph{kNN}-graph connects each point to its $k$~nearest neighbors. The $\varepsilon$-ball graph connects each point to all other points that are within a distance~$\varepsilon$. In both cases, an edge of length~$\ell$ is assigned a weight of $\exp(-\ell^2/2\sigma^2)$, where the parameters~$k$, $\varepsilon$ and~$\sigma$ need to be specified by the user. It is unclear how to set these parameters in an automatic, efficient way. Several studies have found the \emph{kNN}-graph and the $\varepsilon$-ball graph to be inferior to other proposed graphs~\cite{daitch09,han15,zhang14}. We consider the graph proposed by Daitch, Kelner, and Spielman~\cite{daitch09}. It is provably sparse, and experiments have shown that it offers good performance in classification, clustering and regression.
This graph is defined via quadratic optimization as follows: Let $P = \{p_1, p_2, \dots, p_n\}$ be a set of $n$~points in~$\Reals^{d}$. We assign weights $w_{ij} \geq 0$ to each pair of points~$(p_i, p_j)$, such that $w_{ij} = w_{ji}$ and $w_{ii} = 0$. These weights determine for each point~$p_i$ a vector~$\vec{v}_i$, as follows: \[ \vec{v}_i = \sum_{j = 1}^{n} w_{ij} (p_j - p_{i}). \] Let $v_i$ denote $\|\vec{v}_i\|$. The weights are chosen so as to minimize the sum \[ Q = \sum_{i=1}^{n} v_{i}^{2}, \] under the constraint that the weights for each point add up to at least one (to prevent the trivial solution of $w_{ij} = 0$ for all $i$ and~$j$): \[ \sum_{j=1}^{n} w_{ij} \geq 1 \qquad \text{for~$1 \leq i \leq n$}. \] The resulting graph contains an edge connecting $p_i$ and $p_j$ if and only if $w_{ij} > 0$. Daitch et al.~\cite{daitch09} showed that there is an optimal solution where at most $(d+1)n$ weights are non-zero. Moreover, in two dimensions, optimal weights can be chosen such that the graph is planar. Clearly, the optimal weights can be computed by quadratic programming. A quadratic programming problem with $m$ variables, $c$ constraints, and $L$ input bits can be solved in $O(m^4 L^2)$ time using the method of Ye and Tse~\cite{ye89}. There is another algorithm by Kapoor and Vaidya~\cite{kapoor86} that has an asymptotic running time of $O((m+c)^{3.67}L \cdot \log L \cdot \log (m+c))$. In our case, there are $n(n-1)/2$ variables and $\Theta(n)$ constraints. So the running time is $O(n^{7.34}L \cdot \log L \cdot \log n)$, which is impractical even for moderately large $n$. Daitch et al.~reported that a data set of 4177~points requires a processing time of approximately 13.8~hours. Graphs based on optimizing other convex quality measures have also been considered~\cite{jebara09,zhang14}. Our goal is to design an algorithm to compute the optimal weights in Daitch et al.'s formulation that is significantly faster than quadratic programming. Perhaps surprisingly, this problem is challenging even for points in one dimension, that is, when all points lie on a line. In this case, it is not difficult to show (Lemma~\ref{lem:consecutive}) that there is an optimal solution in which $w_{ij} > 0$ only if $p_i$ and $p_j$ are consecutive. This reduces the number of variables to~$n-1$. Even in one dimension, the weights in an optimal solution do not seem to follow any simple pattern, as we illustrate in the following two examples. Some weights in an optimal solution can be arbitrarily high. Consider four points $p_1,p_2,p_3,p_4$ in left-to-right order such that $\|p_1 - p_2\| = \|p_3 - p_4\| = 1$ and $\|p_2 - p_3\| = \varepsilon$. By symmetry, $w_{12} = w_{34}$, and so $v_1 = v_4 = w_{12}$. Since the constraints $w_{12} + w_{23} \geq 1$ and $w_{23} + w_{34} \geq 1$ are trivially satisfied by the requirement that $w_{12} = w_{34} \geq 1$, we can make $v_2$ zero by setting $w_{23} = w_{12}/\varepsilon$. In the optimal solution, $w_{12} = w_{34} = 1$ and $w_{23} = 1/\varepsilon$. So $w_{23}$ can be arbitrarily large. Given points $p_1, \cdots, p_n$ in left-to-right order, it seems ideal to make every $v_i$ zero. One can do this for $i \in [2,n-1]$ by setting $w_{i-1,i}/w_{i,i+1} = \|p_i - p_{i+1}\|/\|p_{i-1} - p_i\|$; however, some of the constraints $w_{i-1,i} + w_{i,i+1} \geq 1$ may then be violated. Even if we are lucky and, for all $i \in [2,n-1]$, we can set $w_{i-1,i}/w_{i,i+1} = \|p_i - p_{i+1}\|/\|p_{i-1} - p_i\|$ without violating $w_{i-1,i} + w_{i,i+1} \geq 1$, the solution may not be optimal, as we show below.
Requiring $v_i = 0$ for $i \in [2,n-1]$ gives $v_1 = v_n = w_{12}\|p_1 - p_2\|$. In general, we have $\|p_1 - p_2 \| \not= \|p_{n-1}-p_n\|$, so we can assume that $\|p_1-p_2\| > \|p_{n-1}-p_n\|$. Then, $w_{n-1,n} = w_{12}\|p_1-p_2\|/\|p_{n-1}-p_n\| > 1$ as $w_{12} \geq 1$. Since $w_{n-1,n} > 1$, one can decrease $w_{n-1,n}$ by a small quantity $\delta$ while keeping its value greater than 1. Both constraints $w_{n-1,n} \geq 1$ and $w_{n-2,n-1} + w_{n-1,n} \geq 1$ are still satisfied. Observe that $v_n$ drops to $w_{12}\|p_1-p_2\|-\delta\|p_{n-1}-p_n\|$ and $v_{n-1}$ increases to $\delta\|p_{n-1}-p_n\|$. Hence, $v_{n-1}^2 + v_n^2$ decreases by $2\delta w_{12}\|p_1-p_2\|\|p_{n-1}-p_n\| - 2\delta^2\|p_{n-1}-p_n\|^2$, and so does~$Q$. The original setting of the weights is thus not optimal. If $w_{n-3,n-2} + w_{n-2,n-1} > 1$, it brings further benefit to decrease $w_{n-2,n-1}$ slightly, so that $v_{n-1}$ decreases slightly from $\delta \|p_{n-1}-p_n\|$ and $v_{n-2}$ increases slightly from zero. Intuitively, instead of concentrating $w_{12}\|p_1 - p_2\|$ at $v_n$, it is better to distribute it over multiple points in order to decrease the sum of squares. But it does not seem easy to determine the best weights. Although there are only $n-1$ variables in one dimension, quadratic programming still yields a high running time of $O(n^{3.67}L \cdot \log L \cdot \log n)$. We present a dynamic programming algorithm that computes the optimal weights in $O(n^2)$ time in the one-dimensional case. The intermediate solution has an interesting structure: the derivative of its quality measure depends on the derivative of a subproblem's quality measure as well as on the inverse of this derivative function. This makes it unclear how to bound the size of an explicit representation of the intermediate solution. Instead, we develop an implicit representation that facilitates the dynamic programming algorithm. We implemented our algorithm, with both the explicit and the implicit representation of intermediate solutions. Both versions run substantially faster than the quadratic solver in \textsc{cvxopt}. For instance, for 3200~points, \textsc{cvxopt} needs over 20~minutes to solve the quadratic program, while our algorithm takes less than half a second to compute the optimal weights. \section{A single-parameter quality measure function} We will assume that the points are given in sorted order, so that $p_1 < p_2 < p_3 < \dots < p_n$. We first argue that the only weights that need to be non-zero are the weights between consecutive points, that is, weights of the form~$w_{i,i+1}$. \begin{lemma} \label{lem:consecutive} For $d=1$, there is an optimal solution where only weights between consecutive points are non-zero. \end{lemma} \begin{proof} Consider an optimal solution in which $w_{ik} > 0$ for two non-consecutive points $p_i$ and $p_k$, and let $j$ be any index with $i < j < k$. We construct a new optimal solution as follows: Let $a = p_{j} - p_{i}$, $b = p_{k} - p_{j}$, and $w = w_{ik}$. In the new solution, we set $w_{ik}= 0$, increase $w_{ij}$ by $\frac{a+b}{a}w$, and increase $w_{jk}$ by~$\frac{a+b}{b}w$. Note that since $a+b > a$ and $a+b > b$, the sum of weights at each vertex increases, and so the weight vector remains feasible. The value $v_{j}$ changes by $-a \times \frac{a+b}{a} w + b \times \frac{a+b}{b} w = 0$, the value $v_{i}$ changes by $-(a+b)\times w + a \times \frac{a+b}{a} w = 0$, and the value $v_{k}$ changes by $+(a+b)\times w - b \times \frac{a+b}{b} w = 0$. It follows that the new solution has the same quality as the original one, and is therefore also optimal.
\end{proof} To simplify the notation, we set $d_{i} = p_{i+1} - p_{i}$, for $1 \leq i < n$, rename the weights as $w_{i} := w_{i,i+1}$, again for $1 \leq i < n$, and observe that \begin{align*} v_{1} & = w_{1} d_{1}, \\ v_{i} & = \left|w_{i} d_{i} - w_{i-1} d_{i-1}\right| \qquad \text{for $2 \leq i \leq n-1$}, \\ v_{n} & = w_{n-1} d_{n-1}. \end{align*} For $i \in [2,n-1]$, we introduce the quantity \[ Q_{i}~=~d_{i}^{2}w_{i}^{2} + \sum_{j=1}^{i} v_{j}^{2}~=~d_{i}^{2}w_{i}^{2} + d_1^2w_1^2 + \sum_{j=2}^{i} (d_jw_j - d_{j-1}w_{j-1})^2, \] and note that $Q_{n-1} = \sum_{i=1}^n v_{i}^{2} = Q$. Thus, our goal is to choose the $n-1$ non-negative weights~$w_{1}, \dots, w_{n-1}$ such that $Q_{n-1}$ is minimized, under the constraints \begin{align*} w_{1} & \geq 1, \\ w_{j} + w_{j+1} & \geq 1 \qquad \text{for $2 \leq j \leq n-2$},\\ w_{n-1} & \geq 1. \end{align*} The quantity~$Q_{i}$ depends on the weights~$w_{1}, w_{2}, \dots, w_{i}$. We concentrate on the last one of these weights, and consider the function \[ w_{i} \mapsto Q_{i}(w_{i}) = \min_{w_1, \dots, w_{i-1}} Q_{i}, \] where the minimum is taken over all choices of $w_{1},\dots, w_{i-1}$ that respect the constraints $w_{1} \geq 1$ and $w_{j} + w_{j+1} \geq 1$ for $2 \leq j \leq i-1$. The function $Q_{i}(w_{i})$ is defined on~$[0, \infty)$. We denote the derivative of the function~$w_{i} \mapsto Q_{i}(w_{i})$ by~$R_i$. We will see shortly that~$R_{i}$ is a continuous, piecewise linear function. Since $R_{i}$ is not differentiable everywhere, we define~$S_{i}(x)$ to be the right derivative of~$R_{i}$, that is \[ S_{i}(x) = \lim_{y \rightarrow x^+} R_i'(y). \] The following theorem discusses~$R_{i}$ and~$S_{i}$. The shorthand $\ddi := 2d_i d_{i+1}$, for $1 \leq i < n-1$, will be convenient in its proof and the rest of the paper. \begin{theorem} \label{thm:ri} The function $R_i$ is strictly increasing, continuous, and piecewise linear on the range~$[0, \infty)$. We have $R_i(0) < 0$, $S_{i}(x) \geq (2 + \nicefrac 2i)d_{i}^{2}$ for all $x \geq 0$, and $R_{i}(x) = (2 + \nicefrac 2i)d_{i}^{2}x$ for sufficiently large~$x > 0$. \end{theorem} \begin{proof} We prove all claims by induction over~$i$. The base case is $i=2$. Observe that \[ Q_{2} = v_{1}^{2} + v_{2}^{2} + d_{2}^{2}w_{2}^{2} = 2d_{1}^{2}w_{1}^{2} - 2d_{1}d_{2}w_{1}w_{2} + 2d_{2}^{2}w_{2}^{2}. \] For fixed~$w_{2}$, the derivative with respect to~$w_{1}$ is \begin{equation} \frac{\partial}{\partial w_{1}} Q_{2} = 4d_{1}^{2}w_{1} - 2d_1 d_2 w_{2}, \label{eq:0} \end{equation} which implies that $Q_{2}$ is minimized for $w_{1} = \frac{d_{2}}{2d_{1}}w_{2}$. This choice is feasible (with respect to the constraint~$w_{1} \geq 1$) when $w_{2} \geq \frac{2d_1}{d_2}$. If $w_{2} < \frac{2d_1}{d_2}$, then $\frac{\partial}{\partial w_1} Q_2$ is positive for all values of~$w_{1} \geq 1$, so the minimum occurs at $w_{1} = 1$. It follows that \[ Q_2(w_2) = \begin{cases} \frac 32 d_{2}^{2} w_{2}^{2} & \text{for } w_2 \geq \frac{2d_1}{d_2}, \\ 2d_{2}^{2}w_{2}^{2} - \ddone w_2 + 2d_1^{2} & \text{otherwise}, \end{cases} \] and so we have \begin{equation} R_2(w_2) = \begin{cases} 3 d_{2}^{2} w_{2} & \text{for } w_2 \geq \frac{2d_1}{d_2}, \\ 4d_{2}^{2}w_{2} - \ddone & \text{otherwise}. \end{cases} \label{eq:r2} \end{equation} In other words, $R_2$ is piecewise linear and has a single breakpoint at~$\frac{2d_1}{d_2}$. The function $R_2$ is continuous because $3d_{2}^{2}w_2 = 4d_{2}^{2}w_{2}-\ddone$ when $w_{2} = \frac{2d_1}{d_2}$.
We have~$R_{2}(0) = -\ddone < 0$, $S_2(x) \geq 3d_2^2$ for all~$x \geq 0$, and $R_{2}(x) = 3d_{2}^{2} x$ for $x \geq \frac{2d_1}{d_2}$. The fact that $S_2(x) \geq 3d_2^2 > 0$ makes $R_2$ strictly increasing. Consider now $i \geq 2$, assume that~$R_i$ and~$S_{i}$ satisfy the induction hypothesis, and consider~$Q_{i+1}$. By definition, we have \begin{equation} Q_{i+1} = Q_{i} - \ddi w_{i}w_{i+1} + 2d_{i+1}^{2}w_{i+1}^{2}. \label{eq:1} \end{equation} For a given value of $w_{i+1} \geq 0$, we need to find the value of $w_{i}$ that will minimize~$Q_{i+1}$. The derivative is \[ \frac{\partial}{\partial w_{i}} Q_{i+1} = R_{i}(w_{i}) - \ddi w_{i+1}. \] The minimum thus occurs when $R_{i}(w_{i}) = \ddi w_{i+1}$. Since~$R_{i}$ is a strictly increasing continuous function with $R_{i}(0) < 0$ and $\lim_{x\rightarrow \infty} R_i(x)=\infty$, for any given $w_{i+1} \geq 0$, there exists a unique value~$w_{i} = R_{i}^{-1}(\ddi w_{i+1})$. However, we also need to satisfy the constraint $w_{i} + w_{i+1} \geq 1$. We first show that $R_{i+1}$ is continuous and piecewise linear, and that $R_{i+1}(0) < 0$. We will distinguish two cases, based on the value of $\wio := R_{i}^{-1}(0)$. \paragraph{Case~1:} $\wio \geq 1$. This means that $R_{i}^{-1}(\ddi w_{i+1}) \geq 1$ for any $w_{i+1} \geq 0$, and so the constraint $w_i + w_{i+1} \geq 1$ is satisfied for the optimal choice of~$w_{i} = R_i^{-1}(\ddi w_{i+1})$. It follows that \begin{align*} Q_{i+1}(w_{i+1}) & = Q_{i}\big(R_{i}^{-1}(\ddi w_{i+1})\big) - \ddi w_{i+1}R_{i}^{-1}(\ddi w_{i+1}) + 2d_{i+1}^{2} w_{i+1}^{2}. \end{align*} The derivative $R_{i+1}$ is therefore \begin{align} R_{i+1}(w_{i+1}) & = R_{i}(R_{i}^{-1}(\ddi w_{i+1})) \frac{\ddi}{R'_{i}(R_{i}^{-1}(\ddi w_{i+1}))} \nonumber \\ & \quad - \ddi R_{i}^{-1}(\ddi w_{i+1}) - \ddi w_{i+1} \frac{\ddi}{R'_{i}(R_{i}^{-1}(\ddi w_{i+1}))} + 4d_{i+1}^{2} w_{i+1} \nonumber \\ & = 4d_{i+1}^{2} w_{i+1} - \ddi R_{i}^{-1}(\ddi w_{i+1}). \label{eq:2} \end{align} Since $R_{i}$ is continuous and piecewise linear, so is $R_{i}^{-1}$, and therefore~$R_{i+1}$ is continuous and piecewise linear. We have $R_{i+1}(0) = -\ddi \wio < 0$. \paragraph{Case~2:} $\wio < 1$. Consider the function $x \mapsto f(x) = x + R_{i}(x)/\ddi$. Since $R_i$ is continuous and strictly increasing by the inductive assumption, so is the function $f$. Observe that $f(\wio) = \wio < 1$. As $\wio < 1$, we have $R_i(1) > R_i(\wio) = 0$, which implies that $f(1) > 1$. Thus, there exists a unique value~$\wis \in (\wio ,1)$ such that $f(\wis) = \wis + {R_{i}(\wis)}/{\ddi} = 1$. For $w_{i+1}\geq 1 - \wis = R_{i}(\wis)/\ddi$, we have $R_{i}^{-1}(\ddi w_{i+1}) \geq \wis$, and so $R_{i}^{-1}(\ddi w_{i+1}) + w_{i+1}\geq 1$. This implies that the constraint $w_i + w_{i+1} \geq 1$ is satisfied when $Q_{i+1}(w_{i+1})$ is minimized for the optimal choice of $w_{i} = R_{i}^{-1}(\ddi w_{i+1})$. So $R_{i+1}$ is as in~\eqref{eq:2} in Case~1. When $w_{i+1} < 1-\wis$, the constraint $w_{i} + w_{i+1} \geq 1$ implies that $w_{i} \geq 1 - w_{i+1} > \wis$. For any $w_{i} > \wis$ we have $\frac{\partial}{\partial w_{i}}Q_{i+1} = R_{i}(w_{i}) - \ddi w_{i+1} > R_{i}(\wis) - \ddi (1-\wis) = 0$. So $Q_{i+1}$ is increasing, and the minimal value is obtained for the smallest feasible choice of~$w_{i}$, that is, for $w_{i} = 1 - w_{i+1}$.
It follows that \begin{align*} Q_{i+1}(w_{i+1}) & = Q_{i}(1-w_{i+1}) - \ddi w_{i+1}(1-w_{i+1}) + 2d_{i+1}^{2} w_{i+1}^{2} \\ & = Q_{i}(1-w_{i+1}) - \ddi w_{i+1} + (\ddi + 2d_{i+1}^{2}) w_{i+1}^{2}, \end{align*} and so the derivative $R_{i+1}$ is \begin{align} R_{i+1}(w_{i+1}) & = -R_{i}(1-w_{i+1}) + (2\ddi + 4d_{i+1}^{2})w_{i+1} - \ddi. \label{eq:3} \end{align} Combining \eqref{eq:2} and \eqref{eq:3}, we have \begin{align} R_{i+1}(w_{i+1}) & = \begin{cases} -R_{i}(1-w_{i+1}) + (2\ddi + 4d_{i+1}^{2})w_{i+1} - \ddi & \text{for } w_{i+1} < 1-\wis, \\ 4d_{i+1}^{2} w_{i+1} - \ddi R_{i}^{-1}(\ddi w_{i+1}) & \text{for } w_{i+1} \geq 1-\wis. \end{cases} \label{eq:4} \end{align} For $w_{i+1} = 1 - \wis$, we have $R_{i}(1-w_{i+1}) = R_{i}(\wis) = \ddi(1-\wis)$ and $R_{i}^{-1}(\ddi w_{i+1}) = R_{i}^{-1}(\ddi(1-\wis)) = \wis$, and so both expressions have the same value: \begin{align*} -R_{i}(1-w_{i+1}) & + (2\ddi + 4d_{i+1}^{2})w_{i+1} - \ddi \\ & = \ddi\wis - \ddi + 2\ddi - 2\ddi\wis + 4d_{i+1}^{2}(1-\wis) - \ddi \\ & = 4d_{i+1}^{2}(1-\wis) - \ddi\wis \\ & = 4d_{i+1}^{2}(1-\wis) - \ddi R_{i}^{-1}(\ddi w_{i+1}). \end{align*} Since $R_{i}$ is continuous and piecewise linear, this implies that $R_{i+1}$ is continuous and piecewise linear. We have $R_{i+1}(0) = -R_{i}(1) - \ddi$. Since $\wio < 1$, we have $R_{i}(1) > R_{i}(\wio) = 0$, and so $R_{i+1}(0) < 0$. \medskip Next, we show that $S_{i+1}(x) \geq (2 + \nicefrac 2{i+1}) d_{i+1}^2$ for all $x \geq 0$, which implies that $R_{i+1}$ is strictly increasing. If $\wio < 1$ and $x < 1-\wis$, then by \eqref{eq:4}, \[ S_{i+1}(x) = S_{i}(1-x) + 2\ddi + 4d_{i+1}^{2} > 4d_{i+1}^{2} > (2 + \nicefrac 2{i+1})d_{i+1}^{2}. \] If $\wio \geq 1$ or $x > 1-\wis$, we have by~(\ref{eq:2}) and~(\ref{eq:4}) that $R_{i+1}(x) = 4d_{i+1}^{2}x - \ddi R_{i}^{-1}(\ddi x)$. By the inductive assumption that $S_i(x) \geq (2 + \nicefrac 2{i})d_i^2$ for all $x \geq 0$, we get $\frac{\partial}{\partial x}R_{i}^{-1}(x) \leq 1/\big((2 + \nicefrac 2i)d_{i}^{2}\big)$. It follows that \begin{align*} S_{i+1}(x) & \geq 4d_{i+1}^{2} - \frac{(2d_{i}d_{i+1})^{2}}{(2+ \nicefrac 2i)d_{i}^{2}} = \Big(4 - \frac{4}{2 + \nicefrac 2i}\Big)d_{i+1}^{2} = \Big(4 - \frac{2i}{i + 1}\Big)d_{i+1}^{2} \\ & = \Big(2 + \frac{2}{i+1}\Big) d_{i+1}^{2}. \end{align*} This establishes the lower bound on~$S_{i+1}(x)$. Finally, by the inductive assumption, when $x$ is large enough, we have $R_{i}^{-1}(x) = x/\big((2 + \nicefrac 2i)d_{i}^{2}\big)$, and so \begin{align*} R_{i+1}(x) & = 4d_{i+1}^{2}x - \frac{(2d_{i}d_{i+1})^{2}}{(2+ \nicefrac 2i)d_{i}^{2}} x = \Big(2 + \frac{2}{i+1}\Big) d_{i+1}^{2} x, \end{align*} completing the inductive step and therefore the proof. \end{proof} \section{The algorithm} Our algorithm progressively constructs a representation of the functions~$R_{2}, R_{3}, \dots, R_{n-1}$. The function representation supports the following three operations: \begin{itemize} \item Op~1: given $x$, return $R_{i}(x)$; \item Op~2: given $y$, return $R_{i}^{-1}(y)$; \item Op~3: given $\xi$, return $\xis$ such that $\xis + \frac{R_{i}(\xis)}{\xi} = 1$. \end{itemize} The proof of Theorem~\ref{thm:ri} gives the relation between $R_{i+1}$ and $R_i$. This will allow us to construct the functions one by one---we discuss the detailed implementation in Sections~\ref{sec:basic}~and~\ref{sec:fast} below. Once all functions~$R_2, \dots, R_{n-1}$ are constructed, the optimal weights~$w_{1}, w_{2}, \dots, w_{n-1}$ are computed from the~$R_{i}$'s as follows. 
Recall that $Q = Q_{n-1}$, so $w_{n-1}$ is the value minimizing~$Q_{n-1}(w_{n-1})$ under the constraint~$w_{n-1} \geq 1$. If $R_{n-1}^{-1}(0) \geq 1$, then $R_{n-1}^{-1}(0)$ is the optimal value for $w_{n-1}$; otherwise, we set $w_{n-1}$ to 1. To obtain~$w_{n-2}$, recall from \eqref{eq:1} that $Q = Q_{n-1} = Q_{n-2}(w_{n-2}) - \ddof{n-2}w_{n-2}w_{n-1} + 2d_{n-1}^{2}w_{n-1}^{2}$. Since we have already determined the correct value of~$w_{n-1}$, it remains to choose~$w_{n-2}$ so that $Q_{n-1}$ is minimized. Since \[ \frac{\partial}{\partial w_{n-2}}Q_{n-1} = R_{n-2}(w_{n-2}) - \ddof{n-2}w_{n-1}, \] $Q_{n-1}$ is minimized when $R_{n-2}(w_{n-2}) = \ddof{n-2}w_{n-1}$, and so~$w_{n-2} = R_{n-2}^{-1}(\ddof{n-2}w_{n-1})$. In general, for $i \in [2,n-2]$, we can obtain~$w_{i}$ from~$w_{i+1}$ by observing that \[ Q_{n-1} = Q_{i}(w_{i}) - \ddi w_{i}w_{i+1} + g(w_{i+1},\ldots,w_{n-1}), \] where $g$ is a function that depends only on $w_{i+1}, \ldots, w_{n-1}$. Taking the derivative again, we have \[ \frac{\partial}{\partial w_{i}}Q_{n-1} = R_{i}(w_{i}) - \ddi w_{i+1}, \] so choosing $w_{i} = R_{i}^{-1}(\ddi w_{i+1})$ minimizes~$Q_{n-1}$. To also satisfy the constraint~$w_{i} + w_{i+1} \geq 1$, we need to choose $w_{i} = \max\{R_{i}^{-1}(\ddi w_{i+1}), \, 1 - w_{i+1}\}$ for $i \in [2,n-2]$. Finally, from the discussion that immediately follows~\eqref{eq:0}, we set $w_{1} = \max\{\frac{d_{2}}{2d_{1}}w_{2},\, 1\}$. To summarize, we have \begin{align*} w_{n-1} & = \max\{R_{n-1}^{-1}(0), \, 1\},\\ w_{i} &= \max\{R_{i}^{-1}(\ddi w_{i+1}), \, 1 - w_{i+1}\}, \qquad \text{for } i \in [2,n-2],\\ w_{1} &= \textstyle\max\{\frac{d_{2}}{2d_{1}}w_{2},\, 1\}. \end{align*} It follows that we can obtain the optimal weights using a single Op~2 on each~$R_{i}$. \subsection{Explicit representation of piecewise linear functions} \label{sec:basic} Since $R_{i}$ is a piecewise linear function, a natural representation is a sequence of linear functions, together with the sequence of breakpoints. Since~$R_{i}$ is strictly increasing, all three operations can then be implemented to run in time~$O(\log k)$ using binary search, where $k$ is the number of function pieces. The function~$R_{2}$ consists of exactly two pieces. We construct it directly from~$d_{1}, d_{2}$, and~$\xi_{1}$ using~(\ref{eq:r2}). To construct~$R_{i+1}$ from~$R_{i}$, we first compute~$\wio = R_{i}^{-1}(0)$ using Op~2 on~$R_{i}$. If $\wio \geq 1$, then by \eqref{eq:2} each piece of~$R_{i}$, starting at the $x$-coordinate~$\wio$, gives rise to a linear piece of~$R_{i+1}$, so the number of pieces of~$R_{i+1}$ is at most that of~$R_{i}$. If $\wio < 1$, then we compute $\wis$ using Op~3 on~$R_{i}$. The new function $R_{i+1}$ has a breakpoint at~$1-\wis$ by \eqref{eq:4}. Its pieces for $x \geq 1-\wis$ are computed from the pieces of~$R_{i}$ starting at the $x$-coordinate~$\wis$. Its pieces for $0 \leq x < 1-\wis$ are computed from the pieces of~$R_{i}$ between the $x$-coordinates~$1$ and~$\wis$. (Increasing~$w_{i+1}$ now corresponds to a decreasing~$w_{i}$.) This implies that every piece of~$R_{i}$ that covers $x$-coordinates in the range~$[\wis, 1]$ will give rise to \emph{two} pieces of~$R_{i+1}$, so the number of pieces of~$R_{i+1}$ may be twice the number of pieces of~$R_{i}$. Therefore, although this method works, it is unclear whether the number of linear pieces of~$R_i$ is bounded by a polynomial in~$i$.
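To make the explicit representation concrete, a short Python sketch is given below. It is an illustration of Section~\ref{sec:basic} only, not the implementation released with this paper: each $R_i$ is kept as a sorted list of pieces $(x_0, a, b)$, meaning $R_i(x) = ax + b$ on $[x_0, x_0')$ with $x_0'$ the start of the next piece; the three operations use linear scans instead of binary search; and $R_{i+1}$ is assembled from $R_i$ following~\eqref{eq:2} and~\eqref{eq:4}.
\begin{verbatim}
class PWL:
    # Increasing piecewise-linear function on [0, infinity).
    # pieces[k] = (x0, a, b) means R(x) = a*x + b on [x0, next x0).
    def __init__(self, pieces):
        self.pieces = pieces

    def _end(self, k):   # right endpoint of the k-th piece
        return self.pieces[k + 1][0] if k + 1 < len(self.pieces) else float('inf')

    def value(self, x):  # Op 1
        for k, (x0, a, b) in enumerate(self.pieces):
            if x0 <= x < self._end(k):
                return a * x + b

    def inv(self, y):    # Op 2; well-defined since R is strictly increasing
        for k, (x0, a, b) in enumerate(self.pieces):
            if y < a * self._end(k) + b:
                return (y - b) / a

    def op3(self, xi):   # Op 3: the unique x with x + R(x)/xi = 1
        for k, (x0, a, b) in enumerate(self.pieces):
            x = (xi - b) / (xi + a)
            if x0 <= x < self._end(k):
                return x

def next_R(R, di, dj):   # build R_{i+1} from R = R_i, with di = d_i, dj = d_{i+1}
    xi, fd2 = 2.0 * di * dj, 4.0 * dj * dj
    cut = R.inv(0.0)                       # the value w_i^o = R_i^{-1}(0)
    pieces = []
    if cut < 1.0:                          # Case 2: breakpoint at 1 - w_i^s
        cut = R.op3(xi)
        for k, (x0, a, b) in enumerate(R.pieces):
            lo, hi = max(x0, cut), min(R._end(k), 1.0)
            if lo < hi:                    # x_i in (w_i^s, 1] maps, reversed, by (4)
                pieces.append((1.0 - hi, a + 2.0 * xi + fd2, -a - b - xi))
        pieces.sort()
    for k, (x0, a, b) in enumerate(R.pieces):
        if R._end(k) > cut:                # x_i >= cut maps by (2)
            start = max(0.0, R.value(max(x0, cut)) / xi)
            pieces.append((start, fd2 - xi * xi / a, xi * b / a))
    return PWL(pieces)

def optimal_weights(p):  # p: sorted point coordinates, n >= 3
    n = len(p)
    d = [p[k + 1] - p[k] for k in range(n - 1)]    # d[k] is d_{k+1} of the text
    R = {2: PWL([(0.0, 4.0 * d[1] ** 2, -2.0 * d[0] * d[1]),
                 (2.0 * d[0] / d[1], 3.0 * d[1] ** 2, 0.0)])}
    for i in range(2, n - 1):
        R[i + 1] = next_R(R[i], d[i - 1], d[i])
    w = [0.0] * n                                  # w[i] stands for w_i
    w[n - 1] = max(R[n - 1].inv(0.0), 1.0)
    for i in range(n - 2, 1, -1):
        w[i] = max(R[i].inv(2.0 * d[i - 1] * d[i] * w[i + 1]), 1.0 - w[i + 1])
    w[1] = max(d[1] / (2.0 * d[0]) * w[2], 1.0)
    return w[1:]

print(optimal_weights([0.0, 1.0, 1.001, 2.001]))   # approximately [1, 1000, 1]
\end{verbatim}
On the four-point example from the introduction ($d_1 = d_3 = 1$, $d_2 = \varepsilon$), the sketch returns $w_1 = w_3 = 1$ and $w_2 = 1/\varepsilon$, as expected.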
\subsection{A quadratic time implementation} \label{sec:fast} Since we have no polynomial bound on the number of linear pieces of the function~$R_{n-1}$, we turn to an implicit representation of~$R_{i}$. The representation is based on the fact that there is a linear relationship between points on the graphs of the functions~$R_{i}$ and~$R_{i+1}$. Concretely, let $y_{i} = R_{i}(x_{i})$, and $y_{i+1} = R_{i+1}(x_{i+1})$. Recall the following relation from \eqref{eq:2} for the case of $\wio \geq 1$: \begin{align*} R_{i+1}(w_{i+1}) & = 4d_{i+1}^{2} w_{i+1} - \ddi R_{i}^{-1}(\ddi w_{i+1}). \end{align*} We can express this relation as a system of two equations: \begin{align*} y_{i+1} & = 4d_{i+1}^{2} x_{i+1} - \ddi x_{i}, \\ y_{i} & = \ddi x_{i+1}. \end{align*} This can be rewritten as \begin{align*} y_{i+1} & = 4d_{i+1}^{2} y_{i} / \ddi - \ddi x_{i}, \\ x_{i+1} & = y_{i} / \ddi, \end{align*} or in matrix notation \begin{align} \vectorii & = M_{i+1} \times \vectori, \text{ where } M_{i+1} = \left( \begin{matrix} 0 & 1/\ddi & 0 \\ -\ddi & 4d_{i+1}^{2}/\ddi & 0 \\ 0 & 0 & 1 \end{matrix}\right). \label{eq:10} \end{align} On the other hand, if $\wio < 1$, then $R_{i+1}$ has a breakpoint at $1-\wis$. The value $\wis$ can be obtained by applying Op~3 to $R_i$. We compute the coordinates of this breakpoint: $(1-\wis, R_{i+1}(1-\wis))$. Note that $R_{i+1}(1-\wis) = 4d_{i+1}^2(1-\wis) - \xi_i R_i^{-1}(\xi_i(1-\wis))$, which can be computed by applying Op~2 to $R_i$. For $x_{i+1} > 1-\wis$, the relationship between $(x_{i}, y_{i})$ and $(x_{i+1}, y_{i+1})$ is given by \eqref{eq:10}. For $0 \leq x_{i+1} < 1-\wis$, recall from \eqref{eq:3} that \begin{align*} R_{i+1}(w_{i+1}) & = -R_{i}(1-w_{i+1}) + (2\ddi + 4d_{i+1}^{2})w_{i+1} - \ddi. \end{align*} We again rewrite this as \begin{align*} y_{i+1} & = -y_{i} + (2\ddi + 4d_{i+1}^{2})x_{i+1} - \ddi, \\ x_{i} & = 1 - x_{i+1}, \end{align*} which gives \begin{align*} y_{i+1} & = -y_{i} + (2\ddi + 4d_{i+1}^{2})(1- x_{i}) - \ddi, \\ x_{i+1} & = 1 - x_{i}, \end{align*} or in matrix notation: \begin{align*} \vectorii & = L_{i+1} \times \vectori, \text{ where } L_{i+1} = \left( \begin{matrix} -1 & 0 & 1 \\ -2\ddi -4d_{i+1}^{2} & -1 & \ddi + 4d_{i+1}^{2} \\ 0 & 0 & 1 \end{matrix}\right). \end{align*} The function~$R_{i+1}$ is stored by storing the breakpoint~$(\xs_{i+1}, \ys_{i+1}) = (1-\wis,R_{i+1}(1-\wis))$ as well as the two matrices~$L_{i+1}$ and~$M_{i+1}$. Note that the first function~$R_{2}$ is stored explicitly. A new function~$R_{i+1}$ can be constructed in constant time plus a constant number of queries on~$R_{i}$, and requires constant space only. We now explain how the three operations Op~1, Op~2, and Op~3 are implemented on this representation of the function~$R_{i}$. For an operation on~$R_{i}$, we progressively build transformation matrices $T_{i}^{i}, T_{i-1}^{i}, T_{i-2}^{i}, \dots, T_{3}^{i}, T_{2}^{i}$ such that $(x_{i}, y_{i}, 1) = T_{j}^{i} \times (x_{j}, y_{j}, 1)$ for every~$2 \leq j \leq i$ in a neighborhood of the query. Once we obtain~$T_{2}^{i}$, we use our explicit representation of~$R_{2}$ to express~$y_{i}$ as a linear function of~$x_{i}$ in a neighborhood of the query, which then allows us to answer the query. The first matrix~$T_{i}^{i}$ is the identity matrix. We obtain $T_{j}^{i}$ from~$T_{j+1}^{i}$, for $j \in [2,i-1]$, as follows: If~$R_{j+1}$ has no breakpoint, then $T_{j}^{i} = T_{j+1}^{i}\times M_{j+1}$.
If $R_{j+1}$ has a breakpoint~$(\xs_{j+1}, \ys_{j+1})$, then either $T_{j}^{i} = T_{j+1}^{i}\times M_{j+1}$ or $T_{j}^{i} = T_{j+1}^{i}\times L_{j+1}$, depending on which side of the breakpoint applies to the answer of the query. We can decide this by comparing $(x', y') = T_{j+1}^{i} \times (\xs_{j+1}, \ys_{j+1}, 1)$ with the query. More precisely, for Op~1 we compare the input~$x$ with~$x'$, for Op~2 we compare the input~$y$ with~$y'$, and for Op~3 we compute $x' + y'/\xi$ and compare with~$1$. It follows that our implicit representation of~$R_{i}$ supports all three operations on~$R_{i}$ in time~$O(i)$, and so the total time to construct~$R_{n-1}$ is~$O(n^{2})$. \begin{theorem} \label{thm:alg} Given $n$ points on a line, we can compute an optimal set of weights for minimizing the quality measure $Q$ in $O(n^2)$ time. \end{theorem} \section{Experiments} We have implemented both the explicit and implicit representations in Python. For comparison, we used the quadratic solver \textsc{cvxopt}\footnote{\url{http://cvxopt.org}} with the modeling library \textsc{picos}\footnote{\url{http://picos.zib.de}} (our code is available at \url{https://github.com/otfried/graph-fitting-1d}). \paragraph{Running times.} To compare the running time of the different methods, we first generated problem instances randomly, by setting each interpoint distance~$d_{i}$ to an independent random value, taken uniformly from the integers~$\{1, 2, \dots, 50\}$. Table~\ref{tab:1} shows the results. \begin{table} \begin{center} \begin{tabular}{|c|ccc|} \hline $n$ & QP & Explicit & Implicit \\ \hline 100 & 0.413 & 0.00809 & 0.129 \\ 200 & 1.51 & 0.0183 & 0.353 \\ 400 & 6.38 & 0.0536 & 1.3 \\ 800 & 32.3 & 0.127 & 6.25 \\ 1600 & 208 & 0.217 & 17.1 \\ 3200 & 1,300 & 0.406 & 89.2 \\ \hline \end{tabular} \end{center} \caption{Running times of the three methods (in seconds).} \label{tab:1} \end{table} Perhaps surprisingly, the simple method that represents each~$R_{i}$ as a sequence of linear functions outperforms the other two methods. Apparently, at least for random interpoint distances, the number of linear pieces of these functions does not grow fast. \paragraph{Number of pieces.} To investigate this further, we have generated problem instances, with various distributions used for the random generation of interpoint distances. The results can be seen in Table~\ref{tab:2}. In the small uniform distribution, interpoint distances are taken uniformly from the set~$\{1,2,\dots, 50\}$, for the large uniform distribution from the set~$\{1, 2, \dots, 10,000\}$. In the third column, interpoint distances are sampled from a Gaussian distribution with mean~$100$ and standard deviation~$30$. For each distribution and~$n$, we compute the functions~$R_{2}, R_{3}, \dots, R_{n-1}$, and take the maximum of the number of pieces over these $n-2$ functions. We repeat each experiment~$1,000$ times, and show both the average and the maximum of the number of pieces found.
\begin{table} \begin{center} \begin{tabular}{|c||cc|cc|cc|} \hline & \multicolumn{2}{c|}{small uniform} & \multicolumn{2}{c|}{large uniform} & \multicolumn{2}{c|}{Gaussian} \\ $n$ & avg & max & avg & max & avg & max \\ \hline 100 & 13.753 & 33 & 13.726 & 31 & 13.109 & 31 \\ 1000 & 23.613 & 48 & 23.483 & 51 & 22.246 & 49 \\ 10000 & 33.793 & 73 & 35.329 & 65 & 31.529 & 57 \\ 100000 & 42.634 & 125 & 48.279 & 95 & 41.701 & 76 \\ \hline \end{tabular} \end{center} \caption{Average and maximum number of pieces for three different distributions.} \label{tab:2} \end{table} The table explains why the simple method performs so well in practice: as long as the number of pieces remains small, its running time is essentially linear. In fact, we are not even using binary search to implement the three operations on the piecewise linear functions. \paragraph{Precision.} The \textsc{cvxopt} solver uses an iterative procedure in floating point arithmetic, and so its precision is limited. With the tolerance set to the maximum feasible value of~$10^{-6}$, some weights differ from our algorithm's solution by as much as~$0.05$. Our algorithm can easily be implemented using exact or high-precision arithmetic. In fact, in our implementation it suffices to provide the initial distance vector using Python \verb+Fraction+ objects for exact rational arithmetic, or as high-precision floating point numbers from the \verb+mpmath+ Python library.\footnote{\url{http://mpmath.org}} Using rational arithmetic, computing the exact optimal solution for 3200~points with integer interpoint distances from the set~$\{1,2,\dots,50\}$ takes between~1.4 and 4~seconds. \section{Conclusion} While in practice the explicit representation of the functions~$R_{i}$ works well, we do not have a polynomial time bound on the running time using this method. Future work should determine if this method can indeed be slow on some instances, or if the number of pieces can be bounded. It would also be nice to obtain an algorithm for higher dimensions that is not based on a quadratic programming solver. In two dimensions, we have conducted some experiments that indicate that the Delaunay triangulation of the point set contains a well-fitting graph. If we choose the graph edges only from the Delaunay edges and compute the optimal edge weights, the resulting quality measure is very close to the best quality measure in the unrestricted case. It is conceivable that one can obtain a provably good approximation from the Delaunay triangulation.
\section{\large Introduction} Coupled cavity arrays have recently been proposed as a novel system for realizing quantum computation \cite{angelakis-ekert04} and for simulations of quantum many-body systems \cite{simulation of many body system}. More recently, the steady-state polaritonic \cite{two-state} and membrane entanglement \cite{ple-hue-har} of driven cavity arrays were studied in realistic dissipative environments. There has also been an attempt to relate coupled cavity arrays to Josephson oscillations \cite{coherent control of photon emission}.

At finite temperature, one might expect the steady state of the coupled-cavity system to be a thermal state, since standard statistical mechanics tells us that a system interacting with a large reservoir at a fixed temperature will eventually relax to an equilibrium state characterized by the Boltzmann distribution with a well-defined temperature, namely that of the reservoir. However, such thermal relaxation occurs only for some simple systems, such as a single empty cavity coupled to a thermal bath \cite{Carmichael}. For many other systems, e.g. coupled cavities with external pumping lasers, the steady state need not be a thermal state, and its deviation from a thermal state depends on various factors: inter-cavity couplings, the presence of the pump, detuning, and so forth.

The purpose of this article is twofold: firstly, we wish to demonstrate the possibility of achieving coherent control of the steady-state entanglement between mixed light-matter excitations generated in macroscopically separated atom-cavity systems; secondly, we hope to elucidate the conditions under which the steady state differs from a thermal state, especially the relation between the thermalization of the system and the correlations of the subsystems, using the coupled atom-cavity system as an example.

This paper is organized as follows. In Sec. II, we introduce the setup and the Hamiltonian for coherent control of the steady-state entanglement. In Sec. III, we derive an effective equation for the dynamics of the system. In Sec. IV, we discuss the coherent control of the steady-state entanglement. In Sec. V, we discuss an alternative setup: two coupled cavities with three driving fields. In Sec. VI, we discuss the thermalization of two defect cavities coupled to one driven wave guide in between. In Sec. VII, we summarize our results.

\section{\large The setup and the Hamiltonian}\label{chapter3-3cavity-setup-Hamiltonian} The setup we study is shown in Fig. \ref{Fig.3-cav}. It contains three interacting atom-cavity systems ($S_{1}$, $S_{2}$, $S_{3}$) connected by three waveguides/fibers. Each waveguide/fiber is pumped by a classical field with a phase $\phi_{i}$ ($i=1,2,3$). The setup could be realized in a variety of cavity-quantum-electrodynamics (cavity-QED) technologies, including photonic crystals, circuit QED, toroidal cavities connected through fibers, Fabry-Perot cavities, and coupled defect cavities interacting with quantum dots \cite{pbgs,rest}. Light from the connecting waveguides/fibers can directly couple to the photonic modes of the atom-cavity systems through tunneling or evanescent coupling. In each atom-cavity site, we assume the interaction and the corresponding nonlinearity are strong enough that at most one polariton is excited \cite{two-state}.
\begin{figure}[t] \epsfxsize=.30\textwidth \epsfysize=.25\textwidth \centerline{\epsffile{Dimitris9.eps}} \caption[Schematic representation of three interacting cavity-atom systems]{(color online). Schematic representation of three interacting cavity-atom systems ($S_{1}$, $S_{2}$, $S_{3}$) based on a possible implementation using photonic crystals (for illustration purposes only): the connecting wave guides carrying the driving classical fields with phases $\phi_{1}$, $\phi_{2}$, $\phi_{3}$ are replaced by fibers or stripline microresonators for different implementations \cite{pbgs,rest}. The three wave guides and three driving fields are labeled with the same indices as the phases $\phi_{1}$, $\phi_{2}$, $\phi_{3}$.} \label{Fig.3-cav} \end{figure}

The Hamiltonian describing the system is \begin{align}\label{H-original-1} H_{0}=&H_{a,0}+H_{p,0}+H_{J,0},\\ H_{a,0}=&\sum_{i=1}^{3}\omega_{c,i}a_{i}^{\dagger}a_{i},\,\,\,H_{p,0}=\sum_{i=1}^{3}\omega_{p,i}P_{i}^{\dagger}P_{i},\end{align} \begin{align} H_{J,0}=\sum_{i=1}^{3}J_{i}(a_{i}^{\dagger}(P_{i}+P_{i+1})+a_{i}(P_{i}^{\dagger}+P_{i+1}^{\dagger}))+\sum_{i=1}^{3}(\alpha_{i}e^{i(\phi_{i}-\omega_{d}t)}a_{i}^{\dagger}+\alpha_{i}e^{-i(\phi_{i}-\omega_{d}t)}a_{i}),\end{align} where $H_{a,0}$ and $H_{p,0}$ are the free Hamiltonians of the wave guides and cavities, with $a_{i}^{\dagger}$, $a_{i}$ the field operators of the single-mode wave guides and $\omega_{c,i}$ ($\omega_{p,i}$) the frequency of the $i$th waveguide mode (the polariton in the $i$th cavity). $P_{i}^{\dagger}$ ($P_{i}$) are the operators describing the creation (annihilation) of a mixed atom-photon excitation (polariton) at the $i$th cavity-atom system ($P_{4}\equiv P_{1}$). The first summation in $H_{J,0}$ describes couplings between cavities and wave guides, with $J_{i}$ the coupling strength between the photon mode in the $i$th waveguide and the adjacent two polaritons. The second summation in $H_{J,0}$ describes the classical driving of the wave guides, where $\alpha_{i}$ is proportional to the amplitude of the $i$th driving field, $\phi_{i}$ is its phase, and $\omega_{d}$ is the frequency of the driving fields.

The Hamiltonian $H_{0}$ in Eq. (\ref{H-original-1}) is explicitly time-dependent. To remove the time dependence, we make the following transformation \cite{rotating-frame}: \begin{align} H=U_{1}^{\dagger}H_{0}U_{1}-iU_{1}^{\dagger}\frac{\partial U_{1}}{\partial t},\end{align} where $U_{1}=e^{-it\omega_{d}\sum_{i=1}^{3}(a_{i}^{\dagger}a_{i}+P_{i}^{\dagger}P_{i})}$. After a straightforward calculation, we obtain \begin{align}\label{H-original} H&=H_{a}+H_{p}+H_{J},\\ H_{a}&=\sum_{i=1}^{3}(\omega_{c,i}-\omega_{d})a_{i}^{\dagger}a_{i},\,\,\,H_{p}=\sum_{i=1}^{3}(\omega_{p,i}-\omega_{d})P_{i}^{\dagger}P_{i},\label{chapter2-Hamiltonian-empty-doped-cavity}\\ H_{J}&=\sum_{i=1}^{3}J_{i}(a_{i}^{\dagger}(P_{i}+P_{i+1})+a_{i}(P_{i}^{\dagger}+P_{i+1}^{\dagger}))+\sum_{i=1}^{3}(\alpha_{i}e^{i\phi_{i}}a_{i}^{\dagger}+\alpha_{i}e^{-i\phi_{i}}a_{i}).\label{chapter2-Hamiltonian-interaction-driving}\end{align} The density matrix $\rho(t)$ associated with $H$ is related to the density matrix $\rho_{0}(t)$ associated with $H_{0}$ as follows: \begin{eqnarray} \rho(t)=U_{1}^{\dagger}\rho_{0}(t)U_{1}.\end{eqnarray} We say that the new Hamiltonian $H$ is written in the rotating frame of the driving lasers.
\section{\large The dynamics of the system}\label{chapter3-3cavity-master-equation} In this section, we derive the dynamical equation for the system. The polaritons and waveguide modes described in the last section are assumed to decay with rates $\gamma$ and $\kappa$, respectively. The master equation for the whole system density operator $R$ is \begin{align} \frac{dR}{dt}=L_{a}R+L_{p}R+L_{J}R,\label{me}\end{align} \begin{align}\label{super operator 1} L_{a}R&=-i[H_{a}\,,R]+L_{a}^{\prime}R,\\\label{super operator 2}L_{p}R&=-i[H_{p}\,,R]+L_{p}^{\prime}R,\\\label{super operator 3}L_{J}R&=-i[H_{J}\,,R],\end{align} where $H_{a}$, $H_{p}$ and $H_{J}$ are given by Eqs. (\ref{chapter2-Hamiltonian-empty-doped-cavity}) and (\ref{chapter2-Hamiltonian-interaction-driving}), respectively, and \begin{align} L_{a}^{\prime}R=\frac{\kappa}{2}\sum_{i=1}^{3}(2a_{i}Ra_{i}^{\dagger}-a_{i}^{\dagger}a_{i}R-Ra_{i}^{\dagger}a_{i}),\,\,\,L_{p}^{\prime}R=\frac{\gamma}{2}\sum_{i=1}^{3}(2P_{i}RP_{i}^{\dagger}-P_{i}^{\dagger}P_{i}R-RP_{i}^{\dagger}P_{i}). \end{align}

We use the projection operator method of Ref. \cite{two-state}. To this end, we define the projector $PR=r_{ss}\otimes \textrm{tr}_{a_{1},a_{2},a_{3}}R$, where $r_{ss}$, satisfying $L_{a}r_{ss}=0$, is the equilibrium state of the three wave guides, which is close to the vacuum state $\ket{000}\bra{000}$ when weak driving of the wave guides is assumed, i.e. $\alpha_{i}\le J_{i}\ll\kappa$ ($i=1,2,3$). The orthogonal complement of $P$ is $Q=1-P$. The operators $P$ and $Q$ have the following properties \cite{projection-operator-method}: \begin{eqnarray}\label{property 1} PL_{p}&=&L_{p}P\,,\\\label{property 2}PL_{a}&=&L_{a}P=0,\\\label{property 3}PL_{J}P&=&0.\end{eqnarray} Applying $P$ and $Q$ respectively to Eq. (\ref{me}) and using the properties (\ref{property 1}), (\ref{property 2}) and (\ref{property 3}), we get \begin{eqnarray}\label{p} P\frac{dR}{dt}&=&PL_{p}PR(t)+PL_{J}QR(t),\\\label{q}Q\frac{dR}{dt}&=&Q(L_{a}+L_{p}+L_{J})QR(t)+QL_{J}PR(t).\end{eqnarray} Formally integrating (\ref{q}) gives \begin{eqnarray} QR(t)=\int_{-\infty}^{t}Qe^{(L_{a}+L_{p}+L_{J})(t-t^{\prime})}L_{J}PR(t^{\prime})dt^{\prime},\end{eqnarray} which is then substituted into Eq. (\ref{p}). For the case $J_{i}\ll\kappa$ ($i=1,2,3$), we keep only terms up to second order in $J_{i}$\,. By tracing out $a_{1}$, $a_{2}$ and $a_{3}$, we obtain \begin{eqnarray} \frac{d\rho}{dt}&=&-i[H_{p}\,,\rho(t)]+L_{p}^{\prime}\rho(t)\nonumber\\ &+&\int_{0}^{\infty}dt^{\prime}\textrm{tr}_{a_{1},a_{2},a_{3}}[{L_{J}e^{(L_{a}+L_{p})t^{\prime}}}L_{J}e^{-L_{p}t^{\prime}}(r_{ss}\otimes \rho)].\end{eqnarray} Substituting $L_{a}$, $L_{p}$ and $L_{J}$ with the expressions (\ref{super operator 1}), (\ref{super operator 2}) and (\ref{super operator 3}), we get \begin{eqnarray}\label{eq-eff} \frac{d\rho}{dt}=&-&i[H_{\mbox{\rm eff}}\,,\rho]+\sum_{i=1}^{3}(\Gamma_{i-1}z_{i-1}+\Gamma_{i}z_{i})F_{i,i}^{P}\rho\nonumber\\ &+&\sum_{i=1}^{3}\Gamma_{i}(F_{i,i+1}^{P}\rho+F_{i+1,i}^{P}\rho)\,,\end{eqnarray} with $\displaystyle H_{\mbox{\rm eff}}=\sum_{i=1}^{3}(\omega_{p,i}-\omega_{d})P_{i}^{\dagger}P_{i} +\sum_{i=1}^{3}\Gamma_{i}y_{i}(P_{i}^{\dagger}P_{i}+P_{i+1}^{\dagger}P_{i+1}) +\sum_{i=1}^{3}\big(\Gamma_{i}y_{i}P_{i}^{\dagger}P_{i+1} +\Gamma_{i}x_{i}(P_{i}^{\dagger}+P_{i+1}^{\dagger})\big)+h.c.\,$,$\vspace*{1mm}$ where $h.c.$ denotes the Hermitian conjugate of the preceding summation.
The first two summations in $\displaystyle H_{\mbox{\rm eff}}$ cancel each other for a proper choice of $\omega_{p,i}$\,. $F_{i,j}^{P}(\rho)=2P_{i}\rho P_{j}^{\dagger}-P_{i}^{\dagger}P_{j}\rho-\rho P_{i}^{\dagger}P_{j}\,$, $\displaystyle \Gamma_{i}=2J_{i}^{2}\kappa/(\kappa^{2}+4\Delta_{i}^{2})$,$\vspace*{2mm}$ $x_{i}=-\alpha_{i}e^{i\phi_{i}}(2\Delta_{i}+i\kappa)/(J_{i}\kappa)$, $y_{i}=-2\Delta_{i}/\kappa$, $\Delta_{i}=\omega_{c,i}-(\omega_{p,i}+\omega_{p,i+1})/2$, $\omega_{p,4}\equiv \omega_{p,1}$, $z_{i}=1+\gamma/(4\Gamma_{i})$, $\Gamma_{0}\equiv\Gamma_{3}$ and $z_{0}\equiv z_{3}$.

It can be seen from Eq. (\ref{eq-eff}) that the couplings and detunings between each wave guide and its adjacent two polaritons induce an effective interaction between them, given by $\Gamma_{i}y_{i}$ (see $H_{\textrm{eff}}$). The driving on the wave guides is effectively transferred to a driving on the polaritons ($\Gamma_{i}x_{i}$ in $H_{\textrm{eff}}$), which decay with rates $\Gamma_{i-1}z_{i-1}+\Gamma_{i}z_{i}=\Gamma_{i-1}+\Gamma_{i}+\gamma/2$. Since $\Gamma_{i}$ is related to $\kappa$, the polaritons effectively have two different decay channels: they can decay directly into the surroundings at rate $\gamma$, or they can dissipate energy via the coupling $J_{i-1}$ or $J_{i}$ ($J_{0}\triangleq J_{3}$) into the two adjacent leaky wave guides (which themselves decay at rate $\kappa$). We note that the second channel also mixes the polaritons' operators, as seen in the second line of Eq. (\ref{eq-eff}). This mixing is actually one of the main reasons for entanglement creation among the polaritons; the other two contributing factors are the interactions among the polaritons and the driving on them.

\section{\large Coherent control of the steady-state entanglement}\label{chapter3-3cavity-steady-state} We now derive the steady state $\rho_{ss}$ by requiring that $\displaystyle\frac{d\rho_{ss}}{dt}=0$ in Eq. (\ref{eq-eff}). This is done numerically, due to the large number of coupled equations involved. For the three-polariton density matrix, we trace out the polaritonic degree of freedom of cavity 1 and calculate the polaritonic entanglement of formation between cavities 2 and 3, using the concurrence as a measure \cite{Woot}. The concurrence $C(\rho_{ss})$ is effectively a function of the parameters $x_{i}$\,, $y_{i}$ and $z_{i}$. We perform a numerical optimization of $C(\rho_{ss})$ by varying these parameters and find that $C(\rho_{ss})$ is largest when $\Gamma_{2}\ll\Gamma_{1}=\Gamma_{3}$ and $x_{3}=-x_{1}$, i.e. when the first and third driving fields have equal intensity but opposite phases. We note that the relation $\Gamma_{2}\ll \Gamma_{1}=\Gamma_{3}$ indicates that the coupling between the two cavities in question is much weaker than the coupling between each of these cavities and the third cavity. Also, at the point of maximum entanglement, the polariton in cavity 1 is found to be almost in a pure ground state, and therefore almost uncorrelated with the polaritons in cavities 2 and 3. Thus, the total density matrix $\rho\approx \ket{\textrm{ground}}\bra{\textrm{ground}}\otimes \rho_{2,3}$. Although this result initially looks counter-intuitive, it can be explained as follows: the maximum entanglement between two parties (cavities 2 and 3) in a three-party system is attained when the state of the third party (cavity 1) nearly factorizes in the combined three-party state.
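To make the steady-state computation concrete, the following QuTiP sketch illustrates it for three two-level polaritons. This is an illustration, not the code used for our figures: the parameter values are arbitrary, and the dissipator of Eq. (\ref{eq-eff}) is implemented by rewriting the mixing terms $F^{P}_{i,i+1}+F^{P}_{i+1,i}$ as collective jump operators $P_{i}+P_{i+1}$, with the residual local decay coefficient $\gamma/2$ following from $z_{i}=1+\gamma/(4\Gamma_{i})$.

\begin{verbatim}
import numpy as np
import qutip as qt

# Illustrative parameters: Gamma_2 << Gamma_1 = Gamma_3, x_3 = -x_1
Gam = [1.0, 0.05, 1.0]
y = [0.5, 0.5, 0.5]
x = [1.67, 0.0, -1.67]
gamma = 0.04

# Lowering operators P_1, P_2, P_3 on the three-polariton space
P = [qt.tensor([qt.destroy(2) if j == i else qt.qeye(2)
                for j in range(3)]) for i in range(3)]

# Effective Hamiltonian; the free terms and frequency shifts are
# assumed canceled by the choice of omega_{p,i}
H = 0
for i in range(3):
    k = (i + 1) % 3
    H += Gam[i] * y[i] * P[i].dag() * P[k]
    H += Gam[i] * x[i] * (P[i].dag() + P[k].dag())
H = H + H.dag()

# Dissipators: local decay plus collective jumps P_i + P_{i+1}
# encoding the mixing terms of Eq. (eq-eff)
c_ops = [np.sqrt(gamma) * P[i] for i in range(3)]
c_ops += [np.sqrt(2 * Gam[i]) * (P[i] + P[(i + 1) % 3])
          for i in range(3)]

rho_ss = qt.steadystate(H, c_ops)
rho_23 = rho_ss.ptrace([1, 2])   # trace out the polariton in cavity 1
print("C(rho_ss) =", qt.concurrence(rho_23))
\end{verbatim}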
The fact that this happens for relatively strong couplings $J_{12}\equiv J_1$ and $J_{13} \equiv J_3$ compared to $J_{23}\equiv J_2$ is reminiscent of a coherent process taking place. It is interesting to observe an analogy here with the case of coherently superposing two initially uncoupled ground states in a $\Lambda$-type quantum system through an excited state, using two classical fields to mediate the interaction \cite{Scully, EIT-Harris}. In Fig. \ref{chapter3-coherent-trapping}, we compare our setup for entanglement control of the three-coupled-cavity system with coherent population trapping in a three-level atom. For the latter, if the two driving fields have opposite phases and the atom's initial state is $(\ket{2}+\ket{3})/\sqrt{2}$, there will be no population in the excited state $\ket{1}$, and the atom will remain in a superposition of the states $\ket{2}$ and $\ket{3}$. Note that the states $\ket{2}$ and $\ket{3}$ are not coupled in this case; their superposition is established by quantum interference in the state $\ket{1}$ \cite{Scully}. It appears that the quantum correlation in our setup is somewhat ``trapped'' in cavities 2 and 3 if the driving fields 1 and 3 have opposite phases (cavities 2 and 3 are almost uncoupled; in this case, we have numerically verified that the driving field between them has almost no influence on their steady-state entanglement).

\begin{figure}[h] \epsfxsize=.25\textwidth \epsfysize=.2\textwidth \centerline{\epsffile{3-level-atom.eps}} \caption[Coherent trapping of correlations in a 3-cavity system]{(color online). Coherent trapping in a $\Lambda$-type three-level atom driven by two classical fields on resonance, where $\omega_{1}$ and $\omega_{2}$ are the frequencies of the two driving fields. If the states $\ket{2}$ and $\ket{3}$ are degenerate, one could use two laser fields with different polarizations to distinguish the two driving paths ($\ket{2}$ to $\ket{1}$, and $\ket{3}$ to $\ket{1}$).} \label{chapter3-coherent-trapping} \end{figure}

The above observation is further supported by the fact that $C(\rho_{ss})$ varies with the phases of the first and third driving fields. In Fig. \ref{Fig.phase13} we plot $C(\rho_{ss})$ as a function of the phases of the driving fields with $z_{1}=z_{3}=1.01$ and $z_{2}=11$. When the phase difference is $\phi_{1}-\phi_{3}=(2k+1)\pi$ ($k$ is an integer), the concurrence attains a maximum of 0.417. For general phase relations, an oscillatory behavior characteristic of the expected coherent effect takes place. There is a corresponding oscillatory behavior for the $\Lambda$-type three-level atom: the sum of the squared moduli of the amplitudes of the states $\ket{2}$ and $\ket{3}$ is a periodic function of the phase difference between the two driving fields, and takes a maximum when the phases are opposite \cite{Scully}.

\begin{figure} \epsfxsize=.35\textwidth \epsfysize=.3\textwidth \centerline{\epsffile{phase13new.eps}} \caption[The coherent effect of the entanglement in a 3-cavity system]{(color online). The concurrence between the polaritons in cavity 2 and cavity 3 as a function of $\phi_{1}$ and $\phi_{3}$. $x_{1}=1.67e^{i\phi_{1}}$, $x_{3}=1.67e^{i\phi_{3}}$. When $\phi_{1}-\phi_{3}=(2k+1)\pi$ ($k$ is an integer), the concurrence reaches a maximum of 0.417.
The upper left inset shows the cross-section at $\phi_{3}=0$.} \label{Fig.phase13} \end{figure}

\section{\large An alternative setup: Two coupled cavities with three driving fields} In Section \ref{chapter3-3cavity-steady-state}, we found that when the entanglement between two of the three cavities reaches its maximum value, the third cavity nearly decouples from the other two. It therefore seems that the third cavity plays no role in the establishment of the entanglement between the other two cavities. To check whether this argument is correct, and to identify the role of the third cavity in the entanglement generation and control, we remove the third cavity and investigate the entanglement of the remaining two cavities. This new setup is shown in Fig. \ref{Fig.2-cav}: three wave guides are coupled to two cavity-atom systems, and the three wave guides are driven by three respective classical fields. We analyze the polaritonic entanglement between cavities 2 and 3 (relabeled as $S_{1}$ and $S_{2}$ in Fig. \ref{Fig.2-cav}).

\begin{figure} \epsfxsize=.3\textwidth \epsfysize=0.25\textwidth \centerline{\epsffile{Dimitris10.eps}} \caption[Schematic diagram of the two cavity-atom systems]{(color online). Schematic diagram of the two coupled defect cavities with three wave guides carrying the three respective classical laser fields. Note that each waveguide carrying classical fields can also be replaced by fibers or stripline microresonators for different implementation technologies \cite{pbgs,rest}.} \label{Fig.2-cav} \end{figure}

The Hamiltonian and the derivation of the effective master equation are similar to those for the three-cavity setup in Sections \ref{chapter3-3cavity-setup-Hamiltonian} and \ref{chapter3-3cavity-master-equation}. We therefore omit the detailed derivation and provide only the final effective master equation: \begin{eqnarray}\label{eq-eff-2} \frac{d\rho}{dt}=&-&i[H_{\mbox{\rm eff}}'\,,\rho]\nonumber\\ &+&(\Gamma_{2}z_{2}+\Gamma_{1})F_{1,1}^{P}\rho+(\Gamma_{2}z_{2}+\Gamma_{3})F_{2,2}^{P}\rho\nonumber\\ &+&\Gamma_{2}(F_{1,2}^{P}\rho+F_{2,1}^{P}\rho)\,,\end{eqnarray} with $\displaystyle H_{\mbox{\rm eff}}'=\Gamma_{2}y_{2}P_{1}^{\dagger}P_{2} +\sum_{i=1}^{2}(\Gamma_{i}x_{i}+\Gamma_{i+1}x_{i+1})P_{i}^{\dagger}+h.c.\,$,$\vspace*{1mm}$ where $h.c.$ denotes the Hermitian conjugate of the preceding summation. $F_{i,j}^{P}(\rho)$ is defined in Section \ref{chapter3-3cavity-master-equation} as $2P_{i}\rho P_{j}^{\dagger}-P_{i}^{\dagger}P_{j}\rho-\rho P_{i}^{\dagger}P_{j}\,$, $\displaystyle \Gamma_{i}=2J_{i}^{2}\kappa/(\kappa^{2}+4\Delta_{i}^{2})$,$\vspace*{2mm}$ $x_{i}=-\alpha_{i}e^{i\phi_{i}}(2\Delta_{i}+i\kappa)/(J_{i}\kappa)$, $y_{2}=-2\Delta_{2}/\kappa$, $\Delta_{1}=\omega_{c,1}-\omega_{p,1}$, $\Delta_{2}=\omega_{c,1}-(\omega_{p,1}+\omega_{p,3})/2$, $\Delta_{3}=\omega_{c,1}-\omega_{p,3}$, $z_{i}=1+\gamma/(4\Gamma_{i})$. The optimization of this entanglement gives parameter values similar to the ones used above, except that the relation among the $\Gamma_{i}$ is reversed, i.e. $\Gamma_{2}\gg\Gamma_{1}=\Gamma_{3}$; the concurrence now reaches a maximum of 0.470. Again, the dependence $\phi_{1}-\phi_{3}=(2k+1)\pi$ ($k$ is an integer) is apparent (see Fig. \ref{Fig.phase23}). However, if we compare the insets in Fig. \ref{Fig.phase13} and Fig. \ref{Fig.phase23}, showing the cross-sectional plots of the concurrence at $\phi_{3}=0$, we see that the plot in Fig. \ref{Fig.phase13} has a narrower peak, whereas the plot in Fig.
\ref{Fig.phase23} is broader. This implies that the maximum concurrence for the configuration in Fig. \ref{Fig.2-cav} is substantially more stable against variations in the phases $\phi_{1}$ and $\phi_{3}$ than that in Fig. \ref{Fig.3-cav}. However, when the dissipation (parametrized by $\gamma$ in $z_{i}$) increases, the entanglement in the latter configuration decreases more slowly than in the former; we have verified this numerically. Thus we conclude that cavity 1 in Fig. \ref{Fig.3-cav} not only mediates coherently between cavities 2 and 3, but also stabilizes the amount of entanglement between the two cavities.

\begin{figure}[t] \epsfxsize=.35\textwidth \epsfysize=0.3\textwidth \centerline{\epsffile{phase23.eps}} \caption[The coherent effect of the entanglement in a 2-cavity system]{(color online). The concurrence between the two cavities of Fig. \ref{Fig.2-cav} as a function of $\phi_{1}$ and $\phi_{3}$. $x_{2}=y_{2}=0$, $x_{1}=5e^{i\phi_{1}}$, $x_{3}=5e^{i\phi_{3}}$, $\Gamma_{1}=\Gamma_{3}=1.316\times 10^{8}$ and $\Gamma_{2}=10^{10}$. When $\phi_{1}-\phi_{3}=(2k+1)\pi$ ($k$ is an integer), the concurrence reaches a maximum of 0.470. The upper left inset shows the cross-section at $\phi_{3}=0$.} \label{Fig.phase23} \end{figure}

There are many other possible configurations for the coupled-cavity setup. For instance, one could consider an extension of the setup in Ref. \cite{two-state} to three defect cavities, as shown in Fig. \ref{more-cavity-2}. However, numerical optimization for this extension, and for many others, does not seem to increase the polaritonic entanglement between any two cavities. Therefore, the setups in Figs. \ref{Fig.3-cav} and \ref{Fig.2-cav} appear to be optimal for two-polariton entanglement.

\begin{figure}[h] \begin{center} \includegraphics[width=0.25\textwidth,height=0.2\textwidth]{more-cavity-2.eps} \end{center} \caption{(color online). Three defect cavities coupled to one wave guide.}\label{more-cavity-2} \end{figure}

\section{\large Thermalization of the coupled-cavity system}\label{model} In this section, we consider the thermalization of the lossy, driven atom-cavity system. For simplicity, we study a reduced system involving two defect cavities coupled to a driven wave guide, as shown in Fig. \ref{2-cavity}.

\begin{figure}[h] \begin{center} \includegraphics[width=0.25\textwidth,height=0.2\textwidth]{2-cavity.eps} \end{center} \caption{(color online). Two defect cavities coupled to one wave guide.}\label{2-cavity} \end{figure}

This system was studied in Ref. \cite{two-state}, where the reservoir temperature was set to zero and an analytical solution was obtained (see Eqs. (23)-(29) therein). For finite temperature, the master equation needs to be modified, i.e. Eqs. (12) and (13) of Ref. \cite{two-state} are replaced by \begin{align} L_{a}'R=&\kappa(n_{c}+1)(2aRa^{\dagger}-a^{\dagger}aR-Ra^{\dagger}a)+\kappa n_{c}(2a^{\dagger}Ra-aa^{\dagger}R-Raa^{\dagger}),\\ L_{p}'R=&\sum_{i=1}^{2}\big[\gamma(n_{p}+1)(2\sigma_{i} R \sigma^{\dagger}_{i}-\sigma^{\dagger}_{i}\sigma_{i} R-R\sigma^{\dagger}_{i}\sigma_{i})+\gamma n_{p}(2\sigma^{\dagger}_{i} R \sigma_{i}-\sigma_{i}\sigma^{\dagger}_{i}R-R \sigma_{i}\sigma^{\dagger}_{i})\big],\label{polariton decay}\end{align} where $n_{c}=\frac{1}{e^{\hbar\omega_{cav}/k_{B}T_{R}}-1}$ is the mean photon number at the reservoir temperature $T_{R}$ and the cavity frequency $\omega_{cav}$.
Similarly, $n_{p}=\frac{1}{e^{\hbar\omega_{pol}/k_{B}T_{R}}-1}$ is the mean excitation number at the reservoir temperature $T_{R}$ and the polaritonic frequency $\omega_{pol}$. The effective master equation for the two polaritons can be obtained using the same method as in Ref. \cite{two-state}. For $T_{R}\ll\hbar\omega_{pol}/k_{B}$, the temperature terms in Eq. (\ref{polariton decay}) are preserved in the final effective master equation, i.e. Eq. (20) of Ref. \cite{two-state}. The steady state $\rho^{ss}$ is obtained by requiring $\frac{d\rho^{ss}}{dt}=0$.

To characterize the degree of thermalization of the steady state, we calculate the distance between the steady state and a thermal state, using the following distance measure \cite{trace distance}: \begin{align}\label{distance measure} d(\rho^{ss},\,\,\rho^{th})=\frac{1}{2}\textrm{tr}|\rho^{ss}-\rho^{th}|.\end{align} The trace distance $d(\rho^{ss},\,\,\rho^{th})$ provides a useful measure of how well the steady state $\rho^{ss}$ can be distinguished from the thermal state $\rho^{th}$ through quantum measurements \cite{trace distance2}. Therefore, if $d(\rho^{ss},\,\,\rho^{th})$ increases with the system parameters, we say that the system is farther away from thermalization. The thermal state is chosen to be $\rho^{th}=\exp[-\hbar\omega_{pol}(\sigma_{1}^{\dagger}\sigma_{1}+\sigma_{2}^{\dagger}\sigma_{2})/k_{B}T_{R}]$, normalized to unit trace.

Fig. \ref{Fig.c2} shows the distance $d(\rho^{ss},\,\,\rho^{th})$ as a function of $x$ and $T_{R}$, where $x$ is a parameter defined in Ref. \cite{two-state} (below Eq. (22)) that is proportional to the strength of the driving field. The other relevant parameters are $y=15$ and $z=1.01$ (see Ref. \cite{two-state}). The unit of $T_{R}$ is $\hbar\omega_{pol}/k_{B}$. It is seen in Fig. \ref{Fig.c2} that the steady state is close to the thermal state if there is no driving field, and that for stronger driving fields the steady state is farther away from thermalization. This is reasonable from a physical perspective, as the driving field generally induces coherence for the polaritons (i.e. non-zero off-diagonal elements in the polaritonic density matrix), while the thermal state is diagonal. In addition, $d(\rho^{ss},\,\,\rho^{th})$ appears not to depend on the reservoir temperature. This may be because $T_{R}\ll\hbar\omega_{pol}/k_{B}$, so that the effect of thermal agitation is rather small. The effect should certainly manifest itself for larger $T_{R}$; however, this regime lies beyond the approximation used to derive the effective master equation ($T_{R}\ll\hbar\omega_{pol}/k_{B}$), and it is in general not easily tractable even numerically.

\begin{figure}[h] \begin{center} \subfigure[] { \includegraphics[width=0.35\textwidth,height=0.3\textwidth]{c2.eps} \label{Fig.c2}} \hspace{20mm}\subfigure[] { \includegraphics[width=0.35\textwidth,height=0.3\textwidth]{dc2.eps} \label{Fig.dc2} } \end{center} \caption{(color online). The distance $d(\rho^{ss},\,\,\rho^{th})$ (a) and the derivative $|\partial d(\rho^{ss},\,\,\rho^{th})/\partial x|$ (b) as functions of $x$ (proportional to the driving strength) and $T_{R}$.} \end{figure}

Comparing Fig. \ref{Fig.c2} for a fixed $T_{R}$ with the first plot of Fig. 2 ($y=15$) in Ref. \cite{two-state}, one finds that they are not consistent, especially for large $x$, for which $d(\rho^{ss},\,\,\rho^{th})$ is very large while the polaritonic entanglement is negligible.
However, if one takes the derivative of $d(\rho^{ss},\,\,\rho^{th})$ with respect to $x$, a relationship appears. Fig. \ref{Fig.dc2} shows $|\partial d(\rho^{ss},\,\,\rho^{th})/\partial x|$ as a function of $x$ and $T_{R}$. It can be seen that there are two peaks for a fixed temperature. This is similar to the first plot of Fig. 2 in Ref. \cite{two-state}, and the two plots are also consistent for large $x$. Therefore, it may be concluded that the rate of change of the thermalization with respect to the driving strength (rather than the thermalization itself) is related to the polaritonic entanglement. Physically, for an increase/decrease of the driving strength, i.e. when more/less coherent energy is injected into the system, a more rapid change of the thermal property (or the degree of thermalization) of the system indicates that a stronger correlation (entanglement) is established. Here, coherent energy refers to the fact that the driving field induces off-diagonal elements in the polaritonic density matrix, as mentioned previously. One could conjecture that a more rapid change of the degree of thermalization of the system indicates that the effective interaction between the two polaritons is stronger, which leads to stronger entanglement between them.

\section{\large Conclusion} In this paper, we have shown that long-distance steady-state entanglement in a lossy network of driven light-matter systems can be coherently controlled through the tuning of the phase difference between the driving fields. The role of the driving-field phase in engineering interactions and entanglement in coupled atom-cavity systems was also discussed in Ref. \cite{driving phase}. Here, it is found that in a closed network of three cavity-atom systems, the maximum entanglement for any pair is achieved even when their direct coupling is much smaller than their couplings to the third party. This effect is reminiscent of coherent effects in quantum optics, in which population is coherently transferred between otherwise uncoupled levels through a third level using two classical coherent fields. An alternative geometry, two coupled cavities with three driving fields, was also discussed. For finite temperature, we analyzed the thermalization of two defect cavities coupled to one driven wave guide. It is found that the rate of change of the thermalization of the system with respect to the driving strength (rather than the thermalization itself) can indicate the degree of the polaritonic correlation (entanglement).

Acknowledgement - This work was supported by the National Research Foundation \& Ministry of Education, Singapore. Li Dai would like to thank Dr. Jun-Hong An for helpful discussions.
\section{Introduction} A \textit{time series}, also known as a \textit{trajectory}, is a sequence of observed data $\boldsymbol{t} = (t_1, t_2, \ldots, t_n)$ measured over time. A large number of real world data in medicine \citep{keogh2006finding}, finance \citep{tsay2005analysis}, astronomy \citep{scargle1982studies} and computer vision \citep{lin1995fast} are time series. A key question that is often asked about time series data is: ``How similar are two given trajectories?'' A notion of trajectory similarity allows one to do unsupervised learning, such as clustering and visualization, of time series data, as well as supervised learning, such as classification \citep{xing2003distance}. However, measuring the distance between trajectories is challenging, because of the temporal correlation between data in a time series and the complex nature of the noise that may be present in the data (e.g. different sampling rates) \citep{yin2014generalized}.

In the literature, many methods have been proposed to measure the similarity between trajectories. In the simplest case, when trajectories are all sampled at the same frequency and are of equal length, Euclidean distance can be used \citep{besse2015review}. When comparing trajectories with different sampling rates, dynamic time warping (DTW) is a popular choice \citep{besse2015review}. Because the choice of distance metric can have a significant effect on downstream analysis \citep{xing2003distance, yin2014generalized, wang2013effectiveness}, a plethora of other distances have been hand-crafted based on the specific characteristics of the data and noise present in the time series. However, a review of five of the most popular trajectory distances found that no one trajectory distance is more robust than the others to all of the different kinds of noise that are commonly present in time series data \citep{wang2013effectiveness}. As a result, it is perhaps not surprising that many distances have been manually designed for different time series domains and datasets.

In this work, we propose an alternative to hand-crafting a distance: we develop an end-to-end framework to \textit{learn} a good similarity metric directly from unlabeled time series data. While data-dependent analysis of time series is commonly performed in the context of supervised learning (e.g. using RNNs or convolutional networks to classify trajectories \citep{wang2017time}), it is rarely performed when the time series are unlabeled, as it is more challenging to determine notions of similarity in the absence of labels. Yet the unsupervised regime is critical: in many time series datasets, ground-truth labels are difficult to determine, while the notion of similarity plays a key role. For example, consider a set of disease trajectories recorded in a large electronic health records database: we have the time series of the diseases contracted by each patient, and it may be important to determine which patient in our dataset is most similar to another patient based on his or her disease trajectory. Yet the choice of ground-truth labels is ambiguous in this case. In this work, we develop an easy-to-use method to determine a distance that is appropriate for a given set of unlabeled trajectories.

In this paper, we restrict ourselves to the family of trajectory distances known as \textit{warping distances} (formally defined in Section \ref{subsection:warping}).
We do so for several reasons. First, warping distances have been widely studied, and are intuitive and interpretable \citep{besse2015review}. Second, they are efficient to compute, and numerous heuristics have been developed to allow nearest-neighbor queries on datasets with as many as trillions of trajectories \citep{rakthanmanon2012searching}. Third, although they form a flexible and general class, warping distances are particularly well-suited to trajectories, and serve as a means of regularizing the unsupervised learning of similarity metrics directly from trajectory data. We show through systematic experiments that learning an appropriate warping distance can provide insight into the nature of the time series data, and can be used to cluster, query, or visualize the data effectively.

\paragraph*{Related Work} The development of distance metrics for time series stretches at least as far back as the introduction of dynamic time warping (DTW) for speech recognition \citep{sakoe1978dynamic}. Limitations of DTW led to the development and adoption of the Edit Distance on Real Sequence (EDR) \citep{chen2005robust}, the Edit Distance with Real Penalty (ERP) \citep{chen2004marriage}, and the Longest Common Subsequence (LCSS) \citep{kearney1990stream} as alternative distances. Many variants of these distances have been proposed, based on characteristics specific to certain domains and datasets, such as the Symmetric Segment-Path Distance (SSPD) \citep{besse2015review} for GPS trajectories and Subsequence Matching \citep{goyal2018clinically} for medical time series data, among others \citep{marteau2009time}.

\begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{mds_plots2_2.jpg} \caption{\textbf{Learning a Distance with Autowarp.} Here we visualize the stages of Autowarp by using multi-dimensional scaling (MDS) to embed a set of 50 trajectories into two dimensions at each step of the algorithm. Each dot represents one observed trajectory that is generated by adding Gaussian noise and outliers to 10 copies of 5 seed trajectories (each color represents a seed). (left) First, we run MDS on the original trajectories with Euclidean distance. (center) Next, we run MDS on the latent representations learned with a sequence-to-sequence autoencoder, which partially resolves the original clusters. (right) Finally, we run MDS on the original trajectories using the learned warping distance, which completely resolves the original clusters.} \label{fig:mds} \end{figure}

Prior work in metric learning from trajectories is generally limited to the supervised regime. For example, in recent years, convolutional neural networks \citep{wang2017time}, recurrent neural networks (RNNs) \citep{lipton2015learning}, and Siamese recurrent neural networks \citep{pei2016modeling} have been proposed to classify time series based on labeled training sets. There has also been some work in applying unsupervised deep learning to time series \citep{langkvist2014review}. For example, the authors of \citep{malhotra2017timenet} use a pre-trained RNN to extract features from time series that are useful for downstream classification. Unsupervised RNNs have also found use in anomaly detection \citep{shipmon2017time} and forecasting \citep{romeu2015stacked} of time series.

\section{The Autowarp Approach} Our approach, which we call Autowarp, consists of two steps. First, we learn a latent representation for each trajectory using a sequence-to-sequence autoencoder.
This representation takes advantage of the temporal correlations present in time series data to learn a low-dimensional representation of each trajectory. In the second stage, we search over a family of warping distances to identify the warping distance that, when applied to the original trajectories, is most similar to the Euclidean distances between the latent representations. Fig. \ref{fig:mds} shows the application of Autowarp to synthetic data.

\paragraph{Learning a latent representation} Autowarp first learns a latent representation that captures the significant properties of trajectories in an unsupervised manner. In many domains, an effective latent representation can be learned by using autoencoders that reconstruct the input data from a low-dimensional representation. We take the same approach, using sequence-to-sequence autoencoders. This approach is inspired by similar sequence-to-sequence autoencoders, which have been successfully applied to sentiment classification \citep{dai2015semi}, machine translation \citep{sutskever2014sequence}, and learning representations of videos \citep{srivastava2015unsupervised}. In the architecture that we use (illustrated in Fig. \ref{fig:autoencoder}), we feed each step in the trajectory sequentially into an \textit{encoding} LSTM layer. The hidden state of the final LSTM cell is then fed identically into a \textit{decoding} LSTM layer, which contains as many cells as the length of the original trajectory. This layer attempts to reconstruct each trajectory based solely on the learned latent representation for that trajectory.

What kind of features are learned in the latent representation? Generally, the hidden representation captures overarching features of the trajectory, while learning to ignore outliers and sampling rate. We illustrate this in Fig. \ref{fig:autoencoder_results} in Appendix \ref{app:figures}: the LSTM autoencoders learn to denoise representations of trajectories that have been sampled at different rates, or in which outliers have been introduced.

\floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,center},capbesidewidth=5cm}}]{figure}[\FBwidth] {\caption{\textbf{Schematic for LSTM Sequence-to-Sequence Autoencoder.} We learn a latent representation for each trajectory by passing it through a sequence-to-sequence autoencoder that is trained to minimize the reconstruction loss $\left\Vert \boldsymbol{t} - \tilde{\boldsymbol{t}} \right\Vert^2$ between the original trajectory $\boldsymbol{t}$ and decoded trajectory $\tilde{\boldsymbol{t}}$. In the decoding stage, the latent representation $h$ is passed as input into each LSTM cell. \label{fig:autoencoder}}} { \includegraphics[width=0.6\textwidth]{Autoencoder.pdf} }

\paragraph{Warping distances} \label{subsection:warping} Once a latent representation is learned, we search within a family of warping distances to find the warping distance that, applied to the original trajectories, best mimics the distances between the trajectories' latent representations. This can be seen as ``distilling'' the representation learned by the neural network into a warping distance (e.g. see \citep{hinton2015distilling}). In addition, as warping distances are generally well-suited to trajectories, this serves to regularize the process of distance metric learning, and generally produces better distances than using the latent representations directly (as illustrated in Fig. \ref{fig:mds}).
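For concreteness, the architecture of Fig. \ref{fig:autoencoder} can be sketched in a few lines of Keras (an illustrative sketch only; the layer sizes and training settings below are assumptions, not the exact ones used in our experiments):

\begin{verbatim}
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

N, D, d_h = 9, 2, 10   # trajectory length, dimension, latent size

inputs = layers.Input(shape=(N, D))
h = layers.LSTM(d_h)(inputs)              # final hidden state = latent code
rep = layers.RepeatVector(N)(h)           # feed h into each decoder cell
dec = layers.LSTM(d_h, return_sequences=True)(rep)
outputs = layers.TimeDistributed(layers.Dense(D))(dec)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")  # ||t - t~||^2
encoder = Model(inputs, h)

# trajectories: array of shape (T, N, D); random data as a stand-in
trajectories = np.random.randn(500, N, D).astype("float32")
autoencoder.fit(trajectories, trajectories, epochs=10,
                batch_size=32, verbose=0)
latents = encoder.predict(trajectories)   # one latent vector per trajectory
\end{verbatim}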
We proceed to formally define a \textit{warping distance}, as well as the family of warping distances that we work with for the rest of the paper. First, we define a \textit{warping path} between two trajectories.

\begin{mydef}A \textbf{warping path} $\boldsymbol{p} = (p_0, \ldots p_L)$ between two trajectories $\boldsymbol{t}^A = (t^A_1, \ldots t^A_n)$ and $\boldsymbol{t}^B = (t^B_1, \ldots t^B_m)$ is a sequence of pairs of trajectory states, where the first state comes from trajectory $\boldsymbol{t}^A$ or is null (which we will denote as $t^A_0$), and the second state comes from trajectory $\boldsymbol{t}^B$ or is null (which we will denote as $t^B_0$). Furthermore, $\boldsymbol{p}$ must satisfy two properties: \begin{itemize} \item boundary conditions: $p_0 = (t^A_0,t^B_0)$ and $p_L = (t^A_{n}, t^B_{m})$ \item valid steps: $p_k = (t^A_i, t^B_j) \implies p_{k+1} \in \{(t^A_{i+1},t^B_j),(t^A_{i+1},t^B_{j+1}),(t^A_{i},t^B_{j+1})\}$. \end{itemize} \end{mydef}

Warping paths can be seen as traversals on an $(n+1)$-by-$(m+1)$ grid from the bottom left to the top right, where one is allowed to go up one step, right one step, or one step up and right, as shown in Fig. \ref{fig:warping} in Appendix \ref{app:figures}. We shall refer to these as \textit{vertical}, \textit{horizontal}, and \textit{diagonal} steps respectively.

\begin{mydef} Given a set of trajectories $\mathcal{T}$, a \textbf{warping distance} $d$ is a function that maps each pair of trajectories in $\mathcal{T}$ to a real number $\in [0, \infty)$. A warping distance is completely specified in terms of a cost function $c(\cdot, \cdot)$ on two pairs of trajectory states: Let $\boldsymbol{t}^A, \boldsymbol{t}^B \in \mathcal{T}$. Then $d(\boldsymbol{t}^A, \boldsymbol{t}^B)$ is defined\footnote{A more general definition of warping distance replaces the summation over $c(p_{i-1},p)$ with a general class of statistics, that may include $\max$ and $\min$ for example. For simplicity, we present the narrower definition here.} as $d(\boldsymbol{t}^A, \boldsymbol{t}^B) = \min_{\boldsymbol{p} \,\in P} \sum_{i=1}^{L}c(p_{i-1}, p_{i})$. The function $c(p_{i-1},p_{i})$ represents the cost of taking the step from $p_{i-1}$ to $p_i$, and, in general, differs for horizontal, vertical, and diagonal steps. $P$ is the set of all warping paths between $\boldsymbol{t}^A$ and $\boldsymbol{t}^B$. \end{mydef}

Thus, a warping distance represents a particular optimization carried out over all valid warping paths between two trajectories. In this paper, we define a family of warping distances $\mathcal{D}$, with the following parametrization of $c(\cdot, \cdot)$: \begin{equation}\boxed{c((t^A_i,t^B_{j}),(t^A_{i'},t^B_{j'})) = \begin{cases} \sigma(\left\Vert t^A_{i'} - t^B_{j'}\right\Vert, \frac{\epsilon}{1-\epsilon}) & i'>i, j'>j \\ \frac{\alpha}{1-\alpha} \cdot \sigma(\left\Vert t^A_{i'} - t^B_{j'}\right\Vert, \frac{\epsilon}{1-\epsilon}) + \gamma & i'=i \text{ or } j'=j \end{cases}} \end{equation} Here, we define $\sigma(x, y) \eqdef y \cdot \tanh(x/y)$ to be a soft thresholding function, such that $\sigma(x, y) \approx x$ if $0\le x \le y$ and $\sigma(x, y) \approx y$ if $x > y$. And, $\sigma(x, \infty) \eqdef x$. The family of distances $\mathcal{D}$ is parametrized by three parameters $\alpha, \gamma, \epsilon$. With this parametrization, $\mathcal{D}$ includes several commonly used warping distances for trajectories, as shown in Table \ref{table:distance_parametrizations}, as well as many other warping distances.
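As an illustration, any distance in $\mathcal{D}$ can be evaluated by a standard dynamic program over the warping grid. The following is a minimal sketch; the treatment of boundary steps involving a null state, costed at $\gamma$ here, is an assumed convention:

\begin{verbatim}
import numpy as np

def sigma(x, y):
    # soft threshold: sigma(x, y) = y * tanh(x / y); sigma(x, inf) = x
    return x if np.isinf(y) else y * np.tanh(x / y)

def warping_distance(tA, tB, alpha, gamma, eps):
    # tA, tB: arrays of shape (n, D) and (m, D)
    n, m = len(tA), len(tB)
    thr = eps / (1 - eps) if eps < 1 else np.inf
    a = alpha / (1 - alpha) if alpha < 1 else np.inf
    C = np.full((n + 1, m + 1), np.inf)
    C[0, 0] = 0.0
    for i in range(1, n + 1):
        C[i, 0] = C[i - 1, 0] + gamma   # assumed boundary convention
    for j in range(1, m + 1):
        C[0, j] = C[0, j - 1] + gamma
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = sigma(np.linalg.norm(tA[i - 1] - tB[j - 1]), thr)
            hv = np.inf if np.isinf(a) else a * d + gamma
            C[i, j] = min(C[i - 1, j - 1] + d,   # diagonal step
                          C[i - 1, j] + hv,      # vertical step
                          C[i, j - 1] + hv)      # horizontal step
    return C[n, m]

# DTW corresponds to (alpha, gamma, eps) = (0.5, 0, 1):
tA, tB = np.random.randn(9, 2), np.random.randn(12, 2)
print(warping_distance(tA, tB, alpha=0.5, gamma=0.0, eps=1.0))
\end{verbatim}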
\noindent\makebox[\textwidth][c]{% \begin{minipage}{\textwidth} \renewcommand*\footnoterule{} \begin{center} \begin{longtable}{ |l|c|c|c| } \hline Trajectory Distance & $\alpha$ & $\gamma$ & $\epsilon$ \\ [0.5ex] \hline\hline Euclidean\footnote{The Euclidean distance between two trajectories is infinite if they are of different lengths} & \;\;\;\;\;\; 1 \;\;\;\;\;\; & \;\;\;\;\;\; 0 \;\;\;\;\;\; & $1$ \\ Dynamic Time Warping (DTW) \citep{sakoe1978dynamic} & 0.5 & 0 & 1 \\ Edit Distance ($\gamma_0$) \citep{chen2004marriage} & 0 & $0 < \gamma_0 $ & 1 \\ Edit Distance on Real Sequences ($\gamma_0, \epsilon_0$) \citep{chen2005robust} \footnote{This is actually a smooth, differentiable approximation to EDR} & 0 & $0 < \gamma_0$ & $0 <\epsilon_0 < 1$ \\ \hline \caption{Parametrization of common trajectory dissimilarities. \label{table:distance_parametrizations}} \vspace{-4mm} \end{longtable} \end{center} \end{minipage}}

\paragraph{Optimizing warping distance using betaCV} \label{subsection:23} Within our family of warping distances, how do we choose the one that aligns most closely with the learned latent representation? To allow a comparison between latent representations and trajectory distances, we use the concept of betaCV: \begin{mydef}Given a set of trajectories $\mathcal{T} = \{\boldsymbol{t}^1, \boldsymbol{t}^2, \ldots \boldsymbol{t}^T\}$, a trajectory metric $d$ and an assignment to clusters $C(i)$ for each trajectory $\boldsymbol{t}^i$, the \textbf{betaCV}, denoted as $\beta$, is defined as: \begin{equation} \beta(d) = \frac{\frac{1}{Z} \sum_{i=1}^T\sum_{j=1}^T d(\boldsymbol{t}^i, \boldsymbol{t}^j) \; 1\left[C(i)=C(j)\right]}{\frac{1}{T^2} \sum_{i=1}^T\sum_{j=1}^T d(\boldsymbol{t}^i, \boldsymbol{t}^j)}, \label{eq:betacv} \end{equation} where $Z = \sum_{i=1}^T\sum_{j=1}^T 1\left[C(i)=C(j)\right]$ is the normalization constant needed to transform the numerator into an average of distances. \end{mydef}

In the literature, the betaCV is used to evaluate different clustering assignments $C$ for a fixed distance \citep{zaki2014data}. In our work, it is the distance $d$ that is not known; were true cluster assignments known, the betaCV would be a natural quantity to minimize over the distances in $\mathcal{D}$, as it would give us a distance metric that minimizes the average distance of trajectories to other trajectories within the same cluster (normalized by the average distance across all pairs of trajectories). However, as the clustering assignments are not known, we instead use the Euclidean distance between the latent representations of two trajectories to determine whether they belong to the same ``cluster.'' In particular, we designate two trajectories as belonging to the same cluster if the distance between their latent representations is less than a threshold $\delta$, which is chosen as a percentile $\bar{p}$ of the distribution of distances between all pairs of latent representations.
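Concretely, this computation might look as follows (a minimal sketch; \verb+dist+ is any warping distance from $\mathcal{D}$, such as the \verb+warping_distance+ sketch above, and \verb+latents+ are the autoencoder outputs); the formal definition is given below:

\begin{verbatim}
import numpy as np

def latent_betacv(trajs, latents, dist, p_bar=0.2):
    T = len(trajs)
    # pairwise Euclidean distances between latent vectors
    lat = np.array([[np.linalg.norm(latents[i] - latents[j])
                     for j in range(T)] for i in range(T)])
    # threshold delta: the p_bar-percentile of all pairwise latent distances
    delta = np.percentile(lat[np.triu_indices(T, k=1)], 100 * p_bar)
    # pairwise trajectory distances under the candidate warping distance
    D = np.array([[dist(trajs[i], trajs[j])
                   for j in range(T)] for i in range(T)])
    same = lat < delta   # pairs deemed to be in the same "cluster"
    return D[same].mean() / D.mean()
\end{verbatim}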
We will denote this version of the betaCV, calculated based on the latent representations learned by an autoencoder, as $\hat{\beta}_h(d)$: \begin{mydef}Given a set of trajectories $\mathcal{T} = \{\boldsymbol{t}^1, \boldsymbol{t}^2, \ldots \boldsymbol{t}^T\}$, a metric $d$ and a latent representation for each trajectory $h_i$, the \textbf{latent betaCV}, denoted as $\hat{\beta}_h$, is defined as: \begin{equation} \hat{\beta}_h = \frac{\frac{1}{Z} \sum_{i=1}^T\sum_{j=1}^T d(\boldsymbol{t}^i, \boldsymbol{t}^j) \; 1\left[\left\Vert h_i - h_j \right\Vert_2 < \delta \right]}{\frac{1}{T^2} \sum_{i=1}^T\sum_{j=1}^T d(\boldsymbol{t}^i, \boldsymbol{t}^j)}, \end{equation} where $Z$ is a normalization constant defined analogously as in (\ref{eq:betacv}). The threshold distance $\delta$ is a hyperparameter of the algorithm, generally set to be a certain threshold percentile ($\bar{p}$) of all pairwise distances between latent representations. \end{mydef}

With this definition in hand, we are ready to specify how we choose a warping distance based on the latent representations: we choose the warping distance that gives us the lowest latent betaCV, $$\boxed{\hat{d} = \argmin_{d \in \mathcal{D}} \hat{\beta}_h(d).}$$

We have seen that the learned representations $h_i$ are not always able to remove the noise present in the observed trajectories. It is natural to ask, then, whether it is a good idea to calculate the betaCV using the noisy latent representations in place of true clustering assignments. In other words, suppose we computed $\beta$ based on known cluster assignments in a trajectory dataset. If we then computed $\hat{\beta}$ based on somewhat noisy learned latent representations, could it be that $\beta$ and $\hat{\beta}$ differ markedly? In Appendix \ref{app:proofs}, we carry out a theoretical analysis, assuming that the computation of $\hat{\beta}$ is based on a noisy clustering $\tilde{C}$. We present the conclusion of that analysis here:

\begin{proposition}[\textbf{Robustness of Latent BetaCV}] Let $d$ be a trajectory distance defined over a set of trajectories $\mathcal{T}$ of cardinality $T$. Let $\beta(d)$ be the betaCV computed on the set of trajectories using the true cluster labels $\{C(i)\}$. Let $\hat{\beta}(d)$ be the betaCV computed on the set of trajectories using noisy cluster labels $\{\tilde{C}(i)\}$, which are generated by independently randomly reassigning each $C(i)$ with probability $p$. For a constant $K$ that depends on the distribution of the trajectories, the probability that the latent betaCV changes by more than $x$ beyond the expected $Kp$ is bounded by: \begin{align} \Pr(|\beta - \hat{\beta}| > Kp + x) \le e^{-2Tx^2/K^2} \end{align} \end{proposition}

This result suggests that a latent betaCV computed from latent representations may still be a reliable metric even when the latent representations are somewhat noisy. In practice, we find that the quality of the autoencoder does have an effect on the quality of the learned warping distance, up to a point. We quantify this behavior using an experiment shown in Fig. \ref{fig:training_latent_betacv} in Appendix \ref{app:figures}.

\section{Efficiently Implementing Autowarp} There are two computational challenges to finding an appropriate warping distance. One is efficiently searching through the continuous space of warping distances.
In this section, we show that the betaCV over the family of warping distances defined above is differentiable with respect to the parameters $\alpha, \gamma, \epsilon$ that parametrize the family. Computing gradients over the whole set of trajectories is still computationally expensive for many real-world datasets, so we also introduce a method of sampling trajectories that provides significant speed gains. The formal outline of Autowarp is given in Appendix \ref{app:algorithm}.

\paragraph{Differentiability of betaCV.} In Section \ref{subsection:23}, we proposed that a warping distance can be identified as the distance $d \in \mathcal{D}$ that minimizes the betaCV computed from the latent representations. Since $\mathcal{D}$ contains infinitely many distances, we cannot evaluate the betaCV for each distance one by one. Rather, we solve this optimization problem using gradient descent. In Appendix \ref{app:proofs}, we prove that the betaCV is differentiable with respect to the parameters $\alpha, \gamma, \epsilon$, and that the gradient can be computed in $O(T^2 N^2)$ time, where $T$ is the number of trajectories in our dataset and $N$ is the number of elements in each trajectory (see Proposition \ref{proposition:2}).

\paragraph{Batched gradient descent.} When the size of the dataset becomes modestly large, it is no longer feasible to re-compute the exact analytical gradient at each step of gradient descent. Instead, we take inspiration from negative sampling in word embeddings \citep{mikolov2013distributed}, and only sample a fixed number, $S$, of pairs of trajectories at each step of gradient descent. This reduces the runtime of each step of gradient descent to $O(SN^2)$, where $S \approx 32-128$ in our experiments; full gradient descent thus effectively becomes batched gradient descent. The complete algorithm for batched Autowarp is shown in Algorithm \ref{alg:autowarp} in Appendix \ref{app:algorithm}. Because the betaCV is not convex in the parameters $\alpha, \gamma, \epsilon$, we usually repeat Algorithm \ref{alg:autowarp} with multiple initializations and choose the parameters that produce the lowest betaCV.

\section{Validation} Recall that Autowarp learns a distance from unlabeled trajectory data in two steps: first, a latent representation is learned for each trajectory; second, a warping distance is identified that is most similar to the learned latent representations. In this section, we empirically validate this methodology.

\begin{figure}[] \centering \includegraphics[width=0.95\linewidth]{gaussian_outliers_triple_plot3.pdf} {\caption{\textbf{Validating Latent BetaCV.} We construct a synthetic time series dataset with Gaussian noise and outliers added to the trajectories. We compute the latent betaCV for various distances (left), which closely matches the plot of the true betaCV (middle) computed based on knowledge of the seed clusters. As a control, we plot the betaCV computed based on the original trajectories (right). Black dots represent the optimal value of $\alpha$ and $\gamma$ in each plot. Lower betaCV is better. \label{fig:betacv}}} \end{figure}

\paragraph{Validating latent betaCV.} We generate synthetic trajectories that are copies of a seed trajectory, with different kinds of noise added to each trajectory. We then measure $\hat{\beta}_h$ for a large number of distances in $\mathcal{D}$.
Because these are synthetic trajectories, we compare this to the true $\beta$ measured using the known cluster labels (each seed generates one cluster). As a control, we also compute the betaCV based on the Euclidean distances between the original trajectories, rather than between the latent representations; we denote this quantity as $\hat{\beta}_t$. Fig. \ref{fig:betacv} shows the plot when the noise takes the form of adding outliers and Gaussian noise to the data. The betaCVs are plotted for distances $d$ with different values of $\alpha$ and $\gamma$, with $\epsilon=1$. Plots for other kinds of noise are included in Appendix \ref{app:figures} (see Fig. \ref{fig:beta_cv_all_plots}). These plots suggest that $\hat{\beta}_h$ assigns each distance a betaCV that is representative of the true clustering labels. Furthermore, we find that the distances with the lowest betaCV in each case concur with previous studies of the robustness of different trajectory distances. For example, we find that DTW ($\alpha=0.5, \gamma=0$) is the appropriate distance metric for resampled trajectories, Euclidean ($\alpha=1, \gamma =0$) for Gaussian noise, and edit distance ($\alpha=0, \gamma \approx 0.4$) for trajectories with outliers.

\begin{figure}[] \centering \subfloat[]{% \includegraphics[width=0.44\linewidth]{sensitivity.pdf} } \quad \subfloat[]{% \includegraphics[width=0.44\linewidth]{sensitivity2-4-2.pdf} } \caption{\textbf{Sensitivity Analysis on Trajectories with Outliers.} (a) We investigate how the percentile threshold parameter affects latent betaCV. (b) We also investigate the effect of changing the latent dimensionality on the relative ordering of the distances. We find that the qualitative ranking of different distances is generally robust to the choice of these hyperparameters.} \label{fig:sensitivity} \end{figure}

\paragraph*{Ablation and sensitivity analysis.} Next, we investigate the sensitivity of the latent betaCV calculation to the hyperparameters of the algorithm. We find that although the betaCV changes as the threshold changes, the relative ordering of different warping distances mostly remains the same. Similarly, we find that the dimension of the hidden layer in the autoencoder can vary significantly without affecting the qualitative results (see Fig. \ref{fig:sensitivity}). Across a variety of experiments, we find that a reasonable number of latent dimensions is $\approx L \cdot D$, where $L$ is the average trajectory length and $D$ the dimensionality. We also investigate whether both the autoencoder and the search through warping distances are necessary for effective metric learning. Our results indicate that both are needed: using the latent representations alone results in noisy clustering, while the warping distance search cannot be applied in the original trajectory space to obtain meaningful results (Fig. \ref{fig:mds}).

\label{subsection:downstream} \paragraph{Downstream classification.} A key motivation of distance metric learning is the ability to perform downstream classification and clustering tasks more effectively. We validated this on a real dataset: the Libras dataset, which consists of coordinates of users performing Brazilian sign language. The $x$- and $y$-coordinates of the positions of the subjects' hands are recorded, as well as the symbol that the users are communicating, providing us with labels to evaluate our distance metrics.
For this experiment, we chose a subset of 40 trajectories from 5 different categories. For a given distance function $d$, we iterated over every trajectory and computed the 7 closest trajectories to it (as there are a total of 8 trajectories from each category). We computed the fraction of these 7 that shared their label with the original trajectory; a good distance should yield a higher fraction. We evaluated 50 distances: 42 of them were chosen randomly, 4 were well-known warping distances, and 4 were the result of performing Algorithm 1 from different initializations. We measured both the betaCV of each distance and its accuracy. The results are shown in Fig. \ref{fig:libras}, which shows a clear negative correlation (rank correlation $-0.85$) between betaCV and label accuracy. \vspace{3mm} \floatbox[{\capbeside\thisfloatsetup{capbesideposition={right,center},capbesidewidth=7cm}}]{figure}[\FBwidth] {\caption{\textbf{Latent BetaCV and Downstream Classification.} Here, we choose 50 warping distances and plot the latent betaCV of each one on the Libras dataset, along with the average classification accuracy when each trajectory is used to classify its nearest neighbors. Results suggest that minimizing latent betaCV provides a suitable distance for downstream classification. \label{fig:libras}}} { \includegraphics[width=0.3\textwidth]{libras2-3.png} } \section{Autowarp Applied to Real Datasets} \label{section:experiments} \vspace{-0.3cm} Many of the hand-crafted distances mentioned earlier in the manuscript were developed for and tested on particular time series datasets. We now turn to two such public datasets and demonstrate how Autowarp can be used to \textit{learn} an appropriate warping distance from the data. We show that the warping distance that we learn is competitive with the original hand-crafted distances. \begin{figure}[] \centering \subfloat[]{% \includegraphics[width=0.30\linewidth]{cabs3-1.pdf} } \quad \subfloat[]{% \includegraphics[width=0.295\linewidth]{cabs3-2.pdf} } \quad \subfloat[]{% \includegraphics[width=0.30\linewidth]{cabs3-3.pdf} } \caption{\textbf{Taxicab Mobility Dataset.} (a) We plot the trajectories, along with their start and end points. (b) We evaluate the average normalized distance to various numbers of neighbors for five different trajectory distances, and find that the Autowarp distance (black line) produces the most compact clusters. (c) We apply spectral clustering with 5 clusters (each color represents a different cluster) using the Autowarp-learned distance.} \label{fig:cab} \end{figure} \paragraph*{Taxicab Mobility Dataset.} We first turn to a dataset that consists of GPS measurements from 536 San Francisco taxis over a 24-day period\footnote{Data can be downloaded from \url{https://crawdad.org/epfl/mobility/20090224/cab/}.}. This dataset was used to test the SSPD distance metric for trajectories \citep{besse2015review}. Following \citep{besse2015review}, we preprocessed the dataset to include only those trajectories that begin when a taxicab has picked up a passenger at the Caltrain station in San Francisco and whose drop-off location is in downtown San Francisco. This leaves us with $T=500$ trajectories, with a median length of $N=9$. Each trajectory is 2-dimensional, consisting of an $x$- and a $y$-coordinate. The trajectories are plotted in Fig. \ref{fig:cab}(a). We used Autowarp (Algorithm 1 with hyperparameters $d_h=10, S=64, \bar{p} = 1/5$) to learn a warping distance from the data ($\alpha=0.88, \gamma=0, \epsilon=0.33$).
This distance is similar to the Euclidean distance; this may be because the GPS measurements are sampled at regular intervals. The small value of $\epsilon$ suggests that some thresholding is needed for an optimal distance, possibly because of irregular stops or routes taken by the taxis. The trajectories in this dataset are not labeled, so to evaluate the quality of our learned distance, we compute the normalized average distance of each trajectory to its $k$ closest neighbors. This is analogous to how the original authors evaluated their algorithm: the lower the normalized distance, the more ``compact'' the clusters. We show our results in Fig. \ref{fig:cab}(b) for various values of $k$: the learned distance produces clusters that are as compact as SSPD's, if not more so. We also visualize the results when our learned distance metric is used to cluster the trajectories into 5 clusters using spectral clustering in Fig. \ref{fig:cab}(c). \paragraph*{Australian Sign Language (ASL) Dataset.} Next, we turn to a dataset that consists of measurements taken from a smart glove worn by a sign linguist\footnote{Data can be downloaded from \url{http://kdd.ics.uci.edu/databases/auslan/auslan.data.html}.}. This dataset was used to test the EDR distance metric \citep{chen2005robust}. Like the original authors, we chose a subset consisting of $T=50$ trajectories, of median length $N=53$. This subset included 10 different classes of signs. The measurements of the glove are 4-dimensional, including $x$-, $y$-, and $z$-coordinates, along with the rotation of the palm. We used Autowarp (Algorithm 1 with hyperparameters $d_h=20, S=32, \bar{p} = 1/5$) to learn a warping distance from the data (learned distance: $\alpha=0.29, \gamma=0.22, \epsilon=0.48$). The trajectories in this dataset are labeled, so to evaluate the quality of our learned distance, we computed the accuracy of nearest-neighbor classification on the data. Most distance functions achieve a reasonably high accuracy on this task, so following \citep{chen2005robust}, we added various sources of noise to the data. We evaluated the learned distance, as well as the original distance metric, on the noisy datasets, and we find that the learned distance is significantly more robust than EDR, particularly when multiple sources of noise are added simultaneously, denoted as ``hybrid'' noise in Fig. \ref{fig:asl}. \vspace{3mm} \floatbox[{\capbeside\thisfloatsetup{capbesideposition={left,center},capbesidewidth=5cm}}]{figure}[\FBwidth] {\caption{\textbf{ASL Dataset.} We use various distance metrics to perform nearest-neighbor classifications on the ASL dataset. The original ASL dataset is shown on the left, and various synthetic noises have been added to generate the results on the right. `Hybrid1' is a combination of Gaussian noise and outliers, while `Hybrid2' refers to a combination of Gaussian and sampling noise. \label{fig:asl}}} { \includegraphics[width=0.6\textwidth]{asl-results-bar2.pdf} } \section{Discussion} In this paper, we propose Autowarp, a novel method to learn a similarity metric from a dataset of unlabeled trajectories. Our method learns a warping distance that agrees with the latent representations learned for the trajectories by a sequence-to-sequence autoencoder. We show through systematic experiments that learning an appropriate warping distance can provide insight into the nature of the time series data and can be used to cluster, query, or visualize the data effectively.
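To make the two-step procedure concrete, the following is a minimal sketch under stated assumptions: \lstinline{encode} is the encoder of any sequence-to-sequence autoencoder, \lstinline{wdist} is a differentiable implementation of the warping family $d_{\alpha,\gamma,\epsilon}$, and \lstinline{batched_betacv} evaluates the latent betaCV on the sampled pairs. We use finite-difference gradients here for brevity, whereas the analytical gradients of Proposition \ref{proposition:2} are used in practice; all names are illustrative.
\begin{lstlisting}[language=Python]
import random

def autowarp(trajs, encode, wdist, batched_betacv,
             steps=200, lr=0.05, S=64, h=1e-3):
    # Step 1: learn a latent representation for each trajectory.
    latents = [encode(t) for t in trajs]

    # Step 2: batched gradient descent on the latent betaCV over
    # the warping parameters (alpha, gamma, epsilon).
    theta = [0.5, 0.5, 0.5]
    for _ in range(steps):
        pairs = [tuple(random.sample(range(len(trajs)), 2))
                 for _ in range(S)]

        def objective(p):
            dist = lambda i, j: wdist(trajs[i], trajs[j], *p)
            return batched_betacv(latents, dist, pairs)

        # Finite-difference gradient (analytical in practice).
        base = objective(theta)
        grad = [(objective(theta[:k] + [theta[k] + h] + theta[k + 1:])
                 - base) / h for k in range(3)]
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta  # learned (alpha, gamma, epsilon)
\end{lstlisting}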
Our experiments suggest that both steps of Autowarp -- first, learning latent representations using sequence-to-sequence autoencoders, and second, finding a warping distance that agrees with the latent representation -- are important to learning a good similarity metric. In particular, we carried out experiments with deeper autoencoders to determine if increasing the capacity of the autoencoders would allow the autoencoder alone to learn a similarity metric. Our results, some of which appear in Figure \ref{fig:autoencoder_complexity} in Appendix \ref{app:figures}, show that even deeper autoencoders are unable to learn useful similarity metrics without the regularization afforded by restricting ourselves to a family of warping distances. Autowarp can be implemented efficiently because we have defined a differentiable, parametrized family of warping distances over which it is possible to do batched gradient descent. Each step of batched gradient descent can be computed in time $O(SN^2)$, where $S$ is the batch size and $N$ is the number of elements in a given trajectory. There are further possible improvements in speed, for example, by leveraging techniques similar to FastDTW \citep{salvador2007toward}, which can approximate any warping distance in linear time, bringing the run-time of each step of batched gradient descent to $O(SN)$. Across different datasets and noise settings, Autowarp is able to perform as well as, and often better than, the hand-crafted similarity metric designed specifically for the dataset and noise. For example, in Figure \ref{fig:cab}, we note that the Autowarp distance performs almost as well as, and in certain settings even better than, the SSPD metric on the Taxicab Mobility Dataset, for which the SSPD metric was specifically crafted. Similarly, in Figure \ref{fig:asl}, we show that the Autowarp distance outperforms most other distances on the ASL dataset, including the EDR distance, which was validated on this dataset. These results confirm that Autowarp can learn useful distances without prior knowledge of labels or clusters within the data. Future work will extend these results to more challenging time series data, such as higher-dimensional or heterogeneous data. \section*{Acknowledgments} We are grateful to many people for providing helpful suggestions and comments in the preparation of this manuscript. Brainstorming discussions with Ali Abdalla provided the initial sparks that led to the Autowarp algorithm, and discussions with Ali Abid were instrumental in ensuring that the formulation of the algorithm was clear and rigorous. Feedback from Amirata Ghorbani, Jaime Gimenez, Ruishan Liu, and Amirali Aghazadeh was invaluable in guiding the experiments and analyses that were carried out for this paper. \newpage
\section{The Revet\xspace{} Language} \label{sec:lang} \input{figures/motiv_code} Revet\xspace{} compiles a structured and imperative programming language, which is familiar to many programmers. This language is lowered to a flexible control-flow graph (CFG) IR, which uses hierarchical regions to capture explicit parallelism. \subsection{Key Revet\xspace{} Language Features} The language has three new features compared to languages like C. First, Revet\xspace{} requires user-annotated parallelism in the form of \lstinline{foreach}, \lstinline{replicate}, and \lstinline{fork} statements. Second, inside parallel regions, Revet\xspace{} supports the fine-grained control flow expected in an imperative language using \emph{threads}: small sets of live variables that flow through the chip. Third, Revet\xspace{} uses iterators to efficiently orchestrate DRAM-to-SRAM transfers for data-dependent access patterns inside these sequential sections. \paragraph{Flexible Parallelism} Revet\xspace{}, unlike prior work (such as Spatial~\cite{koeplinger2018spatial}, Plasticine~\cite{prabhakar2017plasticine}, or GPUs), supports flexible nested parallelism. Nested parallelism enables scalar-to-vector \emph{broadcasting}, which uses fewer on-chip resources, and flexible parallelism allows vector regions to be nested inside other vector regions. This is important for emulating caches: all threads start in a vector outer region and either traverse a vector cache-hit path or a scalar miss path with a further-nested vector DRAM load. Without this flexibility, the vector hit path would preclude vectorization of DRAM loads. Revet\xspace{} has explicitly parallel \lstinline{foreach} statements to avoid the need to automatically parallelize \lstinline{for} loops. \lstinline!foreach! loops run over integer ranges defined by minimum (default zero), maximum, and stride (default one). Threads inside a \lstinline{foreach} have a read-only view of their parent loop's in-scope variables, but they can dereference pointers allocated by the parent to perform memory writes. Furthermore, a \lstinline{foreach} thread may return a value, which is associatively reduced and returned to the parent. \input{figures/motivatingsnippet} Finally, Revet\xspace{} supports dynamic thread spawning and termination using the \lstinline!fork! construct, unlike GPUs that spawn threads in rigid blocks at kernel launch time. While \lstinline{foreach} creates threads beneath a parent, \lstinline{fork} creates new threads at the same hierarchy level. This is useful when traversing trees: at every node, a thread can fork to traverse every child in parallel. The \lstinline{fork} operation has simple semantics: every live variable is \emph{copied} into a set of new threads, and the \lstinline!fork! returns a counter to distinguish the forked threads. This is similar to a POSIX fork~\cite{posixfork}, except Revet\xspace{} can fork an arbitrary number of threads instead of only two (parent and child). We also provide an \lstinline!exit! operation that terminates thread execution. \Cref{fig:motiv_dataflow} shows how Revet\xspace's parallel constructs work together to map the computation in \Cref{fig:motivate} across load-balanced, spatially distributed, parallel pipelines. Specifically, the body of the \lstinline!while! loop on line 22 and the implicit fill path for the \lstinline!ReadIt! are critical code sections, so Revet\xspace{} uses explicit parallelism to run them on multiple vector pipelines.
The outer \lstinline{foreach} (line 12) will first transform a scalar value into a vector of threads. The \lstinline{replicate} (line 18) will use outer parallelism to distribute those threads across the chip as multiple scalar pipelines (instead of one vector pipeline). Next, the \lstinline{while} statement automatically adds vector parallelism again within each inner pipeline, because threads are executing independently on their own lanes. Finally, the \lstinline!ReadIt! transfer path is scalar (refills are infrequent), so the implicit \lstinline!foreach! inside it can be vectorized again. Revet\xspace's flexible programming model lets the compiled dataflow code transition between vector and scalar execution without explicit programmer intervention. \input{tables/adapters} \paragraph{Fine-Grained Sequential Execution} Control flow is important for implementing several asymptotically efficient algorithms, which can provide a significant benefit over their brute-force equivalents. The dataflow-threads backend (for which Revet\xspace{} is the first compiler) supports fine-grained sequential execution. Every parallel region in a \lstinline!foreach! runs as an independent \emph{thread}. The execution order of operations across threads is unspecified, but operations within a thread (and their side effects) are ordered. Although sequential control flow does not require new language constructs, it has significant advantages over both Plasticine and GPUs. Revet\xspace{} supports a control-flow decision on every lane in every cycle---significantly faster than Plasticine's single FSM per compute unit. Furthermore, our control flow is non-blocking because threads are moved into spatially-distributed pipelines, which is an improvement over GPUs, where a single stalled thread will block the progress of other threads. \paragraph{Access-Pattern Optimized Memories} Memory access patterns that are easy for programmers are hard for hardware and vice versa. Programmers would rather access memory one word at a time, relying on hardware like caches to keep access times low. Conversely, hardware prefers loading an entire vector from DRAM into an SRAM scratchpad and then accessing the scratchpad explicitly. We balance the burden of achieving high performance in a scratchpad-based design between the programmer and the compiler using the semantics shown in \Cref{tab:adapters}. The programmer starts by choosing an appropriate access mode. For example, affine accesses with known dimensions should use views, which coordinate large-tile transfers and can be accessed within \lstinline{foreach} loops. Conversely, data-dependent sequential accesses should use iterators, which our compiler maps to optimized hardware that coordinates small-block transfers dynamically for each thread based on local control-flow decisions, as shown in \Cref{fig:hitmiss}. To support these primitives, and to maintain Spatial's~\cite{koeplinger2018spatial} support for multi-buffered on-chip SRAM, Revet\xspace{} supports dynamic allocation of on-chip memories. Like Spatial, Revet\xspace{} requires that on-chip SRAMs have a compile-time fixed size. However, unlike Spatial and SARA~\cite{zhang2021sara}, Revet\xspace{} supports out-of-order allocation and deallocation to enable reordered thread execution. \subsection{Hierarchical CFG Intermediate Representation} \label{sec:cfghier} \label{sec:hiercfg} Before lowering to dataflow, Revet\xspace{} first converts input code into a hierarchical CFG.
Standard CFGs represent arbitrary control flow within a region, and hierarchical (\lstinline!foreach! and \lstinline!replicate!) regions represent the explicit parallelism exposed by the program, as shown in \Cref{fig:motiv_dataflow}. \paragraph{Sequential Code} Revet\xspace's hierarchical CFG sits between two common abstractions: MLIR's hierarchical structured control flow (SCF~\cite{mlir_scf}) and lowered CFGs (e.g., MLIR's SSACFG regions~\cite{mlir_lang}). The SCF dialect maps control-flow operations using nested regions, which provides a direct mapping for the syntax tree of imperative code. Conversely, the SSACFG uses basic-block terminators like conditional branches, which are closer to machine code and permit optimizations like basic-block deduplication. MLIR usually lowers SCF directly to SSACFG by inlining the nested control-flow regions and inserting branches, which is fine for the single-program-counter abstraction. However, flattening an entire program to SSACFG breaks Revet\xspace's multi-program-counter abstraction, which forbids arbitrary branches into and out of parallel regions. When programs use \lstinline!foreach! statements, basic blocks will run over different nesting depths, and code that uses \lstinline!replicate! statements will have basic blocks running in different places on the chip. Otherwise, for example, a backward branch from a \lstinline!foreach! to its parent would cause infinite nesting, or a branch out of a \lstinline!replicate! would bypass the compiler-inserted merge network. Instead, forcing programs into the hierarchical CFG abstraction ensures that every level of nesting has one entry and exit point. Practically, this corresponds to code that is partially lowered from SCF to SSACFG in MLIR, with \lstinline!if! and \lstinline!while! flattened but \lstinline!foreach! and \lstinline!replicate! maintained. Within a level of hierarchy, sequential code is expressed using a standard CFG made up of acyclic subgraphs and natural loops. Control-flow edges become graph edges, and basic blocks become nodes in the graph, as shown in \Cref{fig:motiv_dataflow}. Nested hierarchical regions within a CFG region are treated as basic blocks by the containing CFG. Furthermore, our analysis does not support function calls or computed branches (i.e., switch statements). Instead, all function calls must be inlined, and any switch statements must be converted to conditional branches. Finally, the natural-loops constraint does not limit generality because node splitting~\cite{janssen1997making} can be used to transform an arbitrary CFG into one consisting of only acyclic subgraphs and natural loops. \paragraph{Foreach} A \lstinline!foreach! region receives counter inputs (min, max, step) and sends vectorized counter outputs to its contained CFG. At the exit block of the contained CFG, the \lstinline!foreach! block will receive a vector of results, which it will reduce and return as one element to the containing CFG. The \lstinline!foreach! region thus adds one level of hierarchy to on-chip tensors: if a one-dimensional tensor is passed in, the contained CFG will execute on a two-dimensional tensor. This is visible in \Cref{fig:motiv_dataflow} via the barrier levels ($\Omega_n$) that terminate the final levels of each on-chip link. Other inputs are broadcast to \lstinline!foreach! regions. For example, a value that is defined outside two nested \lstinline!foreach!
regions will have two broadcast levels added to a use inside the innermost CFG, one added to a use in the middle CFG, and zero added to a use in the defining CFG. Although \lstinline{while} loops do not create an opportunity for broadcasting (they permute data at the innermost tensor level), they do add a tensor level. Therefore, broadcast analysis adds the number of \lstinline{while} loops surrounding a use to its level. Finally, values defined in the enclosing CFG can bypass the \lstinline!foreach! because the \lstinline!foreach! adds barriers between input elements. Bypassing can save significant hardware resources. By default, the \lstinline!foreach! uses a scalar-vector-scalar dataflow created by counters and reductions, but bypassed values never need to be converted to vector dataflow. This is visible in \Cref{fig:motiv_dataflow} as a scalar value in the \lstinline!foreach! loop on line 6 bypassing the \lstinline!foreach! loop on line 12. \paragraph{Replicate} \label{sec:replicate} \lstinline!replicate! regions receive a vector from their surrounding CFG and pass scalar values to their contained CFGs. The \lstinline!replicate! does not add a level of hierarchy to the on-chip tensor format, so if a one-dimensional tensor is passed in, each contained CFG will see a one-dimensional tensor. In \Cref{fig:motiv_dataflow}, the $\Omega_2$ done-barrier is unchanged passing into and out of the \lstinline!replicate!. Unlike \lstinline!foreach! regions, \lstinline!replicate!s do not support bypassing because they reorder their inputs, so any value that is defined in the surrounding CFG and live across the \lstinline!replicate! must be \emph{routed through} the \lstinline!replicate! as an argument and a result. Routing a live variable through a region entails including it in the thread state passed in and out of that region. Because the value is included in thread state, it is permuted with the remaining thread state during merge operations to maintain synchronization, at the cost of additional input buffers and network resources. In \Cref{fig:motiv_dataflow}, \lstinline!idx! needs to be routed through the \lstinline!replicate!. \section{Related Work} \label{sec:related} In this section, we discuss how Revet\xspace{} differs from prior work, including SIMT, dataflow machines, scratchpad management, and parallel languages. \paragraph{SIMT \& Vector-Threads} SIMT models like CUDA~\cite{nvidia2013cuda} and Vector-Threads~\cite{krashinsky2004vector} are the dominant programming model for GPUs and the inspiration for Revet\xspace{}. In CUDA GPUs, 32 threads form a warp, which can execute one instruction per cycle, so inactive threads waste an execution slot. Revet\xspace{} can avoid this problem using spatial execution. Furthermore, CUDA's thread blocks prevent efficient dynamic thread spawning, while dataflow threads permit easy thread duplication in spatial pipelines~\cite{cudynamicparallelism}. \paragraph{Dataflow} Plasticine~\cite{prabhakar2017plasticine}, targeted by Spatial~\cite{koeplinger2018spatial} and SARA~\cite{zhang2021sara}, only has one FSM per compute unit; therefore, one iteration has to complete before the next one can start. HLS~\cite{coussy2009introduction} suffers from the same global-FSM problem because it slices C programs into control logic (FSM) and a datapath. Aurochs~\cite{vilim2021aurochs} was the first vRDA to support dataflow threading using the relational-algebra operations introduced by Gorgon~\cite{vilim2020gorgon}.
Aurochs lacked composable control-flow primitives and, as a result, did not support high-level compilation. Furthermore, Aurochs did not support per-thread SRAM buffers and could not send on-chip scalar values, both of which are needed for efficient dataflow. Unlike Capstan~\cite{rucker2021capstan}, which enabled direct loops over sparse data structures, Revet\xspace{} optimizes for easier-to-achieve parallelism across threads. Stream-join~\cite{nowatzki2017stream,weng2020dsagen}, as proposed in the SPU~\cite{dadu2019towards}, is another paradigm for dataflow computing, but it also suffers from the single-FSM problem. Instruction-based designs (where a CGRA is integrated with a CPU, like SNAFU~\cite{gobieski2021snafu} and MANIC~\cite{gobieski2019manic}) suffer from the single-FSM problem as well, because the CPU can only logically execute one thread at a time. Similarly, time-scheduling (e.g., Fifer~\cite{nguyen2021fifer}) virtualizes hardware to provide the abstraction of increased resources without changing the underlying compute model. Virtualizing multiple computing contexts can provide the abstraction of multiple FSMs, but the need for reconfiguration means that only one FSM can run at a time. Other approaches like Fleet~\cite{thomas2020fleet} and CoRAM~\cite{chung2011coram} help support streaming on FPGAs but require RTL for the streaming algorithms. Others have mapped complicated programs to tagged dataflow~\cite{arvind1977indeterminacy}, including Kahn process networks~\cite{khan1974semantics} and the Monsoon processor~\cite{papadopoulos1990monsoon}. Revet\xspace{} is more efficient and powerful because it targets vectorized, pipelined dataflow without the need for tags and does so from an imperative language. \paragraph{Data Orchestration} By targeting dataflow hardware with scratchpads instead of caches, Revet\xspace{} can improve efficiency relative to von Neumann approaches (albeit by eliminating the potential for reuse across threads). Other approaches like Buffets~\cite{pellauer2019buffets} and Stash~\cite{komuravelli2015stash} also automatically orchestrate scratchpad memory hierarchies. However, Revet\xspace's approach is unique in natively supporting multithreaded accesses and in reusing the logic of a vRDA to do so. \paragraph{Languages \& Compilers} Others have proposed streaming-native DSLs, like StreamIt~\cite{thies2002streamit} and Spidle~\cite{consel2003spidle}, to capture dataflow behavior. However, the goal of Revet\xspace{} is not to expose streaming to the user, but instead to expose an imperative language and lower it to a dataflow backend. Finally, Cilk~\cite{blumofe1996cilk} and OpenMP~\cite{dagum1998openmp} provide C extensions for parallelism, like Revet\xspace{}; however, these languages use their extensions to target multicore CPUs and cannot lower to dataflow. \section{A Generic Model of Dataflow} \label{sec:dataflow} Revet\xspace{} lowers from an imperative language (\Cref{sec:lang}) to a vRDA backend based on streaming tensors, so we start by defining a tensor format with embedded control for distributed streaming control decisions. Then, we discuss the tensor primitives that implement \lstinline!foreach! parallelism, \lstinline!if! statements, and \lstinline!while! loops. Finally, we describe how these primitives map to an Aurochs-inspired vRDA~\cite{vilim2021aurochs}.
\subsection{Structured-Link Tensor Format (SLTF)} \label{sec:format} In Revet\xspace{}, senders and receivers do not share synchronized controllers, so the sender must \emph{encode} its control decisions---and those made by upstream senders---so that they can be communicated to downstream units. This can be done by selectively sending data to only some receivers or by changing metadata. Revet\xspace{} uses a structured-link tensor format (SLTF) to encode this control metadata. \input{figures/hitmiss} \paragraph{Embedding Control with Data} Revet\xspace{} uses a structured on-chip data representation to capture live variables inside threads and hierarchy information across groups of threads. The live variables within each thread are sent as parallel tensors, where ordering associates live values across tensors. Hierarchy is encoded as done-tokens, or barriers ($\Omega_n$), that indicate the end of dimensions: $\Omega_n$ marks the end of dimension $n,$ starting with $\Omega_1$ for the lowest dimension. \Cref{fig:motiv_dataflow} has links annotated with these barriers to show how transitions into and out of parallel regions add and subtract levels of loop hierarchy. Intuitively, the hierarchy metadata represents ragged $k$-dimensional tensors, where the number of dimensions is fixed but each dimension can have a variable size. For example, the two-dimensional tensor [[0, 1], [2]] would be encoded as [0, 1, $\Omega_1,$ 2, $\Omega_2$] in the on-chip network. Here, $\Omega_2$ implies an $\Omega_1$ after element 2, because the tensor dimensions form a strict hierarchy and the tensor contains scalar elements. Adding this hierarchy to on-chip links is inexpensive. We assume that at most one barrier can be sent per on-chip vector and that $n \leq 15,$ which is far deeper than the loop nesting we observe in practice and adds less than 1\% overhead relative to a 512-bit (16\texttimes32-bit) vector. \paragraph{Composability} Handling the empty-tensor edge case is key to composability: without precise control for empty tensors, reductions could not compose with downstream operations. Therefore, to use embedded control metadata for synchronization, Revet\xspace{} must precisely track empty lists. For example, in our abstraction, the three 2-D tensors {[[]]} and {[[],[]]} and {[]} have unique representations ($\Omega_1,\Omega_2$ vs. $\Omega_1,\Omega_1,\Omega_2$ vs. $\Omega_2$). Although all three of these tensors contain no actual data, they represent different control-flow structures: an outer loop running one iteration with a zero-length inner loop, an outer loop running twice with zero-length inner loops, or an outer loop that does not run. Therefore, when passed to an additive reduction, they must yield distinct results: {[0]}, {[0,0]}, and {[]}. \subsection{Streaming Tensor Primitives} \label{sec:tensorprimitives} Given a format for on-chip links that can embed and propagate control-flow decisions, we now describe the streaming primitives that implement local control decisions like parallelism and branching. These primitives are used for individual basic-block edges, and they respect our structured-link tensor format (\Cref{sec:format}), so they can be composed arbitrarily. Together, they provide the sequencing, iteration, and selection needed for arbitrary algorithms. These streaming primitives are inherently agnostic to scalar vs. vector dataflow resources.
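To make the encoding concrete before turning to the primitives, the following minimal sketch (our reading of the format, not the hardware implementation) encodes a ragged nested list into an SLTF token stream; a trailing barrier is elided only when it is implied by the higher barrier that follows actual data, so empty tensors keep their explicit barriers.
\begin{lstlisting}[language=Python]
from dataclasses import dataclass

@dataclass(frozen=True)
class Barrier:
    level: int  # an Omega_n token ending dimension n

def encode(tensor, dims):
    """SLTF-encode a ragged dims-dimensional nested list."""
    if dims == 0:
        return [tensor]  # a scalar element
    out = []
    for sub in tensor:
        out.extend(encode(sub, dims - 1))
    # Elide a trailing lower barrier implied by the barrier we are
    # about to emit -- but only after data, never for empty tensors.
    if len(out) >= 2 and isinstance(out[-1], Barrier) \
            and not isinstance(out[-2], Barrier):
        out.pop()
    out.append(Barrier(dims))
    return out

assert encode([[0, 1], [2]], 2) == [0, 1, Barrier(1), 2, Barrier(2)]
assert encode([[], []], 2) == [Barrier(1), Barrier(1), Barrier(2)]
assert encode([], 2) == [Barrier(2)]
\end{lstlisting}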
In Revet\xspace's machine model, a scalar is logically a vector with one element, so scalar data can transit vector links and vector data can be serialized to fit onto a scalar link. However, in this section's examples (\Cref{fig:mapreduce,fig:filtmerge,fig:fbmerge}), we distinguish scalar from vector links to highlight how Revet\xspace's primitives can convert between scalar and vector dataflow. For example, the \lstinline!while! loop shown in \Cref{fig:fbmerge} has scalar entry and exit links with a vector backedge. \paragraph{Element-Wise Operations} Element-wise operations process one or more tensors: for example, two tensors may be added to yield a third tensor. Memory operations are also element-wise operations: an allocation transforms a void value into a pointer, a read transforms an address into a result, and a write transforms an address and data into a void value. Element-wise operations do not change the ordering, hierarchy, or number of dataflow threads. Therefore, in this section's examples (\Cref{fig:mapreduce,fig:filtmerge,fig:fbmerge}), these operations can take place along any graph edge. As mentioned previously, Revet\xspace{} provides memory ordering guarantees within a \emph{thread}. Therefore, the compiler must guarantee ordering for memory operations' side effects within a basic block. To do so, it relies on data-free \emph{void} tokens like SARA's CMMC~\cite{zhang2021sara}: these are generated by memory operations as results and are inserted as operands. Finally, these void tokens are carried through basic-block transitions, like merges, to guarantee that basic blocks execute in order. \paragraph{\lstinline!foreach!: Expansion, Reduction, \& Flattening} \input{figures/mapreduce} Expansion primitives (\Cref{fig:mapreduce}) enlarge tensors to express map operations. The simplest expansion primitive is broadcasting, which takes a $k$- and a $(k+1)$-dimensional tensor and repeats every element in the first tensor along the last dimension of the second one. Counters can also expand tensors: a counter takes three $k$-D tensors (min, max, and step) and transforms them into a $(k+1)$-D tensor. Reduction (\Cref{fig:mapreduce}) uses an associative operation to coalesce the last tensor dimension into one element, lowering each barrier by one level. Flattening also removes a level of hierarchy from barriers but leaves elements untouched. Counter-expansion and reduction are used to implement \lstinline!foreach! regions. When control flow transfers into the region, the counter creates parallel control flow underneath it, with the index variable distinguishing the new threads in the dataflow and a barrier afterwards to synchronize. Variables that are live into the \lstinline!foreach! region use broadcasts, and a reduction collapses the parallel control flow into a single thread again. To implement \lstinline!fork! statements, counter-expansion is immediately followed by flattening, such that one thread is transformed into multiple. \paragraph{Acyclic Subgraphs: Filtering \& Forward Merging} \input{figures/filtmerge} Tensor filtering (\Cref{fig:filtmerge}) takes an element tensor and a predicate tensor and returns only the elements for which the predicate evaluates to true. For example, an \lstinline!if! statement would use a filter operation to mask off elements so that each element goes to either the \lstinline!if! block or the \lstinline!else! block. Barriers are passed through unmodified, creating two tensors from one moving forward through the pipeline.
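As a functional illustration (reusing the \lstinline{Barrier} token from the sketch above, and modeling the predicate tensor as a function on elements), filtering drops data elements whose predicate is false while barriers always pass through; an \lstinline!if! statement applies it twice, once with the predicate negated, to produce the two outgoing tensors.
\begin{lstlisting}[language=Python]
def filter_stream(tokens, predicate):
    """Sketch of tensor filtering: data elements failing the
    predicate are dropped; barrier tokens pass through unmodified."""
    out = []
    for tok in tokens:
        if isinstance(tok, Barrier) or predicate(tok):
            out.append(tok)
    return out

stream = [1, 4, Barrier(1), 2, Barrier(2)]
assert filter_stream(stream, lambda x: x > 1) == [4, Barrier(1), 2, Barrier(2)]
\end{lstlisting}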
Forward merging (\Cref{fig:filtmerge}) is used at the beginning of a basic block that has two \emph{forward} branches into it. Merging interleaves elements from the lowest tensor dimension eagerly: whenever either input is ready to send, the merge can pass it through. In \Cref{fig:filtmerge}, this is evident when $t_3,$ which branches onto a slow path, exits the merge last. To preserve thread state, the merge can take multiple tensors (corresponding to all the live variables in a thread) and ensure that they are merged atomically. Because the merge keeps per-thread data together, and threads within a hierarchy level are unordered, it preserves programming-model correctness. To respect \lstinline!foreach! operations' barriers, the merge operation uses barriers to pass ordering guarantees through the pipeline. When the merge unit reaches a barrier in the streaming on-chip input, it stalls that link until it reaches an equal barrier on the opposite link. This limits reordering (e.g., between two branches of the same \lstinline!if! statement) to one level of the tensor hierarchy, so threads in a parallel region do not cross barriers and remain synchronized to their parent thread. \paragraph{Cyclic Subgraphs: Forward-Backward Merging} Like forward merging, forward-backward merging (\Cref{fig:fbmerge}) interleaves incoming threads. However, unlike forward merging, forward-backward merging combines tensors resulting from backward branches (e.g., at the head of a \lstinline!while! loop). The backward branch can only send a barrier \emph{after} the merge sends a barrier, but the merge can only send a final barrier once it receives one from the backward branch. The forward-backward merge uses special logic to break this would-be cyclic dependency. A natural loop will have one header block, which is the meeting point of all forward edges into the loop and all backward edges. Intuitively, the forward-backward merge at the loop header takes a 1-D tensor of input elements at a time and iterates it to form a 2-D tensor of executed loop bodies. The forward-backward merge primitive starts by outputting values from the forward branch into the loop body until it receives a done-token. The done-token causes the merge to stall inputs on the forward branch, and the loop header will use barrier semantics to ensure the loop body is empty before allowing more threads to enter. Because the loop header is the sole entry point to a natural loop, it can \emph{reassign} barrier levels inside its loop as long as barriers exiting the loop are correct. Specifically, the loop header \emph{adds} a level to incoming barriers, so it can reserve the lowest barrier $\Omega_1$ for checking whether the loop body is empty. The merge will send an $\Omega_1$ token to terminate its sent data; it will continue to send this token every time it appears at the backward-branch input. When the loop body is empty (all the threads executing the while loop have terminated), the backward branch will receive two $\Omega_1$ tokens in a row, which will cause the forward-backward merge to send a done-token one level higher than the one originally received on the forward-branch link. Edges leaving the body then lower all barriers by one level, eliminating the added $\Omega_1$ barriers and restoring input barriers to their correct levels. This ensures that cyclic regions respect the same barrier constraints as acyclic ones, making them composable.
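These semantics can be summarized by a small sequential model (again reusing \lstinline{Barrier}; this models the abstraction, not the streaming hardware, so thread-exit order is simplified): threads arriving on the forward edge iterate the loop body while a condition holds, and each barrier is released only after the loop body has fully drained.
\begin{lstlisting}[language=Python]
def while_loop(stream, cond, body):
    """Functional model of a forward-backward merge feeding a loop."""
    out, live = [], []
    for tok in stream:
        if isinstance(tok, Barrier):
            while live:  # drain the body before releasing the barrier
                nxt = []
                for t in live:
                    if cond(t):
                        nxt.append(body(t))  # backedge: re-enter merge
                    else:
                        out.append(t)        # thread exits the loop
                live = nxt
            out.append(tok)  # barrier passes at its own level
        else:
            live.append(tok)  # forward edge into the loop
    return out

halved = while_loop([5, 2, Barrier(1)], lambda x: x > 1, lambda x: x // 2)
assert halved == [1, 1, Barrier(1)]
\end{lstlisting}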
\input{figures/fbmerge} \subsection{Mapping to Virtual Hardware} Finally, these primitives must be mapped to a hardware model. We assume an overall compute-unit structure based on Aurochs and Plasticine~\cite{vilim2021aurochs,prabhakar2017plasticine}. In our model, mergers, counters, and broadcasts sit at the beginning of the pipeline; element-wise operations happen inside the pipeline; and reductions, filters, and flattening happen at the end of the pipeline. In addition, vector tensors can be converted to and from scalar tensors, which send only one element per cycle but require smaller links and buffers. \section{Evaluation}\label{sec:eval} We evaluate Revet\xspace{} and show that it outperforms industrial baseline architectures on a wide variety of applications. We first discuss our methodology; we then discuss Revet\xspace's performance and how optimizations improve generated code. \subsection{Methodology} We evaluate Revet\xspace{} using a cycle-accurate vRDA simulation including a model of HBM2 memory~\cite{kim2015ramulator,standard2013high,zhang2019scalable} against real-world baseline designs on a variety of kernels. \paragraph{Hardware Model} To evaluate the dataflow-threads programming model and backend abstraction in a physically-constrained environment, we use an abstract machine model based on Plasticine~\cite{prabhakar2017plasticine}. Our abstract vRDA comprises 200 compute units (CUs), 200 memory units (MUs), and 80 DRAM address generators (AGs) connected by a flexible on-chip network~\cite{zhang2019scalable}; the parameters we use are shown in \Cref{tab:params}. We estimate area as that of Capstan~\cite{rucker2021capstan} with the logic from Aurochs~\cite{vilim2020gorgon} added in, giving a total area of approximately \SI{189}{mm^2} in a \SI{15}{nm} educational process with a clock frequency of \SI{1.6}{GHz}. Because the \SI{15}{nm} library lacks a memory compiler, prior work used SRAMs scaled from a \SI{28}{nm} industrial library. This is 4.3\texttimes{} smaller than Nvidia's V100 GPU, our primary baseline~\cite{v100datasheet}. To ensure that we accurately model hardware, we split Revet\xspace{}'s compiled programs to map to the blocks provided by our vRDA machine model. Our splitting constraints are the number of pipeline stages, registers, inputs, and outputs; we assume that merge units, constant inputs to merges, counters, and void inputs do not consume resources beyond their associated input buffers and registers. Furthermore, to respect MU and AG mapping limits, we only map address-generation contexts where all inputs are scalar and output-accumulation contexts where the only operation is a void reduction. We further assume that operations can read and write 8- or 16-bit sub-registers (like x86's ?H and ?L sub-registers~\cite{morse1978intel}) and a small skid buffer when reshaping vectors to fit on scalar links. \paragraph{Baselines} We evaluate Revet\xspace{} against an Nvidia V100~\cite{jia2018dissecting} GPU and an Intel Xeon CPU. All of our applications have independent threads running under a parallel region, so we scale problem sizes across platforms so that each reaches its peak performance and report normalized performance in GB/s. This ensures that the baselines achieve the best performance possible and makes Revet\xspace's reported relative performance a lower bound. Application sizes are reported as the sum of input and output data sizes, except for kD-tree, which uses the size of the fetched points that are counted.
GPU tests were performed on an AWS p3.2xlarge instance using CUDA 11.6, RAPIDS 22.04~\cite{rapids}, and cuCollections~\cite{cuco} running Linux 5.13.0 and Nvidia driver 510.47.03. For all benchmarks except kD-tree, we use nvprof to measure only kernel runtime, which excludes device/host transfers, barriers, and CUDA stream synchronization. kD-tree uses host timers because the RAPIDS implementation uses multiple kernels. CPU tests were performed on an AWS m6i.16xlarge using GCC 11.2.0 with -O3 and OpenMP. The CPU is a \nth{3} generation Xeon Platinum at \SI{3.5}{GHz} with 64 threads and \SI{205}{GB/s} of DDR4 bandwidth. For both baselines, benchmarks were run 25 times and the average of all runs was taken. \paragraph{Applications} We use a variety of applications to evaluate Revet\xspace, as shown in \Cref{tab:apps_revet}. \ifdefined\longversion CPU baselines are inet\_aton, inet\_addr, \seqsplit{SMHasher::MurmurHash3\_x86\_32}, stl::unordered\_map::find, stl::boyer\_moore\_horspool\_searcher, and handwritten (\texttimes3), respectively. GPU baselines are rapids::isipv4, rapids::ip2int, rapids::hash\_values, cuco::static\_map::find, rapids::find, handwritten (\texttimes2), and rapids::\seqsplit{join\_quadtree\_and\_bounding\_boxes+rapids::quadtree\_point\_in\_polygon+rapids::groupby}. The handwritten Huffman and CPU kD-tree baselines use the same algorithm as Revet\xspace. \fi These applications are selected to focus on Revet\xspace{}'s \emph{new} functionality, so they all represent applications that cannot be compiled to Plasticine or other vRDAs. Because threads provide a superset of MapReduce functionality, any code that could be compiled by Spatial could also be mapped by Revet\xspace. They are drawn from a variety of domains including string analytics, data-structure traversal, search, and generic data-processing algorithms like hashing. We focus on discrete kernels to avoid inter-kernel overheads in the baselines; the GPU tree traversal is the only multi-kernel baseline. When evaluating applications, we assume that runtime is a function of bulk throughput, data size, and initialization time: $\mathrm{runtime} = \mathrm{size}/\mathrm{throughput} + \mathrm{init}.$ Furthermore, all of our benchmarks exploit abundant, non-communicating threaded parallelism, so the amount of work can be increased without changing the \emph{nature} of the work. Revet\xspace{} also uses static SRAM allocation instead of caches, which means that threads do not interfere with each other, and adding more threads will not decrease aggregate system throughput. Therefore, for every platform, we use the largest data sizes feasible to measure throughput, which yields trivially short initialization times. Although the data sizes for Revet\xspace{} are relatively small, the inclusion of initialization time means that throughput would only \emph{increase} with larger datasets. \subsection{Resource Requirements \& Performance} Revet\xspace{} generates resource-efficient vRDA configurations. Furthermore, using Revet\xspace{}, a vRDA outperforms a GPU on a variety of applications and is DRAM-bandwidth-limited for many of them as well. \paragraph{Resource Breakdown} Our first evaluation shows the vRDA resources required by Revet\xspace-generated code in \Cref{tab:resources}. It is challenging for a vRDA application to make 100\% use of resources due to on-chip network constraints~\cite{zhang2019scalable}, so we scale outer parallelism to 70\% usage of the critical resource (CU, MU, or AG).
Using outer and vector parallelism, Revet\xspace{} can provide hundreds of SIMD-parallel lanes. Furthermore, some applications (isipv4 and ip2int) are outer-parallelized at two levels: tile loads/stores for thread arguments/results and the inner loops. For these, up to three vectorized streams (48 lanes) process tiles of thread inputs/outputs while thirty vectorized streams (480 lanes) process the inner while loops. Although Revet\xspace{} maps fewer vector lanes than a GPU has CUDA cores, Revet\xspace{} lanes process multiple pipelined instructions per cycle. \input{tables/perf} Revet\xspace{} has minimal resource overhead: for all of our applications, most mapped CUs are used for inner-loop operations, and only a few CUs and MUs are used for workload distribution and buffering live values around replicates. MUs are also used for retiming, but these can frequently be shared with those inserted for deadlock avoidance. \paragraph{Throughput} \Cref{tab:perf_revet} shows each design's throughput. On average, Revet\xspace{} is 3.8\texttimes{} faster than the GPU; when estimated die area is taken into account, this gap grows to over 16\texttimes. Furthermore, isipv4, ip2int, and murmur3 use over 75\% of the peak HBM2 bandwidth; hash-table is limited by DRAM activations. The greater geomean performance improvement from ideal DRAM (+35\%, D) than from ideal on-chip resources (+6\%, SN) also shows that our applications are well-mapped. The GPU performs best on applications where each thread processes a small amount of data (ip2int and isipv4 read about \SI{13}{B} per thread). On applications like murmur3 and search, the GPU is slowed down by the longer data involved (\SI{64}{B} and \SI{256}{B}). Although SIMT supports 32 threads per cycle, the GPU can process fewer independent accesses through its L1 cache; therefore, independent threads cannot run at full throughput unless they access nearby addresses. This is because the GPU expects, and requires, coalescing for cached levels of the memory hierarchy (i.e., everything except explicit SRAM): the L1 cache can only execute a certain number of tag checks per cycle~\cite{lloyd2019gpucheck}. Revet\xspace{} does not have this problem: because iterators are mapped in SRAM, they execute in parallel without tag checks. Search performs better on the vRDA because Revet\xspace{}'s support for efficient branching enables the asymptotically-efficient Boyer-Moore~\cite{boyer1977fast} algorithm. Boyer-Moore is complex because each thread is independently matching backwards along the pattern or computing an offset; Revet\xspace{} uses nested \lstinline!while! loops to support this behavior. In addition to forcing a poor search algorithm, the GPU's constraints also force a poor algorithm for tree traversal. Because CUDA does not support recursion (like the CPU) or \lstinline!fork! statements (like Revet\xspace), every iteration of its quad-tree traversal must write into a large array. However, because each iteration only selects a few children, little parallelism is extracted to amortize inter-kernel overheads. \paragraph{Aurochs Comparison} Finally, we compare to Aurochs~\cite{vilim2021aurochs}, a primitive implementation of dataflow threads. Most Revet\xspace{} applications cannot run on Aurochs because it lacks support for the local allocation needed for intra-thread locality; the tree traversal benchmark is supported.
The more efficient on-chip primitives supported by Revet\xspace{} allow the kD-tree implementation to be over 11\texttimes{} faster than the Aurochs tree traversal. First, Aurochs lacked support for thread-local storage, which results in up to 10 live variables traversing its pipeline that have to be duplicated whenever threads are forked; Revet\xspace{} can store these variables in SRAM. \input{figures/kdtree} Second, Aurochs does not support fine-grained parallelism via \lstinline!foreach! loops. Our kD-tree uses a \lstinline!foreach! loop to vectorize 15 comparisons for every node, which are ANDed together to identify which child nodes should be traversed. \Cref{fig:kdtree} shows a simplified version that uses three lanes to traverse two tree levels. Every lane's comparison starts with a mask for regions that are ignored: for instance, Lane~1 ignores the right two regions. Then, the lane compares its partition value against the query's minimum and maximum ranges, which produces a per-lane output. In the example, Lane~2's comparison, which produces a validity mask of 1110, is overridden by Lane~0, which determines that 0010 and 0001 are invalid. Therefore, with only \SI{64}{B} loaded from DRAM per node, Revet\xspace{} can handle a 16-ary tree. \input{figures/opt_res} \subsection{Optimizations} In \Cref{sec:implementation}, we discussed several optimization passes for Revet\xspace. In this section, we discuss how these passes either save resources or improve performance directly. \paragraph{Resource-Saving Optimizations} \Cref{fig:ressave} shows the effect of disabling enhanced if-to-select conversion, allocator hoisting and replicate bufferization, and variable packing. Not all optimizations improve all applications: for example, if-to-select conversion has no impact on isipv4, which has no convertible \lstinline!if! statements. These passes lower resource requirements by reducing either the number of basic blocks (If Conv) or the number of live variables that have to be permuted in the pipeline (Buffer and Pack). However, extracting pointers after allocator hoisting can add resources, as seen in the Buffer column. Without these resource-saving optimizations, only kD-tree would be able to hit the outer-parallelism factors that we target in our evaluation because it is AG-limited. \paragraph{Hierarchy Removal} \Cref{fig:hierremoval} shows how hierarchy removal improves area-performance scaling, using murmur3 as a case study. Generally, every application loads a tile of thread-initialization data from DRAM, computes the thread results, and stores the data back. With hierarchy removal, applications run on small tiles which can coexist in the pipeline. The hierarchical case uses large tiles that \emph{cannot} coexist: one tile must be done inside a while loop before another can start executing. These large tiles can either be loaded outside the \lstinline!replicate! (shared init.) or inside the \lstinline!replicate! (duplicated init.). With tiles loaded outside replicated regions, hierarchy actually slightly reduces area by limiting overhead. However, as outer-parallelism increases, the parallelism allocated to each replicated region decreases, which leads to a widening performance gap. If tiles are instead loaded inside replicated regions, hierarchy can achieve similar performance but with area increases from duplicated tile loads and stores.
\paragraph{Load Balancing} \Cref{fig:loadbalance} shows how allocation hoisting improves search's performance on the hybrid network, where different outer-parallel regions run at different throughputs. The allocator handles a slow \lstinline!replicate! region that is 30\% slower than the fastest one, and it is better than Plasticine's~\cite{prabhakar2017plasticine} fixed work allocation, which would be bottlenecked by the slow region. For small amounts of work (left), the allocator is able to assign buffers to all incoming threads without starving. Therefore, every region gets an equal amount of work (12.5\% of the input). As the amount of work increases, the allocator runs out of buffers, so not every thread is able to run in the first wave. Then, faster regions free their allocated buffers first, and regions are assigned new work only after they complete existing work. This creates a feedback loop that leads to slower regions receiving less work (less than 10\%) and faster regions more (14\%), avoiding the 21\% slowdown that would occur if all regions ran at the slowest region's speed. \input{figures/hierremoval} \input{figures/loadbalance} \section{Introduction} \label{sec:intro} Spatial dataflow accelerators eliminate the overheads of modern von Neumann machines, including instruction fetch, dynamic scheduling, caching, and speculation, by statically scheduling computation. In particular, vectorized reconfigurable dataflow accelerators (vRDAs)~\cite{jouppi2017datacenter,jouppi2021ten,prabhakar2017plasticine,rucker2021capstan,prabhakar2021sambanova} are a new class of hardware that uses a large grid of compute and memory to exploit both \emph{vector} (SIMD) and \emph{pipeline} parallelism. Furthermore, coarse-grained pipelining enables on-chip kernel fusion: a program can be pipelined across the entire chip without intermediate materializations to DRAM. Overall, vRDAs maximize compute and memory bandwidth while lowering overhead. However, vRDAs are limited by their programming model. Currently, users program vRDAs with either hard-coded kernels~\cite{vilim2020gorgon,vilim2021aurochs} or hierarchical MapReduce code (e.g., the Spatial~\cite{koeplinger2018spatial} language). Programming vRDAs with libraries of hard-coded kernels limits them to that small set of operations and prevents efficient fusion across operations, while prior MapReduce models limit programs to parallel loops over a control-flow-free inner loop body. This excludes algorithms with any data-dependent inner iteration, because such algorithms require control flow that exceeds MapReduce's limits. Although vRDAs are more efficient, GPUs currently dominate the compute-accelerator market. Imperative programming models, including the \emph{threaded} SIMT model used by GPUs, are more powerful than MapReduce because they support parallelism over data-dependent structured control flow like \lstinline!if! statements and \lstinline!while! loops. This generality gap between MapReduce and SIMT inspires our key question: can we program vRDAs with thread-based programming languages? \input{figures/intro} In this paper, we introduce Revet\xspace{} (\textbf{Re}configurable \textbf{ve}ctor \textbf{t}hreading), a compiler that uses dataflow threading~\cite{vilim2021aurochs} to map a simple yet expressive imperative language to vRDAs. Dataflow threads increase the flexibility of vRDAs by moving control-flow decisions from a specialized control plane (with extremely limited bandwidth) to the higher-throughput data plane, which executes them as spatial routing decisions.
Revet\xspace{} introduces a control-flow-to-dataflow lowering pass that supports a wide variety of control-flow constructs including \lstinline!while! loops, \lstinline!if! statements, and nested parallel \lstinline!foreach! loops. In turn, this control flow enables more asymptotically efficient algorithms to solve user-facing problems. We describe several optimizations for Revet\xspace. These include efficient scratchpad orchestration, with the ease of accessing caches, for common access patterns like data-dependent sequential reads and writes. Other optimizations lower the number of compute units needed for the resulting dataflow. Finally, we describe the composable abstract machine model that Revet\xspace targets, which is based on the Aurochs vRDA~\cite{vilim2021aurochs}, extended with support for broadcasting to lower on-chip communication requirements using smaller scalar network switches~\cite{zhang2019scalable}. Our key contributions are: \begin{enumerate} \item a language that efficiently captures parallelism and memory locality for applications with data-dependent control flow in a way that can be mapped to vRDAs (\Cref{sec:lang}), \item a flexible vRDA abstract machine that extends Aurochs's per-lane control flow with hierarchical control flow (\Cref{sec:dataflow}), and \item a compiler that optimizes and lowers our new language to streaming, vectorized, and pipelined dataflow on our abstract machine (\Cref{sec:implementation}). \end{enumerate} We demonstrate Revet\xspace's flexibility by compiling a variety of applications drawn from data analytics, data-structure traversal, geospatial analytics, and string analytics, none of which can be expressed in MapReduce. We use cycle-accurate simulation to show that Revet\xspace{} outperforms a V100 GPU by a geomean 3.8\texttimes{} on a 4.3\texttimes{} smaller vRDA, resulting in an area-adjusted speedup of 16\texttimes{}. We also analyze how Revet\xspace{} uses the vRDA's hardware units and demonstrate that our optimization passes save significant resources. \section{Implementation} \label{sec:implementation} \input{figures/compilerstages} In this section, we describe the practical details behind our prototype implementation. Revet\xspace{}'s compiler starts by parsing the language and eliminating several constructs implemented to improve programmer productivity, including views and iterators. Then, the compiler performs optimizations on the hierarchical CFG (\Cref{sec:hiercfg}) to improve the efficiency of the generated dataflow. Finally, after lowering the CFG to dataflow and inserting buffers to avoid deadlock, the compiler optimizes the resulting dataflow graph. \subsection{Front-End Lowering} \label{sec:frontendlowering} Revet\xspace{} uses a front-end~\cite{parr2013definitive} that emits code into an MLIR~\cite{lattner2020mlir} representation as a mixture of the structured control flow dialect (SCF) and a custom Revet\xspace{} dialect that captures our custom front-end features (\Cref{sec:lang}). Our front-end also inserts type-conversion operations as needed. Then, we progressively lower the high-level Revet\xspace{} memory operations (e.g., iterators) until every memory is expressed as an SRAM with scalar accesses, and we perform hierarchy elimination if requested. \paragraph{View \& Iterator Lowering} We rewrite Revet\xspace{} views and iterators (\Cref{tab:adapters}) into MemRefs (MLIR's annotated memory type) and integers.
Views are simple: allocations are replaced with a MemRef allocation and a bulk load (if needed) and deallocations are replaced with a MemRef deallocation and a bulk store (if needed). These primitives are more efficient for sub-word types because a backend-inserted bulk store can process 32 bits per cycle. Iterators are slightly more complicated: the basic \lstinline!ReadIt!, for example, has a MemRef buffer, a local pointer (8 bits to reduce dataflow overhead), and a global pointer (in SRAM). The global pointer is fetched and incremented only when the local pointer wraps around. Because dereference is less common, we fill read iterators' buffers only at dereference to decrease the amount of hardware mapped. \lstinline!WriteIt!s can be flushed at increments or deallocation, which would na\"ively require two store paths. To avoid this, the \lstinline!ManualWriteIt! takes an input at increment indicating the last loop iteration to elide the deallocation flush. \paragraph{Foreach Hierarchy Elimination} \lstinline!foreach! regions are eventually lowered to streaming tensor operations (\Cref{sec:tensorprimitives}), which use barriers to sequence threads. However, barriers can limit parallelism inside \lstinline!while! loops, where they force a total flush of the loop body before new threads can enter the loop. Because barriers change side-effect ordering, they cannot be automatically eliminated, so we only rewrite \lstinline{pragma}-annotated \lstinline!foreach! statements. When rewriting these statements, we initialize a memory location with the number of elements expected and execute a \lstinline!fork! to create hierarchy-less threads, as shown in \Cref{fig:flatten}. Instead of reduction, threads atomically fetch and decrement the shared memory location. If zero elements remain, the thread is the last one and iteration is complete; otherwise, the thread exits. This removes the strict synchronization between \lstinline!foreach! loops: the straggling children of one parent can be interleaved with those of the next parent. \input{figures/flatten} \subsection{Optimization} \label{sec:optimization} At this point, our IR is in a mix of SCF, standard arithmetic operations, and physical memory operations. We run several rewrite passes before lowering the code to dataflow, in addition to existing MLIR passes. These passes rewrite the high-level IR to increase dataflow performance. \paragraph{SRAM Allocator Optimizations} \label{sec:fusealloc} To avoid fragmentation, Revet's on-chip allocation relies on compile-time-determined fixed-size buffers at each memory. For example, if an SRAM buffer is specified as \SI{64}{B} (matching the vector width), we rewrite every memory access as: $\mathrm{ptr}\times\mathrm{64}+\mathrm{off}.$ This transformation means that every integer within a range $[0,\mathrm{max})$ is a valid pointer, and one pointer can be used at multiple memories as long as it is in range. By default, the maximum pointer is the size of a single MU divided by the thread-local buffer size, but users can increase the thread count using a \lstinline!pragma!, which will cause multiple MUs to be inserted to increase storage. We fuse all allocations in a basic block, taking the intersection of valid pointers for the memories to be fused. Because each allocator is sampling from a range defined by a single maximum value, the fused range is defined by the smallest maximum pointer across all memories in the basic block. 
Finally, Revet loads these pointers into a queue stored in a memory unit, so allocation pops a pointer from this queue and deallocation pushes it back. \label{sec:hoistalloc} \paragraph{Allocator Hoisting \& Bufferization} \label{sec:bufferize} If a \lstinline!replicate! region has one allocation after fusion, we can increase its range, using the low bits to point to a specific region and the high bits to address an SRAM buffer within that region. This has two benefits. First, it lowers resource requirements by vectorizing allocation (\Cref{fig:hoistbuffer}), with one allocator globally instead of one per region. Second, it provides native round-robin load balancing: regions only receive new allocations after they complete existing ones. Because live values cannot bypass a \lstinline!replicate! (\Cref{sec:cfghier}), they would by default have to be routed through it. Instead, we \emph{reuse} the single live pointer into a \lstinline!replicate! (if one has been hoisted) to bufferize live values around it, inserting an SRAM to store the value (\Cref{fig:hoistbuffer}). Then, we replace all uses after the replicate with a load from this SRAM, so the value is not live through the replicate. \paragraph{If to Select Conversion} \label{sec:lowerif} Na\"ive dataflow would assign a compute unit to each branch of an \lstinline{if} statement, but \lstinline{if} statements without inner loops would just leave empty lanes. Therefore, we inline all \lstinline{if} statements that lack inner loops, replacing them with conditional moves and predicating memory operations. This is more powerful than MLIR's default of only rewriting empty \lstinline{if}s. \input{figures/hoistbuffer} \paragraph{Sub-Word Packing} \label{sec:subwordpack} Every variable that is live into a merge operation consumes a significant number of network resources and input buffers, leading to congestion. Therefore, we identify sub-word values (\lstinline{int8} and \lstinline{int16}) that are live into or out of \lstinline{while} loops. In a na\"ive dataflow lowering, each of these would have to be promoted to an \lstinline{int32} because threads execute on 32-bit lanes. However, this would waste buffers and on-chip links, which are critical resources when mapping. Therefore, we pack these values into a single \lstinline{int32}, making sure to minimize permutation for nested loops. We also optimize AND, shift, and OR operations that can be expressed as sub-word reads and writes on fixed boundaries. \subsection{CFG to Dataflow Lowering} \label{sec:cflowdflowimpl} In this section, we describe a prototype system that operates on MLIR's SCF dialect using specially-annotated CFGs as an intermediate format to ensure our hierarchical-CFG constraints are respected. \paragraph{Graph Annotation} Between SCF and dataflow, we use an annotated CFG that indicates which edges should be forward merges and which edges should be forward-backward merges using specialized block terminators. We also flatten \lstinline!foreach! regions into the main CFG and replace their input and output edges with special counter and reduce terminators (like Tapir~\cite{schardl2017tapir}). We ensure that every block has no more than two predecessors to meet our hardware's merge constraint. Unlike the hierarchical CFG described in \Cref{sec:hiercfg}, this IR is brittle, so optimizations that could change the graph structure are disabled. 
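As a sketch of how the two-predecessor merge constraint can be enforced (our own illustration in plain Python, not the compiler's actual rewrite), an $n$-way join can be decomposed into a tree of binary merge blocks:
\begin{lstlisting}[language=Python]
# Illustrative: rewrite an n-predecessor join into binary merges so that
# every block has at most two predecessors (the hardware merge constraint).
def merge_tree(preds):
    """Reduce a list of predecessor block names two at a time."""
    fresh = iter(range(10**6))      # generator of fresh block ids
    work = list(preds)
    merges = []                     # (lhs, rhs, result) triples
    while len(work) > 1:
        a, b = work.pop(0), work.pop(0)
        m = f"merge{next(fresh)}"
        merges.append((a, b, m))    # one binary merge block
        work.append(m)
    return merges, work[0]

merges, out = merge_tree(["bb0", "bb1", "bb2", "bb3"])
print(merges)  # [('bb0','bb1','merge0'), ('bb2','bb3','merge1'), ('merge0','merge1','merge2')]
print(out)     # 'merge2'
\end{lstlisting}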
\paragraph{Basic-Block Inputs} When mapping a block, we start by identifying all live-in variables and determining whether they should be broadcast using a mapping table of blocks to nesting depths. For example, a block following an \lstinline!if! statement will have two sets of live-ins (one from each branch, followed by a merge operation) in addition to broadcasts. If there are no live-in values, a data-less void value is inserted between blocks to guarantee a correct number of live-outs. This void value is chained through pipelined memory operations to create a data dependence that enforces ordering after contexts have been split. After cloning inputs, we clone the basic block as element-wise operations. \paragraph{Basic-Block Outputs} Basic-block live-out values are mapped based on the terminator operation. Unconditional branches map to unconditional outputs: each lane in the pipeline is sent as an output. Conditional branches map to filter outputs, taking a condition and a value per lane; only lanes matching the condition are sent. \lstinline!foreach! block terminators map to counters, which are later moved to the head of their destination context, and reductions map to a reduction operation at the end of the context pipeline. Finally, outputs exiting a while-loop region strip hierarchy without reduction. \paragraph{Replicate} The logic to connect \lstinline!replicate! regions to the enclosing CFG is implemented using the filter and merge primitives shown in \Cref{sec:dataflow}: each thread is broadcast to a filter before every contained CFG and only threads with matching keys are forwarded. Afterwards, return values from the contained CFG are merged using a tree of forward merge units. To speed compilation, we use late unrolling for \lstinline!replicate! regions: we create one node with code and multiple nodes for references, which are duplicated immediately before placement. We then insert work-distribution and output-merging logic using the filter and forward-merge primitives from \Cref{sec:dataflow}. To avoid a single slow \lstinline!replicate! region stalling a hoisted allocator and starving faster regions, we insert buffers in the work-distribution logic. \ifdefined\longversion \paragraph{Deadlock Avoidance \& Mitigation} \input{figures/deadlock.tex} \label{sec:deadlock} In the real world, finite buffer depths can stall senders, and in the worst case, full buffers on a cyclic path can cause deadlock. Revet\xspace{} CFGs are deadlock-free when they are acyclic \emph{or} have no control flow on the cyclic path, in which case the loop header can use backpressure to accurately sense the total worst-case buffering requirements. Furthermore, deadlocks can be avoided in all practical cases by adding a buffer on the cyclic path that is deep enough to drain all threads from the pipeline. Deadlocks can happen in cyclic graphs due to hold-and-wait dependency cycles. However, deadlocks can be avoided in cyclic graphs that have only a single path from the loop header back to itself (effectively, while loops with no nested control flow). In these graphs, the loop header must simply ensure that there is enough buffer space to \emph{receive} a thread on the backward edge before merging a thread from the forward edge. This guarantees that the cyclic path will never be full (\Cref{fig:deadlock}(a)). However, an \lstinline{if} statement inside the loop may have a deep and shallow path, and the deep path may accumulate a large number of threads.
Then, if threads \emph{cross over} to the shallow path, it could fill up and create a cyclic stall (\Cref{fig:deadlock}(b)). To solve this problem, a deep buffer is inserted on the backward edge. Normally, the buffer will be empty because the merger will avoid stalling the backward edge. However, in an incipient deadlock, the merger's backward-edge input will fill up, and the buffer will absorb the excess load from the long path. The merger will stop admitting new threads into the loop, and existing threads will drain from the acyclic loop body into the buffer, alleviating the deadlock. \fi \input{tables/rda_params} \input{tables/apps_tab} \subsection{Dataflow Optimization} \label{sec:streamingtransforms} In the previous subsection, we described lowering to a \emph{virtual} streaming IR, which is a one-to-one mapping of control-flow constructs to dataflow. Here, we describe passes that transform the virtual streaming IR into a \emph{physical} streaming IR. These include passes like splitting that ensure our IR can map to on-chip units and optimization passes like retiming. \paragraph{Vector/Scalar Link Analysis} Because some buffers can only store scalars, accurately mapping virtual links to either vector or scalar physical links is important---especially for merges. Only two vector-vector merges fit in a context (a total of four vector buffers), but four scalar-vector merges fit, halving the number of resources required. However, if a high-throughput link is mapped to a scalar physical link, then its throughput will be only $\rfrac{1}{16}$ of peak. Therefore, we treat links as vector by default, with the exception of blocks following \lstinline!while! loops and the entrances and exits of \lstinline!replicate! regions and the main program graph. However, a \lstinline{pragma} can override this. \paragraph{Splitting, Retiming, \& Placement} Initially, compute operations are mixed with memory operations in contexts, and a single context may have memory operations at two or more memories and an impossible number of inputs, outputs, or operations. We first split every memory operation into its own context, and then split over-size contexts. We next insert buffers to avoid deadlock \ifdefined\longversion (\Cref{sec:deadlock}) \fi and mitigate path-delay imbalances~\cite{zhang2021sara}. Finally, we place the graph using previously proposed tools~\cite{zhang2019scalable}, prioritizing deeply nested nodes. \section{Conclusion} \label{sec:conclusion} We introduce Revet\xspace{}, a compiler that takes threaded imperative code and lowers it to run on a vectorized RDA. Revet\xspace{} enables control flow in the presence of abundant unordered parallelism on an architectural paradigm that previously only supported control-flow-free parallel sections. Thus, Revet\xspace{} provides SIMT's \emph{threaded} abstraction---one control flow decision per lane, per cycle---while also demonstrating intelligent scratchpad allocation that eliminates the need for caches for a wide range of threaded applications. On a variety of real-world applications, Revet\xspace{} is 3.8\texttimes{} faster than a GPU on a 4.3\texttimes{} smaller chip and over 13\texttimes{} faster than a CPU. \section{Background} Revet\xspace{} compiles to a machine model based on Aurochs~\cite{vilim2021aurochs}, a vRDA for dataflow threads. The current state-of-the-art programming model for vRDA compilation is Spatial~\cite{koeplinger2018spatial}, which uses a parallel-patterns approach.
Revet\xspace{} retains Spatial's support for explicit, user-facing parallelism while using dataflow threads to improve the flexibility of vectorization and add support for sequential control flow within parallel sections. \paragraph{Aurochs} Aurochs~\cite{vilim2021aurochs} is a vRDA that maps programs to a grid of compute and memory resources, as shown in \Cref{fig:aurochs}. Specifically, Aurochs is a grid of vectorized compute units (CUs) and memory units (MUs), arranged in a checkerboard pattern and surrounded by DRAM address generators (AGs). The CUs, MUs, and AGs are connected by a hybrid static-dynamic network~\cite{zhang2019scalable}, and the entire chip runs at a fixed clock frequency. Inside a CU, Aurochs's pipeline follows a linear layout. To maintain 100\% dataflow throughput, each unit uses input buffers at the pipeline head to account for network path-length imbalances. Then, the input buffers can be interleaved using a merge unit~\cite{vilim2020gorgon} and broadcast using a counter-based control unit. After the pipeline-head logic has permuted the inputs, data then enters the pipeline, where six stages (each with an element-wise, statically-mapped instruction over 16 lanes) process it. Finally, data exits through an optional filtering stage to the network. Aurochs introduced the \emph{dataflow threads} model, where every \emph{thread} is simply a database record containing its live values---values that a CPU would store in registers. The thread record tracks the live values throughout the pipeline and keeps them synchronized as network links carry streaming lists of threads. Using the filtering stage and subsequent merging, Aurochs emulates basic control flow on these threads. For example, filters can select between \lstinline!if! and \lstinline!else! branches, a data-dependent subset of threads for recirculation in a \lstinline!while! loop, or a set of threads to be dropped entirely. When routing decisions send a thread's live values to a CU, the CU does its computation and yields a modified thread record---the new state of the registers after the basic block. \paragraph{Vectorization} Dataflow architectures are frequently network-limited~\cite{zhang2019scalable}, so efficiently using network resources is critical. Because Aurochs~\cite{vilim2021aurochs} is built on Plasticine~\cite{prabhakar2017plasticine}, its network and per-unit input buffers are a mixture of vector (\SI{512}{b}) and scalar (\SI{32}{b}) resources. However, Aurochs cannot use the lower-overhead scalar network for dataflow-threads code because its dataflow threads lack the \emph{hierarchy} needed to associate a scalar with vectorized data. Prior work (Spatial~\cite{koeplinger2018spatial} and SARA~\cite{zhang2021sara}) used nested parallel patterns, which provided the hierarchy to associate a scalar with vectorized data and therefore use the scalar network. However, SARA limits the scalar-vector dataflow transition to counter-expansion and the vector-scalar transition to associative (e.g., addition) reduction. This \emph{inflexibility} means that SARA cannot map other dataflow transitions like vector-to-scalar filtering or scalar-to-vector merging, which are needed to differentiate between common-path and rare-path code. \paragraph{Control Flow} SARA~\cite{zhang2021sara} lowers the Spatial~\cite{koeplinger2018spatial} language to streaming dataflow \emph{contexts,} which were each mapped to one basic block. First, SARA maps the instructions inside the basic block to pipeline stages.
Then, SARA extracts control logic, which for parallel-patterns code is a nested series of counter-driven loops, and maps it to specialized control hardware. CUs provide a single vector context while MUs and AGs provide multiple \emph{scalar} contexts designed for address generation and done-token accumulation. The context control hardware Aurochs inherits from Plasticine~\cite{prabhakar2017plasticine} can make one decision per cycle (e.g., increment/reset counter) that is shared across all 16 lanes, and the hardware only tracks one set of state. Together, these limitations leave SARA's compiled vRDA code with the same lack of flexibility as SIMD instructions on CPUs~\cite{lomont2011introduction}, which are far less flexible than the SIMT model of GPUs~\cite{nvidia2013cuda}. Finally, Aurochs's ad hoc filtering mechanism cannot support arbitrary compiled control flow, because the machine model lacks an efficient mechanism for grouping and synchronizing threads.
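The filter-and-merge emulation of control flow is easiest to see in software terms. The following sketch (plain Python; the record layout and function names are our own illustration, not Aurochs hardware) models threads as records of live values and emulates an \lstinline!if!/\lstinline!else! with one filter per branch followed by a merge:
\begin{lstlisting}[language=Python]
# A thread is a record (dict) of live values; streams are lists of records.
def filter_stream(threads, pred):
    """Forward only the thread records whose live values satisfy pred."""
    return [t for t in threads if pred(t)]

def merge(a, b):
    """Interleave two streams; modeled here as simple concatenation."""
    return a + b

threads = [{"x": v} for v in (1, -2, 3, -4)]
then_branch = [{**t, "x": t["x"] * 10}
               for t in filter_stream(threads, lambda t: t["x"] > 0)]
else_branch = [{**t, "x": 0}
               for t in filter_stream(threads, lambda t: t["x"] <= 0)]
print(merge(then_branch, else_branch))  # each record took exactly one branch
\end{lstlisting}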
\section{Introduction} Located at the interface between the organism and the surrounding environment, the skin constitutes the first line of defense against external threats, including irritants and pathogens. In order to control potential colonization of the skin surface by pathogens, the epidermal cells, called keratinocytes, produce antimicrobial peptides (AMPs) \cite{pazgier_human_2006}. The physiologically acidic skin surface pH also contributes to controlling the growth of bacterial populations \cite{proksch_ph_2018,korting_differences_1990}. Another contribution to the defense against pathogen colonization comes from commensal bacteria in the community of microorganisms living on the skin, commonly referred to as the skin microbiome. Over the past decade, several studies have highlighted the key role played by such commensal bacterial species in defending against invading pathogens, as well as their contribution to the regulation of the immune system \cite{lai_commensal_2009,cogen_staphylococcus_2010,lai_activation_2010,kong_skin_2011,belkaid_dialogue_2014,byrd_human_2018}. Alterations in the composition of the skin microbiome resulting in a dominance by a pathogenic species, also called dysbiosis, have been associated with skin conditions such as acne or atopic dermatitis (AD) \cite{leyden_propionibacterium_1975,kong_temporal_2012}. In the case of AD, the patient's skin is often colonized by \textit{Staphylococcus aureus} (\textit{S. aureus}), especially on the lesions \cite{kong_temporal_2012}. Treatment strategies targeting non-specific elimination of cutaneous microflora, such as bleach baths, have shown conflicting results regarding their capacity to reduce the disease severity \cite{chopra_efficacy_2017}. On the other hand, treatments involving the introduction of commensal species, like \textit{Staphylococcus hominis}~\cite{nakatsuji_development_2021}, on the skin surface appear promising. Accordingly, the interactions between the commensal populations, pathogens and skin cells seem to be at the heart of maintaining microbiome balance. There is therefore a need to further investigate those interactions and the drivers of dominance of one population over others. Unfortunately, it is challenging to perform \textit{in vitro} experiments involving more than one or two different species, even more so on skin explants or skin equivalents. Mathematical models of population dynamics have been developed and used for more than 200 years \cite{malthus_essay_1798}. Here, we introduce a model based on ordinary differential equations (ODEs), describing the interactions of a population of commensal species with a population of opportunistic pathogens and the skin cells. We study the factors influencing the dominance of one population over the other on a microbiologically relevant timescale of a couple of days, corresponding to biological experimental data. More specifically, we identify constraining relationships on the parameter values, based on published experimental data \cite{nakatsuji_antimicrobials_2017,kohda_vitro_2021}, corresponding to special cases of our model, allowing us to reduce the parametric dimension of our model from 13 to 5 parameters. Interestingly, we observe in the reduced model a phenomenon of meta-stability \cite{TK08neuron,RSNGW15cmsb}, also called quasi-stability, in which the seemingly stable state reached about 30 hours after the initiation of the experiment is followed after 300 hours by a reversed stable state.
On the time scale of the experiments, we show that certain changes in the environment, like an elevation of skin surface pH, create favorable conditions for the emergence and colonization of the skin by the opportunistic pathogen population. Such predictions can help identify potential therapeutic strategies for the treatment of skin conditions involving microbiome dysbiosis, and underscore the importance of meta-stable states in the real biological processes at their different time scales. \section{Initial ODE model with 13 parameters} The model built in this paper considers two types of bacterial populations. The first population, $S_c$, groups commensal bacterial species having an overall beneficial effect for the skin, and the second population, $S_p$, represents opportunistic pathogens. The differential equations for both bacterial populations are based on the common logistic growth model \cite{zwietering_modeling_1990}, implicitly accounting for the limitations in food and space. The limited resources are included in the parameters $K_{sc}$ and $K_{sp}$, representing the optimum concentrations of the populations in a given environment, considering the available resources. The bactericidal effect of antimicrobial peptides (AMPs) produced by skin cells, $Amp_h$, on $S_p$ is included with a Hill function. This type of highly non-linear function has been used previously to model the effect of antibiotics on bacterial populations \cite{meredith_bacterial_2015}. For the sake of simplicity, the concentration of AMPs produced by skin cells is introduced as a constant parameter, $[Amp_h]$, in the model. It represents the average concentration of these AMPs among surface cells, under a given human genetic background and environmental conditions. Several studies revealed that commensal bacterial populations, like \textit{S. epidermidis} or \textit{S. hominis}, are also able to produce AMPs targeted against opportunistic pathogens, such as \textit{S. aureus} \cite{cogen_selective_2010,nakatsuji_antimicrobials_2017}. For these reasons, we introduce in the model AMPs of bacterial origin, $Amp_b$, acting similarly to $Amp_h$ on the pathogenic population $S_p$. $Amp_b$ is produced at rate $k_c$ by $S_c$, and degraded at rate $d_a$. Furthermore, we include a defense mechanism of $S_p$ against $S_c$ with a direct killing effect. Altogether, this gives us the following ODE system with 3 variables and 13 parameters, all taking non-negative values: \begin{System}\label{fullSystem} \frac{d [S_c]}{dt} = \left( r_{sc} \left( 1 - \frac{[S_c]}{K_{sc}} \right) - \frac{d_{sc} [S_p]}{C_1 + [S_p]} \right) [S_c] \\ \\ \frac{d [S_p]}{dt} = \left(r_{sp} \left( 1 - \frac{[S_p]}{K_{sp}} \right) - \frac{d_{spb} [Amp_b]}{C_{ab} + [Amp_b]} - \frac{d_{sph} [Amp_h]}{C_{ah} + [Amp_h]} \right) [S_p]\\ \\ \frac{d [Amp_b]}{dt} = k_c [S_c] - d_a [Amp_b] \end{System} \begin{table}[t] \caption{List of the parameters and variables of our mathematical model with their units.
CFU = Colony forming unit, AU = Arbitrary Unit, ASU = Arbitrary Surface Unit}\label{tab13param} \begin{tabular}{|c|p{10cm}|} \hline \textbf{Variable} & \textbf{Interpretation (unit)}\\ \hline $[S_c]$ & Surface apparent concentration of $S_c$ ($CFU.ASU^{-1}$)\\ $[S_p]$ & Surface apparent concentration of $S_p$ ($CFU.ASU^{-1}$)\\ $[Amp_b]$ & Concentration of $Amp_b$ ($AU.ASU^{-1}$)\\ \hline \textbf{Parameter} & \textbf{Interpretation (unit)}\\ \hline $r_{sc}$ & Growth rate of $S_c$ ($h^{-1}$) \\ $r_{sp}$ & Growth rate of $S_p$ ($h^{-1}$) \\ $K_{sc}$ & Optimum concentration of $S_c$ ($CFU.ASU^{-1}$) \\ $K_{sp}$ & Optimum concentration of $S_p$ ($CFU.ASU^{-1}$) \\ $d_{sc}$ & Maximal killing rate of $S_c$ by $S_p$ ($h^{-1}$) \\ $C_1$ & Concentration of $S_p$ inducing half the maximum killing rate $d_{sc}$ ($CFU.ASU^{-1}$) \\ $d_{spb}$ & Maximal killing rate of $S_p$ by $Amp_b$ ($h^{-1}$) \\ $C_{ab}$ & Concentration of $Amp_b$ inducing half the maximum killing rate $d_{spb}$ ($AU.ASU^{-1}$) \\ $d_{sph}$ & Maximal killing rate of $S_p$ by $Amp_h$ ($h^{-1}$) \\ $C_{ah}$ & Concentration of $Amp_h$ inducing half the maximum killing rate $d_{sph}$ ($AU.ASU^{-1}$) \\ $[Amp_h]$ & Concentration of AMPs produced by the skin cells ($AU.ASU^{-1}$)\\ $k_c$ & Production rate of $Amp_b$ by $S_c$ ($AU.h^{-1}.CFU^{-1}$) \\ $d_a$ & Degradation rate of $Amp_b$ ($h^{-1}$)\\ \hline \end{tabular} \end{table} The model is illustrated in Fig. \ref{fig:ModelOverview} and Table \ref{tab13param} recapitulates the variables and the parameters with their units. Such a model cannot be solved analytically. Furthermore, the use of optimization algorithms to infer the 13 parameter values from data resulted in many valid sets of parameter values. Therefore, it is clearly necessary to reduce the number of free parameters by identifying some of them, in order to be able to analyze the model. \begin{figure} \centering \includegraphics[width=\textwidth]{ModelOverview.png} \caption{Model overview: green arrows represent production and red T-lines represent killing effects.} \label{fig:ModelOverview} \end{figure} \section{Using published experimental data to define relations between model parameters by steady-state reasoning} The amount of quantitative experimental data available for the model calibration is very limited due to the difficulty of carrying out experiments involving co-cultures of different bacterial species. Most of the published work focuses on single species or on measuring the relative abundances of species living on the skin, which are highly variable between individuals and skin sites \cite{grice_topographical_2009}. In the case of AD specifically, \textit{S. aureus} is considered pathogenic and \textit{S. epidermidis} commensal. Published data exist, however, for those species, which we can use to constrain the parameter values of the model. Two series of \textit{in vitro} experiments are considered \cite{nakatsuji_antimicrobials_2017,kohda_vitro_2021}. While \textit{in vitro} cultures, even on epidermal equivalents, do not entirely capture the native growth of bacteria on human skin, they provide useful quantitative data that would be very difficult to measure \textit{in vivo}. In the first experiment \cite{kohda_vitro_2021}, mono-cultures and co-cultures of \textit{S. epidermidis} and \textit{S. aureus} were allowed to develop on a 3D epidermal equivalent. Table \ref{tabExpDataKohda} recapitulates the population sizes of the two species measured after 48 hours of incubation.
Kohda \textit{et al.} also performed another co-culture experiment where \textit{S. epidermidis} was inoculated 4 hours prior to \textit{S. aureus} in the media. This data is not used here as it requires additional manipulation to match the situation represented by the model. However, it would be interesting to use it in the future for model validation. In the second experiment \cite{nakatsuji_antimicrobials_2017}, the impact of human (LL-37) and bacterial (\textit{Sh}-lantibiotics) AMPs on \textit{S. aureus} survival was studied. The experiments were performed \textit{in vitro}, and the \textit{S. aureus} population size was measured after 24 hours of incubation. Table \ref{tabExpDataNakatsuji} summarizes their observations. \begin{table} \caption{Experimental data from Kohda et al. \cite{kohda_vitro_2021} used for identifying parameter values. }\label{tabExpDataKohda} \begin{center} \begin{tabular}{l|c|c} & \textbf{\textit{S. epidermidis} (CFU/well)} & \textbf{\textit{S. aureus} (CFU/well)}\\ \hline \textbf{Mono-cultures} & $4.10^8$ & $3.10^9$ \\ \textbf{Co-cultures} & $1.10^8$ & $1.10^9$ \\ \end{tabular} \end{center} \end{table} \begin{table} \caption{Experimental data from Nakatsuji et al. \cite{nakatsuji_antimicrobials_2017} used for identifying parameter relations. }\label{tabExpDataNakatsuji} \begin{center} \begin{tabular}{c|c|c} \textbf{\textit{Sh}-lantibiotics ($\mu M$)} & \textbf{LL-37 ($\mu M$) }& \textbf{\textit{S. aureus} (CFU/mL)}\\ \hline 0 & 4 & $10^9$\\ 0 & 8 & $6.10^5$\\ 0.32 & 0 & $5.10^8$\\ 0.64 & 0 & $3.10^3$ \end{tabular} \end{center} \end{table} \subsection{Parameter values inferred from mono-culture experiment data} \label{subsec:Param_mono} We first consider the mono-culture experiments from Kohda \textit{et al.} \cite{kohda_vitro_2021}, representing the simplest experimental conditions. \textit{S. epidermidis} is a representative of the commensal population $S_c$, and \textit{S. aureus} of the pathogenic one, $S_p$. Since the two species are not interacting, the set of equations simplifies to: \begin{System} \frac{d [S_c]}{dt} = \left( r_{sc} \left( 1 - \frac{[S_c]}{K_{sc}} \right) \right) [S_c] \\ \\ \frac{d [S_p]}{dt} = \left(r_{sp} \left( 1 - \frac{[S_p]}{K_{sp}} \right) \right) [S_p] \end{System} At steady-state, the population concentrations are either zero or equal to their optimum capacities ($K_{sc}$ or $K_{sp}$) when the initial population concentration is non-zero. Given the rapid growth of bacterial populations, the experimental measurements done after 48 hours of incubation can be considered as corresponding to a steady-state, which gives: \begin{equation} \label{Kse} K_{sc} = 4.10^8 \; CFU.ASU^{-1} \end{equation} \begin{equation} \label{Ksa} K_{sp} = 3.10^9 \; CFU.ASU^{-1} \end{equation} \subsection{Parameter relations inferred from experimental data on AMP} The experimental conditions of Nakatsuji \textit{et al.} \cite{nakatsuji_antimicrobials_2017} correspond to the special case where no commensal bacteria are alive in the environment and only the bacterial AMPs are present, in addition to those produced by the skin cells. Our system of equations then reduces to: \begin{equation} \frac{d [S_p]}{dt} = \left(r_{sp} \left( 1 - \dfrac{[S_p]}{K_{sp}} \right) - \frac{d_{spb} [Amp_b]}{C_{ab} + [Amp_b]} - \frac{d_{sph} [Amp_h]}{C_{ah} + [Amp_h]} \right) [S_p] \end{equation} The concentrations in LL-37 and \textit{Sh}-lantibiotics, translated in our model into $[Amp_h]$ and $[Amp_b]$ respectively, are part of the experimental settings.
Therefore, we consider them as constants over time. At steady state, we get: \begin{equation} \label{NakaEqSS} [S_p]^* = 0 \quad \textrm{or} \quad [S_p]^* = K_{sp} \left ( 1 - \frac{d_{spb} [Amp_b]}{r_{sp} (C_{ab} + [Amp_b])} - \frac{d_{sph} [Amp_h]}{r_{sp} (C_{ah} + [Amp_h])} \right) \end{equation} Let us first focus on the special case where no \textit{Sh}-lantibiotics were introduced in the media, translating into $[Amp_b] = 0$ in our model. We consider again that the biological observations after 24 hours of incubation correspond to steady-state and substitute the experimental values measured \\ $[Amp_h] = 4 \,\mu M$ ; $[S_p]^* = 10^9$ CFU, and $[Amp_h] = 8 \,\mu M$ ; $[S_p]^* = 6.10^5$ CFU, \\ together with the values of $K_{sc}$ and $K_{sp}$ (from \eqref{Kse} and \eqref{Ksa}) in \eqref{NakaEqSS}, to obtain the following equations: \begin{System} \frac{d_{sph}}{r_{sp}} = \frac{4 + C_{ah}}{6} \\ \\ \frac{d_{sph}}{r_{sp}} = \frac{(10^4 - 2)(C_{ah} + 8)}{8.10^4} \end{System} which reduce to $C_{ah} = 8$ and $\frac{d_{sph}}{r_{sp}} = 2$.\\ Following the same method with the experimental conditions without any LL-37 (i.e. $[Amp_h] = 0$) and using two data points \\ ($[Amp_b] = 0.32 \,\mu M$ ; $[S_p]^* = 5.10^8$ CFU) and ($[Amp_b] = 0.64 \; \mu M$ ; $[S_p]^* = 3.10^3$ CFU), \\ we get $C_{ab} = 0.16$ and $\frac{d_{spb}}{r_{sp}} = \frac{5}{4}$. It is notable that the maximum killing rates of $S_p$ by $Amp_b$ and $Amp_h$ are both proportional to the $S_p$ growth rate. Interestingly, such a proportional relation has been observed experimentally between the killing rate of \textit{Escherichia coli} by an antibiotic and the bacterial growth rate \cite{tuomanen_rate_1986}. To be consistent with the ranges of \textit{Sh}-lantibiotic concentrations described in Nakatsuji \textit{et al.} \cite{nakatsuji_antimicrobials_2017}, $[Amp_b]$ should take positive values below 10. Given that $[Amp_b]^* = \frac{k_c [S_c]^*}{d_a}$ at steady-state, and that $K_{sc} = 4.10^8$ CFU is the upper bound for $[S_c]^*$, we obtain the following constraint: \begin{equation} \frac{k_c}{d_a} \leq \frac{1}{4.10^7} \end{equation} \subsection{Parameter relations inferred from co-culture data} The initial model described earlier is representative of the experimental settings of the co-culture conditions described in Kohda \textit{et al.} \cite{kohda_vitro_2021}. At steady-state, the system \eqref{fullSystem} gives: \begin{equation}\label{SEcoKohda} [S_c]^* = 0 \quad \textrm{or} \quad [S_c]^* = K_{sc} \left ( 1 - \frac{d_{sc} [S_p]^*}{r_{sc} (C_1 + [S_p]^*)} \right) \end{equation} \begin{equation}\label{SAcoKohda} [S_p]^* = 0 \quad \textrm{or} \quad [S_p]^* = K_{sp} \left ( 1 - \frac{d_{spb} [Amp_b]}{r_{sp} (C_{ab} + [Amp_b])} - \frac{d_{sph} [Amp_h]}{r_{sp} (C_{ah} + [Amp_h])} \right) \end{equation} \begin{equation} {[Amp_b]}^* = \frac{k_c [S_c]^*}{d_a} \end{equation} Considering that what is observed experimentally after 48 hours of incubation is at steady-state, one can replace $[S_c]^*$ and $[S_p]^*$ with the experimental data point (\textit{S. epidermidis} $= 10^8$ CFU; \textit{S.
aureus} $= 10^9$ CFU) in \eqref{SEcoKohda} and \eqref{SAcoKohda} to get the following parameter relations: \begin{equation}\label{paramRKohda1} \frac{d_{sc}}{r_{sc}} = \frac{3}{4.10^9} C_1 + \frac{3}{4} \end{equation} \begin{equation}\label{paramRKohda2} \frac{2}{3} r_{sp} = \frac{d_{sph} [Amp_h]}{C_{ah} + [Amp_h]} + \frac{10^8 d_{spb} k_c}{d_a C_{ab}+10^8 k_c} \end{equation} By integrating the values found for $C_{ah}$ and $C_{ab}$, and the relations involving $d_{sph}$ and $d_{spb}$ into \eqref{paramRKohda2}, we end up with: \begin{equation}\label{relation_da} d_a = 10^8 k_c \, \frac{56 + 31 [Amp_h]}{2.56\,(4-[Amp_h])} \quad \textrm{with } [Amp_h] < 4 \end{equation} \section{Reduced model with 5 parameters} \label{sec:ReducedModel} Using the previously mentioned experimental data, and assuming they represent steady-state conditions of the initial model \eqref{fullSystem}, we have reduced the parametric dimension of the model from 13 to 5. Specifically, out of the original 13 parameters, we could define the values of 4 of them, and derive 4 functional dependencies on the values of the remaining parameters, as summarized in Table \ref{tabReducedModel}. \begin{table} \caption{Summary of the parameter relations embedded in the reduced model.}\label{tabReducedModel} \begin{center} \begin{tabular}{c|c} \textbf{Parameter} & \textbf{Value or relation to other parameters}\\ \hline $K_{sc}$ & $4.10^8$ \\ $K_{sp}$ & $3.10^9$ \\ $C_{ah}$ & 8 \\ $C_{ab}$ & 0.16 \\ $d_{sph}$ & $2 \, r_{sp}$\\ $d_{spb}$ & $\frac{5}{4} \, r_{sp}$\\ $d_{sc}$ & $\displaystyle{r_{sc} \left (\frac{3}{4.10^9} \; C_1 + \frac{3}{4} \right )}$ \\ $d_a$ & $\displaystyle{10^8 k_c \, \frac{56 + 31 [Amp_h]}{2.56\,(4-[Amp_h])}}$ with $[Amp_h] < 4$ \end{tabular} \end{center} \end{table} In our skin microbiome model \eqref{fullSystem}, the parameters that remain unknown are thus: \begin{itemize} \item $r_{sc}$, the growth rate of $S_c$, which can reasonably take values between 0 and 2 $h^{-1}$ following \cite{czock_mechanism-based_2007,campion_pharmacodynamic_2005}; \item $r_{sp}$, the growth rate of $S_p$, taking similar values in the interval between 0 and 2 $h^{-1}$; \item $C_1$, the concentration of $S_p$ that induces half the maximum killing rate $d_{sc}$ (in $CFU.ASU^{-1}$) and is thus bounded by the optimum concentration of $S_p$, i.e.~$K_{sp} = 3.10^9 \; CFU.ASU^{-1}$, as calculated in section \ref{subsec:Param_mono} from \cite{kohda_vitro_2021}; \item $k_c$, the production rate of $[Amp_b]$, chosen to take values between $0$ and $0.1 \; AU.h^{-1}.CFU^{-1}$, and shown to have a limited impact on the steady-state values in section \ref{subsec:Sensitivity}; \item $[Amp_h]$, the concentration in $AU.ASU^{-1}$ of AMPs produced by skin cells, between 0 and 4 (equation \eqref{relation_da}). \end{itemize} \subsection{Simulations at the time scale of the experiments} In order to reproduce what was observed by Kohda et al. \cite{kohda_vitro_2021}, that is, a dominant pathogenic population after 50 hours, which can thus be considered as dysbiosis in our skin microbiome model, it is sufficient to fix a relatively low concentration of AMPs produced by the skin cells, i.e.~$[Amp_h]=1.5$, and some fixed values for the four other parameters chosen in their intervals described above. Among a continuum of possible solutions, we chose $r_{sc} = 0.5,\, r_{sp} = 1,\, C_1 = 5.10^6,\, k_c = 0.01$. The doses of \textit{S. epidermidis} and \textit{S.
aureus} applied at the surface of the 3D epidermal equivalent at the beginning of the experiment ($10^5\ CFU/mL$ and $10^3\ CFU/mL$ respectively) are used as the initial concentrations for $[S_c]$ and $[S_p]$. Fig. \ref{fig:Sim_Kohda} shows the result of a numerical simulation\footnote{All computation results presented in this paper have been done using the BIOCHAM software with a notebook runnable online and available at \url{https://lifeware.inria.fr/wiki/Main/Software\#CMSB22b}.} of our model with those parameters, which is in accordance with the co-culture experiments of Kohda et al. \cite{kohda_vitro_2021} and reproduces a consistent qualitative behavior. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Sim_Kohda_clean.png} \caption{Numerical simulation of the reduced ODE model over 50 hours, with initial conditions $[S_c]=10^5,\ [S_p]=10^3,\ [Amp_b]=0$ and parameter values $[Amp_h]=1.5,\ r_{sc} = 0.5, \, r_{sp} = 1 , \, C_1 = 5.10^6 , \, k_c = 0.01,$ to fit Kohda et al. co-culture data \cite{kohda_vitro_2021} (Table \ref{tabExpDataKohda}).} \label{fig:Sim_Kohda} \end{figure} Our model can also be used to reproduce what is considered a balanced microbiome, corresponding to the commensal population being significantly more abundant than the pathogenic one. This requires modifying some parameter values to represent a less virulent pathogenic population, closer to the physiological context, given that the experiments from Kohda et al. \cite{kohda_vitro_2021} were performed using a virulent methicillin-resistant \textit{S. aureus} strain. We chose $r_{sp} = 0.5$, $C_1 = 2.10^8$ and a higher production of AMPs by the skin cells, $[Amp_h] = 3$, to compensate for feedback loops or stimuli that might be missing in the 3D epidermal equivalent used. Fig. \ref{fig:Sim_default} shows a simulation trace obtained under those conditions, which clearly indicates the dominance of the non-pathogenic population. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Sim_shortTimeScale_Ah3_Clean.png} \caption{Numerical simulation of the reduced ODE model over 50 hours, with initial conditions $[S_c]=10^5,\ [S_p]=10^3,\ [Amp_b]=0$ and parameter values $r_{sc} = r_{sp} = 0.5 , \, C_1 = 2.10^8 , \, k_c = 0.01 ,\ [Amp_h] = 3$ corresponding to Kohda et al. experiments \cite{kohda_vitro_2021}.} \label{fig:Sim_default} \end{figure} \subsection{Parameter sensitivity and robustness analyses} \label{subsec:Sensitivity} Since the previous simulations rely on some choices of values for the unknown parameters, it is important to evaluate the robustness of the predictions of our model by performing an analysis of sensitivity to the parameter values. This is possible in BIOCHAM by specifying the property of interest in quantitative temporal logic \cite{RBFS11tcs}. The property of interest here is the stabilization, at the time scale of the experiments around 48 hours, of the bacterial population sizes to the values given by simulation (Fig. \ref{fig:Sim_default}). Here we use the temporal logic formula:\\ $F(Time==40 \wedge NSc = x1 \wedge NSp = y1 \wedge F(G(NSc = x2 \wedge NSp = y2)))$\\ and objective values equal to 1 for the free variables $x1, x2, y1, y2$, to express that the normalized variables $NSc$ and $NSp$, i.e. current values of $Sc$ and $Sp$ divided by their expected value at steady state, respectively $10^8$ and $10^9$ in the pathogenic case of the Kohda et al.
experiments, are reached (F, finally) at a time around 40 and finally at the end of the time horizon (FG) of 50 hours. On a given simulation trace, the free variables of the formula have a validity domain (here fixed values) which is used to define a continuous degree of satisfaction of the property as a distance to the objective values, and a robustness degree by sampling parameter values around their nominal values \cite{RBFS11tcs}. The sensitivity analysis (Table \ref{tabSensitivity}) reveals that the dominance of the commensal population is highly sensitive to variations of the initial concentration of the pathogen. To a lesser extent, the dominant population is also sensitive to the growth rates ($r_{sc}$ and $r_{sp}$) and the concentration of human AMPs ($[Amp_h]$). On the other hand, $C_1$ and $k_c$ do not seem to affect the relative proportions of the bacterial populations. \begin{table} \caption{Sensitivity of the model to variations of the parameters and initial concentrations for the property of reaching the same values at time 40 and time horizon 50 as in Fig.~\ref{fig:Sim_default}.}\label{tabSensitivity} \begin{center} \begin{tabular}{c|c|c} \textbf{Parameter} & \textbf{Coefficient of variation} & \textbf{Robustness degree}\\ \hline $r_{sc}$ & 0.2 & 0.62\\ $r_{sp}$ & 0.2 & 0.57\\ $C_1$ & 10 & 0.95\\ $k_c$ & 1 & 0.95\\ $[Amp_h]$ & 0.2 & 0.53\\ $[S_p](t=0)$ & 10 & 0.23 \\ $[S_c](t=0)$ & 10 & 0.58\\ $(r_{sc},r_{sp})$ & 0.2 & 0.48\\ $([S_c](t=0),[S_p](t=0))$ & 10 & 0.31\\ \end{tabular} \end{center} \end{table} \subsection{Meta-stability revealed by simulation on a long time scale} Interestingly, by extending the simulation time horizon to a longer time scale of 500 hours, one can observe a meta-stability phenomenon, shown in Fig.~\ref{fig:metastab}. The seemingly stable state observed in Fig.~\ref{fig:Sim_default} at the relevant time scale of 50 hours of the experiments, is thus not a mathematical steady state, but a meta-stable state, also called quasi-stable state, that slowly evolves, with $\frac{d[S_c]}{dt} \neq 0$ and $\frac{d[S_p]}{dt} \neq 0$, towards a true stable state of the model reached around 300 hours in which the population densities are reversed. The $S_c$ population almost reaches its optimum capacity $K_{sc}$ after approximately 30 hours and stays relatively stable for around 100 hours more, that is over 4 days, which can reasonably be considered stable on the microbiological time scale. Meanwhile, the $S_p$ population is kept at a low concentration compared to $S_c$, even though it is continuously increasing, eventually leading to its overtaking of $S_c$. By varying the parameter values, it appears that this meta-stability phenomenon emerges above a threshold value of $2.5$ for $[Amp_h]$, that is for almost half of its possible values (see section \ref{sec:ReducedModel}). That phenomenon of meta-stability, also called quasi-stability, is a classical notion of dynamical systems theory, particularly well-studied in the case of oscillatory systems for which analytical solutions exist, and as models of brain activity \cite{TK08neuron}.
It is worth noting that it has also been considered in the computational systems biology community with respect to model reduction methods based on the identification of different regimes corresponding to different preponderant terms of the ODEs, for which simplified dynamics can be defined, and chained within a hybrid automaton \cite{RSNGW15cmsb}. More generally, this raises the question of the existence and importance of meta-stability in real biological processes, as well as the validity of the steady-state assumptions made in mathematical modeling methods to fit the models to the observed experimental data. \begin{figure} \centering \includegraphics[width = 0.8\textwidth]{quasi-stability_Ah3_clean.png} \caption{Numerical simulation of the reduced ODE model on a longer time scale of 500 hours, with the same initial concentrations and parameter values as in Fig.~\ref{fig:Sim_default}, showing an inversion of the dominant bacterial population after 220 hours.} \label{fig:metastab} \end{figure} \section{Conditions favoring the pathogenic population} Whether the dysbiosis observed in AD is the cause or the result of the disease is unclear \cite{kobayashi_dysbiosis_2015,koh_skin_2021}. Infants developing AD do not necessarily have more \textit{S. aureus} present on their skin prior to the onset of the disease than the healthy group \cite{kennedy_skin_2017}. This suggests that atopic skin has some characteristics enabling the dominance of \textit{S. aureus} over the other species of the microbiome. To test this hypothesis, we investigate two changes of the skin properties observed in AD patients (skin surface pH elevation \cite{eberlein-konig_skin_2000} and reduced production of AMPs \cite{ong_endogenous_2002}) and their impact on the dominant species at steady-state. More specifically, we study the behavior of the system following the introduction of a pathogen, and whether the pathogen will colonize the medium depending on the initial concentrations of the bacterial populations and the particular skin properties mentioned before. \subsection{Skin surface pH elevation} According to Proksch \cite{proksch_ph_2018}, the physiological range for skin surface pH is 4.1-5.8. However, in certain skin conditions, like AD, an elevation of this pH has been observed. Dasgupta \textit{et al.} studied \textit{in vitro} the influence of pH on the growth rates of \textit{S. aureus} and \textit{S. epidermidis}~\cite{dasgupta_16502_2020}. Their experimental results show that, when the pH is increased from 5 to 6.5, the growth rate of \textit{S. epidermidis} is multiplied by 1.8, whereas that of \textit{S. aureus} is multiplied by more than 4 (Table \ref{tabDasgupta}). \begin{table} \caption{Experimental data from Dasgupta et al. \cite{dasgupta_16502_2020} showing the influence of pH on growth rates of \textit{S. epidermidis} and \textit{S. aureus}}\label{tabDasgupta} \begin{center} \begin{tabular}{c|c|c|} \multirow{2}{0.6cm}{\textbf{pH}} & \multicolumn{2}{c|}{\textbf{Growth rate ($\Delta$OD/hour)}}\\ \cline{2-3} & \textit{\textbf{S. aureus}} & \textit{\textbf{S. epidermidis}}\\ \hline 5 & 0.03 & 0.05\\ 5.5 & 0.04 & 0.07\\ 6 & 0.09 & 0.08\\ 6.5 & 0.13 & 0.09\\ 7 & 0.14 & 0.10\\ \end{tabular} \end{center} \end{table} Their data can be used to select values for the growth rates $r_{sc}$ and $r_{sp}$ in our model, corresponding to healthy skin with a skin surface pH of 5 and compromised skin with a pH of 6.5.
Because the experiments from Dasgupta \textit{et al.} were performed \textit{in vitro} and the bacterial population sizes measured with optical density (OD) instead of CFU, the growth rates cannot be directly translated into $r_{sc}$ and $r_{sp}$. We use $r_{sc} = 0.5$ as the reference value for the commensal growth rate at pH 5, following the previous simulations (Fig. \ref{fig:Sim_default}). Maintaining the ratio between the two population growth rates at pH 5, and applying the multiplying factors associated with the pH elevation in Dasgupta \textit{et al.}'s experimental data, we can define two sets of values for $r_{sc}$ and $r_{sp}$: \begin{equation*} \textrm{skin surface pH of 5} \quad \Rightarrow r_{sc} = 0.5, \, r_{sp} = 0.3 \end{equation*} \begin{equation*} \textrm{skin surface pH of 6.5} \quad \Rightarrow r_{sc} = 0.9, \, r_{sp} = 1.3 \end{equation*} Considering the healthy skin scenario with a skin surface pH of 5, the influence of the bacterial populations' initial concentrations on the dominant species after 50 hours is evaluated using the temporal logic formula: $$F(\textrm{Time}==40 \wedge ([S_c] > u1 \, [S_p]) \wedge F(G([S_c] > u2 \, [S_p])))$$ where $u1$ and $u2$ are free variables representing the abundance factors between both populations, evaluated at Time~$=40$ and at the last time point of the trace respectively (F stands for finally and G for globally at all future time points), i.e. at the time horizon of the experiments of 50 hours. When given an objective value, e.g.~$u1=10$, the distance between that value and the validity domain of the formula, i.e. the set of values for $u1$ that satisfy the formula, provides a violation degree which is used to evaluate the satisfaction degree of the property. Here, we evaluate how much the temporal formula $F(\textrm{Time}==40 \wedge ([S_c] > u1 \, [S_p]) \wedge F(G([S_c] > u2 \, [S_p])))$, $u1 \rightarrow 10, \, u2 \rightarrow 10$, is satisfied given variations of the initial concentrations of the two populations (Fig. \ref{fig:landscape_lowpH}). The model predicts that, under the healthy skin condition, the commensal population will always dominate after 50 hours, except when introduced at a relatively low concentration ($<2.10^4$) while the initial concentration of the pathogenic population is high ($>5.10^5$). \begin{figure} \centering \includegraphics[width = 0.7\textwidth]{landscape_IC_lowPH_highRes.png} \caption{Landscape of satisfaction degree of the temporal formula corresponding to healthy skin with a skin surface pH of 5 ($r_{sc} = 0.5$ and $r_{sp} = 0.3$). The x and y axis represent variations of the initial quantities of $[S_p]$ and $[S_c]$ respectively. The color coding corresponds to the satisfaction degree of the temporal logic formula. Values used for the other parameters: $C_1 = 2.10^8$, $k_c = 0.01$, $[Amp_h] = 3$.} \label{fig:landscape_lowpH} \end{figure} The model predicts a higher vulnerability of the skin to invading pathogens when the skin surface pH is elevated. When evaluating the same temporal formula with growth rate values corresponding to a skin surface pH of 6.5, we observe that even when the initial concentration of commensals is high ($>10^7$), the pathogenic population is able to colonize the skin when introduced at a concentration as low as $3.10^4$ (Fig. \ref{fig:landscape_highpH}).
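The dominance criterion underlying these landscapes can also be checked independently of BIOCHAM with any standard ODE solver. The following sketch (Python with SciPy; a minimal illustration whose variable names and integrator settings are our own choices, not the BIOCHAM analysis itself) integrates the reduced model under the healthy-skin rates and tests $[S_c] > 10\,[S_p]$ at 40 and 50 hours for one pair of initial concentrations:
\begin{lstlisting}[language=Python]
# Minimal sketch (not the BIOCHAM analysis): integrate the reduced model
# and test the dominance criterion [Sc] > 10*[Sp] at t = 40 and 50 hours.
import numpy as np
from scipy.integrate import solve_ivp

r_sc, r_sp = 0.5, 0.3            # healthy-skin growth rates (pH 5)
C1, k_c, Ah = 2e8, 0.01, 3.0     # remaining free parameters
K_sc, K_sp, C_ah, C_ab = 4e8, 3e9, 8.0, 0.16
d_sph, d_spb = 2 * r_sp, 1.25 * r_sp                  # d_sph = 2 r_sp, d_spb = (5/4) r_sp
d_sc = r_sc * (3 / 4e9 * C1 + 0.75)                   # relation (paramRKohda1)
d_a = 1e8 * k_c * (56 + 31 * Ah) / (2.56 * (4 - Ah))  # relation (relation_da)

def rhs(t, y):
    Sc, Sp, Ab = y
    dSc = (r_sc * (1 - Sc / K_sc) - d_sc * Sp / (C1 + Sp)) * Sc
    dSp = (r_sp * (1 - Sp / K_sp) - d_spb * Ab / (C_ab + Ab)
           - d_sph * Ah / (C_ah + Ah)) * Sp
    dAb = k_c * Sc - d_a * Ab
    return [dSc, dSp, dAb]

sol = solve_ivp(rhs, (0, 50), [1e5, 1e3, 0.0], t_eval=[40, 50], rtol=1e-6)
for t, Sc, Sp in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t={t:.0f}h  Sc={Sc:.3e}  Sp={Sp:.3e}  dominant: {Sc > 10 * Sp}")
\end{lstlisting}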
\begin{figure} \centering \includegraphics[width = 0.7\textwidth]{landscape_IC_highPH_highRes.png} \caption{Landscape of satisfaction degree of the temporal formula corresponding to compromised skin with a skin surface pH of 6.5 ($r_{sc} = 0.9$ and $r_{sp} = 1.3$). The x and y axis represent variations of the initial quantities of $[S_p]$ and $[S_c]$ respectively. The color coding corresponds to the satisfaction degree of the temporal logic formula. Values used for the other parameters: $C_1 = 2.10^8$, $k_c = 0.01$, $[Amp_h] = 3$.} \label{fig:landscape_highpH} \end{figure} Such predictions highlight the protective effect of the acidic skin surface pH against invasion by pathogenic bacteria. \subsection{Reduced production of skin AMPs} As mentioned before, human keratinocytes constitutively produce AMPs as a defense against pathogens. In atopic dermatitis, the expression of AMPs is dysregulated, leading to lower concentration levels of AMPs in the epidermis \cite{nakatsuji_antimicrobials_2017}. Similarly to the analysis done for skin surface pH, our model can be used to study how the skin microbiome reacts to modulation of AMP production by the skin cells. Two situations are considered: an impaired production of AMPs by the skin cells ($[Amp_h] = 0.5$) and a higher concentration with $[Amp_h] = 3$. Using the same methodology as in the case of skin surface pH, the temporal logic formula $F(\textrm{Time}==40 \wedge ([S_c] > u1 \, [S_p]) \wedge F(G([S_c] > u2 \, [S_p])))$, $u1 \rightarrow 10, \, u2 \rightarrow 10$, is evaluated for variations of the initial concentrations of both populations for $[Amp_h] = 0.5$ and $[Amp_h] = 3$ (Fig. \ref{fig:landscape_Amph}).\\ \begin{figure} \centering \includegraphics[width = 0.9\textwidth]{landscape_IC_lowAMPh_highRes.png} \includegraphics[width = 0.9\textwidth]{landscape_IC_highAMPh_highRes.png} \caption{Landscape of satisfaction degree of the healthy condition formula with a low concentration of human AMPs on the upper graph ($[Amp_h] = 0.5$) and a high concentration at the bottom ($[Amp_h] = 3$). The x and y axis represent variations of the initial quantities of $[S_p]$ and $[S_c]$ respectively. The color coding corresponds to the satisfaction degree of the temporal logic formula. Values used for the other parameters: $r_{sc} = r_{sp} = 0.5$, $C_1 = 2.10^8$, $k_c = 0.01$.} \label{fig:landscape_Amph} \end{figure} The model predicts a slightly protective effect of $Amp_h$ against the colonization of the skin by a pathogenic population, for low initial concentrations. However, when both populations are introduced at high concentrations, the increase of $[Amp_h]$ appears to have the opposite effect of facilitating the colonization by the pathogenic population. This mixed effect might be due to the presence of $[Amp_h]$ in the constraint related to the degradation rate of $[Amp_b]$ (equation \eqref{relation_da}) and deserves further investigation. \section{Conclusion} The objective of this research is the identification of conditions which might favor or inhibit the emergence of pathogenic populations in the skin microbiome. Such analyses can lead to insights about potential treatment strategies aiming at restoring balance in a dysbiotic condition. We have developed a simple ODE model of the skin microbiome with 3 variables and 13 parameters, which could be reduced to 5 parameters by using published data from the literature and steady-state reasoning on the observations made in the biological experiments.
Our bacterial population model is generic in the sense that we did not take into account the peculiarities of specific bacterial populations, but relied on general formulations of adversarial population dynamics and influence factors. We showed through sensitivity analyses that our model predictions are particularly robust with respect to parameter variations. Perhaps surprisingly, we also showed that this simple model exhibits, over a large range of biologically relevant parameter values, a meta-stability phenomenon, revealed by allowing the simulation to continue for times one order of magnitude longer than the reported experimental times. This observation raises the question of the existence and importance of meta-stability phenomena in real biological processes, since a natural assumption made in mathematical modeling, and in model fitting to data, is that the experimental data are observed in states corresponding to true stable states of the mathematical model. \subsubsection*{Acknowledgments.} We are grateful to Mathieu Hemery, Aurélien Naldi and Sylvain Soliman for interesting discussions on this work. \bibliographystyle{splncs04}
\section{Introduction} An accurate determination of the charm mass plays an important role in the precise evaluation of several observables, from K and B decays to CKM matrix elements, and in lattice QCD. One of the usual techniques to extract the charm mass is to use the sum-rules approach, based on the relation between the moments of the production rate $R$ and the inverse powers of the squared mass of the $c$ quark, together with the Pad\'e method (see \cite{Dehnadi:2011gc,Masjuan:2009wy}). This approach must confront the fact that one has to employ the moments of the integral of $R$ over the whole energy range, which are \textit{global} properties, even though $R$ is only known experimentally in a finite window, up to a certain scale $\Lambda$. We propose instead to exploit the \textit{local} properties of $R$ through a new ``non-analytic reconstruction'' method \cite{Greynat:2010kx, Greynat:2011zp}. As we will show, this approach allows us to obtain local properties of the heavy-quark correlators at each point of the spectrum, with a quantified systematic error, and then to determine the charm mass directly from a $\chi^2$ regression on the experimental points. \section{Details of the method} \subsection{Non-analytic reconstruction} Let us consider the vector polarization function \begin{equation} \left(q_{\mu} q_{\nu}- q^2 g_{\mu\nu}\right)\ \Pi(q^2) = i \int\!{\,\rm d}^4\,x \e^{iqx}\ \left< 0 \left| \mathrm{T}\, j_\mu(x) \, j^\mu(0) \right| 0 \right> \;, \end{equation} with the current $j_\mu(x)=\overline{\psi}(x) \gamma_\mu \psi (x)$, which has a cut in the complex plane starting at $q^2=4 m^2$, where $m$ is the (pole) mass of the heavy quark considered. In QCD perturbation theory, it can be expanded as \begin{equation} \label{Pi} \Pi(q^{2}) =\Pi(0) + \Pi^{(0)}(q^{2}) + \left(\frac{\alpha_{s}}{\pi}\right)\Pi^{(1)}(q^{2})+\left(\frac{\alpha_{s}}{\pi}\right)^{2}\Pi^{(2)}(q^{2})\ + \left(\frac{\alpha_{s}}{\pi}\right)^{3}\Pi^{(3)}(q^{2}) + {\mathcal O}(\alpha_s^4 )\;, \end{equation} where only $\Pi^{(0)}$ and $\Pi^{(1)}$ are known analytically (for $z= q^2/4m^2$), \begin{equation} \Pi^{(0)}(z)=\frac{3}{16 \pi^2}\left[\frac{20}{9}+\frac{4}{3\ z}-\frac{4(1-z)(1+2\ z)}{3\ z}\ G(z)\right]\;, \end{equation} and \begin{multline} \Pi^{(1)}(z) = \frac{3}{16\pi^{2}}\left[\frac{5}{6}+\frac{13}{6z}-\frac{(1-z)(3+2z)}{z}G(z)+ \frac{(1-z)(1-16z)}{6z}G^{2}(z)\right.\\ -\,\left.\frac{(1+2z)}{6z}\left(1+2z(1-z)\frac{d}{dz}\right)\frac{I(z)}{z}\right], \end{multline} in which we used the auxiliary functions, \begin{align} G(z)&=\frac{2\ u\ \log u}{u^2-1}\\ I(z) & = 6\Big[\zeta_{3}+4\,\mbox{Li}_{3}(-u)+2\,\mbox{Li}_{3}(u)\Big] \nonumber\\ &\hspace*{0.8cm} -8\Big[2\,\mbox{Li}_{2}(-u)+\mbox{Li}_{2}(u)\Big]\ln u -2\Big[2\,\ln(1+u)+\ln(1-u)\Big]\ln^{2}u\,, \end{align} and \begin{equation} u=\frac{\sqrt{1-1/z}-1}{\sqrt{1-1/z}+1}\;. \end{equation} As has been shown in \cite{Greynat:2010kx, Greynat:2011zp}, even though the functions $\Pi^{(2)}$ and $\Pi^{(3)}$ are not known analytically, one can reconstruct them from their expansions around $q^2 \rightarrow 0$ (Taylor expansion), $q^2\rightarrow 4m^2$ (threshold expansion) and $q^2 \rightarrow \infty$ (OPE), as \begin{equation}\label{approx} \Pi^{(k)}(z) = \sum_{n=0}^{N_k^*} \Omega^{(k)}(n) \omega^n + \sum_{p,\ell} (-1)^\ell \left[\alpha_{p,\ell}^{(k)}{\,\rm Li}^{(\ell)}(p,\omega) -\beta_{p,\ell}^{(k)}{\,\rm Li}^{(\ell)}(p,-\omega)\right] + \mathcal{E}^{(k)}(N_k^*,\omega)\;. \end{equation} Let us examine this expression in more detail.
First, one defines the so-called \textit{conformal change of variable} \begin{equation}\label{conformal} z=\frac{4 \omega}{(1+\omega)^2}\ , \qquad \omega=\frac{1-\sqrt{1-z}}{1+\sqrt{1-z}}\ . \end{equation} This change of variables maps the cut $z$ plane onto the unit disc in the $\omega$ plane, as shown in Figure \ref{fig:omegaplan}. The physical cut $z\in [1, \infty[$ is mapped onto the circle $|\omega| = 1$; the point $z = 0$ is mapped to $\omega = 0$, $z =1$ to $\omega=1$, the limit $z \rightarrow +\infty \pm i \varepsilon$ to $\omega \rightarrow -1 \pm i \varepsilon$, and $z \rightarrow -\infty$ to $\omega \rightarrow -1$. \begin{figure}[h] \begin{center} \includegraphics[width=12cm]{Fig1.eps} \end{center} \caption{Conformal mapping between $z$ and $\omega$.} \label{fig:omegaplan} \end{figure} For both functions $\Pi^{(2)}$ and $\Pi^{(3)}$, Feynman diagram calculations around $q^2 = 0$ give the expansions up to an order $N_k^*$ (for $k=2,3$) \begin{equation} \Pi^{(k)}(z)\underset{q^2 \rightarrow 0}{=}\sum_{n=0}^{N_k^*}C^{(k)}(n)z^n + \mathcal{O}\left(z^{N_k^*+1}\right)\underset{\omega \rightarrow 0}{=} \sum_{n=0}^{N_k^*} \Omega^{(k)}(n) \omega^n+ \mathcal{O}\left(\omega^{N_k^*+1}\right)\, , \end{equation} where the relation between the coefficients $C^{(k)}(n)$ and $\Omega^{(k)}(n)$ is \begin{align} \Omega^{(k)}(n) &= (-1)^n \sum_{p=1}^{n} \frac{(-1)^p\;4^p\;\Gamma(n+p)}{\Gamma(2p)\Gamma(n+1-p)}\; C^{(k)}(p) \;,\label{Omega0=C}\\ C^{(k)}(n) &=2^{1-2n}\Gamma\left(2n\right)\;\sum_{p=1}^n \frac{ p}{\Gamma\left(1+n-p\right) \Gamma\left(1+n+p\right)} \; \Omega^{(k)}(p) \; .\label{C=Omega0} \end{align} The main part of the approximation in (\ref{approx}) lies in the combination of the polylogarithm functions, \begin{equation} {\,\rm Li}^{(\ell)}(s,\omega) =\frac{{\,\rm d}^\ell}{{\,\rm d} s^\ell} \left[\frac{\omega}{\Gamma(s)} \int_0^1 \frac{{\,\rm d} t }{1- \omega t} \log^{s-1}\left(\frac{1}{t}\right) \right]\underset{ |\omega| < 1}{=} (-1)^\ell \sum_{n=1}^\infty \frac{\log^\ell n}{n^s}\; \omega^n\;, \end{equation} and in the analytic evaluation of the coefficients $\alpha_{p,\ell}^{(k)}$ and $\beta_{p,\ell}^{(k)}$. In order to reconstruct $\Pi^{(2)}$ and $\Pi^{(3)}$, we collect here their corresponding coefficients (see \cite{Greynat:2010kx, Greynat:2011zp} for more details) \begin{equation} \left\{ \begin{aligned} \alpha^{(2)}_{0,0} &= 3.44514 \\ \alpha^{(2)}_{1,0} &=-0.492936 \\ \alpha^{(2)}_{1,1} &= 2.25\\ \alpha^{(2)}_{2,0} &= 3.05433\\ \end{aligned}\;, \right. \hspace{2cm} \left\{ \begin{aligned} \beta^{(2)}_{1,0} &= 0.33723\\ \beta^{(2)}_{1,1} &= 0.211083\\ \beta^{(2)}_{3,0} &= 0.183422\\ \beta^{(2)}_{3,1} &= -0.620598\\ \end{aligned} \right.\;, \end{equation} \begin{equation} \left\{ \begin{aligned} \alpha^{(3)}_{-1,0}&= 10.5456 \\ \alpha^{(3)}_{0,1} &= 31.0063\\ \alpha^{(3)}_{0,0} &= -11.0769 \\ \alpha^{(3)}_{1,0} &= 36.3318 \\ \alpha^{(3)}_{1,1} &= 37.1514\\ \alpha^{(3)}_{1,2} &= 10.125 \end{aligned}\;, \right. \hspace{0.5cm} \left\{ \begin{aligned} \beta^{(3)}_{1,0} &= -0.181866\\ \beta^{(3)}_{1,1} &= 0.211083\\ \beta^{(3)}_{1,2} &= -0.879515\\ \beta^{(3)}_{3,0} &= -10.4385\\ \beta^{(3)}_{3,2} &= 3.82702 \end{aligned} \right.\;, \hspace{0.5cm} \left\{ \begin{aligned} \beta^{(3)}_{5,0} &= -70.9277\\ \beta^{(3)}_{5,1} &= 56.3093 \\ \beta^{(3)}_{5,2} &= 20.9951 \\ \beta^{(3)}_{5,3} &= -7.55063 \end{aligned} \right.\;.
\end{equation} Finally, we give the error functions $\mathcal{E}^{(k)}$, \begin{align} \label{eq:ErrorFunctions} \mathcal{E}^{(2)}(N^*_2,\omega) &= \begin{bmatrix} + 1 \\ 0\end{bmatrix} \sum_{n=N^*_2+1}^{\infty} \frac{\log^{1.5}n}{n^3}\, \omega^n\\ \mathcal{E}^{(3)}(N^*_3,\omega) &= \begin{bmatrix} + 15 \\ -15\end{bmatrix} \sum_{n=N^*_3+1}^{\infty} \frac{\log^{3}n}{n^2}\, \omega^n\;, \end{align} which encode the systematic error of the reconstructions. \subsection{Experimental data} Several experimental results exist for $e^+e^-\to\text{hadrons}$ that one can use for the fit of the $c$-quark mass. Each experiment provides the ratio $R(s)$ of the radiation-corrected measured hadronic cross section to the calculated lowest-order cross section for muon pair production, \begin{equation}\label{def:R} R(s) = \frac{\sigma_0\left(e^+ e^- \longrightarrow \text{hadrons} \right)}{\sigma_0\left(e^+ e^- \longrightarrow \mu^+\mu^- \right)} = \frac{\sigma_0\left(e^+ e^- \longrightarrow \text{hadrons} \right)}{4\pi \alpha^2/3s} \, , \end{equation} whose experimental values are shown in Fig.~\ref{fig:ExperimentalSpectrum2to11Gev}. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|} \hline\hline Experiment & Reference \\ \hline MARK I &\cite{Siegrist:1981zp}\\ PLUTO &\cite{Criegee:1981qx}\\ CrystalBall (Run 1) &\cite{Edwards:1990pc}\\ CrystalBall (Run 2) &\cite{Edwards:1990pc}\\ MD1 &\cite{Blinov:1993fw}\\ CLEO &\cite{Ammar:1997sk}\\ CLEO &\cite{Besson:1984bd,Besson:2007aa}\\ BES &\cite{Bai:2001ct}\\ BES &\cite{Ablikim:2006mb}\\ CLEO & \cite{:2007qwa}\\ CLEO & \cite{CroninHennessy:2008yi}\\ \hline\hline \end{tabular} \caption{All the experimental data sets considered for the fits.} \label{tab:ReferencedSets} \end{center} \end{table} \begin{figure}[h] \begin{center} \input{TotalPlot} \end{center} \caption{Collection of the different experimental sets for the V-V spectrum.} \label{fig:ExperimentalSpectrum2to11Gev} \end{figure} Figure~\ref{fig:ExperimentalSpectrum2to11Gev} shows that the complete spectrum is dominated by resonances, as expected. A perturbative approach cannot describe these resonances, so one has to make a choice of the energy above which the continuum limit is assumed to be reached, in other words where the perturbative description becomes pertinent. We choose the value of 5 GeV. Of course, the influence of this choice has to be discussed and taken into account in the evaluation of the error, but it concerns the perturbative and heavy-quark limits rather than the reconstruction itself. The idea now is to perform a fit over all these data points to extract the \textit{perturbative} mass $m_c$ of the $c$ quark.
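To make the reconstruction machinery concrete before describing the fit, the following minimal sketch (our own illustration; the truncation orders, tail coefficients and mass value are placeholders, not the ones used in the actual analysis) evaluates the conformal variable $\omega(z)$ of (\ref{conformal}) and truncated error series of the type (\ref{eq:ErrorFunctions}) numerically:
\begin{verbatim}
import numpy as np

# Minimal sketch (our own illustration): the conformal variable omega(z)
# and truncated error series E^(k).  The truncation orders N_star, the
# tail coefficients and the charm-mass value below are placeholders,
# not the values used in the actual analysis.

def omega_of_z(z):
    # omega = (1 - sqrt(1-z)) / (1 + sqrt(1-z)); the complex square root
    # handles the physical cut z > 1, where omega lies on the unit circle
    r = np.sqrt(1.0 - z + 0j)
    return (1.0 - r) / (1.0 + r)

def error_tail(N_star, omega, power, log_power, coeff, n_max=20000):
    # coeff * sum_{n > N_star} log(n)^log_power / n^power * omega^n,
    # truncated at n_max; the series converges absolutely on |omega| = 1
    n = np.arange(N_star + 1, n_max)
    return coeff * np.sum(np.log(n) ** log_power / n ** power * omega ** n)

m_c = 1.5                          # GeV, illustrative pole mass
z = 5.0 ** 2 / (4.0 * m_c ** 2)    # evaluate at sqrt(s) = 5 GeV
w = omega_of_z(z)
print("omega =", w, " |omega| =", abs(w))
print("E2 ~", error_tail(8, w, power=3, log_power=1.5, coeff=1.0))
print("E3 ~", error_tail(3, w, power=2, log_power=3.0, coeff=15.0))
\end{verbatim}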
\subsection{Fitting approach} The first step in the fitting procedure is to choose the following expression for the running coupling $\alpha_s(s)$, \begin{multline} \alpha_s(s) = \frac{4\pi}{\beta_0\ln(s/\Lambda^2)}\left[ 1 - \frac{2\beta_1}{\beta_0^2}\frac{\ln[\ln(s/\Lambda^2)]}{\ln(s/\Lambda^2)} \right.\\ \left.+\frac{4\beta_1^2}{\beta_0^4\ln^2(s/\Lambda^2)}\;\left(\left(\ln\left[\ln(s/\Lambda^2)\right]-\frac{1}{2}\right)^2 +\frac{\beta_2\beta_0}{8\beta_1^2}-\frac{5}{4}\right)\right] , \label{AlphaSrunning} \end{multline} where $\Lambda$ is the QCD scale parameter and the $\beta$-function has coefficients \begin{align} \beta_0 & = 11-\frac{2n_f}{3}\,, & \beta_1 & = 51-\frac{19n_f}{3}\,, & \beta_2 & = 2857-\frac{5033n_f}{9}+\frac{325n_f^2}{27}\;, \end{align} and $n_f$ is the number of quarks with mass smaller than $\sqrt{s}/2$. The \textit{theoretical} counterpart of (\ref{def:R}) is related to $\Pi(q^2)$ of (\ref{Pi}), up to order $\alpha_s^3$, by \begin{multline} \label{eq:RPi} R_\text{th.}(s) = \left[\left(\frac{2}{3}\right)^2+\left(\frac{1}{3}\right)^2+\left(\frac{1}{3}\right)^2\right]N_c\left[ 1 + \frac{\alpha_s(s)}{\pi}+1.525 \left(\frac{\alpha_s(s)}{\pi}\right)^2-11.686\left(\frac{\alpha_s(s)}{\pi}\right)^3\right] \\ + 12\pi \left(\frac{2}{3}\right)^2 {\,\rm Im} \left[\Pi^{(0)} + \frac{4}{3} \frac{\alpha_s(s)}{\pi}\Pi^{(1)}+\left(\frac{\alpha_s(s)}{\pi}\right)^2\Pi^{(2)} + C_3 \left(\frac{\alpha_s(s)}{\pi}\right)^3 \Pi^{(3)}\right] \end{multline} where all the $\Pi^{(k)}$ functions have the argument $z = \frac{s}{4m^2_c}$, and $N_c$ is the number of colors. The goal of the analysis is to extract $m_c$ from the comparison between $R_\text{exp.}$ and $R_\text{th.}$. The usual method is to build the moments associated with $R$ from $0$ to $\Lambda^2$ and to identify the coefficients of the Taylor expansion, which are proportional to inverse powers of $m_c^2$. Instead of this approach, we propose to perform the analysis directly on the function itself since, thanks to the reconstruction formula (\ref{approx}), its expression is available together with its systematic error (\ref{eq:ErrorFunctions}). For this we use a $\chi^2$ method, defining \begin{equation} \chi^2(m_c) \doteq \sum_{j=1}^{N} \left[\left(\frac{R_\text{exp.}(s_j)-R_\text{th.}(s_j)}{\sigma_\text{exp.}(s_j)}\right)^2 + \left(\frac{R_\text{exp.}(s_j)-R_\text{th.}(s_j)}{\sigma_\text{th.}(s_j)}\right)^2\right]\;, \end{equation} where the $s_j$ are the experimental energy points, $\sigma_\text{exp.}$ is the experimental error, and the theoretical error $\sigma_\text{th.}$, due to the approximation of the reconstruction, is given by \begin{multline} \sigma_\text{th.}^2(s) = \frac{256\pi^2}{9} \left|{\,\rm Im} \left[ \left(\frac{\alpha_s(s)}{\pi}\right)^2\mathcal{E}^{(2)}\left(N_2^*,\omega\right)\right]\right|^2 \\ +\frac{256\pi^2}{9} \; C_3^2 \; \left|{\,\rm Im} \left[ \left(\frac{\alpha_s(s)}{\pi}\right)^3\mathcal{E}^{(3)}\left(N_3^*,\omega\right)\right]\right|^2\;, \end{multline} with $\omega =\frac{1-\sqrt{1-\frac{s}{4m_c^2}}}{1+\sqrt{1-\frac{s}{4m_c^2}}}$. \section{Results} \subsection{Numerical results at order $\alpha_s^2$} At order $\alpha_s^2$, one obtains after the regression procedure, with $\chi^2_\text{min}/\text{d.o.f.}=1.03$, \begin{equation} m_c(\text{pole}) = 1.85 \pm 0.08 \;\; \text{GeV}\;, \end{equation} which translates into the $\overline{\text{MS}}$ mass \cite{Melnikov:2000qh} \begin{equation} m_c(\overline{\text{MS}}) = 1.12 \pm 0.08 \;\; \text{GeV}\;.
\end{equation} Assuming now that the mass $m_c$ follows a Gaussian probability density, one can reconstruct, point by point, the error induced on $R_\text{th.}$ by this hypothesis, keeping in mind that the relation between $m_c$ and $R_\text{th.}$ is highly nonlinear, which makes the error propagation nontrivial. We therefore choose a Monte Carlo approach to obtain the mean value of $R_\text{th.}$ and its error, as shown in Fig.~\ref{fig:Extrapolation}. \begin{figure}[h] \begin{center} \input{ExtrapolationSTotal} \end{center} \caption{The reconstructed ratio $R$ of the radiation-corrected hadronic cross section to the calculated lowest-order muon-pair cross section.} \label{fig:Extrapolation} \end{figure} \section{Conclusions} We have shown that it is possible to extract the charm mass value through a $\chi^2$ regression to the experimental data for the ratio $R$ of the radiation-corrected hadronic cross section to the calculated lowest-order muon-pair cross section, using the non-analytic reconstruction of the heavy-quark correlators. We present here a preliminary result up to order $\alpha_s^2$. The next step will include the order $\alpha_s^3$ and a complete analysis of all the different systematic contributions \cite{GMM:2013}.
\section{Introduction} \label{intro} Current studies of the structure of the Milky Way are dominated by a series of major observational programs running from ESA's Hipparcos mission \cite{Hipparcos}, through major photometric surveys such as the SDSS \cite{Sloan}, spectroscopic surveys such as RAVE \cite{RAVE}, to ESA's scheduled Gaia mission \cite{Gaia}, which aims to return photometric and astrometric data for $10^9$ stars and low-dispersion spectra for $\sim10^8$ stars. Turning these data-sets into a consistent picture of the current structure and the assembly of the whole Galaxy, including the dark-matter content, is an ambitious and important goal. It is likely to be impossible without sufficiently sophisticated models that can be used to interpret the data (for example compensating for the observational biases of the various surveys). Models of the gross structure of the Galaxy have been produced with varying levels of complexity. Mass models \cite{WDJJB98:mass,PJM11:mass} simply give the density distribution of the various components of the Galaxy, and thus the Galactic potential. Kinematic models, such as those produced by \textsc{galaxia} \cite{Galaxia}, specify the density and velocity distributions of the luminous components of the Galaxy, but do not consider the question of whether these are consistent with a steady state in any Galactic potential. The Besan\c{c}on Galaxy model \cite{Roea03} is primarily a kinematic model (in that it is not constrained by Newton's laws of motion on large scales), with a dynamical element used to determine the vertical structure of the disc. Fully dynamical models \cite{PJMJJB11:LOS} have a distribution of stars in phase-space which is made from phase-mixed orbits in a given Galactic potential, and therefore a distribution function ({\sc df}) that is a function of the integrals of motion; by Jeans' theorem \cite{Je15} this means the model is in a steady state. \section{Benefits of dynamical models} \label{sec:benefits} A previous paper \cite{PJMJJB11:LOS} provides a detailed discussion of the benefits of dynamical models. Here I just summarize the main points. The most obvious advantage of dynamical models is that, unlike kinematic models, they allow us to deduce the gravitational potential of the Galaxy. Existing mass models were fit to observations under the assumption of near-circular orbits for various tracers, which is only suitable for the small subset of astrophysical objects found close to the Galactic plane. Exploiting richer data-sets requires far more careful modelling. This will allow us to infer the distribution of dark matter in the Galaxy, under the assumption that the Galaxy is approximately in a steady state. A second, complementary, advantage is that dynamical models allow us to connect objects that we can observe to objects that we cannot. This means we can use observations in the solar neighbourhood to learn about the structure of the Galaxy at large. For example, \cite{MayB} showed that if the stellar halo were in virial equilibrium, more than half the stars of the stellar halo would be on orbits that bring them through the solar neighbourhood. An additional advantage of dynamical models is that the associated {\sc df}\ depends only on the three integrals of motion (either explicitly or implicitly), as opposed to the full six dimensions of phase-space. \section{Torus models} Dynamical modelling has been dominated by Schwarzschild modelling \cite{SchwarzI}, especially for analysing the dynamics of early-type galaxies.
This technique involves first integrating a number of orbits in the adopted gravitational potential and then seeking weights for these orbits such that the weighted sum of the orbits reproduces the observational data. More recently, the ``made-to-measure'' (M2M) technique introduced by \cite{SyerT} has been used to produce $N$-body models which can be fitted in a broadly similar way \cite{deLorenzi,Bissantz04}, though in this case the particle weights (which are effectively the orbit weights) are determined ``on-the-fly'', rather than after the orbit has been integrated. \begin{figure*} \centerline{\hfil \resizebox{82mm}{!}{\includegraphics{McMillan_P_fig1a.eps}}\hspace{4mm} \resizebox{82mm}{!}{\includegraphics{McMillan_P_fig1b.eps}}} \caption{Distribution of stars in $v_R$ (left) and $v_\phi$ (right) in the Solar neighbourhood: data from the Geneva-Copenhagen survey assuming the value of $\mathbf{v_\odot}$ given in eq.~\ref{eq:vsol} (histogram), and the model from \cite{JJB10}, a single {\sc df}\ $f(\mathbf{J})$ that must fit \emph{both} the $v_R$ and $v_\phi$ distributions (curves; the solid curve is the full model, the dashed curve the thin-disc contribution only). The dotted vertical line in the $v_\phi$ plot is the assumed circular speed at the Sun. Clearly the $v_\phi$ distributions are significantly offset from one another. The only way to bring the data and model to reasonable agreement is to apply a correction of $\sim7\,\mathrm{km\,s}^{-1}$ to $V_\odot$. Figure reproduced from \cite{JJB10} with permission.} \label{fig:vsol} \end{figure*} Both Schwarzschild and M2M models have {\sc df} s which are implicitly functions of the integrals of motion, as they are constructed from phase-averaged orbits. An alternative modelling strategy is to produce models which are \emph{explicitly} functions of the integrals of motion. It is not possible to produce plausible models of the Galactic disc with {\sc df} s which are functions of the classical integrals, $f(E,L_z)$, as these models have equal velocity dispersions in the radial and vertical directions, which is not the case in the Solar neighbourhood \cite{AumerB09}. However, one can instead use the three orbital ``actions'', $J_i$, as the integrals of motion and use analytic {\sc df} s, $f(\mathbf{J})$, which are appropriate for the Galactic disc (e.g. \cite{JJB10}). The three actions $J_i$ and the three conjugate angle coordinates $\theta_i$ provide canonical coordinates for six-dimensional phase space \cite{BT08}. The conventional phase space coordinates $\mathbf{w}\equiv({\bf x},{\bf v})$ are $2\pi$-periodic in the angles. The actions are conserved quantities for any orbit, and the angles increase linearly with time: \begin{equation} \mbox{\boldmath$\theta$}(t) = \mbox{\boldmath$\theta$}(0) + \mbox{\boldmath$\Omega$}(\mathbf{J}) t, \end{equation} where the components of $\mbox{\boldmath$\Omega$}$ are the orbital frequencies. The major obstacle to using {\sc df} s of the form $f(\mathbf{J})$ is that the relationship between phase space coordinates $\mathbf{w}$ and $\mathbf{J},\mbox{\boldmath$\theta$}$ is only known analytically for a very limited set of gravitational potentials, none of which provides a reasonable approximation to the Galactic potential. There are two available approaches for determining the relationship between $\mathbf{w}$ and $\mathbf{J}$: \begin{itemize} \item The adiabatic approximation, in which motion in the radial and vertical directions is treated as largely decoupled \cite{JJB10,JJBPJM11}.
\item Torus modelling, in which the relationship between $\mathbf{J},\mbox{\boldmath$\theta$}$ and $\mathbf{w}$ in an isochrone potential (for which it is known analytically \cite{BT08}) is numerically distorted to fit the potential of interest \cite{McGJJB90,KaJJB94,PJMJJB08}. \end{itemize} Comparison of these two approaches has shown that for most purposes they agree to reasonable accuracy as far as $\sim2.5\,\mathrm{kpc}$ from the Galactic plane \cite{JJBPJM11}. The adiabatic approximation has the advantage that it does not require specialised torus-fitting computer code, and can straightforwardly determine the value of $\mathbf{J}$ for a given $\mathbf{w}$. Torus modelling finds all of the values of $\mathbf{w}$ associated with a given $\mathbf{J}$, but can only find $\mathbf{J}$ given $\mathbf{w}$ via an iterative process. Torus modelling has the advantages that it can tell us about the coupling between different components of motion (e.g. the tilt of the velocity ellipsoid \cite{JJBPJM11}), and that it allows us to find the angle variables $\mbox{\boldmath$\theta$}$. \section{The local standard of rest} As mentioned in Section~\ref{sec:benefits}, one major advantage of using dynamical models is that the dimensionality of the models is reduced by the assumption that the Galaxy is made up of phase-mixed orbits -- $f(\mathbf{J})$ depends on only three actions as opposed to the six dimensions of $f(\mathbf{w})$. The value of this simplification of the model is well illustrated by a recent revision in the peculiar Solar velocity relative to the local standard of rest, $\mathbf{v_\odot}$. The value of $\mathbf{v_\odot}$ was found by \cite{WDJJB98:LSR} using observations by Hipparcos. The components of velocity towards the Galactic centre ($U$), in the direction of Galactic rotation ($V$) and towards the north Galactic pole ($W$) were analysed separately. The $V$-component of the Solar velocity is the most difficult to determine, as asymmetric drift \cite{BT08} means that the average stellar velocity lags the circular velocity. Stromberg's equation was used by \cite{WDJJB98:LSR} to extrapolate from the observed populations (separated by colour) to a hypothetical population with zero velocity dispersion, which would have zero asymmetric drift. The value of $\mathbf{v_\odot}$ found, \begin{equation} \label{eq:vsol} U_\odot,V_\odot,W_\odot = (10.00\pm0.36,\,5.25\pm0.62,\,7.17\pm0.38)\,\mathrm{km\,s}^{-1}, \end{equation} was the widely accepted value for over a decade. Figure~\ref{fig:vsol} shows the distribution of stars in $v_\phi$ and $v_R$ in the Solar neighbourhood, determined from Hipparcos observations and the Geneva-Copenhagen survey \cite{GCS04} assuming this value of $\mathbf{v_\odot}$ (histogram), and the best-fitting analytic {\sc df}\ $f(\mathbf{J})$ from \cite{JJB10} (solid curve). This gives new insight because the distributions in $U$ and $V$ are not considered as if they were independent, but instead it is recognised that a single dynamically consistent {\sc df}\ must fit both. The only physically plausible way to bring the model and data $v_\phi$ distributions into agreement is to apply a correction of $\sim7\,\mathrm{km\,s}^{-1}$ to the value of $V_\odot$ (altering the circular velocity moves both distributions and has a negligible effect). This is $\sim11\sigma$ from the widely accepted value!
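As a toy illustration of this kind of inference (our own sketch, with mock Gaussian distributions standing in for the skewed $f(\mathbf{J})$ prediction and the real data), one can recover such an offset by maximising the likelihood of the observed velocities under a shifted model:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

# Toy sketch (our own illustration, mock numbers): find the offset dV
# that best aligns a model v_phi distribution with "observed" velocities
# by maximising the likelihood of the data under the shifted model.

rng = np.random.default_rng(1)
v_obs = rng.normal(loc=-12.0, scale=20.0, size=5000)  # mock data

def neg_loglike(dV, model_mean=-19.0, model_sigma=20.0):
    # score the data against the model shifted by dV
    return -np.sum(norm.logpdf(v_obs, loc=model_mean + dV,
                               scale=model_sigma))

res = minimize_scalar(neg_loglike, bounds=(-20.0, 20.0),
                      method="bounded")
print(f"best-fit shift dV = {res.x:.1f} km/s")  # ~7 for these numbers
\end{verbatim}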
This contention, that the previously assumed value of $V_\odot$ must be in error, has since been supported by analysis of Galactic maser sources with measured parallaxes \cite{PJMJJB10}, and explained as an error in the application of Stromberg's equation, associated with the metallicity gradient in the disc \cite{SBD10}. \section{Beyond steady-state models} Axisymmetric models with {\sc df} s of the form $f(\mathbf{J})$ cannot fully describe the Galaxy, as it is not in fact in a steady state, but they are an important first step towards interpreting observations of our Galaxy, and it is likely to be very fruitful to study observational data for structures that cannot be explained by these models. We can then look for explanations of these structures as signatures of other features, such as the Galactic bar, spiral or warp, or matter that has been accreted. Some examples of using torus models to explore these signatures already exist. \subsection{Signatures of accreted satellites} \begin{figure} \centerline{\hfil \resizebox{41mm}{!}{\includegraphics{McMillan_P_fig2a.eps}}\hspace{4mm} \resizebox{41mm}{!}{\includegraphics{McMillan_P_fig2b.eps}}} \caption{Frequencies $\Omega_r$ \& $\Omega_\phi$ (left) and a convenient projection in action space (right) for stars in the solar neighbourhood from a simulation of the accretion of a satellite galaxy, taken $t=7.5\,\mathrm{Gyr}$ after the satellite was accreted. The stars form patches in frequency space with spacing $\Delta\Omega_\phi$ between patches such that $\Delta\Omega_\phi t/2\pi\sim1$ (the spacing in $\Omega_r$ is not as simple because two ranges of $\theta_r$ correspond to the solar neighbourhood). The same separation into discrete clumps is visible in action space because $\mbox{\boldmath$\Omega$}=\mbox{\boldmath$\Omega$}(\mathbf{J})$. These figures are taken from \cite{PJMJJB08}.} \label{fig:patches} \end{figure} The appearance in angle-action coordinates of an accreted satellite galaxy was explored by \cite{PJMJJB08}. Long after phase mixing has rendered an accreted satellite indistinguishable from the background population in position, there is a strong relationship between the stars' positions and their orbital frequencies (because all the stars were once collected in the same small volume, when they were part of the satellite). This means that a sample of these stars taken from a finite volume is only found in certain small volumes in frequency space. In Fig.~\ref{fig:patches} I show figures from \cite{PJMJJB08} of the frequencies and actions of stars in a finite volume about the solar position in a simulation of the disruption of a satellite galaxy. In both cases the figure shows the distribution $7.5\,\mathrm{Gyr}$ after the satellite was disrupted. The distribution in frequency is clearly divided into finite ``patches'', and the distribution in action is even more cleanly divided, because $\mbox{\boldmath$\Omega$}=\mbox{\boldmath$\Omega$}(\mathbf{J})$ and this projection of action space is more convenient. By considering the spacing between the ``patches'' in frequency space, along with the angles of the individual stars, \cite{PJMJJB08} showed that it was even possible to determine with high accuracy the time since the satellite was disrupted. Similar techniques could even be used to determine the potential of the Galaxy (as the potential must allow the stars to all have come from the same initial satellite).
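As a minimal sketch of this idea (our own toy construction, not the analysis pipeline of \cite{PJMJJB08}), one can recover the disruption time from the spacing of the patches in $\Omega_\phi$:
\begin{verbatim}
import numpy as np

# Toy sketch (our own construction): stars from a disrupted satellite
# observed in a small volume cluster into patches in Omega_phi whose
# spacing obeys Delta_Omega * t / (2 pi) ~ 1, so t ~ 2 pi / Delta_Omega.

rng = np.random.default_rng(0)
t_true = 7.5                              # Gyr since disruption
d_omega = 2 * np.pi / t_true              # expected patch spacing
centres = 25.0 + d_omega * np.arange(6)   # six mock patch centres
omega_phi = np.concatenate(
    [c + 0.05 * d_omega * rng.standard_normal(40) for c in centres])

# split the sorted frequencies at the five largest gaps (six patches
# are assumed known here; in practice one would inspect the gaps)
w = np.sort(omega_phi)
cuts = np.sort(np.argsort(np.diff(w))[-5:])
patch_centres = np.array([p.mean() for p in np.split(w, cuts + 1)])
t_est = 2 * np.pi / np.diff(patch_centres).mean()
print(f"recovered t = {t_est:.2f} Gyr (true {t_true} Gyr)")
\end{verbatim}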
Similar work has been carried out by other authors showing that using the orbital frequencies alone one can achieve some of these objectives, even for cosmological simulations with numerous accreting satellites and a non-static background potential \cite{GoHe10}. \subsection{Signatures of Lindblad resonances} \begin{figure} \centerline{\hfil \resizebox{\hsize}{!}{\includegraphics[angle=270]{McMillan_P_fig3a.eps}}}\vspace{-4mm} \centerline{\hfil \resizebox{\hsize}{!}{\includegraphics[angle=270]{McMillan_P_fig3b.eps}}} \caption{Distribution of stars in action (upper) and angle (lower) in the Solar neighbourhood with positions and velocities given by the Geneva-Copenhagen survey. The gross structures of the distributions are due to selection effects. The Hyades moving group can be seen as an overdensity in action that is spread out in $J_r$ at around $J_\phi=0.97$, tending towards slightly lower $J_\phi$ with increasing $J_r$ (consistent with a resonance line in $\mbox{\boldmath$\Omega$}$), and as an overdensity in angle around $\theta_r=-\pi/2$. The expected angle relation for stars at resonance would produce overdensities that tend to increased $\theta_r$ with increased $\theta_\phi$ for inner Lindblad resonances, and to decreased $\theta_r$ with increased $\theta_\phi$ for outer Lindblad resonances. However, selection effects reshape overdensities in angle space quite significantly. Figures are adapted from \cite{PJM11:Res}.} \label{fig:hyades} \end{figure} A recent study \cite{Se10} showed that in addition to having a relationship of the form $l\Omega_r+m\Omega_\phi \simeq const$, stars that have recently been trapped at a resonance with a perturber also follow a relationship of the form $l\theta_r+m\theta_\phi \simeq const$, where in both cases $l$ and $m$ are integers, with the perturbation having $m$-fold symmetry and $l$ being $-1$ for an inner Lindblad resonance and $+1$ for an outer Lindblad resonance. The Hyades moving group, which is a very strong feature of the local velocity distribution \cite{WD98}, lies around a straight line in the $J_\phi,J_r$ plane (Fig.~\ref{fig:hyades}), of the sort associated with the condition $l\Omega_r+m\Omega_\phi \simeq const$ (a resonance line in action space). This finding is indeed consistent with a resonance line for both cases $l=\pm1$ and a range of values of $m$ (with the exact details of the resonance lines depending quite sensitively on the details of the Galactic potential assumed). It was claimed by \cite{Se10} that the distribution of stars in angle coordinates clearly indicated that the Hyades were associated with an inner Lindblad resonance. This is because, in angle space, the stars of the Hyades moving group are associated with an overdensity in the quantity $-\theta_r+m\theta_\phi$ for various values of $m$, and this overdensity did not appear to significantly shift in position as a function of $J_r$. In \cite{PJM11:Res} I compared the observed structure to torus models, and demonstrated the significant and non-intuitive impact of selection effects. In Fig.~\ref{fig:hyades} I show the distribution of Solar neighbourhood stars in the $\theta_r,\theta_\phi$ plane. Selection effects are responsible for the overall structure of the density distribution, most notably the high densities around $\theta_\phi=0$, $\theta_r=0$ or $\pm\pi$ and the near absence of stars with $\theta_r<0$, $\theta_\phi>0$ or $\theta_r>0$, $\theta_\phi<0$.
A less obvious selection effect brought to light by torus models is that stars with a given value of $\mathbf{J}$ that are found in the Solar neighbourhood are also very strongly restricted in their possible range of $\mbox{\boldmath$\theta$}$. Using these models I was able to show that observed overdensities in $\theta_r+m\theta_\phi$ (which should correspond to outer Lindblad resonances) were not the result of selection effects (as claimed by \cite{Se10}). Also, these selection effects mean that the stars associated with the resonance line in action space have to lie near certain lines in angle space, otherwise they will not be observed in the Solar neighbourhood. It is the interplay of the resonant conditions on $\mathbf{J}$ and $\mbox{\boldmath$\theta$}$, and the fact that the stars lie in the Solar neighbourhood, that determines the distribution in both angle \emph{and} action, and considering either distribution independently of the other can lead to false conclusions. The examination of torus models which included resonant components (with resonant conditions on both $\mathbf{J}$ and $\mbox{\boldmath$\theta$}$) indicated that the Hyades overdensity was consistent with stars trapped at \emph{either} an inner \emph{or} outer Lindblad resonance. \section*{Acknowledgments} I am grateful to James Binney, who was author or co-author of many of the papers used as examples in this article, and provided helpful suggestions on presentation. This work is supported by a grant from the Science and Technology Facilities Council.
\subsection{Algebraic signatures and basic maps} In our examples and consequences we will consider different algebraic structures on spaces of continuous functions. To this end, recall (see \cite{MR1221741}) that an \emph{algebraic signature}\index{Signature!Algebraic} is a collection $\eta$ of pairs $(*,n)$, where $*$ is a (function) symbol and $n$ is a non-negative integer, called the \emph{arity} of $*$. A \emph{model} of $\eta$ consists of a set $H$ together with a map associating to each $(*,n)\in\eta$ a function $*:H^n\to H$, $(c_1,\ldots,c_n)\mapsto c_1*\cdots*c_n$. (We use the convention that $H^0$ is a singleton set, so that a $0$-ary function symbol is the same as a constant.) For example, the usual signature of groups consists of one binary symbol $\cdot$ (for the product), one unary symbol $(\ )^{-1}$ (the inversion) and one constant/$0$-ary symbol $1$ (the unit). If $H$ is a model of $\eta$ and $X$ is a set, then the function space $H^X$ can also be regarded as a model of $\eta$ with the pointwise structure: $(f_1*\cdots*f_n)(x)=f_1(x)*\cdots*f_n(x)$ for all $f_1,\ldots,f_n\in H^X$, all $x\in X$, and all $n$-ary function symbols $\ast$. A \emph{morphism} between two models $H_1$ and $H_2$ of a given signature $\eta$ is a map $m:H_1\to H_2$ such that for any $n$-ary function symbol $*$ of $\eta$ and any $x_1,\ldots,x_n\in H_1$, we have $m(x_1*\cdots*x_n)=m(x_1)*\cdots*m(x_n)$. Finally, a \emph{submodel} of a model $H$ of a signature $\eta$ is a subset $K\subseteq H$ such that for all $n$-ary symbols $*$ of $\eta$ and any $d_1,\ldots,d_n\in K$, $d_1*\cdots*d_n\in K$, so that $K$ can be naturally regarded as a model of $\eta$. In the topological setting, a \emph{continuous model} $H$ of a signature $\eta$ is defined in the same manner, but we additionally require $H$ to be a topological space and all of the maps $*:H^n\to H$ to be continuous. In this case, if $X$ is a topological space then $C(X,H)$ is a submodel of $H^X$. \begin{proposition}\label{propositionmodelmorphism} Let $X$ and $Y$ be sets. Suppose that $H_X$ and $H_Y$ are models for an algebraic signature $\eta$, and that $\mathcal{A}(X)$ and $\mathcal{A}(Y)$ are submodels of $(H_X)^X$ and $(H_Y)^Y$. Then for all $x\in X$, $\mathcal{A}(X)|_x$ is a submodel of $H_X$, and similarly $\mathcal{A}(Y)|_y$ is a submodel of $H_Y$ for all $y\in Y$. (See Equation \eqref{definitionimageofpointunderclassoffunction}.) Let $T:\mathcal{A}(X)\to\mathcal{A}(Y)$ be a basic map with respect to a function $\phi:Y\to X$, and let $\chi$ be the $T$-transform. Then $T$ is a morphism (for $\eta$) if and only if every section $\chi(y,\cdot)$ is a morphism (from $\mathcal{A}(X)|_{\phi(y)}$ to $\mathcal{A}(Y)|_y$). \end{proposition} \begin{proof} Given $x\in X$, the evaluation map $\pi_x:(H_X)^X\to H_X$, $\pi_x(f)=f(x)$, is a morphism, and it follows that $\mathcal{A}(X)|_x=\pi_x(\mathcal{A}(X))$ is a submodel of $H_X$. Note that for all $y\in Y$, $\chi(y,\cdot)\circ\pi|_{\phi(y)}=\pi_y\circ T$. On the one hand, $T$ is a morphism if and only if $\pi_y\circ T$ is a morphism for all $y$. On the other hand, $\pi|_{\phi(y)}$ is a surjective morphism from $\mathcal{A}(X)$ to $\mathcal{A}(X)|_{\phi(y)}$.
It follows that $T$ is a morphism if and only if $\chi(y,\cdot)$ is a morphism for all $y\in Y$.\qedhere \end{proof} \subsection{Group-valued maps}\label{subsectiongroupvaluedmaps} We first note a simple property of basic isomorphisms of groups of functions which will be used throughout the next section: \begin{proposition}\label{propositionbasicnessofgroupvalued} Suppose that $H_X$ and $H_Y$ are groups, $\mathcal{A}(X)$ and $\mathcal{A}(Y)$ are subgroups of $(H_X)^X$ and $(H_Y)^Y$, respectively, $T:\mathcal{A}(X)\to\mathcal{A}(Y)$ is a group isomorphism and $\phi:Y\to X$ is a function. Then $T$ is $\phi$-basic if and only if for all $y\in Y$, \[f(\phi(y))=1\Longrightarrow Tf(y)=1,\qquad\forall f\in\mathcal{A}(X).\] \end{proposition} \begin{proof} This follows directly from \ref{propositionbasicisomorphisms}\ref{propositionbasicisomorphisms(a)}, the fact that $f(x)=g(x)$ if and only if $(fg^{-1})(x)=1$, and the fact that $T$ preserves products and inverses. \end{proof} In the topological setting, assume that $H$ is a topological group, $X$ is a locally compact Hausdorff space and $\theta\in C(X,H)$. Then $C_c(X,\theta)$ and $C_c(X,1)$ are $\perp\!\!\!\perp$-isomorphic, via $f\mapsto f\theta^{-1}$, and in fact this same map can be used to show that $C_c(X,\theta)$ is (weakly) regular if and only if $C_c(X,1)$ is (weakly) regular. Moreover, if $C_c(X,\theta)$ is a subgroup of $C(X,H)$ then $\theta=1$ outside of a compact set. Indeed, let $f,g\in C_c(X,\theta)$ be arbitrary; then $fg\in C_c(X,\theta)$ as well. As $f$ and $fg$ both coincide with $\theta$ outside of a compact set, $g=1$ outside of a compact set. Since $g=\theta$ outside of a compact set, $\theta=1$ outside of a compact set. Therefore, in this case, we obtain $C_c(X,\theta)=C_c(X,1)$. Since we will be interested in structure-preserving maps between submodels of $C(X,H)$, in the case where the codomain is a group we may always assume that $\theta=1$. In the case that $H=\mathbb{R}$ or $\mathbb{C}$, as additive groups, we recover the usual notion of support. \subsection{Continuity} We now study continuity of basic $\perp\!\!\!\perp$-isomorphisms and relate it to the continuity of their transforms; for this we need a few results from general topology. \begin{proposition}\label{propositiondisjointopensets} If $F$ is an infinite subset of a regular Hausdorff space $X$, then there exist a countably infinite subset $\left\{y_1,y_2,\ldots\right\}\subseteq F$ and pairwise disjoint open sets $U_n$ such that $y_n\in U_n$ for all $n$. \end{proposition} \begin{proof} If $F$ has a cluster point $y$, choose an arbitrary $y_1\in F\setminus\left\{y\right\}$. Consider disjoint neighbourhoods $U_1$ of $y_1$ and $V_1$ of $y$. Repeating the procedure inside $V_1$ and proceeding recursively, we obtain the desired sequence $\left\{U_n\right\}_n$. If $F$ does not have a cluster point, take any countably infinite subset $\left\{y_n:n\in\mathbb{N}\right\}\subseteq F$, and for each $n$ let $V_n$ be a neighbourhood of $y_n$ with $V_n\cap F=\left\{y_n\right\}$. By regularity of $X$, take a neighbourhood $U_1$ of $y_1$ such that $\overline{U_1}\subseteq V_1$, and then recursively take neighbourhoods $U_n$ of $y_n$ such that $\overline{U}_n\subseteq V_n\setminus\bigcup_{i=1}^{n-1}\overline{U_i}$. \end{proof} For the next proposition, recall (\cite[27.4]{MR0264581}) that a topological space $H$ is \emph{locally path-connected} if every point $t\in H$ admits a neighbourhood basis consisting of path-connected subsets.
\begin{proposition}\label{propositionconstructfunction} Let $X$ be a locally compact Hausdorff space, $\left\{x_n\right\}_n$ a sequence of elements of $X$, and $\left\{U_n\right\}_n$ a sequence of pairwise disjoint open subsets of $X$ with $x_n\in U_n$ for all $n$. Let $H$ be a Hausdorff, first-countable, locally path-connected topological space and consider a family $\left\{g_n:U_n\to H\right\}$ of continuous functions such that $g_n(x_n)$ converges to some $t\in H$. Then \begin{enumerate}[label=(\alph*)] \item\label{propositionconstructfunction(a)} there exists a continuous function $f:X\to H$ such that $f(x_n)=g_n(x_n)$ for all sufficiently large $n$, and $f(x)=t$ for all $x\not\in\bigcup_n U_n$; \item\label{propositionconstructfunction(b)} if $H=\mathbb{R}$, there is a continuous function $f:X\to\mathbb{R}$ such that $f=g_n$ on a neighbourhood of $x_n$ and $f(x)=t$ for all $x\not\in\bigcup_n U_n$. \end{enumerate} (Note that it is not necessary to take subsequences!) \end{proposition} \begin{proof} \begin{enumerate}[label=(\alph*)] \item Let $\left\{W_n\right\}_n$ be a decreasing basis of path-connected neighbourhoods of $t$. Disregarding any $n$ such that $g_n(x_n)$ does not belong to $W_1$, and repeating the sets $W_k$ if necessary (i.e., considering a new sequence of neighbourhoods of $t$ of the form \[W_1,W_1,\ldots,W_1,W_2,W_2,\ldots,W_2,\ldots,\] where each $W_k$ is repeated finitely many times), we may assume that $t_n:=g_n(x_n)\in W_n$. For each $n$, take a continuous path $\alpha_n:[0,1]\to W_n$ such that $\alpha_n(0)=t_n$ and $\alpha_n(1)=t$. Now take continuous functions $b_n:X\to[0,1]$ such that $b_n(x_n)=0$ and $b_n=1$ outside $U_n$. Define $f$ as $\alpha_n\circ b_n$ on each $U_n$, and as $t$ on $X\setminus\bigcup_n U_n$. The only non-trivial part about the continuity of $f$ is proving that $f$ is continuous on the boundary $\partial\bigcup_n U_n$. If $x$ belongs to this set then $f(x)=t$. Given a basic neighbourhood $W_N$ of $t$, the set $\bigcap_{n=1}^N(\alpha_n\circ b_n)^{-1}(W_N)$ is a neighbourhood of $x$ contained in $f^{-1}(W_N)$, and thus $f$ is continuous. \item For each $n$, choose an open neighbourhood $V_n$ of $x_n$ such that $\overline{V_n}\subseteq U_n$. By considering even smaller neighbourhoods if necessary we can assume $|g_n(x)-g_n(x_n)|<1/n$ for all $x\in V_n$. After modifying $g_n$ away from a neighbourhood of $x_n$ if necessary (e.g.\ by using Urysohn functions), we can assume $g_n=t$ on $U_n\setminus V_n$. Define $f=g_n$ on each $U_n$ and $f=t$ on $X\setminus\bigcup_n U_n$. The proof that $f$ is continuous is similar to that of item \ref{propositionconstructfunction(a)}.\qedhere \end{enumerate} \end{proof} \begin{theorem}\label{theorembasicisomorphisms} Let $X$ and $Y$ be locally compact Hausdorff spaces and, for $Z\in\left\{X,Y\right\}$, let a Hausdorff space $H_Z$ and $\theta_Z\in C(Z,H_Z)$ be given such that $(Z,\theta_Z,C_c(Z,\theta_Z))$ is regular. Suppose that $T:C_c(X,\theta_X)\to C_c(Y,\theta_Y)$ is a $\perp\!\!\!\perp$-isomorphism, that $\phi:Y\to X$ is the $T$-homeomorphism, and that $T$ is $\phi$-basic. Let $\chi:Y\times H_X\to H_Y$ be the corresponding $(\phi,T)$-transform. Consider the following statements: \begin{enumerate}[label=(\arabic*)] \item\label{theorembasicisomorphisms(1)} $\chi$ is continuous; \item\label{theorembasicisomorphisms(2)} each section $\chi(y,\cdot)$ is continuous; \item\label{theorembasicisomorphisms(3)} $T$ is continuous with respect to the topologies of pointwise convergence.
\end{enumerate} Then the implications (1)$\Rightarrow$(2)$\iff$(3) always hold. If $X$, $Y$ and $H_X$ are first countable, $H_X$ is locally path-connected and $\theta_X$ is constant, then (2)$\Rightarrow$(1). \end{theorem} \begin{named}{Remarks} \begin{enumerate}[label=\arabic*.] \item In the last part of the theorem, if $H_X$ admits the structure of a topological group then the condition that $\theta_X$ is constant can be dropped, by the discussion in Subsection \ref{subsectiongroupvaluedmaps}. \item The domain of the $(\phi,T)$-transform $\chi$ is $Y\times H_X$ because we assume that $C_c(X,\theta_X)$ is regular. \end{enumerate} \end{named} \begin{proof} The implication (1)$\Rightarrow$(2) is trivial. \begin{description} \item[\normalfont{(2)$\Rightarrow$(3)}] Suppose $f_i\to f$ pointwise. Since each section $\chi(y,\cdot)$ is continuous, we have \[Tf_i(y)=\chi(y,f_i(\phi(y)))\to\chi(y,f(\phi(y)))=Tf(y)\] for all $y$. This proves that $Tf_i\to Tf$ pointwise. \item[\normalfont{(3)$\Rightarrow$(2)}] Assume that $T$ is continuous with respect to pointwise convergence. Let $y\in Y$ be fixed. Suppose that $t_i\to t$ in $H_X$, and let us prove that $\chi(y,t_i)\to\chi(y,t)$. Choose any function $f\in C_c(X,\theta_X)$ such that $f(\phi(y))=t$. Let $\operatorname{Fin}(X)$ be the collection of finite subsets of $X$, ordered by inclusion. We will construct a net $\left\{f_{(F,i)}\right\}_{(F,i)\in\operatorname{Fin}(X)\times I}$ of functions in $C_c(X,\theta_X)$ such that \begin{enumerate}[label=(\roman*)] \item $f_{(F,i)}\to f$ pointwise; \item $f_{(F,i)}(\phi(y))=t_i$. \end{enumerate} Given $F\in\operatorname{Fin}(X)$ and $i\in I$, consider a family $\left\{U_x:x\in F\cup\left\{\phi(y)\right\}\right\}$ of pairwise disjoint open sets such that $x\in U_x$ for each $x$. Given $x\in F\cup\left\{\phi(y)\right\}$ consider, by regularity of $C_c(X,\theta_X)$, a function $f_{(F,i,x)}\in C_c(X,\theta_X)$ such that $\supp(f_{(F,i,x)})\subseteq U_x$ and \[f_{(F,i,x)}(x)=\begin{cases}t_i,&\text{if }x=\phi(y)\\ f(x),&\text{otherwise}.\end{cases}\] We then define $f_{(F,i)}:X\to H_X$ by \[f_{(F,i)}=f_{(F,i,x)}\text{ on each set }U_x\text{ and }f_{(F,i)}=\theta_X\text{ on }X\setminus\bigcup_{x\in F\cup\left\{\phi(y)\right\}}U_x.\] This way we obtain $f_{(F,i)}\in C_c(X,\theta_X)$. Properties (i) and (ii) above are immediate because $t_i\to t$. Since $f_{(F,i)}\to f$ pointwise, $Tf_{(F,i)}\to Tf$ pointwise. For each $F\in\operatorname{Fin}(X)$ and $i\in I$ we have \[\chi(y,t_i)=\chi(y,f_{(F,i)}(\phi(y)))=Tf_{(F,i)}(y),\] so by considering $F$ and $i$ sufficiently large we see that $\chi(y,t_i)\to Tf(y)=\chi(y,t)$ as $i\to\infty$. \end{description} We now assume further that $X$, $Y$ and $H_X$ are first countable, $H_X$ is locally path-connected and $\theta_X$ is constant. Let $c\in H_X$ be such that $\theta_X(x)=c$ for all $x\in X$. \begin{description} \item[\normalfont{(2)$\Rightarrow$(1)}] Assume that each section $\chi(y,\cdot)$ is continuous. In order to prove that $\chi$ is continuous, it suffices to prove that for any converging sequence $(y_n,t_n)\to (y,t)$ in $Y\times H_X$ we can take a subsequence $(y_{n'},t_{n'})$ such that $\chi(y_{n'},t_{n'})\to \chi(y,t)$ as $n'\to\infty$. Given a converging sequence $(y_n,t_n)\to(y,t)$, consider an open $Y'\subseteq Y$ with compact closure such that $y,y_n\in Y'$ for all $n$. We have two cases: If for a given $z\in Y$ the set $N(z)=\left\{n\in\mathbb{N}:y_n=z\right\}$ is infinite, then we necessarily have $z=y$.
Restricting the sequence $(y_n,t_n)$ to $N(y)$ and using the continuity of the section $\chi(y,\cdot)$, we obtain $\chi(y_n,t_n)=\chi(y,t_n)\to \chi(y,t)$ as $n\to\infty$, $n\in N(y)$. Now assume that none of the sets $N(z)=\left\{n\in\mathbb{N}: y_n=z\right\}$ ($z\in Y$) is infinite. We may then take a subsequence and assume that all the elements $y_n$ are distinct, and in fact never equal to $y$. Using Proposition \ref{propositiondisjointopensets} and taking another subsequence if necessary, consider pairwise disjoint open subsets $U_n\subseteq\phi(Y')$ with $\phi(y_n)\in U_n$ and $\phi(y)\not\in\bigcup_n U_n$. We then consider, by Proposition \ref{propositionconstructfunction}\ref{propositionconstructfunction(a)}, a continuous function $f:\overline{\phi(Y')}\to H_X$ such that $f(\phi(y_n))=t_n$ for all sufficiently large $n$ and $f=t$ on $\overline{\phi(Y')}\setminus \bigcup_n U_n$. In particular, $f=t$ on the boundary $\partial(\phi(Y'))$. We now need to extend $f$ to an element of $C_c(X,\theta_X)$ (this is where we use that $\theta_X=c$ is constant). We have two cases: \begin{description} \item[Case 1] $t$ is in the path-connected component of $c$: Since $H_X$ is locally path-connected, there is a continuous path $\beta:[0,1]\to H_X$ with $\beta(0)=t$ and $\beta(1)=c$. Let $g:X\to[0,1]$ be a continuous function with $g=0$ on $\phi(Y')$ and $g=1$ outside of a compact set containing $\phi(Y')$. By defining $f=\beta\circ g$ outside of $\phi(Y')$, we obtain $f\in C_c(X,\theta_X)$. ($f$ is continuous because $f=t=\beta\circ g$ on $\partial(\phi(Y'))$.) \item[Case 2] $t$ is not in the path-connected component of $c$: Since $H_X$ is locally path-connected, its path-connected components are clopen, and regularity of $C_c(X,\theta_X)$ then implies that $X$ (and thus also $Y$, via the homeomorphism $\phi$) is zero-dimensional. In particular, we could have assumed at the beginning that $Y'$ is clopen, so we simply set $f=c$ outside of $\phi(Y')$. \end{description} In any case, we obtain $f\in C_c(X,\theta_X)$ with $f(\phi(y))=t$ and $f(\phi(y_n))=t_n$ for all sufficiently large $n$, so \[\chi(y_n,t_n)=Tf(y_n)\to Tf(y)=\chi(y,t).\qedhere\] \end{description} \end{proof} \subsection{Non-vanishing bijections} Let $X$ and $Y$ be compact Hausdorff spaces, $H_X$ and $H_Y$ Hausdorff spaces, $\theta_X\in C(X,H_X)$, $\theta_Y\in C(Y,H_Y)$, and let $\mathcal{A}(X)$ and $\mathcal{A}(Y)$ be regular subsets of $C_c(X,\theta_X)$ and $C_c(Y,\theta_Y)$, respectively. \begin{definition}[\cite{MR2324919}]\label{definitionnonvanishingbijection}\index{Non-vanishing} We call a bijection $T:\mathcal{A}(X)\to\mathcal{A}(Y)$ \emph{non-vanishing} if for every $f_1,\ldots,f_n\in\mathcal{A}(X)$, \[\bigcap_{i=1}^n[f_i=\theta_X]=\varnothing\iff\bigcap_{i=1}^n[Tf_i=\theta_Y]=\varnothing.\] \end{definition} \begin{proposition}\label{propositionnonvanishingbijectionisperppisomorphism} If $T:\mathcal{A}(X)\to\mathcal{A}(Y)$ is a non-vanishing bijection, then $T$ is a $\perp\!\!\!\perp$-isomorphism. \end{proposition} \begin{proof} First note that $f\perp g$ if and only if $[f=\theta_X]\cup[g=\theta_X]=X$, or equivalently if every nonempty closed subset of $X$ intersects $[f=\theta_X]$ or $[g=\theta_X]$.
As the sets $[h=\theta_X]$ ($h\in\mathcal{A}(X)$) form a closed basis, Cantor's Intersection Theorem implies that $f\perp g$ is equivalent to the following statement: \begin{center} ``For all $h_1,\ldots,h_n\in\mathcal{A}(X)$, if $\bigcap_{i=1}^n[h_i=\theta_X]\cap[f=\theta_X]$ and $\bigcap_{i=1}^n[h_i=\theta_X]\cap[g=\theta_X]$ are both empty, then $\bigcap_{i=1}^n[h_i=\theta_X]=\varnothing$.'' \end{center} This condition is preserved under non-vanishing bijections, and so $T$ is a $\perp$-isomorphism. Now we show that $f\perp\!\!\!\perp g$ is equivalent to the following statement: ``There are finite families $\left\{a_i\right\}$, $\left\{b_j\right\}$ and $\left\{c_k\right\}$ in $\mathcal{A}(X)$ such that \begin{enumerate}[label=(\roman*)] \item $\bigcap_{i,j,k}[a_i=\theta_X]\cap[b_j=\theta_X]\cap[c_k=\theta_X]=\varnothing$; \item $a_i\perp b_j$ for all $i$ and $j$; \item $f\perp b_j$, $f\perp c_k$, $g\perp a_i$, and $g\perp c_k$ for all $i$, $j$ and $k$.'' \end{enumerate} Indeed, if $f\perp\!\!\!\perp g$, by regularity and compactness one can take finite families $\left\{a_i\right\}$ and $\left\{b_j\right\}$ satisfying (ii) and such that $\supp(f)\subseteq\bigcup_i[a_i\neq\theta_X]$ and $\supp(g)\subseteq\bigcup_j[b_j\neq\theta_X]$. Then take a finite family $\left\{c_k\right\}$ such that (iii) is satisfied and such that \[X\setminus\left(\bigcup_{i,j}[a_i\neq\theta_X]\cup[b_j\neq\theta_X]\right)\subseteq\bigcup_k[c_k\neq\theta_X],\] which implies (i). Conversely, if such families $\left\{a_i\right\},\left\{b_j\right\},\left\{c_k\right\}$ in $\mathcal{A}(X)$ exist, suppose $x\in\supp(f)\cap\supp(g)$. By item (i), there is at least one index $i$, $j$ or $k$ such that $a_i(x)$, $b_j(x)$ or $c_k(x)$ is not equal to $\theta_X(x)$. Since $x\in\supp(f)$, and $f\perp b_j$ and $f\perp c_k$ by (iii), the only possibility is that $a_i(x)\neq\theta_X(x)$ for some $i$. The same argument with $g$ in place of $f$ yields $b_j(x)\neq\theta_X(x)$ for some $j$, contradicting (ii). Similar statements hold with $Y$ in place of $X$, and all of these properties are preserved by non-vanishing bijections. Therefore $T$ is a $\perp\!\!\!\perp$-isomorphism. \end{proof} \begin{theorem}\label{theoremnonvanishing} For every non-vanishing bijection $T:\mathcal{A}(X)\to\mathcal{A}(Y)$ there is a unique homeomorphism $\phi:Y\to X$ such that $[f=\theta_X]=\phi([Tf=\theta_Y])$ for all $f\in\mathcal{A}(X)$. \end{theorem} \begin{proof} By Proposition \ref{propositionnonvanishingbijectionisperppisomorphism}, we already know that $T$ is a $\perp\!\!\!\perp$-isomorphism, so let $\phi$ be the $T$-homeomorphism. Recall (Definition \ref{definitionsigma}) that $\sigma^{\theta_X}(f)=\operatorname{int}(\overline{[f\neq\theta_X]})$ and $Z^{\theta_X}(f)=\operatorname{int}([f=\theta_X])$ for all $f\in\mathcal{A}(X)$. Let us prove that \begin{align*} f(x)=\theta_X(x)\iff&\forall h_1,\ldots,h_n\in\mathcal{A}(X),\\ &\text{if }x\not\in\bigcup_{i=1}^n\sigma^{\theta_X}(h_i)\text{ then }\bigcap_{i=1}^n[h_i=\theta_X]\cap[f=\theta_X]\neq\varnothing.\stepcounter{counterequation}\tag{\arabic{section}.\arabic{counterequation}}\label{equationnonvanishingproperty} \end{align*} Indeed, for the ``$\Rightarrow$'' direction, assume that $f(x)=\theta_X(x)$ and that $h_1,\ldots,h_n$ are such that $x\not\in\bigcup_i\sigma^{\theta_X}(h_i)$. Then $x\in\bigcap_i[h_i=\theta_X]\cap[f=\theta_X]$, so this set is nonempty. For the converse we prove the contrapositive.
Assume that $f(x)\neq\theta_X(x)$; then regularity and compactness give us $h_1,\ldots,h_n\in\mathcal{A}(X)$ such that $x\in Z^{\theta_X}(h_i)$ for all $i$ and $X\setminus[f\neq\theta_X]\subseteq\bigcup_i[h_i\neq\theta_X]$, which negates the right-hand side of \eqref{equationnonvanishingproperty}. Since $\phi(\sigma^{\theta_Y}(Th))=\sigma^{\theta_X}(h)$ for all $h\in\mathcal{A}(X)$ and $T$ is non-vanishing, the condition in \eqref{equationnonvanishingproperty} is preserved by $T$, and therefore $\phi$ has the desired property. \end{proof} \subsection{Disjointness relations} \begin{definition}\label{definitionsigma} Given $f\in C(X,H)$, we define the ($\theta$-)\emph{support} of $f$ as \[\supp^\theta(f)=\overline{[f\neq\theta]}.\] We define $\sigma^\theta(f)$ as the interior of $\supp^\theta(f)$, and $Z^\theta(f)$ as the complement of $\supp^\theta(f)$: \[\sigma^\theta(f)=\operatorname{int}\supp^\theta(f)\qquad\text{and}\qquad Z^\theta(f)=X\setminus\supp^\theta(f).\] Let $C_c(X,\theta)$ be the set of continuous functions from $X$ to $H$ with compact ($\theta$-)support. Whenever there is no risk of confusion, we will drop $\theta$ from the notation and write simply $\supp(f)$, $\sigma(f)$, $Z(f)$ and $C_c(X)$. We now define the following relations: given $f,g\in C(X,H)$, \begin{enumerate}[label=\arabic*.] \item $f\perp g$: if $[f\neq\theta]\cap[g\neq\theta]=\varnothing$; we say that $f$ and $g$ are \emph{weakly disjoint};\index{Disjoint functions} \item $f\perp\!\!\!\perp g$: if $\supp(f)\cap\supp(g)=\varnothing$; we say that $f$ and $g$ are \emph{strongly disjoint};\index{Disjoint functions!Strongly disjoint} \item $f\subseteq g$: if $\sigma(f)\subseteq\sigma(g)$; \item $f\Subset g$: if $\supp(f)\subseteq\sigma(g)$. \end{enumerate} \end{definition} Note that $Z^\theta(f)$ is the complement of $\sigma^\theta(f)$ in the lattice of regular open sets of $X$ (see \cite[Chapter 10]{MR2466574}). Also, $\sigma^\theta(f)$ is the regularization of $[f\neq\theta]$, so these two sets do not coincide in general. \begin{example} Suppose $X=H=[0,1]$, $\theta=0$ (the zero map $[0,1]\to[0,1]$) and $f=\mathrm{id}_{[0,1]}$, the identity map of $[0,1]$. Then $[f\neq\theta]=(0,1]$ but $\sigma^\theta(f)=[0,1]$. \end{example} When $H$ comes with additional structure, a particular choice of $\theta$ generally yields a suitable notion of support, and the relations above may be described in terms of this structure (this is the main idea in Section \ref{sectionconsequences}). \begin{example} If $H=\mathbb{R}$ or $\mathbb{C}$, and $\theta=0$ is the constant zero function, we obtain the usual notion of support. We may describe $\perp$ in terms of the multiplicative structure of $C_c(X)=C_c(X,0)$: $f\perp g$ if and only if $fg=0$, where $0$ is the only absorbing element of $C_c(X)$. \end{example} \begin{example} If $H=X$ and $\theta=\operatorname{id}_X$, then $\supp^\theta(f)=\overline{\left\{x\in X:f(x)\neq x\right\}}$. \end{example} \begin{example}[Kania-Rmoutil, \cite{MR3813611}]\label{examplekaniarmoutil} Let $X$, $H$ and $\theta$ be as in the beginning of this \hyperref[sectiondisjointness]{section}. Define the \emph{compatibility ordering} on $C_c(X,\theta)$ by \[f\preceq g\iff g|_{\supp^\theta(f)}=f|_{\supp^\theta(f)}.\] Then $\theta$ is the minimum of $\preceq$ in $C_c(X,\theta)$.
We can describe weak disjointness in $C_c(X,\theta)$ by \[f\perp g\iff\inf_{\preceq}\left\{f,g\right\}=\theta\text{ and }\left\{f,g\right\}\text{ has a }\preceq\text{-upper bound.}\] \end{example} We will, moreover, be interested in recovering $X$ not from the whole set $C_c(X)$, but instead from a subcollection $\mathcal{A}\subseteq C_c(X)$. We will need to assume, however, that there are enough functions in $\mathcal{A}$ to separate points of $X$. \begin{definition}\label{definitionregularperpp} Let $\mathcal{A}\subseteq C_c(X)$ be a subset containing $\theta$. Denote $\sigma(\mathcal{A})=\left\{\sigma(f):f\in\mathcal{A}\right\}$. We say that $(X,\theta,\mathcal{A})$ (or simply $\mathcal{A}$) is \begin{enumerate} \item \emph{weakly regular} if $\sigma(\mathcal{A})$ is a basis for the topology of $X$;\index{Regular family of functions!weakly regular} \item \emph{regular} if for every $x\in X$, every neighbourhood $U$ of $x$ and every $c\in H$ there is $f\in\mathcal{A}$ with $f(x)=c$ and $\supp(f)\subseteq U$.\index{Regular family of functions} \end{enumerate} \end{definition} Regularity and weak regularity should be thought of as versions of Urysohn's Lemma. We will need to analyze the relations between $\subseteq,\Subset,\perp$ and $\perp\!\!\!\perp$. The following lemma is immediate from the fact that $\sigma(f)$ is the regularization of $[f\neq\theta]$. \begin{lemma}\label{lemmadisjointnessintermsofsigma} $f\perp g$ if and only if $\sigma(f)\cap\sigma(g)=\varnothing$. \end{lemma} \begin{definition}\label{definitiondisjointcover}\index{Cover} Suppose $\mathcal{A}\subseteq C_c(X)$ is weakly regular. A family $A\subseteq\mathcal{A}$ is a \emph{cover} of an element $b\in\mathcal{A}$ if given $h\in\mathcal{A}$, $h\perp a$ for all $a\in A$ implies $h\perp b$. \end{definition} \begin{lemma}\label{lemmacovers} Suppose $\mathcal{A}$ is weakly regular, and let $A\subseteq\mathcal{A}$ and $b\in\mathcal{A}$. The following are equivalent: \begin{enumerate}[label=(\arabic*)] \item\label{lemmacovers(1)} $A$ is a cover of $b$; \item\label{lemmacovers(2)} the closure of $\bigcup_{a\in A}[a\neq\theta]$ contains $\supp(b)$. \end{enumerate} \end{lemma} \begin{proof} \ref{lemmacovers(1)}$\Rightarrow$\ref{lemmacovers(2)}: Let $x\in\supp(b)$. Take an open neighbourhood of $x$ of the form $\sigma(h)$, with $h\in\mathcal{A}$. Since $\supp(b)=\overline{\sigma(b)}$, the intersection $\sigma(h)\cap\sigma(b)$ is nonempty, and thus $h$ and $b$ are not disjoint. From $A$ being a cover, $h$ is not disjoint from some $a\in A$, which means that $\sigma(h)\cap[a\neq\theta]$ is nonempty. Since $\mathcal{A}$ is weakly regular, the sets of the form $\sigma(h)$ containing $x$ constitute a neighbourhood basis at $x$, and therefore $x$ is in the closure of $\bigcup_{a\in A}[a\neq\theta]$. \ref{lemmacovers(2)}$\Rightarrow$\ref{lemmacovers(1)}: Suppose $h\in\mathcal{A}$ is such that $h\perp a$ for all $a\in A$. This means that $(\bigcup_{a\in A}[a\neq\theta])\cap[h\neq\theta]=\varnothing$. Taking the closure of the first term and using \ref{lemmacovers(2)} we conclude that $[b\neq\theta]\cap[h\neq\theta]\subseteq\supp(b)\cap[h\neq\theta]=\varnothing$, so $h\perp b$. \end{proof} If $\mathcal{A}\subseteq C_c(X)$ and $\theta\in\mathcal{A}$, note that $\subseteq$ is a preorder on $\mathcal{A}$, with minimum $\theta$. Alternatively, $\theta$ is the only element of $\mathcal{A}$ such that $\theta\perp\theta$. Thus the function $\theta$ is uniquely determined in terms of either $\perp$ or $\subseteq$. \begin{theorem}\label{theoremrelationsrelations} Suppose $\mathcal{A}$ is weakly regular.
If $f,g\in\mathcal{A}$, then \begin{enumerate}[label=(\alph*)] \item\label{theoremrelationsrelations(a)} $f\subseteq g\iff\forall h(h\perp g\Rightarrow h\perp f)$; \item\label{theoremrelationsrelations(b)} $f\perp g\iff$ the $\subseteq$-infimum of $\left\{f,g\right\}$ is $\theta$; \item\label{theoremrelationsrelations(c)} $f\subseteq g\iff\forall h(h\Subset f\Rightarrow h\Subset g)$; \item\label{theoremrelationsrelations(d)} $f\subseteq g\iff\forall h(h\perp\!\!\!\perp g\Rightarrow h\perp\!\!\!\perp f)$; \item\label{theoremrelationsrelations(e)} $f\perp\!\!\!\perp g\iff\exists h_1,k_1,\ldots,h_n,k_n\in\mathcal{A}$ such that $\left\{h_1,\ldots,h_n\right\}$ is a cover of $f$, $h_i\Subset k_i$ and $\phantom{f\perp\!\!\!\perp g\iff...}k_i\perp g$ for all $i$; \item\label{theoremrelationsrelations(f)} $f\Subset g\iff\forall b\in\mathcal{A}$, $\exists h_1,\ldots,h_n\in\mathcal{A}$ such that $\left\{h_1,\ldots,h_n,g\right\}$ is a cover of $b$ and $h_i\perp\!\!\!\perp f$. \end{enumerate} By items \ref{theoremrelationsrelations(a)} and \ref{theoremrelationsrelations(b)}, $\perp$ and $\subseteq$ are equi-expressible (i.e., each one is completely determined by the other). By \ref{theoremrelationsrelations(c)} and \ref{theoremrelationsrelations(d)} one can recover $\subseteq$ (and hence $\perp$) from either $\Subset$ or $\perp\!\!\!\perp$, which in turn implies, from \ref{theoremrelationsrelations(e)} and \ref{theoremrelationsrelations(f)}, that $\Subset$ and $\perp\!\!\!\perp$ are also equi-expressible. \end{theorem} \begin{proof} Items \ref{theoremrelationsrelations(a)}-\ref{theoremrelationsrelations(d)} are easy consequences of the weak regularity of $\mathcal{A}$, together with the regularity of the topological space $X$ for items \ref{theoremrelationsrelations(c)}-\ref{theoremrelationsrelations(d)}. \begin{enumerate}[label=\ref{theoremrelationsrelations(\alph*)}]\setcounter{enumi}{4} \item $\Rightarrow$: Suppose $f\perp\!\!\!\perp g$. Given $x\in \supp(f)$, weak regularity of $\mathcal{A}$ and regularity of $X$ yield $h_x,k_x\in\mathcal{A}$ such that $x\in\sigma(h_x)$, $h_x\Subset k_x$ and $k_x\perp g$. Compactness of $\supp(f)$ allows us to find the elements $h_i,k_i$ we need. $\Leftarrow$: Suppose such $h_i,k_i$ exist. Then by Lemma \ref{lemmacovers}, \[\supp(f)\subseteq\bigcup_{i=1}^n\supp(h_i)\subseteq\bigcup_{i=1}^n\sigma(k_i)\subseteq X\setminus\supp(g),\] and so $f\perp\!\!\!\perp g$. \item $\Rightarrow$: Suppose $f\Subset g$ and take any $b\in\mathcal{A}$. Since $\supp(b)\setminus\sigma(g)$ is compact and does not intersect $\supp(f)$, we can take $h_1,\ldots,h_n\in\mathcal{A}$ such that $h_i\perp\!\!\!\perp f$ and $\supp(b)\setminus\sigma(g)\subseteq\bigcup_i\sigma(h_i)$, which implies that $\left\{h_1,\ldots,h_n,g\right\}$ is a cover of $b$. $\Leftarrow$: By compactness of $\supp(f)$ and $\supp(g)$, take $b_1,\ldots,b_M$ in $\mathcal{A}$ such that $\supp(f)\cup\supp(g)\subseteq\bigcup_{k=1}^M\sigma(b_k)$. For each $k$ take functions $h_i^k$ satisfying the right-hand side of \ref{theoremrelationsrelations(f)}, relative to $b_k$. Given $k$, we have $\sigma(b_k)\subseteq\bigcup_i\supp(h_i^k)\cup\supp(g)$, so by taking complements we obtain $\bigcap_iZ(h_i^k)\cap Z(g)\cap\sigma(b_k)=\varnothing$, or equivalently $\bigcap_iZ(h_i^k)\cap\sigma(b_k)\subseteq\sigma(b_k)\setminus Z(g)\subseteq\supp(g)$. Taking interiors on both sides yields $\bigcap_iZ(h_i^k)\cap\sigma(b_k)\subseteq\sigma(g)$.
Now from $h_i^j\perp\!\!\!\perp f$ we obtain \begin{align*} \supp(f)&\subseteq\bigcap_{i,j}(X\setminus\supp(h_i^j))\cap\bigcup_{k=1}^M\sigma(b_k)\subseteq\bigcup_{k=1}^M\left[\bigcap_i (X\setminus\supp(h_i^k))\cap\sigma(b_k)\right]\subseteq \sigma(g), \end{align*} so $f\Subset g$. \end{enumerate} \end{proof} \begin{remark} One should be careful with the connections between the pairs of relations $(\perp,\perp\!\!\!\perp)$ and $(\subseteq,\Subset)$. For example, $\perp$ and $\perp\!\!\!\perp$ may coincide while $\subseteq$ and $\Subset$ do not, and vice-versa; see the example below. \end{remark} \begin{example} Let $X=H=\mathbb{R}$ and $\theta=0$, so that we are dealing with the usual notion of support. Let $\left\{(a_n,b_n):n\in\mathbb{N}\right\}$ (where $a_n<b_n$) be a basis of open intervals for the usual topology of $\mathbb{R}$ with $|b_n-a_n|\to 0$. Let $\left\{p_n:n\in\mathbb{N}\right\}$ be a one-to-one enumeration of the prime numbers. For each $n$, let $\widetilde{a}_n$ and $\widetilde{b}_n$ be, respectively, the largest and the smallest rational numbers with denominator $p_n$ (as reduced fractions) which satisfy $\widetilde{a}_n\leq a_n<b_n\leq \widetilde{b}_n$ -- namely, $\widetilde{a}_n={\lfloor a_np_n\rfloor}/{p_n}$ and $\widetilde{b}_n={\lceil b_np_n\rceil}/{p_n}$. In particular, $|\widetilde{a}_n-a_n|+|\widetilde{b}_n-b_n|\leq 2/p_n\to 0$ (since the enumeration of the primes is one-to-one), and thus the sets $U_n:=(\widetilde{a}_n,\widetilde{b}_n)$ also form a basis of $\mathbb{R}$. For each $n$, let $f_n\in C_c(\mathbb{R})$ with $\sigma(f_n)=U_n$, e.g. $f_n(x)=\max(0,(x-\widetilde{a}_n)(\widetilde{b}_n-x))$, and let $\mathcal{A}=\left\{f_n:n\in\mathbb{N}\right\}$, which is weakly regular. Then $\perp$ and $\perp\!\!\!\perp$ coincide on $\mathcal{A}$, as do $\subseteq$ and $\Subset$, since the boundaries of all $U_n$ are pairwise disjoint. Letting $V=(\widetilde{a}_1,\widetilde{b}_1+1)$ and $g_V$ be a compactly supported continuous function with $\sigma(g_V)=V$, then $\perp$ and $\perp\!\!\!\perp$ still coincide in $\mathcal{A}\cup\left\{g_V\right\}$, however $\subseteq$ and $\Subset$ do not, since $f_1\subseteq g_V$ but $f_1\not\Subset g_V$. Alternatively, set $W=(\widetilde{b}_1,\widetilde{b}_1+1)$ and let $g_W$ be any compactly supported continuous function with $\sigma(g_W)=W$. Then $\subseteq$ and $\Subset$ still coincide in $\mathcal{A}\cup\left\{g_W\right\}$, however $\perp$ and $\perp\!\!\!\perp$ do not, because $f_1\perp g_W$ but not $f_1\perp\!\!\!\perp g_W$. \end{example} \subsection{$\perp\!\!\!\perp$-ideals} \input{perppideals} \subsection{The main theorems} The main theorem (\ref{maintheorem}) now follows easily from the previous subsection. Fix two locally compact Hausdorff spaces $X$ and $Y$, and for $Z\in\left\{X,Y\right\}$ a Hausdorff space $H_Z$, a continuous map $\theta_Z:Z\to H_Z$, and a subset $\mathcal{A}(Z)\subseteq C_c(Z,\theta_Z)$. \begin{definition} We call a map $T:\mathcal{A}(X)\to\mathcal{A}(Y)$ a \emph{$\perp\!\!\!\perp$-morphism} if $f\perp\!\!\!\perp g$ implies $Tf\perp\!\!\!\perp Tg$; $T$ is a \emph{$\perp\!\!\!\perp$-isomorphism} if it is bijective and both $T$ and $T^{-1}$ are $\perp\!\!\!\perp$-morphisms. $\perp$-, $\subseteq$- and $\Subset$-isomorphisms are defined analogously.\index{$\perp\!\!\!\perp$-morphism}\index{$\perp$-morphism}\index{$\subseteq$-morphism}\index{$\Subset$-morphism} \end{definition} By Theorem \ref{theoremrelationsrelations}\ref{theoremrelationsrelations(a)}, $\perp$-morphisms coincide with $\subseteq$-morphisms.
We obtain: \begin{theorem}\label{theoremdisjoint} Suppose $(X,\theta_X,\mathcal{A}(X))$ and $(Y,\theta_Y,\mathcal{A}(Y))$ are weakly regular and $T:\mathcal{A}(X)\to\mathcal{A}(Y)$ is a $\perp$-isomorphism. Let $f,g\in\mathcal{A}(X)$. Then $\sigma(f)\subseteq \sigma(g)$ if and only if $\sigma(Tf)\subseteq \sigma(Tg)$. In particular, $Z(f)=\varnothing$ if and only if $Z(Tf)=\varnothing$. \end{theorem} Assume $(X,\theta,\mathcal{A}(X))$ is weakly regular. Let $\widehat{\mathcal{A}(X)}$ be the collection of maximal $\perp\!\!\!\perp$-ideals of $\mathcal{A}(X)$, and endow it with the topology generated by the sets \[U(f)=\left\{I\in\widehat{\mathcal{A}(X)}:\exists g\Subset f\text{ such that }g\not\in I\right\},\qquad f\in\mathcal{A}(X).\] By Theorem \ref{theoremperppideals}, we obtain a bijection $\kappa_X:X\to\widehat{\mathcal{A}(X)}$, $\kappa_X(x)=\mathbf{I}(X\setminus\left\{x\right\})$. Since for all $x\in X$ and $f\in\mathcal{A}(X)$, \[x\in\sigma(f)\iff\exists g\Subset f\text{ such that } x\in\supp(g)\iff\kappa_X(x)\in U(f),\] we have $\kappa_X(\sigma(f))=U(f)$, which proves that $\kappa_X$ is a homeomorphism. Performing a similar procedure with $Y$ and using standard duality arguments, we obtain our main theorem: \begin{theorem}\label{maintheorem} If $\mathcal{A}(X)$ and $\mathcal{A}(Y)$ are weakly regular and $T:\mathcal{A}(X)\to\mathcal{A}(Y)$ is a $\perp\!\!\!\perp$-isomorphism then there is a unique homeomorphism $\phi:Y\to X$ such that $\phi(\supp(Tf))=\supp(f)$ for all $f\in \mathcal{A}(X)$ (equivalently, $\phi(\sigma(Tf))=\sigma(f)$, or $\phi(Z(Tf))=Z(f)$, for all $f\in\mathcal{A}(X)$). \end{theorem} \begin{definition}\label{definitionthomeomorphism}\index{$T$-homeomorphism} The unique homeomorphism $\phi$ associated with $T$ as in Theorem \ref{maintheorem} will be called the \emph{$T$-homeomorphism}. \end{definition} We finish this section by proving that the hypothesis that $T$ is a $\perp\!\!\!\perp$-isomorphism in Theorem \ref{maintheorem} cannot be weakened to a $\perp$-isomorphism in general (Corollaries \ref{corollary01stoneperp} and \ref{corollary01circleperp}). We will consider only real-valued functions and the usual notion of support (i.e., $C_c(X)=C_c(X,0)$ for a space $X$). Let us fix some notation, and recall some basic facts about Stone duality. We refer to \cite{MR1507106,dml124080} for details (see also \cite[II.4.4]{MR861951}). \begin{named}{Notation} Given a Hausdorff space $X$, denote by $\operatorname{RO}_K(X)$ the generalized Boolean algebra of regular open subsets of $X$ with compact closure, and by $\operatorname{KO}(X)$ the generalized Boolean algebra of compact-open subsets of $X$. Given $A\in \operatorname{RO}_K(X)$, we define $\Sigma_X(A)=\left\{f\in C_c(X):\sigma(f)=A\right\}$. Given a generalized Boolean algebra $B$, let $\operatorname{Spec}(B)$ be the spectrum of $B$ (the topological space of ultrafilters on $B$ with the usual power set topology). \end{named} \begin{named}{Stone duality} The usual form of Stone duality states that the category of Stone (i.e., zero-dimensional, compact Hausdorff) spaces is dual to that of Boolean algebras. This extends to zero-dimensional \emph{locally} compact Hausdorff spaces and \emph{generalized} Boolean algebras, and in particular we obtain: Every zero-dimensional, locally compact Hausdorff space $X$ is (naturally) homeomorphic to $\operatorname{Spec}(\operatorname{KO}(X))$, and every generalized Boolean algebra $B$ is (naturally) isomorphic to $\operatorname{KO}(\operatorname{Spec}(B))$. For a more general version, see \cite{bicestarlinglcsd}.
\end{named} In order to find non-homeomorphic spaces $X$ and $Y$ such that $C_c(X)$ and $C_c(Y)$ are $\perp$-isomorphic, we need the following result: \begin{theorem}\label{theoremperpisnotenough} Suppose that: \begin{enumerate}[label=(\roman*)] \item\label{theoremperpisnotenough(i)} $X$ and $Y$ are separable, locally compact Hausdorff spaces; \item\label{theoremperpisnotenough(ii)} For all nonempty $A\in\operatorname{RO}_K(X)$ and $B\in\operatorname{RO}_K(Y)$, $\Sigma_X(A)$ and $\Sigma_Y(B)$ are nonempty; \item\label{theoremperpisnotenough(iii)} $\varphi:\operatorname{RO}_K(X)\to\operatorname{RO}_K(Y)$ is an order isomorphism (with respect to set inclusion). \end{enumerate} Then $C_c(X)$ and $C_c(Y)$ are $\perp$-isomorphic. \end{theorem} \begin{proof} Given $A\in\operatorname{RO}_K(X)$, the sets $\Sigma_X(A)$ and $\Sigma_Y(\varphi(A))$ have the same cardinality: if $A=\varnothing$ they are both singletons, and otherwise they both have cardinality $2^{\aleph_0}$, by \ref{theoremperpisnotenough(ii)} and separability of $X$ and $Y$. We may thus consider a bijection $T_A:\Sigma_X(A)\to\Sigma_Y(\varphi(A))$. Then the map \[T:C_c(X)\to C_c(Y),\qquad T(f)=T_{\sigma(f)}(f)\] is a $\perp$-isomorphism. \end{proof} The following are technical lemmas which will allow us to construct $X$ and $Y$ satisfying the hypotheses of the theorem above. \begin{lemma}\label{lemmawhenregularcompactopenareclopen} Suppose that $\mathfrak{C}$ is a zero-dimensional, locally compact Hausdorff space and $\operatorname{KO}(\mathfrak{C})$ is conditionally complete (i.e., every \emph{bounded} family has a supremum). Then $\operatorname{RO}_K(\mathfrak{C})=\operatorname{KO}(\mathfrak{C})$. \end{lemma} \begin{proof} The only non-trivial part is proving $\operatorname{RO}_K(\mathfrak{C})\subseteq\operatorname{KO}(\mathfrak{C})$. Given $A\in\operatorname{RO}_K(\mathfrak{C})$, the family $\left\{V\in\operatorname{KO}(\mathfrak{C}):V\subseteq A\right\}$ is bounded (in $\operatorname{KO}(\mathfrak{C})$), so let $U$ be its supremum in $\operatorname{KO}(\mathfrak{C})$. As $\mathfrak{C}$ is zero-dimensional we have $A\subseteq U$. Let us prove the converse inclusion. If $W\in\operatorname{KO}(\mathfrak{C})$ and $W\subseteq U\setminus\overline{A}$, then $A\subseteq U\setminus W$, from which it follows that $U\subseteq U\setminus W$, so $W=\varnothing$. This proves that $U\setminus\overline{A}=\varnothing$, because $\mathfrak{C}$ is zero-dimensional, and so $U\subseteq\overline{A}$. However, $U$ is clopen and $A$ is regular open, which implies $A=U\in\operatorname{KO}(\mathfrak{C})$. \end{proof} \begin{lemma}\label{gprokseparableisseparable} If $X$ is a separable locally compact Hausdorff space, then $\mathfrak{C}=\operatorname{Spec}(\operatorname{RO}_K(X))$ is separable. \end{lemma} \begin{proof} Let $\left\{x_n:n\in\mathbb{N}\right\}$ be a countable dense subset of $X$. For each $n$, by Zorn's Lemma there exists $G_n\in\mathfrak{C}$ such that $\left\{U\in\operatorname{RO}_K(X):x_n\in U\right\}\subseteq G_n$. Given a nonempty basic open subset $[A]$ of $\mathfrak{C}$, where $A\in\operatorname{RO}_K(X)$, find $n\in\mathbb{N}$ with $x_n\in A$, so $A\in G_n$ and therefore $G_n\in[A]$. Thus $\left\{G_n:n\in\mathbb{N}\right\}$ is dense in $\mathfrak{C}$. \end{proof} \begin{lemma}\label{lemmaregularopeninsecondcountableissigma} If $X$ is a second-countable locally compact Hausdorff space and $A\in\operatorname{RO}_K(X)$, then there is $f\in C_c(X)$ such that $\sigma(f)=A$.
\end{lemma} \begin{proof} First choose a countable family of compact subsets $K_n\subseteq A$ such that $\bigcup_nK_n=A$. For each $n$ we can, by Urysohn's Lemma and regularity of $X$, find a continuous function $f_n:X\to[0,1]$ such that $f_n(k)=1$ for all $k\in K_n$ and $\supp(f_n)\subseteq A$. Letting $f=\sum_{n=1}^\infty 2^{-n}f_n$ we obtain $[f\neq 0]=A$, and hence $\sigma(f)=A$ because $A$ is regular open. \end{proof} Given a locally compact Hausdorff space $X$, let $\mathfrak{C}=\operatorname{Spec}(\operatorname{RO}_K(X))$. The generalized Boolean algebra $\operatorname{RO}_K(X)$ is conditionally complete, so by Stone duality, $\operatorname{KO}(\mathfrak{C})$ is also conditionally complete, and hence coincides with $\operatorname{RO}_K(\mathfrak{C})$ by Lemma \ref{lemmawhenregularcompactopenareclopen}. As a consequence of Lemmas \ref{lemmawhenregularcompactopenareclopen}, \ref{gprokseparableisseparable} and \ref{lemmaregularopeninsecondcountableissigma}, and Theorem \ref{theoremperpisnotenough} when $X=[0,1]$, we conclude: \begin{corollary}\label{corollary01stoneperp} There exists a zero-dimensional, compact Hausdorff topological space $\mathfrak{C}$ (namely, $\mathfrak{C}=\operatorname{Spec}(\operatorname{RO}_K([0,1]))$) -- which is, in particular, not homeomorphic to $[0,1]$ -- such that $C(\mathfrak{C})$ and $C([0,1])$ are $\perp$-isomorphic. \end{corollary} Note that, in the corollary above, the generalized Boolean algebra $\operatorname{RO}_K([0,1])$ is uncountable, thus $\mathfrak{C}=\operatorname{Spec}(\operatorname{RO}_K([0,1]))$ is not second-countable. For our second example, we will consider only second-countable spaces. In the next lemma, we denote by $\operatorname{int}_Z$ and $\operatorname{cl}_Z$ the interior and closure operators on subsets of a topological space $Z$, and by $\operatorname{RO}(Z)$ the Boolean algebra of regular open subsets of $Z$. \begin{lemma}\label{lemmamorphismregularopenofsubset} Let $X$ be a topological space and $U$ an open set of $X$. Then \begin{enumerate}[label=(\alph*)] \item\label{lemmamorphismregularopenofsubset(a)} If $A\in\operatorname{RO}(X)$ then $A\cap U\in\operatorname{RO}(U)$. \item\label{lemmamorphismregularopenofsubset(b)} The map \[\varphi_U:\operatorname{RO}(X)\to\operatorname{RO}(U),\qquad \varphi_U(A)=A\cap U\] is order-preserving and surjective; the map $\zeta_U:A\mapsto \operatorname{int}_X(\operatorname{cl}_X(A))$ is an order-preserving right inverse to $\varphi_U$; \item\label{lemmamorphismregularopenofsubset(c)} $\varphi_U$ is an order isomorphism if and only if $U$ is dense in $X$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[label=(\alph*)] \item Given $A\in\operatorname{RO}(X)$, since $U$ is open, we have \[\operatorname{int}_U(\operatorname{cl}_U(A\cap U))=\operatorname{int}_U(\operatorname{cl}_X(A)\cap U)=\operatorname{int}_X(\operatorname{cl}_X(A))\cap U=A\cap U,\] and this proves that $A\cap U\in\operatorname{RO}(U)$. \item The last statement is the only non-trivial part. If $A\in\operatorname{RO}(U)$, we again use the fact that $U$ is open, as in item \ref{lemmamorphismregularopenofsubset(a)}, to obtain \[\varphi_U(\zeta_U(A))=\operatorname{int}_X(\operatorname{cl}_X(A))\cap U=\operatorname{int}_U(\operatorname{cl}_X(A)\cap U)=\operatorname{int}_U(\operatorname{cl}_U(A))=A,\] as desired. \item If $U$ is not dense in $X$, then there exists a nonempty set $A\in\operatorname{RO}(X)$ which is disjoint from $U$ (e.g. $A=\operatorname{int}_X(X\setminus U)$), so $\varphi_U(A)=\varnothing=\varphi_U(\varnothing)$, and thus $\varphi_U$ is not injective.
Now assume that $U$ is dense in $X$, and let us prove that the map $\zeta_U$ of item \ref{lemmamorphismregularopenofsubset(b)} is a left inverse of $\varphi_U$. Given $A\in\operatorname{RO}(X)$, since $U$ is dense in $X$, \[\zeta_U(\varphi_U(A))=\operatorname{int}_X(\operatorname{cl}_X(A\cap U))=\operatorname{int}_X(\operatorname{cl}_X(A))=A.\qedhere\] \end{enumerate} \end{proof} Let $\mathbb{S}^1=\left\{z\in\mathbb{C}:|z|=1\right\}$ be the complex unit circle. \begin{corollary}\label{corollary01circleperp} $C([0,1])$ and $C(\mathbb{S}^1)$ are $\perp$-isomorphic. \end{corollary} \begin{proof} Let $X=(0,1)$ and $Y=\mathbb{S}^1\setminus\left\{1\right\}$. Then $X$ and $Y$ are homeomorphic, and two applications of Lemma \ref{lemmamorphismregularopenofsubset} imply that $\operatorname{RO}([0,1])$ and $\operatorname{RO}(\mathbb{S}^1)$ are order-isomorphic. Lemmas \ref{gprokseparableisseparable} and \ref{lemmaregularopeninsecondcountableissigma}, and Theorem \ref{theoremperpisnotenough}, imply that $C([0,1])$ and $C(\mathbb{S}^1)$ are $\perp$-isomorphic. \end{proof} \section*{Introduction} \input{dcfintroduction} \section{Disjointness and $\perp\!\!\!\perp$-isomorphisms}\label{sectiondisjointness} \input{disjointness} \section{Basic maps}\label{sectionbasicmaps} \input{basicmaps} \section{Consequences}\label{sectionconsequences} \input{recoveringknownresults} \input{newresults} \subsection*{Acknowledgements} This work is part of the author's PhD thesis at the University of Ottawa, written under the supervision of Thierry Giordano and Vladimir Pestov, who provided many useful insights and suggestions on the subject at hand. \bibliographystyle{abbrv} \subsection{$L^1$-spaces}\label{subsectionl1spaces} Let $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$ be fixed. Given a topological space $X$, $C_c(X)$ will denote the space of $\mathbb{K}$-valued compactly supported continuous functions on $X$, where supports are the usual ones: $\supp(f)=\overline{[f\neq 0]}$. Following \cite{MR924157}, a Borel measure $\mu$ on $X$ will be called \emph{regular} if \begin{itemize} \item $\mu$ is locally finite; \item For every Borel $E\subseteq X$, $\mu(E)=\inf\left\{\mu(V):E\subseteq V,\ V\text{ open}\right\}$; \item For every open $U\subseteq X$ with $\mu(U)<\infty$, $\mu(U)=\sup\left\{\mu(K):K\subseteq U,\ K\text{ compact}\right\}$. \end{itemize} Recall that the \emph{support} of $\mu$ is the set of points $x\in X$ every neighbourhood of which has positive measure. We say that $\mu$ is \emph{fully supported} (on $X$) if the support of $\mu$ coincides with $X$, i.e., if every nonempty open subset has positive measure. \begin{lemma}\label{lemmadisjointnessl1} Let $X$ be a locally compact Hausdorff space and $\mu$ a fully supported Borel measure on $X$ such that every compact subset of $X$ has finite measure. If $\Vert\cdot\Vert_1$ denotes the corresponding $L^1$-norm, then for all $f,g\in C_c(X)$, $f\perp g$ if and only if \[\Vert Af+Bg\Vert_1=|A|\Vert f\Vert_1+|B|\Vert g\Vert_1\qquad\forall A,B\in\mathbb{K}\stepcounter{counterequation}\tag{\arabic{section}.\arabic{counterequation}}\label{equationlemmadisjointnessl1}\] \end{lemma} \begin{proof} If $f\perp g$ then $|Af+Bg|=|A||f|+|B||g|$, so the condition described above is immediate. For the converse we prove the contrapositive: If $f$ and $g$ are not (weakly) disjoint, then, up to taking multiples, there is $x\in X$ such that $f(x)=g(x)=1$. On a sufficiently small open neighbourhood $U$ of $x$ we have $|f-g|<|f|/2$ and $|f|>1/2$, so $\Vert f-g\Vert_1\leq\Vert f\Vert_1+\Vert g\Vert_1-\mu(U)/2$.
Since $\mu$ has full support, this is strictly smaller than $\Vert f\Vert_1+\Vert g\Vert_1$, which negates the statement in \eqref{equationlemmadisjointnessl1}.\qedhere \end{proof} \begin{theorem}\label{theoremdisjointnessl1} Let $X$ and $Y$ be locally compact Hausdorff spaces with fully supported regular Borel measures $\mu_X$ and $\mu_Y$, and let $T:C_c(X)\to C_c(Y)$ be a linear isomorphism which is isometric with respect to the $L^1$-norms. Then there exists a homeomorphism $\phi:Y\to X$ and a continuous function $p:Y\to\mathbb{S}^1$ such that \[Tf(y)=p(y)\frac{d\mu_X}{d(\phi_*\mu_Y)}(\phi(y))f(\phi(y))\] for all $f\in C_c(X)$ and $y\in Y$. \end{theorem} \begin{proof} By the previous lemma, $T$ is a $\perp$-isomorphism, so Jarosz' Theorem (\ref{theoremjarosz}) implies that there are a homeomorphism $\phi:Y\to X$ and a non-vanishing continuous function $P:Y\to\mathbb{C}$ such that \[Tf(y)=P(y)f(\phi(y))\qquad\text{for all }f\in C_c(X)\text{ and }y\in Y.\] Now using the fact that $T$ is isometric, we have, for every $f\in C_c(X)$, \[\int_X|f|d\mu_X=\int_Y|Tf|d\mu_Y=\int_Y|P||f\circ\phi|d\mu_Y=\int_X|P\circ\phi^{-1}||f|d(\phi_*\mu_Y),\] which means that $|P\circ\phi^{-1}|$ is a continuous instance of the Radon-Nikodym derivative $d\mu_X/d(\phi_*\mu_Y)$. Since $p=P/|P|:Y\to\mathbb{S}^1$ is continuous, we obtain the result. \end{proof} \subsection{Measured groupoid convolution algebras}\label{subsectionwendelstheorem} In the next three results, we will focus on convolution algebras of topological groupoids. First, we will consider \emph{measured groupoids} in the sense of Hahn. See \cite{MR496796,MR496797,MR584266,MR1444088,MR0427598}. Note that throughout this section we consider only regular measures. Recall that a \emph{groupoid} $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is a small category with inverses, and a \emph{topological groupoid} is a groupoid endowed with a topology making the product and inversion maps continuous. The \emph{source} and \emph{range} maps on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ are defined as $\so(a)=a^{-1}a$ and $\ra(a)=aa^{-1}$, respectively. The \emph{unit space} of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]=\so(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, and is identified with the object space of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$. We denote by $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[2]=\left\{(a,b)\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}\times\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}:\so(a)=\ra(b)\right\}$ the set of \emph{composable pairs}, i.e., pairs $(a,b)$ for which the product $ab$ is defined. Given $x,y\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$, we denote $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^y=\ra^{-1}(y)$, $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}_x=\so^{-1}(x)$, and $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}_x^y=\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^y\cap\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}_x$. We call $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}_x^x$ the \emph{isotropy group} at $x$. The product of two subsets $A,B\subseteq\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is $AB=\left\{ab:(a,b)\in(A\times B)\cap\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[2]\right\}$. Common examples of topological groupoids are: Equivalence relations, topological groups (where $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ is a singleton) and topological spaces (where $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}=\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$).
More generally, every continuous group action induces a \emph{transformation groupoid}. Initially, given a locally compact Hausdorff topological groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, we consider $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, the space of real or complex-valued, compactly supported, continuous functions on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, simply as a vector space (with pointwise operations). Recall the notion of a \emph{Haar system}: \begin{definition}[{\cite[Definition 2.2]{MR584266}}]\label{definitionhaarsystem} A (continuous) \emph{left Haar system} for a locally compact Hausdorff topological groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is a collection of regular Borel measures $\lambda=\left\{\lambda^x:x\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]\right\}$ on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ such that \begin{enumerate}[label=(\roman*)] \item\label{haarsystem1} For each $x\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$, $\lambda^x$ has support contained in $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^x$; \item\label{haarsystem2} (left invariance) For each $a\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, $\lambda^{\ra(a)}(aE)=\lambda^{\so(a)}(E)$ for every compact $E\subseteq \@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\so(a)}$; \item\label{haarsystem3} (continuity) For each $f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, the map $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]\to\mathbb{C}$, $x\mapsto \int fd\lambda^x$, is continuous. \end{enumerate} We will not make any distinction of whether each $\lambda^x$ is considered as a measure on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ or as a measure on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^x$. We say that $\lambda$ is \emph{fully supported} if the support of $\lambda^x$ is all of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^x$ for all $x\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$. \end{definition} Left invariance of $\lambda$ implies that for all $a\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and $f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\ra(a)})$ \[\int f(s)d\lambda^{\ra(a)}(s)=\int f(at)d\lambda^{\so(a)}(t)\] and we endow $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ with the convolution product \[(fg)(a)=\int f(s)g(s^{-1}a)d\lambda^{\ra(a)}(s)=\int f(at)g(t^{-1})d\lambda^{\so(a)}(t),\] which makes $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ an algebra. It follows that for all $f,g\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, \[\supp(fg)\subseteq\supp(f)\supp(g).\stepcounter{counterequation}\tag{\arabic{section}.\arabic{counterequation}}\label{equationinclusionsupportproduct}\] \begin{definition} Let $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ be a locally compact Hausdorff topological groupoid with a left Haar system $\lambda$. Given a regular Borel measure $\mu$ on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$, the measure \emph{induced} by $\mu$ and $\lambda$ is the unique regular Borel measure $(\lambda\circ\mu)$ on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ which satisfies \[(\lambda\circ\mu)(E)=\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]}\lambda^x(E)d\mu(x)\] for every compact $E\subseteq\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$. (The existence of $(\lambda\circ\mu)$ is guaranteed by the Riesz-Markov-Kakutani Representation Theorem.)
\end{definition} If $\mu$ is fully supported on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ and $\lambda$ is a fully supported Haar system on a locally compact Hausdorff groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, then $(\lambda\circ\mu)$ is fully supported on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$. The following lemma will allow us to verify that certain maps are groupoid morphisms. \begin{lemma}\label{lemmaproduct} Given a topological groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ with $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ Hausdorff and $a,b\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, we have $\so(a)=\ra(b)$ if and only if for every pair of neighbourhoods $U$ of $a$ and $V$ of $b$ the product $UV$ is nonempty. \end{lemma} \begin{proof} From the second condition one can construct two nets $(a_i)_i$ and $(b_i)_i$ (over the same ordered set) converging to $a$ and $b$, respectively, such that $\so(a_i)=\ra(b_i)$, and so $\so(a)=\ra(b)$ because $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ is Hausdorff. The reverse implication is trivial. \end{proof} \begin{lemma}\label{lemmaderivativehaar} If $\lambda$ and $\mu$ are continuous Haar systems on a locally compact Hausdorff topological groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ such that the Radon-Nikodym derivatives $D^x=\frac{d\lambda^x}{d\mu^x}$ exist for all $x\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$, then $D$ is invariant in the sense that for all $a\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and $\mu^{\so(a)}$-almost every $g\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\so(a)}$, $D^{\ra(a)}(ag)=D^{\so(a)}(g)$. \end{lemma} \begin{proof} Using invariance of $\mu$ and $\lambda$, we have, for every $f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\so(a)})$, \begin{align*} \int f(t) D^{\ra(a)}(at)d\mu^{\so(a)}(t)&=\int f(a^{-1}s)D^{\ra(a)}(s)d\mu^{\ra(a)}(s)\\ &=\int f(a^{-1}s)d\lambda^{\ra(a)}(s)=\int f(t)d\lambda^{\so(a)}(t). \end{align*} Thus $t\mapsto D^{\ra(a)}(at)$ satisfies the defining property of the Radon-Nikodym derivative $d\lambda^{\so(a)}/d\mu^{\so(a)}=D^{\so(a)}$, hence these functions coincide $\mu^{\so(a)}$-a.e.\qedhere \end{proof} Now we prove that the convolution algebra $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, together with the $L^1$-norm coming from $\lambda\circ\mu$, where $\lambda$ is a fully supported Haar system on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and $\mu$ is a fully supported measure on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$, completely determines the triple $(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},\lambda,\mu)$, up to isomorphism (compare to \cite{MR2102633}). We denote by $\mathbb{S}^1$ the \emph{circle group} (of complex numbers with absolute value $1$ under multiplication). \begin{theorem}\label{theoremmeasuredgroupoidconvolutionalgebra} Let $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and $\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$ be locally compact Hausdorff groupoids. For each $Z\in\left\{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\right\}$, let $\lambda_Z$ be a fully supported Haar system on $Z$, and $\mu_Z$ a fully supported regular Borel measure on $Z^{(0)}$.
If $T:C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})\to C_c(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak})$ is an algebra isomorphism which is isometric with respect to the $L^1$-norms of $(\lambda_Z\circ \mu_Z)$ ($Z\in\left\{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\right\}$), then there are a topological groupoid isomorphism $\phi:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\to\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and a continuous morphism $p:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\to\mathbb{S}^1$ such that \[Tf(h)=p(h)D(\phi(h))f(\phi(h))\] where $D$ is a continuous instance of the Radon-Nikodym derivative \[D(a)=\frac{d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\ra(a)}}{d(\phi_*\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^{\phi^{-1}(\ra(a))})}(a)\] and in this case, $\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}=\phi_*\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}$. \end{theorem} \begin{proof} Again applying Lemma \ref{lemmadisjointnessl1} and Jarosz' Theorem (\ref{theoremjarosz}), we can find a homeomorphism $\phi:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\to\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and a continuous non-vanishing scalar function $P$ such that \[Tf(h)=P(h) f(\phi(h))\qquad\text{for all } f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})\text{ and }h\in\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}.\] Let us check that $\phi$ is a groupoid morphism. Suppose $(a,b)\in\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}[2]$, and consider neighbourhoods $U$ and $V$ of $\phi(a)$ and $\phi(b)$, respectively. Choose non-negative functions $f_a,f_b\in C_c(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak})$ such that \[\supp(f_a)\subseteq\phi^{-1}(U),\quad\supp(f_b)\subseteq \phi^{-1}(V)\quad\text{and}\quad f_a(a)=f_b(b)=1.\] Then $ab\in\supp(f_af_b)$, because $\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}$ has full support, and so $\phi(ab)\in\supp(T^{-1}(f_af_b))$. As $\phi$ is the $T$-homeomorphism and $T$ is an isomorphism, the inclusion in \eqref{equationinclusionsupportproduct} implies $\phi(ab)\in UV$. By Lemma \ref{lemmaproduct}, the product $\phi(a)\phi(b)$ is defined, and moreover, continuity of the product implies that every neighbourhood of $\phi(a)\phi(b)$ contains $\phi(ab)$. Since $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is Hausdorff, $\phi(ab)=\phi(a)\phi(b)$. Therefore $\phi$ is a morphism and a homeomorphism, thus a topological groupoid isomorphism.
If $f,g\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ and $c\in\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$, then on one hand \begin{align*} T(fg)(c)&=(TfTg)(c)=\int_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}^{\ra(c)}} Tf(t)Tg(t^{-1}c)d\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^{\ra(c)}(t)\\ &=\int_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}^{\ra(c)}} P(t)f(\phi(t))P(t^{-1}c)g(\phi(t^{-1}c))d\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^{\ra(c)}(t)\\ &=\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(\ra(c))}} P(\phi^{-1}(s))f(s)P(\phi^{-1}(s)^{-1}c)g(s^{-1}\phi(c))d(\phi_*\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^{\ra(c)})(s)\stepcounter{counterequation}\tag{\arabic{section}.\arabic{counterequation}}\label{theoremmeasuredgroupoidconvolutionalgebratodecomposeP1} \end{align*} and on the other \begin{align*} T(fg)(c)&=P(c)(fg)(\phi(c))=P(c)\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(\ra(c))}} f(t)g(t^{-1}\phi(c))d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\phi(\ra(c))}(t)\stepcounter{counterequation}\tag{\arabic{section}.\arabic{counterequation}}\label{theoremmeasuredgroupoidconvolutionalgebratodecomposeP2} \end{align*} Now let $f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(\ra(c))})$ be an arbitrary non-negative function. Define $g\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}_{\phi(\so(c))})$ by $g(t)=f(\phi(c)t^{-1})$. Extending $f$ and $g$ arbitrarily to elements of $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, Equations \eqref{theoremmeasuredgroupoidconvolutionalgebratodecomposeP1} and \eqref{theoremmeasuredgroupoidconvolutionalgebratodecomposeP2} become \[\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(\ra(c))}} P(\phi^{-1}(s))P(\phi^{-1}(s)^{-1}c)|f(s)|^2d\phi_*\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^{\ra(c)}(s)=P(c)\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(\ra(c))}} |f(s)|^2d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\phi(\ra(c))}(s)\stepcounter{counterequation}\tag{\arabic{section}.\arabic{counterequation}}\label{equationtheoremrenaultorsimilar}\] for all non-negative $f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(\ra(c))})$ and all $c\in \@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$.
Define $D:\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}\to\mathbb{C}$ (or $\mathbb{R}$ in the real case) by \[D(s)=\frac{P(\phi^{-1}(s))P(\phi^{-1}(s)^{-1})}{P(\phi^{-1}(\ra(s)))}.\] Using Equation \eqref{equationtheoremrenaultorsimilar} with $y=\ra(c)$ in place of $c$, we obtain \[\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(y)}} D(s)|f(s)|^2d(\phi_*\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^{y})(s)=\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(y)}}|f(s)|^2d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\phi(y)}(s)\] for all $f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(y)})$, thus $D$ is a continuous instance of the Radon-Nikodym derivative \[D(s)=\frac{d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\phi(y)}}{d(\phi_*\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^{y})}(s)=\frac{d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\phi(\ra(c))}}{d(\phi_*\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^{\ra(c)})}(s).\] Now applying this to Equation \eqref{equationtheoremrenaultorsimilar}, and using regularity of all measures involved, we conclude that for $\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\phi(\ra(c))}$-a.e.\ $s\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(\ra(c))}$ \[P(\phi^{-1}(s))P(\phi^{-1}(s)^{-1}c)=D(s)P(c)\] and since all functions involved are continuous, and $\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\phi(\ra(c))}$ has full support, the same equality is actually valid for \emph{all} $s\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(\ra(c))}$. Equivalently, for all $c\in\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$ and all $t\in\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}^{\ra(c)}$, $P(t)P(t^{-1}c)=D(\phi(t))P(c)$. Together with Lemma \ref{lemmaderivativehaar}, this implies that the map $p=P/(D\circ\phi)$ is a continuous groupoid morphism from $\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$ to the group of non-zero scalars. Now let us verify that $\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}$ and $\phi_*\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}$ are equivalent measures. Suppose $K\subseteq\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ is a compact set with positive $\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}$-measure. For every $x\in K$, choose any nonempty open set $A_x$ in $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^x$ with compact closure (although $A_x$ is not necessarily open in $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$). Letting $E=\overline{\bigcup_{x\in K}A_x}$, we obtain a compact subset of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ with positive $(\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}\circ\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}})$-measure. By regularity of the measures, we can extend $T$, using the formula $T(f)=P\cdot (f\circ\phi)$, to an isometry from $L^1(\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}\circ\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}})$ onto $L^1(\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}\circ\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}})$. In particular, \[(\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}\circ\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}})(\phi^{-1}(E))=(\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}\circ\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}})(E)>0\] so $\ra(\phi^{-1}(E))=\phi^{-1}(K)$ has positive $\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}$-measure.
By inner regularity of the measures, we conclude that $\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}$ is absolutely continuous with respect to $\phi_*\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}$, and the reverse is similar. To prove that $p$ takes values in $\mathbb{S}^1$, let us denote by $\Vert\cdot\Vert_Z$ the $L^1$-norm with respect to $(\lambda_Z\circ\mu_Z)$ when $Z\in\left\{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\right\}$. For all $f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ we have \begin{align*} \Vert Tf\Vert_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}&=\int_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}[0]}\left(\int_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}^y}|Tf|d\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^y\right)d\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}(y)=\int_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}[0]}\left(\int_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}^y}(D\circ\phi)|p||f\circ\phi|d\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^y\right)d\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}(y)\\ &=\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]}\left(\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{x}}|(p\circ\phi^{-1})f|d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^x\right)d(\phi_*\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}})(x)\\ &=\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]}\left(\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{x}}|(p\circ\phi^{-1})(s)|\left(\frac{d(\phi_*\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}})}{d\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}}(x)\right)|f(s)|d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^x(s)\right)d\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}(x)\\ &=\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]}\left(\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{x}}|(p\circ\phi^{-1})(s)|\left(\frac{d(\phi_*\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}})}{d\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}}(\ra(s))\right)|f(s)|d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^x(s)\right)d\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}(x)\\ &=\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}|p\circ\phi^{-1}|\left(\frac{d(\phi_*\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}})}{d\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}}\circ\ra\right)|f|\,d(\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}\circ\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}) \end{align*} and since $\Vert Tf\Vert_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}=\Vert f\Vert_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}=\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}|f|d(\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}\circ\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}})$, we obtain \[|p\circ\phi^{-1}|=\frac{d\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}}{d(\phi_*\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}})}\circ\ra\qquad(\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}\circ\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}})\text{-a.e.}\stepcounter{counterequation}\tag{\arabic{section}.\arabic{counterequation}}\label{equationcomparecocycleandradonnykodymofbasisspace}\] Since $p$ is a morphism, $p(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}^{(0)})=\left\{1\right\}$, which, along with Equation \eqref{equationcomparecocycleandradonnykodymofbasisspace} and continuity of $p$, yields $|p|=|p\circ\ra|=1$ on $\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$.
The same Equation \eqref{equationcomparecocycleandradonnykodymofbasisspace} then also implies $\mu_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}=\phi_*\mu_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}$. \end{proof} \begin{remark} In the case of groups, the same type of classification was first proven by Wendel in \cite{MR0049910}, when considering the whole $L^1$-algebras of locally compact Hausdorff groups instead of only algebras of compactly supported continuous functions. Further generalizations of Wendel's Theorem were proven in \cite{MR0177058} and \cite{MR0193531}, and closely related results appear in \cite{MR0160846} and \cite{MR0361622}. \end{remark} \subsection{$(I,\ra)$-Groupoid convolution algebras} In the next result we will again use the convolution algebras of topological groupoids; however, we will now consider another norm, which was already defined in the work of Hahn (\cite{MR496797}) and played an important role in Renault's work (\cite{MR584266}). A locally compact Hausdorff groupoid is \emph{étale} if the range map $\ra:\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}\to\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ is a local homeomorphism. From this, it follows that $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ is open in $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, that the product map is open and that $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^x$ is discrete for all $x\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ (see \cite{MR2304314}). Let $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ be a locally compact étale Hausdorff groupoid, $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$ and $\theta=0$, and let $\lambda$ be a Haar system for $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$. Again, we will consider the convolution algebra $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})=C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},\mathbb{K})$ as defined in the previous subsection. Every left Haar system on an étale groupoid is essentially the counting measure (\cite[2.7]{MR584266}), in the sense that for all $x,y\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$, the map $a\mapsto \lambda^{\ra(a)}(\left\{a\right\})$ is constant on the set $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}_x^y$. We define the $(I,\ra)$-norm on $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ as \[\Vert f\Vert_{I,\ra}=\sup_{x\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]}\int |f|d\lambda^x.\] As $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is Hausdorff, the unit space $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is a closed subgroupoid of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, hence (trivially) étale, Hausdorff and locally compact itself. The convolution product on $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0])$ coincides with the pointwise product, and the $(I,\ra)$-norm is the uniform one: $\Vert f\Vert_{I,\ra}=\Vert f\Vert_\infty=\sup_{x\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]}|f(x)|$. Moreover, $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ is also open in $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ (because $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is étale), so we can identify $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0])$ with the subalgebra $\left\{f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}):\supp(f)\subseteq\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]\right\}$ of $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$.
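To fix ideas, consider the pair groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}=X\times X$ over a finite set $X$, with counting measures as Haar system. Then $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ is the algebra of $|X|\times|X|$ matrices, convolution is matrix multiplication, the diagonal matrices form the copy of $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0])$, and $\Vert\cdot\Vert_{I,\ra}$ is the maximal absolute row sum. The following small numerical sketch (an illustration of ours, not used in the sequel; all names are ad hoc) verifies these identifications:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4
F = rng.standard_normal((n, n))  # F[x, y] plays the role of f(x, y)
G = rng.standard_normal((n, n))

# Convolution from the counting Haar system on the pair groupoid
# over {0, ..., n-1}: (f g)(x, z) = sum_y f(x, y) g(y, z).
conv = np.array([[sum(F[x, y] * G[y, z] for y in range(n))
                  for z in range(n)] for x in range(n)])
assert np.allclose(conv, F @ G)  # convolution = matrix product

def norm_I_r(F):
    # ||f||_{I,r} = sup_x of the integral of |f| over r^{-1}(x, x),
    # i.e. the maximal absolute row sum of the matrix.
    return np.abs(F).sum(axis=1).max()

print(norm_I_r(F))
\end{verbatim}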
\begin{definition} The algebra $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0])$, identified as a subalgebra of $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, is called the \emph{diagonal subalgebra} of $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. If $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and $\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$ are locally compact étale Hausdorff groupoids, an isomorphism $T: C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})\to C_c(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak})$ is called \emph{diagonal-preserving} if $T(C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]))=C_c(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}[0])$. \end{definition} \begin{theorem}\label{theoremrenault} Let $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and $\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$ be locally compact Hausdorff étale groupoids with continuous fully supported left Haar systems $\lambda_G$ and $\lambda_H$, respectively, and $T:C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})\to C_c(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak})$ a diagonal-preserving algebra isomorphism, isometric with respect to the $(I,\ra)$-norms. Then there is a (unique) topological groupoid isomorphism $\phi:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\to\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and a continuous morphism $p:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\to\mathbb{S}^1$ such that \[Tf(h)=p(h)D(\phi(h))f(\phi(h))\] where $D$ is a continuous instance of the Radon-Nikodym derivative \[D(a)=\frac{d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\ra(a)}}{d(\phi_*\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^{\phi^{-1}(\ra(a))})}(a).\] \end{theorem} \begin{proof} By the Banach-Stone Theorem (\ref{theorembanachstone}), there is a homeomorphism $\phi:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}[0]\to\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ and a continuous function $P:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}[0]\to\mathbb{S}^1$ such that $Tf(y)=P(y)f(\phi(y))$ for all $f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0])$ and $y\in\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}[0]$. Since $T$ is multiplicative we obtain $P=1$. (The same conclusion can be obtained in a similar manner by Milgram's or Jarosz' Theorem.) For each $x\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$, let $\left\{a^x_i:i\in I_x\right\}$ be a net of functions in $C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0])$ satisfying: \begin{enumerate}[label=(\roman*)] \item\label{theoremrenaultauxiliary(i)} $0\leq a^x_i\leq 1$, and $a^x_i(x)=1$; \item\label{theoremrenaultauxiliary(ii)} $\bigcap_i\supp(a^x_i)=\left\{x\right\}$; \item\label{theoremrenaultauxiliary(iii)} If $j\geq i$ then $[a^x_j\neq 0]\subseteq[a^x_i\neq 0]$. \end{enumerate} Items \ref{theoremrenaultauxiliary(ii)}-\ref{theoremrenaultauxiliary(iii)} and compactness of each $\supp(a^x_i)$ imply that $\left\{[a^x_i\neq 0]:i\in I_x\right\}$ is a neighbourhood basis at $x$. For $y\in\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}[0]$, let $a^y_i=T(a^{\phi(y)}_i)=a^{\phi(y)}_i\circ\phi$, so that the net $\left\{a^y_i:i\in I_{\phi(y)}\right\}$ satisfies \ref{theoremrenaultauxiliary(i)}-\ref{theoremrenaultauxiliary(iii)} as well. Continuity of $\lambda_G$ implies that for all $x\in \@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ and $f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, $\lim_{i\in I_x}\Vert a^x_i f\Vert_{I,\ra}=\int |f(\gamma)|d\lambda^x(\gamma)$, and similarly on $\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$. 
Given $f,g\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, we use Lemma \ref{lemmadisjointnessl1} to obtain \begin{align*} f\perp g&\iff\forall x\left(f|_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^x}\perp g|_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^x}\text{ in }C(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^x)\right)\\ &\iff\forall x\forall A,B\left(\lim\Vert a^x_i(Af+Bg)\Vert_{I,\ra}=\lim(|A|\Vert a^x_i f\Vert_{I,\ra}+|B|\Vert a^x_i g\Vert_{I,\ra})\right) \end{align*} and the last condition is preserved by $T$, so by Jarosz' Theorem $T$ is of the form $Tf(\alpha)=\widetilde{P}(\alpha)f(\widetilde{\phi}(\alpha))$ for a certain homeomorphism $\widetilde{\phi}:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\to\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and a non-vanishing continuous scalar function $\widetilde{P}$. We can readily see that $\widetilde{\phi}$ and $\widetilde{P}$ are extensions of $\phi$ and $P$, respectively, so instead let us simply denote $\widetilde{\phi}=\phi$ and $\widetilde{P}=P$. The proof that $\phi$ is a groupoid isomorphism, and that $P$ can be decomposed as $P=(D\circ\phi)p$ for the (continuous) Radon-Nikodym derivative $D$ and some continuous morphism $p:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\to\mathbb{C}\setminus\left\{0\right\}$ is the same as in Theorem \ref{theoremmeasuredgroupoidconvolutionalgebra}, but the verification that $|p|=1$ is different. Given $y\in\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}[0]$ and $f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, using the definition of $D$ as a Radon-Nikodym derivative, \[\int_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}^y}|Tf|d\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^y=\int_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}^y} |p|(D\circ\phi)|f\circ\phi|d\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^y=\int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(y)}} |p\circ\phi^{-1}||f|d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\phi(y)}.\] Considering again the functions $a^{\phi(y)}_i$ and $a^y_i$, and the fact that $T$ is isometric we obtain \begin{align*} \int_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\phi(y)}} |p\circ\phi^{-1}||f|d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\phi(y)}&=\lim_{i\in I_{\phi(y)}}\Vert a_i^y Tf\Vert_{I,\ra}=\lim_{i\in I_{\phi(y)}}\Vert a_i^{\phi(y)} f\Vert_{I,\ra}= \int |f|d\lambda_{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}^{\phi(y)} \end{align*} for all $f\in C_c(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, which implies that $|p|=1$ $\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}^y$-a.e. Since $p$ is continuous and $\lambda_{\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}}$ is fully supported, we conclude that $|p|=1$ on $\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$. \end{proof} \subsection{Steinberg Algebras} Steinberg algebras were independently introduced in \cite{MR2565546} and \cite{MR3274831}, as algebraic analogues of groupoid C*-algebras, and are generalizations of Leavitt path algebras and universal inverse semigroup algebras. We refer to \cite{MR3743184} and \cite{carlsenrout2017} for more details. A locally compact, zero-dimensional étale groupoid is called \emph{ample}. A \emph{bisection} of a groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is a subset $A\subseteq\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ such that the source and range maps are injective on $A$.
If $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is an ample Hausdorff groupoid, we denote by $\KB(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ the semigroup of compact-open bisections of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, which forms a basis for the topology of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$. In this section, $R$ is a fixed commutative ring with unit. Given an ample Hausdorff groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, we denote by $R^{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}$ the $R$-module of $R$-valued functions on $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$. Given $A\subseteq\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, we define $1_A$ as the characteristic function of $A$ (with values in $R$). The \emph{Steinberg algebra} of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ over $R$ is the $R$-submodule $A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ of $R^{\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}}$ spanned by $\left\{1_U:U\in\KB(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})\right\}$, endowed with the convolution product $(fg)(a)=\sum_{b\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}^{\ra(a)}}f(b)g(b^{-1}a)$, and $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ denotes its \emph{diagonal subalgebra}, spanned by $\left\{1_U:U\in\KB(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}),\ U\subseteq\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]\right\}$. The goal of this section is to prove that the Steinberg algebra of an ample Hausdorff groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ together with its diagonal algebra completely characterize $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$. Although the main theorem of this subsection (Theorem \ref{theoremsteinbergalgebras}) is partially stated and proven (for more general \emph{graded} Steinberg algebras) in \cite[Corollary 3.14]{carlsenrout2017}, we can obtain a precise classification of the diagonal-preserving isomorphisms of Steinberg algebras, as described in Theorem \ref{theoremsteinbergalgebras} and Corollary \ref{corollaryclassificationofautomorphismsofsteinbergalgebras}. We will need to recover the bisections of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ from $A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, and in particular the compact-open subsets of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$. The main idea is, again, to identify subsets of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ with their characteristic functions, and these are precisely the functions which attain only the values $0$ and $1$. We thus need to assume an extra condition on the ring $R$. \begin{definition}[{\cite[X.7]{MR1878556}}] A (nontrivial) commutative unital ring $R$ is \emph{indecomposable} if its only idempotents are $0$ and $1$. Equivalently, $R$ is indecomposable if it cannot be written as a direct sum $R\simeq R_1\oplus R_2$, where $R_1$ and $R_2$ are nontrivial rings. \end{definition} A subset $A$ of a groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is a bisection if and only if $AA^{-1}\cup A^{-1}A\subseteq \@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$. A similar type of condition will be used to recover an ample Hausdorff groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ from the pair $(A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}),D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}))$. \begin{definition} A \emph{normalizer} of $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ is an element $f\in A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ for which there exists $g\in A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ such that \begin{enumerate} \item[(i)] $fD_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})g\subseteq D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ and $gD_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})f\subseteq D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$; \item[(ii)] $fgf=f$ and $gfg=g$. \end{enumerate} We denote by $N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ the set of normalizers of $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. An element $g$ satisfying (i) and (ii) above will be called an \emph{inverse of $f$ relative to $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$}.
\end{definition} It can be verified that $N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ is a multiplicative subsemigroup of $A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, which is moreover an \emph{inverse} semigroup. In particular, the inverse relative to $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ of an element $f\in N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ is unique. However, we will not need these results. \begin{example} If $A\in\KB(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ then $1_A\in N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. More generally, if $\lambda_1,\ldots,\lambda_n$ are invertible elements in $R$ and $U_1,\ldots,U_n$ are compatible disjoint compact-open bisections (that is, $\bigcup_i U_i$ is also a bisection), then $f=\sum_i \lambda_i1_{U_i}$ is a normalizer of $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. The unique inverse of $f$ relative to $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ is given by $f^*=\sum_i\lambda_i^{-1}1_{U_i^{-1}}$, that is, $f^*(a)=f(a^{-1})^{-1}$ for all $a\in\supp(f)^{-1}$. \end{example} In order to recover $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ from $(A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}),D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}))$, we need that all normalizers of $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ have the form described in the example above, so additional conditions will have to be assumed on the groupoids we consider. The following property was considered in \cite{arxiv1711.01903v2}, when working on the same recovery problem. \begin{definition}\label{definitionlocalbisectionhypothesis} If $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is an ample Hausdorff groupoid and $R$ is an indecomposable (commutative, unital) ring, we say that $(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ satisfies the \emph{local bisection hypothesis} if $\supp(f)$ is a bisection for all $f\in N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. \end{definition} \begin{lemma}\label{lemmaformofnrgforlocalbisectionhypothesis} Suppose that $(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ satisfies the local bisection hypothesis and $f\in N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. Then for all $a\in\supp(f)$, $f(a)$ is invertible in $R$. \end{lemma} \begin{proof} Let $g$ be an inverse of $f$ relative to $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. First note that $fg=f1_{\so(\supp(f))}g\in D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. Let $a\in\supp(f)$. Since $fg$ is an idempotent in $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, the product in $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ is pointwise, and $R$ is indecomposable, we have $fg(\ra(a))\in\left\{0,1\right\}$. Moreover, as $\supp(f)$ is a bisection we have $f(a)=fgf(a)=(fg)(\ra(a))f(a)$, so $(fg)(\ra(a))=1$.
Again using that $\supp(f)$ is a bisection, we obtain \[1=fg(\ra(a))=f(a)g(a^{-1})\] so $f(a)$ is invertible in $R$.\qedhere \end{proof} The following stronger condition was considered in \cite{carlsenrout2017}, and is more easily checked than the one above. \begin{definition} If $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is an ample Hausdorff groupoid and $R$ is an indecomposable (commutative, unital) ring, we say that $(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ \emph{satisfies condition (S)} if the set of all $x\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ such that the group ring $R\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}_x^x$ has only trivial units\footnote{If $G$ is a group and $R$ is a unital ring, a trivial unit of $RG$ is an element of the form $ug$ where $u$ is invertible in $R$ and $g\in G$.} is dense in $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$.\nocite{MR3743184} \end{definition} The property of a group ring $RG$ (where $G$ is a group and $R$ is a ring) having only trivial units has been studied, for example, in \cite{MR0002137}. A group $G$ is \emph{indexed} if there exists a non-trivial group morphism from $G$ to $\mathbb{Z}$, and \emph{indicable throughout} if every nontrivial finitely generated subgroup of $G$ is indexed. (Note that if $G$ is indicable throughout then $G$ is torsion-free.) \begin{theorem}[{\cite[Theorem 13]{MR0002137}}] If $G$ is indicable throughout and $R$ is an integral domain, then $RG$ has only trivial units. \end{theorem} Every free group and every torsion-free abelian group is indicable throughout. The class of indicable throughout groups is closed under products, free products and extensions (see \cite{MR0002137}). The following result from \cite{carlsenrout2017} provides a large class of groupoids satisfying the local bisection hypothesis. Although in \cite{carlsenrout2017} the authors assume stronger hypotheses (namely, that $R$ is an integral domain and $R\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}_x^x$ does not have zero divisors for all $x$ in a dense subset of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$), their proof works under the weaker assumptions we adopt. \begin{lemma}[{\cite[Lemma 3.5(2)]{carlsenrout2017}}]\label{lemmanormalizerssteinbergalgebradiagonal2} Suppose that $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is an ample Hausdorff groupoid, $R$ is an indecomposable ring and that $(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ satisfies condition (S). Then $(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ satisfies the local bisection hypothesis. \end{lemma} An important class of groupoids consists of the \emph{topologically principal} ones, whose associated algebras have been extensively studied (see, for example, \cite{MR3189105,MR2745642,MR1681679,MR2590626}). In fact, it is possible to characterize the C*-algebras which come from them (see \cite{MR2460017}). \begin{definition}\label{definitiontopologicallyprincipal} A topological groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is \emph{topologically principal} if the set of all $x\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ whose isotropy group $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}_x^x$ is trivial is dense in $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$. \end{definition} It follows that if $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ is an ample Hausdorff topologically principal groupoid and $R$ is an indecomposable ring, then $(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ satisfies the local bisection hypothesis.
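To see that condition (S) is a genuine restriction when the isotropy groups have torsion, note that the cyclic group $C_5$ is not indicable throughout (being torsion), and $\mathbb{Z}C_5$ does have non-trivial units. The following small sketch (our own illustration, not part of the formal development) exhibits one such unit by brute-force multiplication in the group ring:
\begin{verbatim}
import numpy as np

n = 5  # C_5; an element of Z[C_5] is a coefficient vector
       # indexed by the powers g^0, ..., g^4

def mul(a, b):
    # product in Z[C_5]: (ab)_k = sum over i + j = k (mod 5) of a_i b_j
    c = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

u = np.array([1, -1, 0, 0, -1])  # u = 1 - g - g^4
v = np.array([1, 0, -1, -1, 0])  # v = 1 - g^2 - g^3
assert np.array_equal(mul(u, v), [1, 0, 0, 0, 0])  # uv = 1
assert np.array_equal(mul(v, u), [1, 0, 0, 0, 0])  # vu = 1
# u has three non-zero coefficients, hence is not of the form
# (+-1) * g^k: a non-trivial unit of the group ring Z[C_5].
\end{verbatim}
Thus, at a point $x$ whose isotropy group contains an element of order $5$, the group ring $\mathbb{Z}\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}_x^x$ already has non-trivial units; hypotheses such as topological principality rule this behaviour out on a dense set.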
We are ready to classify diagonal-preserving isomorphisms of Steinberg algebras of groupoids and rings satisfying the local bisection hypothesis. For this, let us first define the class of maps of interest: \begin{definition}\label{definitioncocycle} Let $R$ and $S$ be rings and $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ be a groupoid. Denote by $\operatorname{Iso}_+(R,S)$ the set of additive isomorphisms from $R$ to $S$. A map $\chi:\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}\to\operatorname{Iso}_+(R,S)$ satisfying $\chi(ab)(rs)=\chi(a)(r)\chi(b)(s)$ for all $(a,b)\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[2]$ and $r,s\in R$ will be called a \emph{cocycle}. \end{definition} \begin{example} Consider $C_2=\left\{1,g\right\}$, the group of order 2, acting on itself by left multiplication, and consider the transformation groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}=C_2\ltimes C_2$. Let $R=S=\mathbb{Z}$. If we define $\chi:\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}\to\operatorname{Iso}_+(R,S)$ by $\chi(1,y)(r)=r$ and $\chi(g,y)(r)=-r$ (for all $y\in C_2$ and $r\in\mathbb{Z}$), then $\chi$ is a cocycle. Note that $\chi(g,1)$ is not a ring isomorphism. \end{example} \begin{example} Suppose $R$ is a unital ring and $\chi:\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}\to\operatorname{Iso}_+(R,R)$ is a cocycle. Then $\chi$ is a morphism from the groupoid $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ to the group (under composition) $\operatorname{Iso}_+(R,R)$ if, and only if, $\chi(x)=\mathrm{id}_R$ for all $x\in \@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$. \end{example} \begin{proposition} Let $R$ and $S$ be commutative unital rings, $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ a groupoid and $\chi:\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}\to\operatorname{Iso}_+(R,S)$ a cocycle. Then \begin{enumerate} \item[(a)] For all $x\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$, $\chi(x)$ is a ring isomorphism; \item[(b)] For all $a\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, if $u\in R$ is invertible then $\chi(a)(u)$ is invertible in $S$, and $\chi(a)(u)^{-1}=\chi(a^{-1})(u^{-1})$; \item[(c)] For all $a\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, $\chi(\so(a))=\chi(\ra(a))$. In other words, the restriction of $\chi$ to $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0]$ is \emph{invariant}. \end{enumerate} \end{proposition} \begin{proof} The cocycle condition states that $\chi(ab)(rs)=\chi(a)(r)\chi(b)(s)$ for all $a,b,r,s$. Taking $a=b=x$ yields (a). Taking $b=a^{-1}$, $r=u$ and $s=u^{-1}$ gives $\chi(\ra(a))(1)=\chi(a)(u)\chi(a^{-1})(u^{-1})$, and since $\chi(\ra(a))(1)=1$ by item (a), this yields (b). For item (c) we use commutativity of $S$: \[\chi(\so(a))(r)=\chi(a^{-1}a)(1r)=\chi(a^{-1})(1)\chi(a)(r)=\chi(a)(r)\chi(a^{-1})(1)=\chi(\ra(a))(r).\qedhere\] \end{proof} We endow $\operatorname{Iso}_+(R,S)$ with the topology of pointwise convergence, so that a map $\chi$ from a topological space $X$ to $\operatorname{Iso}_+(R,S)$ is continuous if and only if for every $r\in R$, the map $X\ni x\mapsto\chi(x)(r)\in S$ is continuous, that is, locally constant. \begin{theorem}\label{theoremsteinbergalgebras} Let $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and $\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$ be ample Hausdorff groupoids. Let $R$ and $S$ be two indecomposable (commutative, unital) rings such that $(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ and $(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak},S)$ satisfy the local bisection hypothesis.
Let $T:A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})\to A_S(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak})$ be a diagonal-preserving ring isomorphism, that is, $T(D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}))=D_S(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak})$. Then there exists a unique topological groupoid isomorphism $\phi:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\to\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ and a continuous cocycle $\chi:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\to\operatorname{Iso}_+(R,S)$ such that $Tf(a)=\chi(a)(f(\phi(a)))$ for all $a\in\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$ and $f\in A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. \end{theorem} \begin{proof} Since $T$ preserves the respective diagonal algebras, it also preserves their normalizers, i.e., $T(N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}))=N_S(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak})$. Let us first describe inclusion and disjointness for elements of $N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. The local bisection hypothesis implies, by Lemma \ref{lemmaformofnrgforlocalbisectionhypothesis}, that an element $f$ of $N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ has the form \[f=\sum_{i=1}^n\lambda_i1_{U_i},\] where $\lambda_1,\ldots,\lambda_n$ are invertible elements in $R$ and $U_1,\ldots,U_n$ are disjoint compact-open bisections of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ such that $\bigcup_{i=1}^n U_i=\supp(f)$ is also a compact-open bisection. A similar statement holds for $N_S(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak})$. If $f,g\in N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, then $f\subseteq g$ if and only if $f=g p$ for some $p\in D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$: Indeed, \begin{itemize} \item If $f=gp$ then $\supp(f)\subseteq\supp(g)\supp(p)\subseteq\supp(g)$; \item Conversely, if $\supp(f)\subseteq\supp(g)$ take $p=g^*f$. Then \[\supp(p)\subseteq\supp(g^*)\supp(f)\subseteq(\supp(g))^{-1}\supp(g)\subseteq\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}[0],\] where the last inclusion follows from $\supp(g)$ being a bisection. The equality $f=gp$ follows from the definition of $p$ and the fact that $\ra(\supp(f))\subseteq\ra(\supp(g))$. \end{itemize} Therefore $T$ preserves inclusion of normalizers. Since $N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ contains $\left\{1_U:U\in\KB(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})\right\}$, it is regular (Definition \ref{definitionregularperpp}), because $\KB(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ is a basis for the topology of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$. Hence $T$ also preserves disjointness of normalizers (Theorem \ref{theoremrelationsrelations}).
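To illustrate the construction of $p=g^*f$ above, here is a minimal worked instance; the data are of our own choosing, and any compatible choice behaves in the same way. Let $f=\lambda 1_U$ and $g=\lambda 1_U+\mu 1_V$, where $\lambda,\mu\in R$ are invertible and $U,V$ are disjoint compatible compact-open bisections, so that $\supp(f)=U\subseteq U\cup V=\supp(g)$. Then \[p=g^*f=(\lambda^{-1}1_{U^{-1}}+\mu^{-1}1_{V^{-1}})(\lambda 1_U)=1_{U^{-1}U}+\mu^{-1}\lambda\, 1_{V^{-1}U}=1_{\so(U)},\] because $V^{-1}U=\varnothing$ ($U\cup V$ being a bisection), and accordingly \[gp=\lambda 1_{U\so(U)}+\mu 1_{V\so(U)}=\lambda 1_U=f,\] because $V\so(U)=\varnothing$ for the same reason.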
To prove that $T$ preserves disjointness in all of $A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, we decompose elements of $A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ in terms of elements of $N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ and $D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$: if $f,g\in A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, then $f\perp g$ if and only if there are finite collections of normalizers $f_i,g_j\in N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ and elements $\widetilde{f_i},\widetilde{g_j}\in D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ ($1\leq i\leq n$, $1\leq j\leq m$) such that \[f=\sum_i f_i\widetilde{f_i},\quad g=\sum_j g_j\widetilde{g_j}\quad\text{and}\quad f_i\perp g_j\text{ for all }i,j.\] Indeed, if there are such $f_i,g_j,\widetilde{f_i},\widetilde{g_j}$ then $\supp(f)\subseteq\bigcup_i\supp(f_i)$ and $\supp(g)\subseteq\bigcup_j\supp(g_j)$, and the latter sets are disjoint. Conversely, we write $f=\sum_i\lambda_i1_{A_i}$, where the $A_i$ are pairwise disjoint compact-open bisections and $\lambda_i\neq 0$, and take $f_i=1_{A_i}$ and $\widetilde{f_i}=\lambda_i1_{\so(A_i)}$, so that $\supp(f)=\bigcup_{i=1}^n\supp(f_i)$. Similarly, writing $g=\sum_j\widetilde{g_j}g_j$ where $g_j\in N_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ and $\supp(g)=\bigcup_j\supp(g_j)$, then $f\perp g$ implies $f_i\perp g_j$ for all $i$ and $j$. Therefore, $T$ is a $\perp$-isomorphism. Note that $\perp\!\!\!\perp$ and $\perp$ coincide in $A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, since its elements are locally constant (similarly to Example \ref{examplekaniarmoutil}). Then $T$ is a $\perp\!\!\!\perp$-isomorphism, so let $\phi:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\to\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ be the $T$-homeomorphism. The verification that $\phi$ is a groupoid isomorphism is similar to that of Theorem \ref{theoremmeasuredgroupoidconvolutionalgebra}, so we omit it. Since elements of $A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ (and of $A_S(\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak})$) are locally constant, for all $f\in A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ we have \[f(\phi(a))=0\iff \phi(a)\in Z(f)\iff a\in Z(Tf)\iff Tf(a)=0,\] and therefore $T$ is basic (by additivity of $T$ and Proposition \ref{propositionbasicnessofgroupvalued}). Let $\chi$ be the $T$-transform. Since $T$ is additive with the pointwise operations, each section $\chi(a)=\chi(a,\cdot)$ is additive (by Proposition \ref{propositionmodelmorphism}). This yields a map $\chi:\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}\to\operatorname{Iso}_+(R,S)$, and we now need to verify that $\chi$ is a cocycle. If $(a,b)\in\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}[2]$ and $r,s\in R$, choose compact-open bisections $U,V$ of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ containing $\phi(a)$ and $\phi(b)$, respectively. Then using multiplicativity of $T$ we obtain \begin{align*} \chi(ab)(rs)&=\chi(ab)\big((r1_U)(s1_V)(\phi(ab))\big)=T\big((r1_U)(s1_V)\big)(ab)\\ &=\big(T(r1_U)T(s1_V)\big)(ab)=\sum_{cd=ab}T(r1_U)(c)T(s1_V)(d)\\ &=\sum_{cd=ab}\chi(c)\big(r1_U(\phi(c))\big)\chi(d)\big(s1_V(\phi(d))\big). \end{align*} If $cd=ab$ is such that the last term above is nonzero, then $\ra(c)=\ra(a)$ and $\phi(c)\in U$, so since $U$ is a bisection we obtain $a=c$. Similarly, $d=b$, therefore $\chi(ab)(rs)=\chi(a)(r)\chi(b)(s)$, and $\chi$ is a cocycle.
It remains only to prove that $\chi$ is continuous: Let $r\in R$ be fixed, $a\in\@ifnextchar[{\@Hwithbrak}{\@Hwithoutbrak}$ and $U$ any compact-open bisection containing $\phi(a)$. For all $b\in\phi^{-1}(U)$, \[\chi(b)(r)=\chi(b)(r1_U(\phi(b)))=T(r1_U)(b),\] which means that the map $b\mapsto\chi(b)(r)$ coincides with $T(r1_U)$ on $\phi^{-1}(U)$, and is therefore continuous. \end{proof} We should note that, according to \cite{arxiv1711.01903v2}, the local bisection hypothesis is preserved by diagonal-preserving isomorphisms, so the same result is valid if we assume, in principle, that only $(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ satisfies this condition. From this we can immediately classify the group of diagonal-preserving automorphisms of Steinberg algebras satisfying the local bisection hypothesis. Let $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ be a groupoid and $R$ a ring. Denote by $\operatorname{Coc}(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ the set of all continuous cocycles $\chi:\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}\to\operatorname{Iso}_+(R,R)$, which is a group with the canonical (pointwise) structure: $(\chi\rho)(a)=\chi(a)\circ\rho(a)$ for all $\chi,\rho\in \operatorname{Coc}(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ and $a\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$, where $\circ$ denotes composition. Let $\operatorname{Aut}(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ be the group of topological groupoid automorphisms of $\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$. Then $\operatorname{Aut}(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$ acts on $\operatorname{Coc}(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ in the usual (dual) manner: for $\phi\in\operatorname{Aut}(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$, $\chi\in\operatorname{Coc}(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ and $a\in\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}$ set $(\phi\chi)(a)=\chi(\phi^{-1}a)$. Denote by $\operatorname{Aut}(A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}),D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}))$ the group of diagonal-preserving ring automorphisms of $A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. From Theorem \ref{theoremsteinbergalgebras} we immediately obtain: \begin{corollary}\label{corollaryclassificationofautomorphismsofsteinbergalgebras} If $(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)$ satisfies the local bisection hypothesis, then the group $\operatorname{Aut}(A_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}),D_R(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak}))$ is isomorphic to the semidirect product $\operatorname{Coc}(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak},R)\rtimes\operatorname{Aut}(\@ifnextchar[{\@Gwithbrak}{\@Gwithoutbrak})$. \end{corollary} \subsection{Groups of circle-valued functions} \nocite{MR3552377} A natural question in C*-algebra theory is whether we can extend isomorphisms of unitary groups of C*-algebras to isomorphisms (or anti/conjugate-isomorphisms) of the whole C*-algebras. Dye proved in \cite{MR0066568} that this is always possible for continuous von Neumann factors; however, this is not true in the general C*-algebraic case, even in the commutative case\footnote{Recall that the unitary group of a commutative C*-algebra $C(X)$, where $X$ is compact Hausdorff, is $C(X,\mathbb{S}^1)$.}. Therefore we should consider isomorphisms between unitary groups which preserve more structure than just the product, such as an analogue to that of Theorem \ref{theoremliwong}.
\begin{theorem}\label{theoremisomorphismgroupifcirclevaluedfunctions} Let $X$ and $Y$ be two Stone (zero-dimensional, compact Hausdorff) spaces. Suppose that $T:C(X,\mathbb{S}^1)\to C(Y,\mathbb{S}^1)$ is a group isomorphism such that $1\in f(X)\iff 1\in Tf(Y)$. Then there exist a homeomorphism $\phi:Y\to X$, a finite subset $F\subseteq Y$ consisting of isolated points and a continuous function $p:Y\setminus F\to\{\pm 1\}$ satisfying $Tf(y)=f(\phi(y))^{p(y)}$ for all $y\in Y\setminus F$. In particular, if $X$ (equivalently, $Y$) has no isolated points, then $F=\varnothing$. \end{theorem} The following lemma, based on \cite{MR3162258}, will be crucial to the proof of the theorem. \begin{lemma}\label{lemmatheoremisomorphismgroupifcirclevaluedfunctions} Suppose that $X$ is a Stone space. For every pair of continuous functions $f,g:X\to\mathbb{S}^1$ and for every finite subset $F\subseteq X$ such that $f(F)\cup g(F)$ does not contain $1$, there exists $h\in C(X,\mathbb{S}^1)$ such that \[h(x)\not\in\left\{f(x),g(x)\right\}\text{ for all }x\text{ and }h(F)=\{1\}.\] \end{lemma} \begin{proof} For every point $y\in F$, choose a clopen set $U_y$ containing $y$ such that $f(U_y)\cup g(U_y)$ does not contain $1$. For every other point $x\in X':=X\setminus\bigcup_{y\in F}U_y$, there is a clopen set $U\subseteq X'$ containing $x$ such that $f(U)\cup g(U)\neq\mathbb{S}^1$. Using compactness of $X'$ and taking complements and intersections if necessary we can find a clopen partition $U_1,\ldots,U_n$ of $X'$ such that $f(U_i)\cup g(U_i)\neq\mathbb{S}^1$ for all $i$. Simply choose $z_i\in \mathbb{S}^1\setminus(f(U_i)\cup g(U_i))$ and define $h=z_i$ on $U_i$, and $h=1$ on $\bigcup_{y\in F}U_y$. \end{proof} \begin{proof}[Proof of Theorem \ref{theoremisomorphismgroupifcirclevaluedfunctions}] For the notion of support (Definition \ref{definitionsigma}) we take $\theta=1$, the constant function at $1$, so regularity of $C(X,\mathbb{S}^1)$ is immediate. Suppose that $f\perp g$ but that $Tf$ and $Tg$ are not disjoint. By Lemma \ref{lemmatheoremisomorphismgroupifcirclevaluedfunctions} (applied to $Tf$, $Tg$ and a point $y_0$ at which both differ from $1$; such a point exists because $Tf$ and $Tg$ are not disjoint), there exists $H\in C(Y,\mathbb{S}^1)$ such that $H\neq Tf,Tg$ everywhere and $H(y_0)=1$, so that $1\in H(Y)$. Let $h=T^{-1}H$. Then $T(f^{-1}h)$ and $T(g^{-1}h)$ do not attain $1$, which implies that $f^{-1}h$ and $g^{-1}h$ do not attain $1$ as well. Thus \begin{align*} h^{-1}(1)&=X\cap h^{-1}(1)=(f^{-1}(1)\cup g^{-1}(1))\cap h^{-1}(1)\\ &=(g^{-1}(1)\cap h^{-1}(1))\cup(f^{-1}(1)\cap h^{-1}(1))\subseteq (g^{-1}h)^{-1}(1)\cup (f^{-1}h)^{-1}(1)=\varnothing. \end{align*} But $(Th)^{-1}(1)=H^{-1}(1)$ is nonempty, contradicting the given property of $T$. Therefore $f\perp g$ implies $Tf\perp Tg$, and the same argument yields the opposite implication, so $T$ is a $\perp$-isomorphism. Let $\mathcal{A}(X)$ and $\mathcal{A}(Y)$ be the subgroups of order-$2$ elements of $C(X,\mathbb{S}^1)$ and $C(Y,\mathbb{S}^1)$, respectively (i.e., the groups of continuous functions with values in $\left\{-1,1\right\}$). $\mathcal{A}(X)$ and $\mathcal{A}(Y)$ are also regular, since $X$ and $Y$ are zero-dimensional, and the restriction $T|_{\mathcal{A}(X)}:\mathcal{A}(X)\to \mathcal{A}(Y)$ is a $\perp\!\!\!\perp$-isomorphism, because $\perp$ and $\perp\!\!\!\perp$ coincide on $\mathcal{A}(X)$ and $\mathcal{A}(Y)$. Let $\phi:Y\to X$ be the corresponding $T|_{\mathcal{A}(X)}$-homeomorphism. Let $h\in C(X,\mathbb{S}^1)$ be arbitrary.
Since $\sigma(h)=\bigcup_{a\in \mathcal{A}(X),a\subseteq h}\sigma(a)$ and $T$ is a $\perp$-isomorphism, we obtain by Theorem \ref{theoremdisjoint} that, for all $h\in C(X,\mathbb{S}^1)$, \[\phi(\sigma(Th))=\bigcup_{\substack{a\in \mathcal{A}(X)\\a\subseteq h}}\phi(\sigma(Ta))=\bigcup_{\substack{a\in \mathcal{A}(X)\\a\subseteq h}}\sigma(a)=\sigma(h).\] Since $\phi$ is a homeomorphism it preserves closures, from which it follows that $T$ is also a $\perp\!\!\!\perp$-isomorphism, and $\phi$ is also the $T$-homeomorphism. \begin{description} \item[Claim:] $f(\phi(y))=1\iff Tf(y)=1$. \end{description} Suppose $f(\phi(y))\neq 1$. Choose a function $g\in C(X,\mathbb{S}^1)$ which coincides with $f$ on a neighbourhood of $\phi(y)$ and such that $1\not\in g(X)$. Then $1\not\in Tg(Y)$, and since $Tf$ coincides with $Tg$ on a neighbourhood of $y$, we obtain $Tf(y)=Tg(y)\neq 1$. The other direction is analogous, and thus we have proved the claim. Therefore $T$ is basic. Let $\chi$ be the $T$-transform, so that each section $\chi(y,\cdot)$ is an automorphism of the circle. If $\chi(y,\cdot)$ is continuous then it has the form $\chi(y,z)=z^{p(y)}$ where $p(y)\in\left\{\pm1\right\}$. Let us prove that for all except finitely many $y\in Y$, the section $\chi(y,\cdot)$ is continuous. The following argument is adapted from \cite{MR0029476}. Let $F=\left\{y\in Y:\chi(y,\cdot)\text{ is discontinuous}\right\}$, and suppose that $F$ were infinite. By Proposition \ref{propositiondisjointopensets}, there are countably infinitely many distinct points $y_n\in F$ ($n\in\mathbb{N}$), such that no $y_n$ lies in the closure of the other ones. We can choose a sequence $z_n\to 1$ such that $\chi(y_n,z_n)$ lies in the second or third quadrant\footnote{ To see this: Let $\operatorname{arg}:\mathbb{S}^1\to(-\pi,\pi]$ be a function such that $z=e^{i\operatorname{arg}(z)}$ for all $z\in\mathbb{S}^1$. Suppose $\tau$ is a discontinuous automorphism of the circle. Then there is a neighbourhood $U$ of $1$ such that for every neighbourhood $V$ of $1$, there is a point $z\in V$ for which $\tau(z)\not\in U$. Take an integer $k>1$ such that if $t\not\in U$ then $|\operatorname{arg}(t)|>\pi/k$. Let $V$ be any neighbourhood of $1$ and $z\in V$ such that $\tau(z)\not\in U$, so $|\operatorname{arg}(\tau(z))|>\pi/k$. Choose a positive integer $m$ such that $\frac{\pi}{m+1}\leq|\operatorname{arg}(\tau(z))|\leq\frac{\pi}{m}$, so in particular $m<k$. Since $m\geq 1$, \[\frac{\pi}{2}\leq\frac{m}{m+1}\pi\leq m|\operatorname{arg}(\tau(z))|=|\operatorname{arg}(\tau(z^m))|\leq\pi,\] where the equality in the middle holds because $m|\operatorname{arg}(\tau(z))|\leq\pi$. Thus $z^m$ is an element of $V^m\subseteq V^k$ such that $\tau(z^m)$ is in the second or third quadrant. Since the sets $V^k$ (where $k$ depends solely on $\tau$ and $U$) form a neighbourhood basis at the identity, we are done.}. Define $f(\phi(y_n))=z_n$, $f=1$ on the boundary of $\{\phi(y_n):n\in\mathbb{N}\}$, and extend $f$ continuously to all of $X$. Let $y$ be an accumulation point of $\{y_n\}$, so that in particular $f(\phi(y))=1$. Then \[Tf(y)=\chi(y,1)=1,\quad Tf(y_n)=\chi(y_n,z_n).\] But $y$ is an accumulation point of the $y_n$, and $Tf(y_n)$ lies in the second or third quadrant while $Tf(y)=1$, a contradiction to the continuity of $Tf$. Therefore $F$ is finite. We now show that $F$ is open, in order to conclude that its points are isolated in $Y$.
Let $y\in F$ and choose $z_0\in\mathbb{S}^1$ of the form $z_0=e^{it}$ where $-\pi/4\leq t\leq \pi/4$, but such that $\chi(y,z_0)$ is in the second or third quadrant, so in particular it is not $z_0$ nor $z_0^{-1}$. Denoting also by $z_0$ the constant function at $z_0$, we have \[T(z_0)(y)=\chi(y,z_0)\neq z_0,z_0^{-1}.\] Since $T(z_0)$ is continuous, there is a neighbourhood $U$ of $y$ such that $\chi(x,z_0)\neq z_0,z_0^{-1}$ for all $x\in U$; for such $x$ the section $\chi(x,\cdot)$ is not of the form $z\mapsto z^{\pm 1}$, so $x\in F$. Therefore $Y'=Y\setminus F$ is also compact, and we have already constructed the function $p:Y'\to\{\pm 1\}$ with the desired property. To see that $p$ is continuous, denote by $i$ the constant function $x\mapsto i$ and note that \[p^{-1}(1)=\left\{y\in Y':\chi(y,i)=i\right\}=\left\{y\in Y':T(i)(y)=i\right\}=T(i)^{-1}(i)\cap Y',\] and similarly $p^{-1}(-1)=T(i)^{-1}(-i)\cap Y'$, so these two sets, which are complementary in $Y'$, are closed and hence clopen. \end{proof} \begin{example} As an easy example where the subset $F\subseteq Y$ in the previous theorem is nonempty, let $X=Y=\left\{\ast\right\}$ be (equal) singletons, and let $t:\mathbb{S}^1\to\mathbb{S}^1$ be a discontinuous automorphism of $\mathbb{S}^1$. Consider the map $T:C(X,\mathbb{S}^1)\to C(Y,\mathbb{S}^1)$, $T(f)(\ast)=t(f(\ast))$ (in other words, $T$ is the function obtained from $t$ by identifying $C(X,\mathbb{S}^1)$ and $C(Y,\mathbb{S}^1)$ with $\mathbb{S}^1$). Then $T$ satisfies the hypotheses of the previous theorem but $F=Y$. \end{example} We now endow $C(X,\mathbb{S}^1)$ with the uniform metric: \[d_\infty(f,g)=\sup_{x\in X}|f(x)-g(x)|\] (which is the metric coming from the C*-algebra $C(X,\mathbb{C})$). \begin{theorem}\label{theoremisometricisomorphismcirclegroups} If $X$ and $Y$ are as above and $T:C(X,\mathbb{S}^1)\to C(Y,\mathbb{S}^1)$ is an isometric isomorphism, then there is a homeomorphism $\phi:Y\to X$ and a continuous function $p:Y\to\left\{\pm 1\right\}$ such that $Tf(y)=f(\phi(y))^{p(y)}$ for all $y\in Y$. \end{theorem} \begin{proof} We identify each $\lambda\in\mathbb{S}^1$ with the corresponding constant map on $X$ or $Y$. The constant function $-1$ is characterized by the following two properties: \begin{itemize} \item $(-1)^2=1$; \item If $g^3=1$, then $d_\infty(-1,g)\in\left\{1,2\right\}$. \end{itemize} (Indeed, if $f^2=1$ but $f\neq -1$, then $f$ attains the value $1$, and $d_\infty(f,g)=\sqrt{3}\not\in\left\{1,2\right\}$ for the constant function $g=e^{2\pi i/3}$.) Thus $T(-1)=-1$. A function $f$ does not attain $1$ if and only if $d_\infty(-1,f)<2$ (by compactness of $X$), so $T$ preserves functions not attaining $1$, and we apply Theorem \ref{theoremisomorphismgroupifcirclevaluedfunctions} (or more precisely its proof) in order to obtain a homeomorphism $\phi:Y\to X$, a function $\chi:Y\times\mathbb{S}^1\to\mathbb{S}^1$ and a continuous function $p:Y'\to\left\{-1,1\right\}$, where $Y'=\left\{y\in Y:\chi(y,\cdot)\text{ is continuous}\right\}$, such that \[Tf(y)=\chi(y,f(\phi(y)))\qquad\text{and}\qquad\chi(y',t)=t^{p(y')}\] for all $y\in Y$, $y'\in Y'$ and $f\in C(X,\mathbb{S}^1)$. It remains only to prove that $Y'=Y$, i.e., every section $\chi(y,\cdot)$ is continuous. If $\lambda_i\to\lambda$ in $\mathbb{S}^1$ then we also have uniform convergence of the corresponding constant functions, so \[\chi(y,\lambda_i)=T(\lambda_i)(y)\to T(\lambda)(y)=\chi(y,\lambda),\] thus $\chi(y,\cdot)$ is continuous for all $y$. \end{proof} \subsection{Milgram's Theorem} In this subsection, we will always assume that $X$ (and similarly $Y$) is a locally compact Hausdorff space.
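Before the general statements, we note a simple example, included as an illustration of our own, showing that genuinely non-additive multiplicative isomorphisms already occur, so the exponent appearing in the classification below cannot be avoided. \begin{example} The map $T:C_c(X,\mathbb{R})\to C_c(X,\mathbb{R})$ given pointwise by $Tf(x)=\operatorname{sgn}(f(x))|f(x)|^2$ is a multiplicative isomorphism: the identity $\operatorname{sgn}(st)|st|^2=\big(\operatorname{sgn}(s)|s|^2\big)\big(\operatorname{sgn}(t)|t|^2\big)$ for $s,t\in\mathbb{R}$ gives $T(fg)=T(f)T(g)$; moreover $Tf$ is continuous and $\supp(Tf)=\supp(f)$, since $t\mapsto\operatorname{sgn}(t)|t|^2$ is a homeomorphism of $\mathbb{R}$ fixing $0$, and the inverse of $T$ is obtained by using the exponent $1/2$ instead. However, $T$ is not additive. In the notation of Theorem \ref{theoremmilgram} below, this corresponds to $\phi=\operatorname{id}_X$, $F=\varnothing$ and $p\equiv 2$. \end{example}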
We first generalize Milgram's theorem (and by consequence, Gelfand-Kolmogorov and Gelfand-Naimark) as follows: Let $S_X$ be a non-trivial path-connected Hausdorff topological monoid with zero $0$ and unit $1$, and which is $0$-right cancellative\footnote{A semigroup $S$ with $0$ is \emph{$0$-right cancellative} if for all $s,r,t\in S$, $st=rt\neq 0$ implies $s=r$.} and categorical at $0$\footnote{A monoid $S$ with zero is \emph{categorical at $0$} if $st=0$ implies $s=0$ or $t=0$.}. Common examples of such semigroups are the following, under the usual product: $\mathbb{R}$, $\mathbb{C}$, $[-1,1]$, the closed complex unit disc, $\operatorname{Gl}(n,\mathbb{R})\cup\left\{0\right\}$, and other variations. We consider $\theta_X=0$, the zero map from $X$ to $S_X$. Let us also write $C_c(X,S_X)$ for $C_c(X,0)$ (to make the codomain explicit). Consider $Y$, $\theta_Y$ and $S_Y$ similarly. Urysohn's Lemma and path-connectedness of $S_X$ imply that $C_c(X,S_X)$ is regular. Following the procedure outlined on page \pageref{generalprocedureforclassification}, we first describe $\perp\!\!\!\perp$ in multiplicative terms: \begin{lemma}\label{lemmamilgram} If $f,g\in C_c(X,S_X)$, then \[f\perp\!\!\!\perp g\iff \exists h\in C_c(X,S_X)\text{ such that }hf=f\text{ and }hg=0.\stepcounter{counterequation}\tag{\arabic{section}.\arabic{counterequation}}\label{conditionmilgram}\] \end{lemma} \begin{proof} Indeed, first assume $f\perp\!\!\!\perp g$. By Urysohn's Lemma and path-connectedness of $S_X$, there exists $h\in C_c(X,S_X)$ such that $h=1$ on $\supp(f)$ and $\supp(h)\subseteq X\setminus\supp(g)$. Then $h$ satisfies condition \eqref{conditionmilgram}. Conversely, if there is $h\in C_c(X,S_X)$ satisfying \eqref{conditionmilgram}, then $h=1$ on $[f\neq 0]$ (because $S_X$ is $0$-right cancellative), hence on $\supp(f)$ by continuity, and so $f\Subset h$. As $hg=0$ if and only if $h\perp g$, we conclude that $f\perp\!\!\!\perp g$. \end{proof} As a consequence, any multiplicative isomorphism $C_c(X,S_X)\to C_c(Y,S_Y)$ is a $\perp\!\!\!\perp$-isomorphism. \begin{corollary} If $T:C_c(X,S_X)\to C_c(Y,S_Y)$ is a multiplicative isomorphism, then $X$ and $Y$ are homeomorphic. \end{corollary} To recover Milgram's original theorem in full generality, we restrict now to the case $S_X=S_Y=\mathbb{R}$ in order to obtain an explicit description of $T$ as above. First, recall a well-known classification of continuous multiplicative isomorphisms of $\mathbb{R}$ (see \cite[Lemma 4.3]{MR0029476}, for example). We present its proof for the sake of completeness. Given $t\in\mathbb{R}$, $\operatorname{sgn}(t)$ denotes the sign of $t$ (with $\operatorname{sgn}(0)=0$). \begin{proposition}\label{theoremclassificationmultiplicativeisomorphismsofr} Let $\tau:\mathbb{R}\to\mathbb{R}$ be a multiplicative isomorphism.
Then \begin{enumerate}[label=(\alph*)] \item\label{theoremclassificationmultiplicativeisomorphismsofr(a)} Given $x\in\mathbb{R}$, $x\geq 0$ if and only if $\tau(x)\geq 0$; \item\label{theoremclassificationmultiplicativeisomorphismsofr(b)} $\tau(-x)=-\tau(x)$ for all $x\in\mathbb{R}$; \item\label{theoremclassificationmultiplicativeisomorphismsofr(c)} The following are equivalent: \begin{enumerate}[label=(\arabic*)] \item\label{theoremclassificationmultiplicativeisomorphismsofr(c)(1)} $\tau$ is continuous; \item\label{theoremclassificationmultiplicativeisomorphismsofr(c)(2)} $\tau$ is continuous at $0$; \item\label{theoremclassificationmultiplicativeisomorphismsofr(c)(3)} If $0<x<1$ then $0<\tau(x)<1$; \item\label{theoremclassificationmultiplicativeisomorphismsofr(c)(4)} $\tau$ is increasing; \item\label{theoremclassificationmultiplicativeisomorphismsofr(c)(5)} $\tau$ has the form $\tau(x)=\operatorname{sgn}(x)|x|^p$ for some $p>0$. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[label=\ref{theoremclassificationmultiplicativeisomorphismsofr(\alph*)}] \item Simply note that $x\geq 0$ if and only if $x=y^2$ for some $y$. \item If $x=0$ this is trivial. If $x\neq 0$, then $-x$ is the only number satisfying $(-x)^2=x^2$ and $-x\neq x$. \item The implications (5)$\Rightarrow$(1)$\Rightarrow$(2) are trivial. \noindent(2)$\Rightarrow$(3): If $0<x<1$ then $x^n\to 0$, so $\tau(x)^n=\tau(x^n)\to \tau(0)=0$, which implies $0<\tau(x)<1$ (note that $\tau(x)>0$ by \ref{theoremclassificationmultiplicativeisomorphismsofr(a)} and injectivity of $\tau$). \noindent(3)$\Rightarrow$(4) is immediate from \ref{theoremclassificationmultiplicativeisomorphismsofr(a)} and \ref{theoremclassificationmultiplicativeisomorphismsofr(b)}. \noindent(4)$\Rightarrow$(5): Letting $p=\log_2(\tau(2))>0$ (because $\tau(2)>\tau(1)=1$), we have $\tau(2^q)=(2^q)^p$ for all $q\in\mathbb{Q}$. Thus the restriction of $\tau$ to $[0,\infty)$ is an increasing map with dense image, hence surjective and continuous. Moreover, $\tau$ coincides with $x\mapsto x^p$ on a dense subset of $[0,\infty)$, hence $\tau(x)=\operatorname{sgn}(x)|x|^p$ for all $x\geq 0$ and thus for all $x$ by \ref{theoremclassificationmultiplicativeisomorphismsofr(b)}.\qedhere \end{enumerate} \end{proof} We will now classify multiplicative isomorphisms from $C_c(X,\mathbb{R})$ to $C_c(Y,\mathbb{R})$. \begin{theorem}[{Milgram's Theorem, \cite[Theorem A]{MR0029476}, for locally compact spaces}]\label{theoremmilgram} Let $X$ and $Y$ be locally compact Hausdorff spaces and let $T:C_c(X,\mathbb{R})\to C_c(Y,\mathbb{R})$ be a multiplicative isomorphism. Then there exists a homeomorphism $\phi:Y\to X$, a closed and discrete subset $F\subseteq Y$ consisting of isolated points, and a continuous positive function $p:Y\setminus F\to(0,\infty)$ satisfying \[Tf(y)=\operatorname{sgn}(f(\phi(y)))|f(\phi (y))|^{p(y)}\] for all $f\in C_c(X,\mathbb{R})$ and $y\in Y\setminus F$. \end{theorem} \begin{proof} By Lemma \ref{lemmamilgram}, $T$ is a $\perp\!\!\!\perp$-isomorphism, so let $\phi$ be the $T$-homeomorphism. We prove that $T$ is $\phi$-basic in a few steps: \begin{enumerate}[label=\arabic*.] \item\label{theoremmilgramstep1} \uline{If $f,h\in C_c(X,\mathbb{R})$, then $f=1$ on $\supp(h)$ if and only if $Tf=1$ on $\supp(Th)$:} We have $f=1$ on $\supp(h)$ if and only if $fh=h$, and similarly for $Tf$. Since $T$ is multiplicative we are done. \item\label{theoremmilgramstep2} \uline{If $f,h\in C_c(X,\mathbb{R})$, then $f\neq 0$ on $\supp(h)$ if and only if $Tf\neq 0$ on $\supp(Th)$:} If $f\neq 0$ on $\supp(h)$, we can find $g\in C_c(X,\mathbb{R})$ such that $g=1/f$ on $\supp(h)$.
Item \ref{theoremmilgramstep1} implies that $TfTg=1$ on $\supp(Th)$, and in particular $Tf\neq 0$ on $\supp(Th)$. The converse implication follows by applying the same argument to $T^{-1}$. \item\label{theoremmilgramstep3} \uline{If $f\in C_c(X,\mathbb{R})$ and $y\in Y$, then $f(\phi(y))\neq 0$ if and only if $Tf(y)\neq 0$:} Assume $f(\phi(y))\neq 0$. Choose $h\in C_c(X,\mathbb{R})$ such that $\phi(y)\in\supp(h)\subseteq[f\neq 0]$. Then item \ref{theoremmilgramstep2} implies that $Tf\neq 0$ on $\supp(Th)$, which contains $y$. \item\label{theoremmilgramstep4} \uline{If $f,g,h\in C_c(X,\mathbb{R})$, then $f$ and $g$ coincide and are nonzero on $\supp(h)$ if and only if $Tf$ and $Tg$ coincide and are nonzero on $\supp(Th)$:} This is an immediate consequence of item \ref{theoremmilgramstep1}. \item\label{theoremmilgramstep5} In particular, from \ref{theoremmilgramstep3}, \uline{$f(\phi(y))=0$ if and only if $Tf(y)=0$.} \item\label{theoremmilgramstep6} \uline{If $f\in C_c(X,\mathbb{R})$ and $y\in Y$, then $f(\phi(y))=1$ if and only if $Tf(y)=1$:} Suppose this were not the case, say $f(\phi(y))=1$ but $Tf(y)\neq 1$, and let us deduce a contradiction. Take a neighbourhood $Y'$ of $y$ with compact closure. In particular, by \ref{theoremmilgramstep1}, $f$ is not constant (equal to $1$) on any neighbourhood of $\phi(y)$, so there is a sequence of distinct points $y_n\in Y'$ and $r>0$ such that \begin{enumerate}[label=(\roman*)] \item $f(\phi(y_n))^n\to 1$; \item $|Tf(y_n)-1|>r$ for all $n$. \end{enumerate} By Propositions \ref{propositiondisjointopensets} and \ref{propositionconstructfunction}\ref{propositionconstructfunction(b)}, we can consider a subsequence of $\left\{y_n\right\}_n$, if necessary, and take a continuous function $g:X\to\mathbb{R}$ which coincides with $f^n$ on a neighbourhood of $\phi(y_n)$ for each $n$. Property (i) allows us to pass to another subsequence and replace $g$ with $\max(g,1/2)$, so we may assume that $g\neq 0$ everywhere. Let $u\in C_c(X,\mathbb{R})$ be a function with $u=1$ on $\phi(Y')$, so $ug\in C_c(X,\mathbb{R})$ and $ug=f^n$ on a neighbourhood of $\phi(y_n)$ for all $n$. Let $z$ be a cluster point of the sequence $(y_n)_n$, so in particular $T(ug)(z)$ is a cluster point of the sequence $T(ug)(y_n)=Tf(y_n)^n$, where this equality follows from \ref{theoremmilgramstep4}. Since $|Tf(y_n)-1|>r>0$, and $Tf(y_n)>0$ for large $n$ (for large $n$, $f$ is positive near $\phi(y_n)$, hence coincides there with the square of an element of $C_c(X,\mathbb{R})$, and then \ref{theoremmilgramstep4} shows that $Tf(y_n)$ is a nonzero square), the only possibility is $T(ug)(z)=0$, and by \ref{theoremmilgramstep5} this means that $0=(ug)(\phi(z))=g(\phi(z))$ (because $z\in\overline{Y'}$), contradicting the fact that $g$ vanishes nowhere. \end{enumerate} We conclude, from \ref{theoremmilgramstep5} and \ref{theoremmilgramstep6}, that $T$ is $\phi$-basic. Let $\chi:Y\times\mathbb{R}\to\mathbb{R}$ be the $T$-transform. Let $F=\left\{y\in Y:\chi(y,\cdot)\text{ is discontinuous}\right\}$. Let us prove that $F$ is closed and discrete, or equivalently that $F\cap K$ is finite for all compact $K\subseteq Y$. Otherwise, using the equivalences \ref{theoremclassificationmultiplicativeisomorphismsofr(c)(1)}$\iff$\ref{theoremclassificationmultiplicativeisomorphismsofr(c)(3)} of Proposition \ref{theoremclassificationmultiplicativeisomorphismsofr}\ref{theoremclassificationmultiplicativeisomorphismsofr(c)}, there would be distinct $y_1,y_2,\ldots\in K$ and a strictly decreasing sequence $t_n\to 0$ such that $\chi(y_n,t_n)>n$. Going to a subsequence if necessary, we can construct, by Proposition \ref{propositionconstructfunction}\ref{propositionconstructfunction(b)}, $f\in C_c(X,\mathbb{R})$ with $f(\phi(y_n))=t_n$, so $Tf(y_n)>n$ for all $n$, a contradiction to the boundedness of $Tf$.
To prove that $F$ consists of isolated points, we prove that it is open: If $z\in F$, then there is $t\in(0,1)$ with $\chi(z,t)>1$. Take $f\in C_c(X,\mathbb{R})$ such that $f=t$ on a neighbourhood $U$ of $\phi(z)$. In particular $Tf(z)>1$, so there is a neighbourhood $W$ of $z$ such that $Tf>1$ on $W$. Then for all $y\in\phi^{-1}(U)\cap W$, $\chi(y,t)=Tf(y)>1$, so $y\in F$. Therefore $F$ is open; since $Y$ is locally compact and $F$ is closed and discrete, the points of $F$ are isolated in $Y$. For $y\not\in F$, Propositions \ref{propositionmodelmorphism} and \ref{theoremclassificationmultiplicativeisomorphismsofr} imply that $\chi(y,\cdot)$ has the form $\chi(y,t)=\operatorname{sgn}(t)|t|^{p(y)}$ for some $p(y)>0$. Let $U$ be any open subset of $Y$ not intersecting $F$ and with compact closure. Take any function $f\in C_c(X,\mathbb{R})$ with $f=2$ on $\phi(U)$. Then for $y\in U$, \[2^{p(y)}=\chi(y,2)=\chi(y,f(\phi(y)))=Tf(y),\] so $p(y)=\log_2(Tf(y))$ for $y\in U$, showing that $p$ is continuous on $U$. Since $Y\setminus F$ is the union of such $U$ we are done. \end{proof} \subsection{Group-valued functions; Hernández--Ródenas Theorem} Given topological groups $G$ and $H$, denote by $\operatorname{AbsIso}(G,H)$ the set of algebraic group isomorphisms from $G$ to $H$, and by $\operatorname{TopIso}(G,H)$ the set of topological (i.e., homeomorphisms which are) group isomorphisms. Let $X$ and $Y$ be compact Hausdorff spaces and $G$ a Hausdorff topological group. In \cite[Theorem 3.7]{MR2324919}, Hernández and Ródenas classified non-vanishing group morphisms (not necessarily \emph{isomorphisms}) $T:C(X,G)\to C(Y,G)$ which satisfy the following properties: \begin{enumerate}[label=(\roman*)] \item There exists a continuous group morphism $\psi:G\to C(X,G)$, where $C(X,G)$ is endowed with the topology of pointwise convergence, such that for all $\alpha\in G$ and all $y\in Y$, $T(\psi(\alpha))(y)=\alpha$; \item For every continuous endomorphism $\theta:G\to G$ and every $f\in C(X,G)$, $T(\theta\circ f)=\theta\circ (Tf)$. \end{enumerate} If $T$ is a group isomorphism and $T^{-1}$ is continuous (with respect to uniform convergence) then condition (i) is immediately satisfied; however, this is not true for (ii): For example, if $\operatorname{TopIso}(G,G)$ is non-abelian and $\rho\in\operatorname{TopIso}(G,G)$ is any non-central element, then the map $T:C(X,G)\to C(X,G)$ given by $Tf=\rho\circ f$ is a group isomorphism, and a self-homeomorphism of $C(X,G)$ with the topology of uniform convergence, which does not satisfy (ii). In the next theorem we obtain the same type of classification as in \cite[Theorem 3.7]{MR2324919}, without assuming condition (ii); however, we consider only non-vanishing group isomorphisms. Given a compact Hausdorff space $X$, a topological group $G$ and $\alpha\in G$, denote by $\overline{\alpha}$ the constant function $X\to G$, $x\mapsto \alpha$. We endow $C(X,G)$ with the topology of pointwise convergence. \begin{theorem} Suppose that $G$ and $H$ are Hausdorff topological groups, and $X$ and $Y$ are compact Hausdorff spaces for which $(X,\overline{1_G},C(X,G))$ and $(Y,\overline{1_H},C(Y,H))$ are regular. Let $T:C(X,G)\to C(Y,H)$ be a non-vanishing group isomorphism (Definition \ref{definitionnonvanishingbijection}). Then there exist a homeomorphism $\phi:Y\to X$ and a map $w:Y\to\operatorname{AbsIso}(G,H)$ such that $Tf(y)=w(y)(f(\phi(y)))$ for all $y\in Y$ and $f\in C(X,G)$. If $T$ is continuous on the constant functions then each $w(y)$ is continuous and $T$ is continuous on $C(X,G)$.
If both $T$ and $T^{-1}$ are continuous on the constant functions then $w(y)\in\operatorname{TopIso}(G,H)$ and $T$ is a homeomorphism for the topologies of pointwise convergence. \end{theorem} \begin{proof} By Theorem \ref{theoremnonvanishing} and Proposition \ref{propositionbasicnessofgroupvalued}, there is a homeomorphism $\phi:Y\to X$ such that $T$ is $\phi$-basic, and the sections $\chi(y,\cdot):G\to H$ of the $(\phi,T)$-transform $\chi$ are group morphisms by Proposition \ref{propositionmodelmorphism}. Similar facts hold for $T^{-1}$, so Proposition \ref{propositionbasicisomorphisms} implies that each section $\chi(y,\cdot)$ is bijective. Letting $w(y)=\chi(y,\cdot)$ we are done with the first part. Now note that for all $\alpha\in G$ and $y\in Y$, \[w(y)(\alpha)=w(y)(\overline{\alpha}(\phi(y)))=T\overline{\alpha}(y),\] which implies that every $w(y)$ is continuous if and only if $T$ is continuous on the constant functions (because the map $\alpha\mapsto\overline{\alpha}$ from $G$ to $C(X,G)$ is a homeomorphism onto its image). In this case, from the equality \[Tf(y)=w(y)(f(\phi(y)))\qquad\text{for all}\quad f\in C(X,G)\quad\text{and}\quad y\in Y,\] we can readily see that $T$ is continuous. The last part, assuming also that $T^{-1}$ is continuous on the constant functions, is similar, using $T^{-1}$ and $w(y)^{-1}$ in place of $T$ and $w(y)$. \end{proof} \subsection{Kaplansky's Theorem}\label{subsectionkaplansky} Let $R$ be a totally ordered set without supremum or infimum, considered as a topological space with the order topology, and let $X$ be a locally compact Hausdorff space. We consider the pointwise order on $C(X,R)$: $f\leq g$ if and only if $f(x)\leq g(x)$ for all $x$, which makes $C(X,R)$ a lattice: for all $f,g\in C(X,R)$ and $x\in X$, \[(f\lor g)(x)=\max\left\{f(x),g(x)\right\},\qquad\text{and}\qquad(f\land g)(x)=\min\left\{f(x),g(x)\right\}.\] Denote by $C_b(X,R)$ the sublattice of bounded continuous functions from $X$ to $R$. In \cite{MR0020715} Kaplansky proved that if $X$ is compact and \emph{$R$-normal}\footnote{Following \cite{MR0020715}, a sublattice $\mathcal{A}\subseteq C(X,R)$ is \emph{$R$-normal} if for any pair of disjoint closed sets $F,G\subseteq X$ and $\alpha,\beta\in R$, there is $f\in\mathcal{A}$ which equals $\alpha$ on $F$ and $\beta$ on $G$; the space $X$ is \emph{$R$-normal} if $C(X,R)$ is $R$-normal.}, then the lattice $C(X,R)$ determines $X$ completely, and in \cite{MR0026240} he classified additive lattice isomorphisms between these lattices of functions in the case that $R=\mathbb{R}$. We improve on these results in the following ways: We allow $X$ to be non-compact (only locally compact), obtain a recovery theorem for $X$ from a subcollection $\mathcal{A}$ of $C_b(X,R)$ (Theorem \ref{theoremkaplanskywithoutlowerbound}), and classify lattice isomorphisms in the case of non-real-valued functions for first-countable spaces (Theorem \ref{theoremclassificationlatticemorphismfirstcountable}). We will consider sublattices $\mathcal{A}$ of $C_b(X,R)$ which satisfy \begin{enumerate}[label=(L\arabic*)] \item\label{conditionskaplansky1} for all $f,g\in\mathcal{A}$, $\overline{[f\neq g]}$ is compact; \item\label{conditionskaplansky2} for all $f\in\mathcal{A}$, every open set $U\subseteq X$, every $x\in U$ and $\alpha\in R$, there exists $g\in\mathcal{A}$ such that $g(x)=\alpha$ and $[g\neq f]\subseteq U$. \end{enumerate} \begin{example}[{Kaplansky, \cite{MR0020715}}]\label{examplekaplanskyisl1l2} Suppose that $X$ is compact and $\mathcal{A}$ is an $R$-normal sublattice of $C(X,R)$.
Condition \ref{conditionskaplansky1} is trivial, so let us check that $\mathcal{A}$ satisfies \ref{conditionskaplansky2}: Suppose $f$, $U$, $x$ and $\alpha$ are as in that condition. For the sake of the argument we can assume $f(x)\leq\alpha$. Let $\beta$ be any lower bound of $f(X)$, and from $R$-normality find $h\in\mathcal{A}$ such that $h(x)=\alpha$ and $h=\beta$ outside $U$. Then $g=f\lor h$ has the desired properties. \end{example} \begin{example}[{Li--Wong, \cite{MR3162258}}] Suppose that $X$ is compact, $R=\mathbb{R}$, and $\mathcal{A}$ is a regular additive subgroup of $C(X,\mathbb{R})$, so \ref{conditionskaplansky1} is also trivial and again we need to verify condition \ref{conditionskaplansky2}: Let $f$, $U$, $x$ and $\alpha$ be as in \ref{conditionskaplansky2}. By regularity, take $h$ such that $\supp(h)\subseteq U$ and $h(x)=\alpha-f(x)$. Then $g=f+h$ has the desired properties. \end{example} We will now recover the main result of \cite{MR0020715}. The following lemma is based on \cite{MR0052760}. \begin{lemma}\label{importantlemmakaplansky} Let $\mathcal{A}$ be a sublattice of $C_b(X,R)$ satisfying \ref{conditionskaplansky1} and \ref{conditionskaplansky2}, and let $f_0$ be any element of $\mathcal{A}$. Let $\mathcal{A}_{\geq f_0}=\left\{f\in\mathcal{A}:f\geq f_0\right\}$. Then $(X,f_0,\mathcal{A}_{\geq f_0})$ is weakly regular and for $f,g\in\mathcal{A}_{\geq f_0}$, \begin{enumerate}[label=(\alph*)] \item\label{importantlemmakaplansky(a)} $f\perp g\iff f\land g=f_0$ (which is the minimum of $\mathcal{A}_{\geq f_0}$); \item\label{importantlemmakaplansky(b)} $f\Subset g$ is equivalent to the following statement: \[\begin{split} \text{``for every bounded subset $\mathscr{H}\subseteq\mathcal{A}$ such that $h\subseteq f$ for all $h\in \mathscr{H}$,}\\ \text{there is an upper bound $k$ of $\mathscr{H}$ such that $k\subseteq g$.''}\end{split}\tag{K}\] \end{enumerate} \end{lemma} \begin{proof} Weak regularity is immediate from \ref{conditionskaplansky2} and the fact that $R$ does not have a supremum, and item \ref{importantlemmakaplansky(a)} is trivial. Let us prove \ref{importantlemmakaplansky(b)}. First suppose $f\Subset g$ and $\mathscr{H}\subseteq\mathcal{A}$ is a bounded subset such that $h\subseteq f$ for all $h\in \mathscr{H}$. Let $\alpha\in R$ be an upper bound of $\bigcup_{h\in\mathscr{H}}h(X)$. From weak regularity and compactness of $\supp(f)$, we can take finitely many functions $k_1,\ldots,k_n$ such that $k_i\Subset g$, and for every $x\in\supp(f)$ there is some $i$ with $k_i(x)>\alpha$. Letting $k=\bigvee_{i=1}^n k_i$ we obtain the desired properties. Conversely, suppose that condition (K) holds. Let $\alpha$ be any upper bound of $f_0(X)$ and take any $\beta>\alpha$. Let $\mathscr{H}=\left\{h\in\mathcal{A}_{\geq f_0}:h\leq \beta, h\subseteq f\right\}$. Let $k$ be an upper bound of $\mathscr{H}$ with $k\subseteq g$. By Property \ref{conditionskaplansky2}, we have $\sigma(f)=\bigcup_{h\in\mathscr{H}}\sigma(h)$, so $k\geq\beta$ on $\sigma(f)$ and thus also on $\supp(f)$, which implies $f\Subset k$. Since $k\subseteq g$, it follows that $f\Subset g$.
\end{proof} As an immediate consequence of Lemma \ref{importantlemmakaplansky} and Theorem \ref{maintheorem} we have the following generalization of Kaplansky's Theorem: \begin{theorem}[Kaplansky \cite{MR0020715}]\label{theoremkaplansky} Suppose $R$ has neither supremum nor infimum, $\mathcal{A}(X)$ and $\mathcal{A}(Y)$ are sublattices of $C_b(X,R)$ and $C_b(Y,R)$, respectively, satisfying conditions \ref{conditionskaplansky1} and \ref{conditionskaplansky2}, and $T:\mathcal{A}(X)\to\mathcal{A}(Y)$ is a lattice isomorphism. Then for any $f_0\in\mathcal{A}(X)$, $T$ restricts to a $\perp\!\!\!\perp$-isomorphism of the regular sublattices $\mathcal{A}(X)_{\geq f_0}$ and $\mathcal{A}(Y)_{\geq Tf_0}$. In particular, $X$ and $Y$ are homeomorphic. \end{theorem} Our immediate goal is to prove that the homeomorphism between $X$ and $Y$ given by Theorem \ref{theoremkaplansky} does not depend on the choice of function $f_0$ in Lemma \ref{importantlemmakaplansky}. \begin{lemma}\label{lemmarestrictionoflatticeisomorphismisbasic}\index{$T$-homeomorphism} Under the conditions of Theorem \ref{theoremkaplansky}, let $\phi:Y\to X$ be the $T|_{\mathcal{A}(X)_{\geq f_0}}$-homeomorphism. If $f,g\in\mathcal{A}(X)_{\geq{f_0}}$ then $\phi^{-1}(\operatorname{int}([f=g]))=\operatorname{int}([Tf=Tg])$. \end{lemma} \begin{proof} We will use the superscript ``$f_0$'' as in Definition \ref{definitionsigma}, that is, for all $f\geq f_0$, \[\sigma^{f_0}(f)=\operatorname{int}(\overline{[f\neq f_0]}),\qquad\text{and}\qquad Z^{f_0}(f)=\operatorname{int}([f=f_0]),\] and similarly with $Tf_0$ in place of $f_0$. Then $\phi$ is the only homeomorphism satisfying $\sigma^{f_0}(f)=\phi(\sigma^{Tf_0}(Tf))$, or equivalently $Z^{f_0}(f)=\phi(Z^{Tf_0}(Tf))$, for all $f\geq f_0$. First, assume that $f\leq g$, and let $U=\operatorname{int}([f=g])$. For all $x\in[f<g]$, choose a function $k_x\in\mathcal{A}(X)_{\geq f_0}$ such that \begin{itemize} \item $\sigma^{f_0}(k_x)\cap [f=g]=\varnothing$; \item $k_x(x)=g(x)$; \item $k_x\leq g$. \end{itemize} Then $g=\sup\left\{f,k_x:x\in[f<g]\right\}$, so $Tg=\sup\left\{Tf,Tk_x:x\in[f<g]\right\}$. Let us prove that $Tf(y)=Tg(y)$ for all $y\in\phi^{-1}(U)$. Since $U\subseteq\bigcap_{x\in[f<g]}Z^{f_0}(k_x)$, we have $\phi^{-1}(U)\subseteq\bigcap_{x\in[f<g]}Z^{Tf_0}(Tk_x)$. Given $y\in\phi^{-1}(U)$, use property \ref{conditionskaplansky2} to find $h\in\mathcal{A}(Y)$ such that $h(y)=Tf(y)$ and $h=Tg$ outside of $\phi^{-1}(U)$. Then $h'=(Tf\lor h)\land Tg$ is an upper bound of $\left\{Tf,Tk_x:x\in[f<g]\right\}$, so $h'\geq Tg$, and in particular \[Tg(y)\leq ((Tf\lor h)\land Tg)(y)\leq (Tf\lor h)(y)=Tf(y)\lor h(y)=Tf(y);\] since $f\leq g$ implies $Tf\leq Tg$, we conclude that $Tg(y)=Tf(y)$. In the general case, if $f=g$ on an open set $U$ then $f=f\land g=g$ on $U$, so the previous case implies that $Tf$ and $Tg$ both coincide with $T(f\land g)$ on $\phi^{-1}(U)$. \end{proof} \begin{theorem}\label{theoremkaplanskywithoutlowerbound} Under the conditions of Theorem \ref{theoremkaplansky}, there exists a unique homeomorphism $\phi:Y\to X$ such that $\phi(\operatorname{int}([Tf=Tg]))=\operatorname{int}([f=g])$ for all $f,g\in\mathcal{A}(X)$. (In this case we will still call $\phi$ the \emph{$T$-homeomorphism}.) \end{theorem} \begin{proof} For each $f_0\in \mathcal{A}(X)$, let $\phi^{f_0}:Y\to X$ be the $T|_{\mathcal{A}(X)_{\geq f_0}}$-homeomorphism.
Given $f_0,g_0\in\mathcal{A}(X)$, Lemma \ref{lemmarestrictionoflatticeisomorphismisbasic} implies that $\phi^{f_0\land g_0}$ satisfies the property of both the $T|_{\mathcal{A}(X)_{\geq f_0}}$- and the $T|_{\mathcal{A}(X)_{\geq g_0}}$-homeomorphisms, so $\phi^{f_0}=\phi^{f_0\land g_0}=\phi^{g_0}$. We are done by letting $\phi=\phi^{f_0}$ for some arbitrary $f_0\in\mathcal{A}(X)$. \end{proof} A natural goal now is to classify the lattice isomorphisms as given in Theorem \ref{theoremkaplanskywithoutlowerbound}, which is possible when we consider first-countable spaces. A similar argument to that of Proposition \ref{propositionbasicisomorphisms} appears in \cite{MR2995073}, although in a different context (considering lattices of possibly unbounded real-valued continuous functions on complete metric spaces). See also \cite{MR2999998,MR3404615}. Let us recall the assumptions in force throughout this subsection: $X$ denotes a locally compact Hausdorff space and $R$ is a totally ordered set (without supremum or infimum) with the order topology. The following is a version of Proposition \ref{propositionconstructfunction} in this setting. \begin{proposition}\label{propositionconstructfunctionkaplansky} Assume further that $X$ and $R$ are first-countable, and that $\theta\in C_b(X,R)$ is such that $C_c(X,\theta)$ satisfies \ref{conditionskaplansky2}. Suppose that $\left\{x_n\right\}_n$ is an injective sequence in $X$, converging to $x_\infty\in X$. Let $g_n\in C(X,R)$ be functions such that $g_n(x_n)\to \theta(x_\infty)$. Then there exists $f\in C_c(X,\theta)$ such that $f=g_n$ on a neighbourhood of $x_n$ for each $n$, and such that $f=\theta$ outside of a compact set containing $\left\{x_n:n\in\mathbb{N}\right\}$ (which may be taken as small as desired subject to this property). \end{proposition} \begin{proof} Since $x_n\to x_\infty$, the hypotheses on $X$ and $R$ allow us to find open sets $U_n$ such that \begin{enumerate}[label=(\roman*)] \item $x_n\in U_n$ for all $n$; \item $\overline{U_n}\cap\overline{\bigcup_{m\neq n}U_m}=\varnothing$ for all $n$; \item For all sequences $\left\{x_n'\right\}_n\in\prod_n U_n$, we have $x_n'\to x_\infty$ and $g_n(x_n')\to\theta(x_\infty)$; \item $\overline{\bigcup_n U_n}$ is compact. \end{enumerate} We will define $f$ on each of the sets $U_n$ separately. First, for each $n$ we find $h_n^+$ such that $h_n^+>g_n\lor\theta$ on a neighbourhood of $x_n$, and $\supp^\theta(h_n^+)\subseteq U_n$. Similarly, we find $h_n^-$ such that $h_n^-<g_n\land\theta$ on a neighbourhood of $x_n$ and $\supp^\theta(h_n^-)\subseteq U_n$. Given $x\in U_n$, set \[f(x)=\begin{cases} ((h_n^+\land g_n)\lor\theta)(x),&\text{if }g_n(x)\geq\theta(x)\\ ((h_n^-\lor g_n)\land\theta)(x),&\text{if }g_n(x)\leq\theta(x) \end{cases}\stepcounter{counterequation}\tag{\arabic{section}.\arabic{counterequation}}\label{equationpropositionconstructfunctionkaplansky}\] Note that if $g_n(x)=\theta(x)$ then both of the expressions above are equal to $\theta(x)$, so $f$ is well-defined. Also, if both $h_n^+(x)=\theta(x)$ and $h_n^-(x)=\theta(x)$, then $f(x)=\theta(x)$ as well, which proves $\overline{[f\neq\theta]\cap U_n}\subseteq U_n$. Moreover, $f=g_n$ on some neighbourhood of $x_n$. We finish by defining $f=\theta$ on all of $X\setminus\bigcup_n U_n$. We need to prove that $f$ is continuous on all of $X$. Of course, $f$ is continuous on $\bigcup_n U_n$ and on the interior of $X\setminus\bigcup_n U_n$. We just need to prove that $f$ is continuous at boundary points, so let $x\in \partial\bigcup_n U_n$.
We have two possibilities: \begin{description} \item[Case 1: $x\in\overline{U_n}$ for some $n$] Property (ii) for the sets $U_n$ implies that there is some neighbourhood of $x$ which does not intersect $U_m$ for $m\neq n$. Moreover, we already know that $\overline{[f\neq\theta]\cap U_n}\subseteq U_n$, so $f=\theta$ on some neighbourhood of $x$. \item[Case 2: $x\not\in\overline{U_n}$ for any $n$] We are also assuming that $x\in\overline{\bigcup_n U_n}$, so small neighbourhoods of $x$ only intersect $U_n$ for large $n$, and Property (iii) of the sets $U_n$ implies that $x=x_\infty$. On a neighbourhood of $x$, $f$ will be given either by $\theta$, or will satisfy $\theta\leq f\leq g_n$ or $g_n\leq f\leq\theta$ pointwise on the sets $U_n$ (depending on whether $g_n\geq\theta$ or $g_n\leq\theta$ at each given point). In any case, property (iii) of the sets $U_n$ implies that $f$ is continuous at $x_\infty$. \end{description} Finally, since $\supp^\theta(f)\subseteq\overline{\bigcup_n U_n}$, which is compact by Property (iv), we obtain $f\in C_c(X,\theta)$. \end{proof} \begin{named}{Remark} We may drop the first-countability requirement on $R$ by weakening the condition that $f=g_n$ on a neighbourhood of $x_n$ to only $f(x_n)=g_n(x_n)$. The proof is essentially the same as above, however the condition $g_n(x_n')\to\theta(x_\infty)$ in (iii) cannot be guaranteed in this case, and when defining $f$, instead of Equation \eqref{equationpropositionconstructfunctionkaplansky} we use \[f(x)=\begin{cases} ((h_n^+\land g_n(x_n))\lor\theta)(x),&\text{if }g_n(x_n)>\theta(x_n)\\ ((h_n^-\lor g_n(x_n))\land\theta)(x),&\text{if }g_n(x_n)<\theta(x_n)\\ \theta(x),&\text{if }g_n(x_n)=\theta(x_n). \end{cases}\] (Note that the conditions are on both $x$ \emph{and} $n$.) However, this weaker version is not sufficient for the application in the theorem below. \end{named} \begin{theorem}\label{theoremclassificationlatticemorphismfirstcountable} Suppose that $X$, $Y$ and $R$ are first-countable, that $C_c(X,\theta_X)$ and $C_c(Y,\theta_Y)$ satisfy \ref{conditionskaplansky2}, and that $T:C_c(X,\theta_X)\to C_c(Y,\theta_Y)$ is a lattice isomorphism. Then there are a unique homeomorphism $\phi:Y\to X$ and a continuous function $\chi:Y\times R\to R$ such that \[Tf(y)=\chi(y,f(\phi(y)))\qquad\text{ for all }y\in Y\text{ and }f\in C_c(X,\theta_X)\stepcounter{counterequation}\tag{\arabic{section}.\arabic{counterequation}}\label{equationclassificationoforderkaplanskyisomorphismsforfirstcountble}\] and $\chi(y,\cdot):R\to R$ is an increasing bijection for each $y\in Y$. \end{theorem} \begin{proof} Let $\phi:Y\to X$ be the $T$-homeomorphism. We just need to prove that $T$ is $\phi$-basic, so assume $y\in Y$ and $f(\phi(y))=g(\phi(y))$. In order to prove that $Tf(y)=Tg(y)$, we may assume that $f\leq g$, by considering the auxiliary function $f\land g$. If $y$ is isolated in $Y$, then $f$ and $g$ coincide on the open set $\left\{\phi(y)\right\}$, so $Tf$ and $Tg$ coincide on the open set $\left\{y\right\}$. Assume then that $y$ is not isolated. Since $Y$ is first-countable, let $(y_n)_n$ be an injective sequence in $Y$ converging to $y$. By Proposition \ref{propositionconstructfunctionkaplansky}, there is $h\in C_c(X,\theta_X)$ such that \begin{itemize} \item If $n$ is even, $h=f$ on a neighbourhood of $\phi(y_n)$; \item If $n$ is odd, $h=g$ on a neighbourhood of $\phi(y_n)$. \end{itemize} Then $\phi(y)\in\overline{\operatorname{int}[f=h]}$, so $y\in\overline{\operatorname{int}[Tf=Th]}$, and so $Tf(y)=Th(y)$. Similarly, $Tg(y)=Th(y)=Tf(y)$. This proves that $T$ is $\phi$-basic.
Let $\chi$ be the $(\phi,T)$-transform. Proposition \ref{propositionmodelmorphism}, applied to the signature of lattices (with the binary symbol ``$\lor$'' interpreted as ``join''), implies that the sections $\chi(y,\cdot)$ are lattice isomorphisms of $R$ for all $y$, and in particular homeomorphisms. The proof that $\chi$ is continuous is similar to that of implication \ref{theorembasicisomorphisms(2)}$\Rightarrow$\ref{theorembasicisomorphisms(1)} of Theorem \ref{theorembasicisomorphisms} (using Proposition \ref{propositionconstructfunctionkaplansky} instead of \ref{propositionconstructfunction}). \end{proof} \begin{example} There are non-first-countable spaces for which the conclusion of Theorem \ref{theoremclassificationlatticemorphismfirstcountable} holds. Let $\Omega=\omega_1\cup\left\{\omega_1\right\}$ be the successor of the first uncountable ordinal. We extend the order of $\omega_1$ to $\Omega$ by setting $\alpha<\omega_1$ for all $\alpha\in\omega_1$, and $\Omega$ is a compact Hausdorff space with the order topology. If $Z$ is any metric space, then any continuous function $f:\Omega\to Z$ is constant on a neighbourhood of $\omega_1$: Indeed, for every $n\in\mathbb{N}$ choose $\alpha_n<\omega_1$ such that $d(f(\beta),f(\omega_1))<1/n$ whenever $\beta\geq\alpha_n$ (such $\alpha_n$ exist because the set $\left\{\beta\in\Omega:d(f(\beta),f(\omega_1))\geq 1/n\right\}$ is closed in the compact space $\Omega$ and does not contain $\omega_1$, hence is bounded in $\omega_1$). Letting $\alpha=\sup_n\alpha_n$, we have $\alpha<\omega_1$ and $f(\beta)=f(\omega_1)$ for all $\beta\in[\alpha,\omega_1]$. Now suppose that $R=\mathbb{R}$ and $T:C(\Omega)\to C(\Omega)$ is a lattice isomorphism, and let $\phi:\Omega\to\Omega$ be the $T$-homeomorphism. Since $\omega_1$ is the only non-$G_\delta$ point of $\Omega$, we have $\phi(\omega_1)=\omega_1$. The previous paragraph allows us to identify the lattices \[C_c(\omega_1)\simeq\left\{f\in C(\Omega):f(\omega_1)=0\right\},\] which then induces a lattice isomorphism $T|_{\omega_1}:C_c(\omega_1)\to C_c(\omega_1)$. In this case, note that $\phi|_{\omega_1}$ is the $T|_{\omega_1}$-homeomorphism. We can now prove that $T$ is basic with respect to $\phi$. Let $f,g\in C(\Omega)$. If $f(\omega_1)=g(\omega_1)$, then the first paragraph implies that $f=g$ on some neighbourhood of $\omega_1$, and thus $Tf(\omega_1)=Tg(\omega_1)$. If $f(\alpha)=g(\alpha)$ for some $\alpha<\omega_1$, consider $\widetilde{f}\in C_c(\omega_1,0)$ given by \[\widetilde{f}(x)=\begin{cases} f(x),&\text{if }x\leq\alpha,\\ 0,&\text{otherwise,}\end{cases}\] and define $\widetilde{g}$ similarly. Since $(-\infty,\alpha]$ is open in $\Omega$, we have \[Tf(\phi^{-1}(\alpha))=T\widetilde{f}(\phi^{-1}(\alpha))\qquad\text{and}\qquad Tg(\phi^{-1}(\alpha))=T\widetilde{g}(\phi^{-1}(\alpha)),\] and since $\omega_1$ is first-countable, we may apply Theorem \ref{theoremclassificationlatticemorphismfirstcountable} to $T|_{\omega_1}$ to conclude that \[T\widetilde{f}(\phi^{-1}(\alpha))=T\widetilde{g}(\phi^{-1}(\alpha)),\] so $Tf(\phi^{-1}(\alpha))=Tg(\phi^{-1}(\alpha))$. Therefore $T$ is basic (with respect to $\phi$). \end{example} In the case of additive lattice isomorphisms of spaces of real-valued functions, we do not require the first-countability hypothesis. \begin{theorem}\label{theoremadditivekaplansky} Suppose $R=\mathbb{R}$, and $T:C_c(X)\to C_c(Y)$ is an additive lattice isomorphism. Then there are a unique homeomorphism $\phi:Y\to X$ and a unique positive continuous function $p:Y\to (0,\infty)$ such that $Tf(y)=p(y)f(\phi(y))$ for all $f\in C_c(X)$ and $y\in Y$. \end{theorem} \begin{proof} First note that for all $f\in C_c(X)$, $|f|=(f\lor 0)-(f\land 0)$, so $T|f|=|Tf|$. Let $\phi:Y\to X$ be the $T$-homeomorphism, given by Theorem \ref{theoremkaplanskywithoutlowerbound}.
Now suppose $f(x)=0$ but $Tf(\phi^{-1}(x))\neq 0$. First take a compact neighbourhood $U$ of $x$ and $r>0$ such that $|Tf|>r$ on $\phi^{-1}(U)$. Moreover, $f$ does not vanish identically on any neighbourhood of $x$ (otherwise $f$ and $0$ would coincide on an open set, and $Tf$ would vanish on a neighbourhood of $\phi^{-1}(x)$ since $T0=0$), so there is a sequence of distinct points $x_n\in U$ such that $|f(x_n)|<n^{-2}$. Using Propositions \ref{propositiondisjointopensets} and \ref{propositionconstructfunction}\ref{propositionconstructfunction(b)}, we can take a subsequence if necessary and consider $g\in C_c(X)$ such that for all $n$, $g=nf$ on a neighbourhood of $x_n$. Then $Tg=nTf$ on a neighbourhood of $\phi^{-1}(x_n)$, however \[nr<n|Tf(\phi^{-1}(x_n))|=|Tg(\phi^{-1}(x_n))|,\] which contradicts the fact that $Tg$ is bounded. Therefore $T$ is basic with respect to $\phi$, so let $\chi$ be the $T$-transform. Each section $\chi(y,\cdot)$ is an additive order-preserving bijection (Propositions \ref{propositionmodelmorphism} and \ref{propositionbasicisomorphisms}) and hence has the form $\chi(y,t)=p(y)t$ for some $p(y)>0$. If $Tf(y)\neq 0$, then $f(\phi(y))\neq 0$ as well and $p=Tf/(f\circ \phi)$ on a neighbourhood of $y$, thus $p$ is continuous. \end{proof} \subsection{Li--Wong Theorem}\label{subsectionliwong} In \cite{MR3162258}, Li and Wong proved Theorem \ref{theoremliwong}, which can be seen as a generalization of Theorem \ref{theoremadditivekaplansky}. We will proceed in the opposite direction, i.e., we prove their result (or more precisely, the specific case where the domains are compact) as a consequence of the more general Theorem \ref{theoremkaplansky}. Let $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$. \begin{theorem}[Li--Wong \cite{MR3162258}]\label{theoremliwong} Let $X$ and $Y$ be compact Hausdorff spaces, and $\mathcal{A}(X)$ and $\mathcal{A}(Y)$ be two regular vector sublattices of $C(X,\mathbb{K})$ and $C(Y,\mathbb{K})$, respectively. Suppose that $T:\mathcal{A}(X)\to \mathcal{A}(Y)$ is a $\mathbb{K}$-linear isomorphism which preserves non-vanishing functions, that is, for all $f\in\mathcal{A}(X)$, \[0\in f(X)\iff 0\in Tf(Y).\] Then there is a homeomorphism $\phi:Y\to X$ and a continuous non-vanishing function $p:Y\to\mathbb{K}$ such that $Tf(y)=p(y)f(\phi(y))$ for all $f\in\mathcal{A}(X)$ and $y\in Y$. \end{theorem} The following technical lemma is the main tool of the proof. We do not assume that $\mathcal{A}(X)$ contains the constant functions; however, since it is a regular lattice, it contains a strictly positive function $F$ satisfying $0<F<1/2$ (see the beginning of the proof of Theorem \ref{theoremliwong} below). The use of the constant function ``$1/2$'' in the proof of \cite[Lemma 2.3]{MR3162258} can be replaced by $F$. \begin{lemma}[{\cite[Lemma 2.3]{MR3162258}}] Any $T$ as in Theorem \ref{theoremliwong} is a $\perp$-isomorphism. \end{lemma} \begin{proof}[Proof of Theorem \ref{theoremliwong}] In order to apply Theorem \ref{theoremkaplansky}, we need to modify $T$ to obtain a lattice isomorphism. Since $\mathcal{A}(X)$ is a sublattice, then for all $f\in\mathcal{A}(X)$, \[f^+=\max(f,0),\quad f^-=\max(-f,0)\quad\text{and}\quad|f|=f^++f^-\text{ belong to }\mathcal{A}(X).\] As $\mathcal{A}(X)$ is regular and $X$ is compact, we can take finitely many functions $f_1,\ldots,f_n\in\mathcal{A}(X)$ such that for all $x\in X$, $f_i(x)\neq 0$ for some $i$, and therefore $F=\sum_{i=1}^n|f_i|\in\mathcal{A}(X)$ and $F$ is non-vanishing, so $TF$ is also non-vanishing.
We define new classes of functions \[\mathcal{B}(X)=\left\{f/F:f\in\mathcal{A}(X)\right\},\qquad\mathcal{B}(Y)=\left\{f/TF:f\in\mathcal{A}(Y)\right\}.\] It is immediate that $\mathcal{B}(X)$ and $\mathcal{B}(Y)$ are regular, and contain the constant functions of $X$ and $Y$, respectively. Define a linear isomorphism $S:\mathcal{B}(X)\to\mathcal{B}(Y)$, $S(f)=T(fF)/TF$, which preserves non-vanishing functions and satisfies $S(1)=1$. Given a scalar $\lambda$, linearity and the non-vanishing property of $S$ imply that, for all $f\in\mathcal{B}(X)$, \begin{align*} \lambda\not\in f(X)&\iff f-\lambda\text{ is non-vanishing}\\ &\iff Sf-\lambda\text{ is non-vanishing}\iff \lambda\not\in Sf(Y), \end{align*} so $f(X)=Sf(Y)$, i.e., $S$ preserves images of functions. As $F>0$ it readily follows that $\mathcal{B}(X)$ is a (self-adjoint) sublattice of $C(X,\mathbb{K})$, however this is not so immediate for $\mathcal{B}(Y)$. As $S$ preserves images of functions, it preserves real functions. If $f\in\mathcal{B}(X)$, then $S(\operatorname{Re}(f))$ and $S(\operatorname{Im}(f))$ are real functions such that $Sf=S(\operatorname{Re}(f))+iS(\operatorname{Im}(f))$. As $T$ is a $\perp$-isomorphism then $S$ is also a $\perp$-isomorphism, so we also obtain $S(\operatorname{Re}(f))\perp S(\operatorname{Im}(f))$. This is enough to conclude that $S$ preserves real and imaginary parts of functions, from which it follows that $\mathcal{B}(Y)$ is self-adjoint. Similarly, $S$ preserves positive and negative parts of functions. In particular, if $f\in\mathcal{B}(Y)$ then $f^+\in\mathcal{B}(Y)$, and this is enough to conclude that $\mathcal{B}(Y)$ is a sublattice of $C(Y,\mathbb{K})$, and that $S$ is an order-preserving isomorphism. We may then consider only real-valued functions, and the complex case will follow by linearity (and since $S$ preserves real and imaginary parts). By Kaplansky's Theorem (\ref{theoremkaplanskywithoutlowerbound}), we can construct the $S$-homeomorphism $\phi:Y\to X$. Now we need to prove that $S$ is $\phi$-basic. Suppose $f(x)\neq 0$ for a given $x\in X$, and let us assume, without loss of generality, that $f(x)>0$. Then $f>0$ on some neighbourhood $U$ of $x$. Again using compactness of $X\setminus U$ and regularity of the sublattice $\mathcal{B}(X)$ we can construct a function $g\in\mathcal{B}(X)$ such that $g=0$ on some neighbourhood of $x$ and $g>0$ on $X\setminus U$. Letting $\widetilde{f}=f\lor g$, we have $\widetilde{f}=f$ on some neighbourhood of $x$, so $S\widetilde{f}=Sf$ on some neighbourhood of $\phi^{-1}(x)$. But $\widetilde{f}$ is non-vanishing, so $S\widetilde{f}$ is also non-vanishing and in particular $Sf(\phi^{-1}(x))\neq 0$. This proves that $S$ is basic with respect to $\phi$. Letting $\chi:Y\times\mathbb{R}\to\mathbb{R}$ be the $S$-transform, we have that all sections $\chi(y,\cdot)$ are linear and increasing (Theorem \ref{propositionmodelmorphism}), hence of the form $\chi(y,t)=P(y)t$ for a certain $P(y)>0$. Denoting by $1$ the constant function $x\mapsto 1$ (either on $X$ or on $Y$), we have \[P(y)=\chi(y,1)=\chi(y,1(\phi(y)))=S1(y)=1(y)=1,\] that is, $\chi(y,t)=t$ for all $t\in\mathbb{R}$. Finally, for all $f\in\mathcal{A}(X)$ and $y\in Y$, \[Tf(y)=(TF)(y)\left[S\left(\frac{f}{F}\right)(y)\right]=(TF)(y)\chi\left(y,\frac{f}{F}(\phi(y))\right)=\frac{TF(y)}{F(\phi(y))}f(\phi(y)),\] as we wanted. \end{proof} \subsection{Jarosz' Theorem}\label{subsectionjarosz} Throughout this subsection, we fix $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$.
Given a locally compact Hausdorff space $X$, we let $C_c(X)$ be the space of $\mathbb{K}$-valued, compactly supported, continuous functions on $X$, where supports are the usual ones, i.e., $\supp(f)=\overline{[f\neq 0]}$. \begin{theorem}[Jarosz \cite{MR1060366}]\label{theoremjarosz} If $T:C_c(X)\to C_c(Y)$ is a linear $\perp$-isomorphism, then there exist a homeomorphism $\phi:Y\to X$ and a continuous non-vanishing function $p:Y\to\mathbb{K}$ such that $Tf(y)=p(y)f(\phi(y))$ for all $f\in C_c(X)$ and $y\in Y$. \end{theorem} \begin{proof} \textbf{First assume that $X$ and $Y$ are compact}, and let us show that $f\neq 0$ everywhere if and only if $Tf\neq 0$ everywhere. Suppose otherwise, say $f(x)=0$, and we have two cases: first, if $f$ is constant on a neighbourhood of $x$, this means that $Z(f)\neq \varnothing$, and Theorem \ref{theoremdisjoint} implies that $Z(Tf)\neq \varnothing$, and in particular $Tf(y)=0$ for any $y\in Z(Tf)$. In the second case, if $f$ is not constant on any neighbourhood of $x$, an argument similar to the one in the proof of Theorem \ref{theoremadditivekaplansky} yields a contradiction to $Tf$ being bounded, so $Tf(y)=0$ for some $y\in Y$. The result follows in this case from the Li--Wong Theorem (Theorem \ref{theoremliwong}). \textbf{Now let $X$ and $Y$ be arbitrary locally compact Hausdorff spaces.} Given $b\in C_c(X)$, set $T_b:C(\supp(b))\to C(\supp(Tb))$ as $T_bf=(Tf')|_{\supp(Tb)}$, where $f'$ is any element of $C_c(X)$ extending $f$. Note that $T_bf$ does not depend on the choice of $f'$, since, for all $f',g'\in C_c(X)$, \begin{align*} f'|_{\supp(b)}=g'|_{\supp(b)}&\iff\sigma(b)\subseteq [f'=g']\iff\sigma(b)\subseteq Z(f'-g')\\ &\iff\sigma(b)\cap\sigma(f'-g')=\varnothing\iff b\perp(f'-g'), \end{align*} and the last condition is preserved by $T$ since it is an additive $\perp$-isomorphism (Theorem \ref{theoremdisjoint}). Since $f\perp\!\!\!\perp g$ if and only if $f|_{\supp(b)}\perp\!\!\!\perp g|_{\supp(b)}$ for all $b$, the previous case allows us to obtain functions $p^b$ and $\phi^b$ such that $Tf(y)=p^b(y)f(\phi^b(y))$ for all $y\in\supp(Tb)$. Clearly, if $\supp(b)\subseteq\supp(b')$ then $p^{b'}|_{\supp(Tb)}=p^b$ and $\phi^{b'}|_{\supp(Tb)}=\phi^{b}$. Thus, defining $p$ and $\phi$ as $p(y)=p^b(y)$ and $\phi(y)=\phi^b(y)$, where $b\in C_c(X)$ is such that $y\in\supp(Tb)$, we obtain the desired maps. \end{proof} \subsection{Banach-Stone Theorem}\label{subsectionbanachstone} We use the same notation as in the previous subsection. Given a locally compact Hausdorff space $X$, endow $C_c(X)$ with the supremum norm: $\Vert f\Vert_\infty=\sup_{x\in X}|f(x)|$. Recall that, by the Riesz-Markov-Kakutani Representation Theorem (\cite[Theorem 2.14]{MR584266}), continuous linear functionals on $C_c(X)$ correspond to (integration with respect to) regular Borel measures on $X$. As a consequence, the extremal points $T$ of the unit ball of the dual of $C_c(X)$ have the form $T(f)=\lambda f(x)$ for some $x\in X$ and $|\lambda|=1$. Given $f\in C_c(X)$, denote by $N(f)$ the set of extremal points $T$ in the unit ball of the dual space $C_c(X)^*$ such that $T(f)\neq 0$. From the previous paragraph we obtain \[f\perp g\iff N(f)\cap N(g)=\varnothing\tag{BS},\] and the Banach-Stone Theorem is an immediate consequence of Jarosz' Theorem. \begin{theorem}[Banach-Stone \cite{MR1501905}]\label{theorembanachstone} Let $X$ and $Y$ be locally compact Hausdorff spaces and let $T:C_c(X)\to C_c(Y)$ be an isometric linear isomorphism.
Then there exist a homeomorphism $\phi:Y\to X$ and a continuous function $p:Y\to\mathbb{S}^1$ for which \[Tf(y)=p(y)f(\phi(y))\qquad\forall f\in C_c(X),\ \forall y\in Y.\] \end{theorem}
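For completeness, here is a sketch of how the theorem follows from (BS) and Jarosz' Theorem. \begin{proof} The adjoint $T^*:C_c(Y)^*\to C_c(X)^*$ of the isometric isomorphism $T$ is again an isometric isomorphism, hence it maps the extremal points of the unit ball of $C_c(Y)^*$ bijectively onto those of $C_c(X)^*$. For such an extremal point $\psi$ we have $\psi(Tf)\neq 0$ if and only if $(T^*\psi)(f)\neq 0$, so $N(Tf)=(T^*)^{-1}(N(f))$ for all $f\in C_c(X)$. Therefore $N(Tf)\cap N(Tg)=\varnothing$ if and only if $N(f)\cap N(g)=\varnothing$, and by (BS), $T$ is a linear $\perp$-isomorphism. Jarosz' Theorem (Theorem \ref{theoremjarosz}) then yields a homeomorphism $\phi:Y\to X$ and a continuous non-vanishing $p:Y\to\mathbb{K}$ with $Tf=p\cdot(f\circ\phi)$. Finally, given $y\in Y$, choose (by Urysohn's Lemma) $f\in C_c(X)$ with $f(\phi(y))=1=\Vert f\Vert_\infty$; then $|p(y)|=|Tf(y)|\leq\Vert Tf\Vert_\infty=1$, and applying the same argument to $T^{-1}$ gives $|p(y)|\geq 1$, so $p$ takes values in $\mathbb{S}^1$. \end{proof}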
\section{Introduction} The decidability of the distributed version of the Ramadge and Wonham control problem~\cite{ramadge1989control}, where both the plant and the controllers are modelled as Zielonka automata~\cite{zautomata,thebook} and the controllers have causal memory, is a challenging open problem. Very good introductions to this problem are given in~\cite{alook,mumu}. We assume that the plant is distributed on several finite-state processes which interact asynchronously using shared actions. On every process, the local controller can choose to block some of the actions, called \emph{controllable} actions, but it cannot block the \emph{uncontrollable} actions from the environment. The choice of the local controller is based on two sources of information: \begin{itemize} \item First, the controller monitors the sequence of states and actions of its process. This information is called the \emph{local view} of the controller. \item Second, when a shared action is played by several processes, all the controllers of these processes can exchange as much information as they want. In particular together they can compute their mutual view of the global execution: their \emph{causal past}. \end{itemize} A correct controller restricts controllable actions so that every possible execution of the plant satisfies some specification. The controller synthesis problem aims at automatically computing a correct controller from its specification. The difficulty of controller synthesis depends on several factors, e.g.: \begin{itemize} \item the size and architecture (pipeline, ring, ...) of the system, \item the information available to the controllers, \item the specification. \end{itemize} Assuming that processes can exchange information upon synchronization and use their causal past to take decisions is one of the key aspects to get decidable synthesis problems~\cite{gastin}. In early work on distributed controller synthesis, for example in the setting of~\cite{pneuli1990distributed}, the only source of information available to the controllers is their local view. In this setting, distributed synthesis is not decidable in general, except for very particular architectures like the pipeline architecture. The paper~\cite{finkbeiner2005uniform} proposes information forks as a uniform notion explaining the (un)decidability results in distributed synthesis. The idea of using causal past as a second source of information appeared in~\cite{gastin}. \medskip We adopt a modern terminology and call the plant a \emph{distributed game} and the controllers \emph{distributed strategies} in this game. A distributed strategy is a function that maps the causal past of processes to a subset of controllable actions. In the present paper we focus on the \emph{local reachability condition}, which is satisfied when each process is guaranteed to terminate its computation in finite time, in a final state. A distributed strategy is winning if it guarantees the local reachability condition, whatever non-deterministic choices are performed by the environment. There exist three classes of plants for which the existence of a winning distributed strategy has been shown decidable: \begin{enumerate} \item when the dependency graph of actions is series-parallel~\cite{gastin}, \item when the processes are connectedly communicating~\cite{madhu}, \item and when the dependency graph of processes is a tree~\cite{acyclic,DBLP:conf/fsttcs/MuschollW14}.
\end{enumerate} A series-parallel game is a game such that the dependence graph $(A,D)$ of the alphabet $A$ is a co-graph. Series-parallel games were proved decidable in~\cite{gastin}, for a different setup than ours: in the present paper we focus on process-based control while~\cite{gastin} was focusing on action-based control. Actually action-based control is more general than process-based control, see~\cite{alook} for more details. The results of the present paper could probably be extended to action-based control; however, we prefer to stick to process-based control in order to keep the model intuitive. To our knowledge, the result of~\cite{gastin} was the first discovery of a class of asynchronous distributed systems with causal memory for which controller synthesis is decidable. Connectedly communicating games have been introduced in~\cite{madhu} under the name of \emph{connectedly communicating processes}. Intuitively, a game is connectedly communicating if there is a bound $k$ such that if a process $p$ executes $k$ steps without hearing from process $q$, directly or indirectly, then $p$ will never hear from $q$ again. The event structure of a connectedly communicating game has a decidable MSO theory~\cite{madhu}, which implies that controller synthesis is decidable for these games. An acyclic game as defined in~\cite{acyclic} is a game where processes are arranged as a tree and actions are either local or synchronize a father and its son. Even in this simple setting the synthesis problem is provably non-elementary. \paragraph{Our contribution} We develop a new proof technique to address the distributed controller synthesis problem, and provide a unified proof of decidability for series-parallel, connectedly communicating and acyclic games. We design a class of games, called \emph{broadcast games}, which has a decidable controller synthesis problem. This leads to new examples of decidable architectures for controller synthesis, for example triangulated games and DAG games. The new proof technique consists in simplifying a winning strategy by looking for useless parts to be removed in order to get a smaller winning strategy. These parts are called \emph{useless threads}. Whenever a useless thread exists, we remove it using an operation called a \emph{shortcut} in order to get a simpler strategy. Intuitively, a shortcut is a cut-and-paste operation which makes the strategy smaller. By taking shortcuts again and again, we make the strategy smaller and smaller, until it does not have any useless thread anymore. Strategies with no useless thread have bounded size and can be enumerated, which leads to decidability. Performing cut-and-paste in a distributed strategy is not as easy as doing it in a centralized game. In a centralized game with only one process, strategies are trees: one can cut the subtree rooted at a node B and paste it at another node A, and the operation makes sense as long as the unique process is in the same state at A and B. But in the case of a general distributed strategy, it is not sufficient that the states of the processes coincide at the source and the destination; one also has to take into account the parallelism and the various information of the different processes, so that the result of the operation is still a distributed strategy. The decidability of series-parallel games established in~\cite{gastin} relies also on some simplification of the winning strategies, in order to get \emph{uniform} strategies.
The series-parallel assumption is used to guarantee that the result of the replacement of a part of a strategy by a uniform strategy is still a strategy, as long as the states of all processes coincide. Here we work without the series-parallel assumption. This is the reason for introducing the notion of \emph{broadcast}. A broadcast is a part of a strategy where a piece of information is guaranteed to spread in a pool of processes before any of these processes synchronizes with a process outside the pool. When two broadcasts are similar, they can be used to perform cut-and-paste: upon arrival on A, a process of the pool broadcasts to other processes of the pool that they should jump to B, and play as if the path from A to B had been already taken. The transformation of an arbitrary winning strategy to a simpler one is done by induction on the set of actions, which relies on a notion of \emph{process ordering} of the set of processes. This notion is useful to define the new class of broadcast games and treat examples uniformly. However, in our opinion, the main contribution of the paper is not the notion of process ordering and broadcast games but rather the notion of useless threads and shortcuts and their use to simplify strategies. The complexity of our algorithm is very high, so this work probably has no immediate practical applications. This is not surprising since the problem is non-elementary~\cite{acyclic}. Nevertheless we think this paper sheds new light on the difficult open problem of distributed synthesis. Missing proofs and further examples can be found in the appendix and in~\cite{techreport}. \section{Definitions and basic properties} \subsection{Mazurkiewicz traces} The theory of Mazurkiewicz traces is very rich and extensively developed in~\cite{thebook}. Here we only fix notations and recall the notions of traces, prime traces and views, and list a few elementary properties of traces that we will use throughout the paper. We fix an alphabet $A$ and a symmetric and reflexive dependency relation $D \subseteq A\times A$ and the corresponding independency relation $~\mathbb{I}~ \subseteq A\times A$ defined by: \[ \forall a,b\in A, (a ~\mathbb{I}~ b) \iff (a,b)\not\in D\enspace. \] For $u,v\in A^*$, we denote $\alphabet(u)$ the set of letters of $u$ and we write: \[ u ~\mathbb{I}~ v \] whenever $\alphabet(u)\times \alphabet(v)\subseteq ~\mathbb{I}~$. For $B\subseteq A$ we write $ u ~\mathbb{I}~ B $ whenever $\forall b\in B, b~\mathbb{I}~ u$. A Mazurkiewicz trace on $(A,~\mathbb{I}~)$ is an equivalence class of words for the smallest equivalence relation $\approx$ on $A^*$ such that: \[ \forall u,v\in A^*, \forall a,b\in A, ((a ~\mathbb{I}~ b) \implies (uabv\approx ubav))\enspace. \] In most of the paper, a Mazurkiewicz trace is simply called a \emph{trace}. A word in a trace is called a \emph{linearization} of the trace. The empty trace denoted $\epsilon$ is the singleton which contains only the empty word. All words of a trace have the same alphabet, thus the notation $\alphabet(u)$ extends to traces. The length of a trace $u$, denoted $|u|$, is the number of letters of any linearization of the trace. For a subset $B\subseteq A$, a trace on $B$ is a trace $u$ such that $\alphabet(u)\subseteq B$. We denote by $B^*$ the set of traces on an alphabet $B\subseteq A$. The concatenation on words naturally extends to traces: given two traces $u,v\in A^*$, the trace $uv$ is the equivalence class of any word $u'v'$ such that $u'\in u$ and $v'\in v$.
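For instance, assume $A=\{a,b,c\}$ and that $a~\mathbb{I}~ b$ is the only independency. Then the trace of the word $abc$ is $\{abc,bac\}$: the letters $a$ and $b$ can be permuted, while neither commutes with $c$. Both $abc$ and $bac$ are linearizations of this trace, its alphabet is $\{a,b,c\}$ and its length is $3$; moreover this trace is the concatenation of the traces $\{ab,ba\}$ and $\{c\}$. Also the notion of prefix extends to traces.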
A trace $u\in A^*$ is a prefix of a trace $v\in A^*$, denoted \[ u\sqsubseteq v \] if there exists $w\in A^*$ such that $uw=v$. And $u$ is a suffix of $v$ if there exists $w\in A^*$ such that $v = wu$. \subsection{Prime traces and views} A trace $u\in A^*$ is \emph{prime} if all its linearizations have the same last letter. If this letter is $a\in A$, i.e. if $u\in A^*a$, $u$ is said to be $a$-prime. Let $B\subseteq A$. If all linearizations of $u$ end with a letter in $B$ then $u$ is said to be $B$-prime. Let $B\subseteq A$ and $u\in A^*$. Then there exists a shortest prefix $\view_B(u)$ of $u$, called the \emph{$B$-view} and denoted \[ \view_B(u) \] such that $u$ factorizes as $u=\view_B(u)\cdot v$ with $v~\mathbb{I}~ B$. If $B$ is a singleton $\{b\}$ then the $B$-view is also called the $b$-view and denoted $\view_b(u)$. \subsection{Processes and automata} Zielonka automata are to traces what finite automata are to finite words. \begin{definition} A Zielonka automaton $\mathcal{A}$ on the alphabet $A$ and the set of processes $\mathbb{P}$ is a tuple \[ \mathcal{A} = (A, (Q_p)_{p\in \mathbb{P}}, (i_p)_{p\in \mathbb{P}},(F_p)_{p\in \mathbb{P}},(A_p)_{p\in \mathbb{P}}, \Delta), \] where \begin{itemize} \item $\mathbb{P}$ is a finite set called the set of processes, \item $Q_p$ is the set of states of process $p$, \item $i_p\in Q_p$ is the initial state of $p$, \item $F_p\subseteq Q_p$ is the set of final states of $p$, \item $A_p$ is the set of actions of process $p$, \item $A=\bigcup_{p\in \mathbb{P}} A_p$ and for $a\in A$, the set $\{p\in \mathbb{P}\mid a\in A_p\}$ is called the domain of $a$ and denoted $\dom(a)$, \item $\Delta \subseteq \bigcup_{a\in A} \{a\}\times\prod_{p\in\dom(a)} Q_p\times Q_p$ is the set of transitions. \end{itemize} We assume that transitions are deterministic, i.e., for every $a\in A$, if $(a,(q_p,q'_p)_{p\in \dom(a)})\in \Delta$ and $(a,(q_p,q''_p)_{p\in \dom(a)})\in \Delta$ then $q'_p=q''_p$ for every $p\in \dom(a)$. \end{definition} The automaton $\mathcal{A}$ defines a dependency relation $D$ and its dual commutation relation $~\mathbb{I}~$ on $A$: two letters can commute if and only if they have no process in common in their domains: \begin{align*} &((a,b)\in D) \iff (\dom(a)\cap \dom(b)\neq \emptyset)\enspace,\\ & a ~\mathbb{I}~ b \iff \dom(a)\cap \dom(b) = \emptyset\enspace. \end{align*} This naturally defines a notion of Mazurkiewicz trace on $A$. We extend the notion of views and independence to processes. Let $p\in\mathbb{P}$; then the $p$-view of a trace $u\in A^*$ is \[ \view_p(u) = \view_{A_p}(u)\enspace, \] and since all letters of $A_p$ are dependent on each other, \begin{equation} \label{eq:pviewprime} \forall p\in \mathbb{P}, \forall u\in A^*, \view_p(u) \text{ is prime.} \end{equation} Moreover for $p\in \mathbb{P}$ and $u\in A^*$, $ p~\mathbb{I}~ u\enspace $ is a notation for $A_p~\mathbb{I}~ u$. We extend the notion of domain to traces: \[ \dom(a_1\cdots a_n)=\bigcup_{i=1}^n\dom(a_i)\enspace. \] \subsection{Plays} A play is an asynchronous computation of the automaton defined as follows. \begin{definition}[Plays and maximal plays] The set of plays of the automaton $\AA$, denoted $\plays(\AA)$, is defined inductively, together with a mapping $Q:\plays(\AA) \to \Pi_{p\in\mathbb{P}}Q_p$.
The set $\plays(\AA)\subseteq A^*$ is the smallest set of traces on $A$ such that: \begin{itemize} \item $\epsilon$ is a play and $Q(\epsilon)=(i_p)_{p\in \mathbb{P}}$, \item if $u\in\plays(\AA)$, $a\in A$ and there exists $(a,(q_p,q'_p)_{p\in\dom(a)}) \in\Delta$ such that $\forall p\in\dom(a), q_p=Q_p(u)$ then $ua\in\plays(\AA)$ and for every $p\in \mathbb{P}$, \[ Q_p(ua) = \begin{cases} Q_p(u) &\text{ if $p\not\in\dom(a)$,}\\ q'_p &\text{ otherwise.} \end{cases} \] \end{itemize} \end{definition} Intuitively $Q(u)$ denotes the last global state of the play $u$. The definition makes sense because for every $u\in\plays(\AA)$, whatever linearization of $u$ is chosen to compute $Q(u)$, the value of $Q(u)$ does not change. This holds because $\forall u\in\plays(\AA), Q_p(u)=Q_p(\view_p(u))$, which can be easily proved inductively. \subsection{Strategies and games} Given an automaton $\AA$, we would like the processes to choose actions so that every play eventually terminates in a final state of $\AA$. Not all actions are controllable by processes, and we assume that $A$ is partitioned in $A=A_c \sqcup A_e$ where $A_c$ is the set of controllable actions and $A_e$ the set of (uncontrollable) environment actions. Intuitively, processes cannot prevent their environment from playing actions in $A_e$, while they can forbid some of the actions that are in $A_c$. The choice of actions by processes is dynamic: at every step, a process $p$ chooses a new set of controllable actions, depending on its (local) information about the way the play is (globally) going on. This information of a process $p$ on a play $u$ is assumed to be the $p$-view $\view_p(u)$: intuitively two processes cannot communicate unless they synchronize on a common action. In this case they exchange as much information about the play as they want. In particular it allows them to compute at this instant their common view of the play, i.e., their causal past. Formally, for every $a$-prime play $ua\in\plays(\AA)$, this view is: \[ \forall p,q\in\dom(a), \view_p(ua)=\view_q(ua) = ua \enspace. \] We adopt a modern terminology and call the automaton $\AA$ together with the partition $A=A_c\cup A_e$ a \emph{distributed game}, or simply a game in this paper. In this game the processes play distributed strategies, defined as follows. \begin{definition}[Distributed strategy] A \emph{strategy for process $p\in\mathbb{P}$} in the game $\AA$ is a mapping $ \sigma_p:A^*\to 2^A $ such that for every $u\in A^*$, \begin{align*} &A_e\subseteq \sigma_p(u)\enspace,\\ &\sigma_p(u)=\sigma_p(\view_p(u))\enspace. \end{align*} A \emph{distributed strategy} in $\AA$ is a tuple $\sigma=(\sigma_p)_{p\in\mathbb{P}}$ where each $\sigma_p$ is a strategy of process $p$. A play $u=a_1\cdots a_n$ is a $\sigma$-play if $u\in\plays(\AA)$ and for every $i\in 1..n$ and every $p\in\dom(a_i)$, $ a_i\in\sigma_p(a_1\cdots a_{i-1}) $. A $\sigma$-play is maximal if it is not the strict prefix of another $\sigma$-play. \end{definition} Note that a strategy is forced to allow every environment action to be executed at every moment. This may seem to be a huge strategic advantage for the environment. However, depending on the current state, not every action can be effectively used in a transition, because the transition function is not assumed to be total. So in general not every environment action can occur. In particular it may happen that a process enters a final state with no outgoing transition, where no further action is possible.
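As an illustration, consider a game with two processes $\mathbb{P}=\{p,q\}$, where $A_p=\{a,c\}$, $A_q=\{e,c\}$, the actions $a$ and $e$ are local and $c$ is shared, so that $a~\mathbb{I}~ e$. The trace $aec$ is $c$-prime (both its linearizations, $aec$ and $eac$, end with $c$) and $\view_p(aec)=\view_q(aec)=aec$: when synchronizing on $c$, process $q$ learns in particular that $a$ has occurred, and the choice $\sigma_q(aec)$ may depend on this information, an instance of the identity above.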
Our goal is to synthesize winning strategies, which ensure that the game terminates and all processes are in a final state. \begin{definition}[Winning strategy] A strategy $\sigma$ is winning if the set of plays consistent with $\sigma$ is finite and for every maximal $\sigma$-play $u$, every process is in a final state, i.e. \[ Q(u)\in\Pi_{p\in\mathbb{P}} F_p\enspace. \] \end{definition} The \emph{distributed synthesis problem} asks, given a game $G=(\AA,A_c,A_e)$, whether the game is winning, in the sense where there is a winning strategy in $G$. If yes such a strategy should be computed. \section{Taking shortcuts} In this section we present an elementary operation called a \emph{shortcut}, which can be used to simplify and reduce the duration of a winning strategy. To create a shortcut, one selects a play $xy\in[\sigma]$ consistent with a strategy $\sigma$ (we denote by $[\sigma]$ the set of $\sigma$-plays) and modifies the strategy so that, as soon as a process sees the play $x$ in its causal past, it assumes that not only $x$ but also $xy$ has actually occurred. From a more formal point of view, a shortcut is a kind of \emph{cut-and-paste} in the strategy tree: we glue on node $x$ the subtree rooted at node $xy$. This guarantees that the new tree is smaller than the previous one, thus if the new tree is a winning strategy, its duration is guaranteed to be smaller than the duration of the original strategy. However the choice of $x$ and $y$ should be performed carefully, so that the new tree is well-defined and is still a winning strategy. We provide a sufficient condition for that: $(x,y)$ should be a \emph{useless thread}. \subsection{Duration and shortcuts} Winning strategies with shorter durations are better. \begin{definition}[Duration of a strategy] The duration $\dur(\sigma)$ of a strategy $\sigma$ is an element of $\mathbb{N}\cup\{\infty\}$ defined as follows. If the set of $\sigma$-plays is infinite then $\dur(\sigma)=\infty$. Otherwise \[ \dur(\sigma) =\sum_{\text{$u$ maximal $\sigma$-play}} |u|\enspace. \] \end{definition} Taking a shortcut is a convenient way to make a strategy shorter. \begin{definition}[Shortcut] Let $x,y\in A^*$ be such that $xy$ is a $\sigma$-play. Let $\phi_{x,y}:A^*\to A^*$ be the mapping: \begin{equation}\label{defphi} \phi_{x,y}(u) = \begin{cases} u & \text{ if } x \not\sqsubseteq u\\ xyv& \text{ if } u = xv\enspace, \end{cases} \end{equation} Then the $(x,y,\sigma)$-shortcut is the mapping $\sigma_{x,y}:A^*\to 2^A$ defined by: \[ \sigma_{x,y} = \sigma \circ \phi_{x,y}\enspace. \] \end{definition} The mapping $\phi_{x,y}$ is well-defined: there exists at most one $v$ such that $u=xv$. There is a priori no reason in general for $\sigma_{x,y}$ to be a distributed strategy. For that we need extra conditions on the pair $(x,y)$, and we introduce the notions of broadcasts and useless threads. \subsection{Broadcasts and useless threads} Intuitively, a \emph{broadcast} is a prime play $u$ in a strategy such that the maximal action of the play, and the associated information about the play, is broadcast in priority to a pool of processes $\mathbb{Q}\subseteq \mathbb{P}$, before any of these processes synchronizes with a process outside the pool. During a broadcast, as long as a process of the pool $\mathbb{Q}$ plays in parallel with $u$, it synchronizes only with processes in $\mathbb{Q}$. \begin{definition}[broadcasts] \label{def:broadcast} Let $\mathbb{Q}\subseteq \mathbb{P}$ be a subset of processes.
We say that a prime play $u$ is a \emph{$\mathbb{Q}$-broadcast} if for every play $uv$ and every action $a$ maximal in $uv$, \begin{equation} \label{eq:defbroad1} (\dom(a) \cap \mathbb{Q} \neq \emptyset) \text{ and } (\dom(a) \cap (\mathbb{P}\setminus \mathbb{Q}) \neq \emptyset) \implies (u \sqsubseteq \view_a(uv)) \enspace. \end{equation} \end{definition} In other words, a broadcast prevents a process in $\mathbb{Q}$ from synchronizing with a process out of $\mathbb{Q}$ in parallel with $u$. Some basic properties of broadcasts are listed in the following lemma. \begin{lemma} Let $u$ be a prime play with maximal action $b$. Then $u$ is a $\dom(b)$-broadcast and a $\mathbb{P}$-broadcast. Moreover, for every $\mathbb{Q}\subseteq \mathbb{P}$, $u$ is a $\mathbb{Q}$-broadcast if and only if it is a $(\mathbb{P} \setminus \mathbb{Q})$-broadcast. And $u$ is a $\mathbb{Q}$-broadcast whenever $\mathbb{Q}$ is a singleton. \end{lemma} We give some more examples of broadcasts. In an acyclic game, where processes are arranged in a tree organization (see the section with examples for a formal definition), every time a process $p$ plays, the corresponding prime play is a broadcast to its subtree: whenever a process in the subtree synchronizes with processes out of the subtree, it should synchronize with $p$ as well. In a series-parallel game, the dependency alphabet $(A,D)$ is the product of two alphabets $(A_0,D_0)$ and $(A_1,D_1)$, either parallel with $D=D_0\cup D_1$ or synchronized with $D=D_0\cup D_1\cup A_0\times A_1\cup A_1\times A_0$. Then every prime play whose maximal action is $a_0\in A_0$ is a broadcast to $\dom(a_0) \cup \bigcup_{a_1\in A_1}\dom(a_1)$. The reason is that $a_0$ is dependent on every action in $A_1$. The definition of broadcasts can be equivalently reformulated as follows. \begin{proposition}\label{prop:equivbroadcast} A prime play $u$ is a $\mathbb{Q}$-broadcast if and only if for every play $uv$ such that $v$ is prime, $(uv \text{ is prime})\text{ or } (v~\mathbb{I}~ \mathbb{Q}) \text{ or } (\dom(v) \subseteq \mathbb{Q})$. \end{proposition} We are interested in broadcasts occurring in \emph{threads}. \begin{definition}[Thread] Let $\mathbb{Q}\subseteq \mathbb{P}$ be a subset of processes (resp. $B\subseteq A$ a subset of actions). A \emph{$\mathbb{Q}$-thread} (resp. a $B$-thread) is a pair $(u,v)\in A^*\times A^*$ such that $uv$ is a play and $\dom(v)\subseteq \mathbb{Q}$ (resp. $v\in B^*$). \end{definition} Some threads, called \emph{useless threads}, can be deleted to reduce the duration of the strategy, for the same result. \begin{definition}[Useless thread] Let $\sigma$ be a strategy. A \emph{useless thread in $\sigma$} is a $\mathbb{Q}$-thread $(x,y)$ such that there exists an action $b$ with the following properties: \begin{align} \label{xbprime} &x \text{ and } xy \text{ are $b$-prime,}\\ \label{xbroadcast} &x \text{ and } xy \text{ are $\mathbb{Q}$-broadcasts,}\\ \label{statescoincide} & \text{every process $p\in\mathbb{P}$ has the same state in $x$ and $xy$,}\\ \label{sigmacoincide} & \text{for every play $xv\in A^*$}, (\dom(v)\subseteq \mathbb{Q} \land v~\mathbb{I}~ b)\implies(\sigma(xv)=\sigma(xyv))\enspace. \end{align} \end{definition} Taking a shortcut of a useless thread in a distributed strategy makes sense because the result is still a distributed strategy. \begin{lemma}\label{lem:uselessdistrib} Let $(x,y)$ be a useless thread in a distributed strategy $\sigma$. Then the $(x,y,\sigma)$-shortcut $\sigma_{x,y}$ is a distributed strategy.
\end{lemma} Taking shortcuts of useless threads is really useful for making winning strategies smaller: it transforms a \emph{winning} distributed strategy into another, \emph{shorter}, \emph{winning} distributed strategy. \begin{lemma} \label{lem:winning} Let $(x,y)$ be a useless thread in a winning distributed strategy $\sigma$. Then the $(x,y,\sigma)$-shortcut $\sigma_{x,y}$ is a winning distributed strategy as well, and \begin{equation} \label{eq:taulength} \dur(\sigma_{x,y}) < \dur(\sigma)\enspace. \end{equation} Moreover for every $v\in A^*$, \begin{equation} \label{eq:tauplay} \text{$xv$ is a $\sigma_{x,y}$-play} \iff \text{$xyv$ is a $\sigma$-play}. \end{equation} \end{lemma} According to this lemma, when a distributed game is winning, a winning strategy with the shortest duration has no useless thread. In the next section we introduce a class of games, called broadcast games, where broadcasts occur quite regularly. This is the key to obtain decidability of the synthesis problem: winning strategies with long durations can be simplified into shorter ones by taking shortcuts. As a consequence we obtain a computable upper bound on the duration of the shortest winning strategy. \subsection{Application: series-parallel games are decidable} We give an application of the results obtained so far: the synthesis problem for series-parallel games is decidable. Actually, for this result it is enough to use the notion of useless thread in the case where $\mathbb{Q}=\mathbb{P}$. This is summarized by the following corollary. \begin{corollary}\label{cor:shortcut} Let $\sigma$ be a strategy with no useless $\mathbb{P}$-thread. Then for every prime play $xy$, if both $x$ and $y$ have the same maximal letter $b$ and processes are in the same states after $x$ and $xy$ then \[ \{ v \in A^* \mid v ~\mathbb{I}~ b \land xv \in [\sigma] \} \neq \{ v \in A^* \mid v ~\mathbb{I}~ b \land xyv \in [\sigma] \}\enspace. \] \end{corollary} \begin{proof} Otherwise $(x,y)$ would be a useless $\mathbb{P}$-thread.\qed \end{proof} A series-parallel game is a game where the dependence graph $(A,D)$ of the alphabet $A$ is a co-graph, i.e., it belongs to the smallest class of graphs containing singletons and closed under parallel product and complementation. Series-parallel games were proved decidable in~\cite{gastin}, for a slightly more general setup than ours called \emph{action-based synthesis}, a setting more general than process-based control~\cite{alook}. In a series-parallel game, either $A$ is a singleton or there exist two cographs $G_0=(A_0,D_0)$ and $G_1=(A_1,D_1)$ such that $A$ is the disjoint union of $A_0$ and $A_1$ and \begin{align*} &\text{either } D = D_0 \cup D_1 \text{ (parallel product)}\\ &\text{or } D = D_0 \cup D_1 \cup A_0 \times A_1 \cup A_1 \times A_0 \text{ (synchronized product)}\enspace. \end{align*} We show by induction that if a strategy $\sigma$ has no useless $\mathbb{P}$-thread then every prime play $u\in[\sigma]$ has length at most $K_A$, where $K_A$ is inductively defined by: \[ K_A = \begin{cases} |Q| & \text{ (singleton case) }\\ \max \{ K_{A_0}, K_{A_1} \} & \text{ (parallel case),}\\ K_{A_0}2^{|A_0|^{K_{A_0}}}|A_0||Q|^{|\mathbb{P}|} + K_{A_1}2^{|A_1|^{K_{A_1}}}|A_1||Q|^{|\mathbb{P}|} & \text{ (synchronized case) } \enspace. \end{cases} \] In the singleton case, this is a direct consequence of the corollary. In the case of a parallel product, every prime play is either in $A_0^*$ or in $A_1^*$, so the inductive step is easy. Assume the product is synchronized.
Any prime play $u$ factorizes uniquely as $u=u_0u_1\cdots u_n$ where each prefix $u_0u_1\cdots u_i$ is prime and for every $i <n$, one of the two words $u_i,u_{i+1}$ is a non-empty word in $A_0^+$ and the other a non-empty word in $A_1^+$. Without loss of generality, assume $u_{0}\in A_0^+$ (thus $u_1 \in A_1^+$, $u_2\in A_0^+$ and so on). Since $\sigma$ has no useless $\mathbb{P}$-thread, then for every $i$, the strategy \[ \sigma[u_0u_1\cdots u_{i}] : w \mapsto \sigma(u_0u_1\cdots u_{i}w) \] has no useless $\mathbb{P}$-thread either. Thus, according to Corollary~\ref{cor:shortcut}, for every pair $u_{i}, u_{j}$ with $i<j$, if $u_{i}$ and $u_{j}$ have the same maximal action $b$ and every process is in the same state after $u_{i}$ and $u_{j}$ then there exists a prime play $v ~\mathbb{I}~ b$ such that $\sigma(u_0u_1\cdots u_{i}v)\neq \sigma(u_0u_1\cdots u_{j}v)$. Without loss of generality, assume $b\in A_0$. Since $v~\mathbb{I}~ b$, then $v\in B_0^*$ where $B_0=\{a\in A_0\mid a~\mathbb{I}~ b\}$, because the product is synchronized, thus $\{b\}\times A_1\subseteq D$. By induction hypothesis, $|v|\leq K_{A_0}$. As a consequence, there are at most $ |A_0||Q|^{|\mathbb{P}|}2^{|A_0|^{K_{A_0}}} $ different possible values for the words $u_i$ whose maximal action is in $A_0$. Thus $n \leq |A_0||Q|^{|\mathbb{P}|}2^{|A_0|^{K_{A_0}}}+|A_1||Q|^{|\mathbb{P}|}2^{|A_1|^{K_{A_1}}}$. Since every prefix $u_0u_1\cdots u_i$ is prime, every $u_i$ is prime. Moreover, by definition of $K_{A_0}$ and $K_{A_1}$, every word $u_i\in A_0^+$ (resp. $u_i\in A_1^+$) has length at most $K_{A_0}$ (resp. $K_{A_1}$). This terminates the proof of the inductive step. When the game is winning, a winning strategy with the shortest duration has no useless $\mathbb{P}$-thread. Thus winning strategies can be looked for in a finite set. This gives decidability of the synthesis problem for series-parallel games. \section{Broadcast games} We do not know whether the distributed synthesis problem is decidable in the general case, but we show it is decidable when the game is a broadcast game. The notion of broadcast games is defined so that the decidability results of~\cite{madhu,acyclic} can be retrieved quite easily. The definition of a broadcast game is rather ad-hoc and technical; we hope further research will lead to a more general and cleaner definition. \subsection{Definition} The proof of decidability of broadcast games is performed inductively, and relies on the notion of a process ordering of the set of processes $\mathbb{P}$. \begin{definition}[Process ordering] A process ordering of $\mathbb{P}$ is a partial order~\footnote{i.e., a transitive, reflexive and anti-symmetric binary relation.} $\mathcal{C}$ on $\mathbb{P}$ such that for every prime trace $u$, $\dom(u)$ has a $\mathcal{C}$-maximum. \end{definition} In a distributed game, there may exist several process orderings; for example, any total order on $\mathbb{P}$ is a process ordering of $\mathbb{P}$. A broadcast game is a game where, periodically, processes create broadcasts. The pool of processes where the broadcast occurs is computed by downward closure of the set of processes, with respect to the process ordering $\mathcal{C}$. \begin{definition}[Process closure] Let $\mathcal{C}$ be a process ordering of $\mathbb{P}$ and $\mathbb{Q}\subseteq \mathbb{P}$. The process-closure of $\mathbb{Q}$ is: \[ \mathbb{Q}_\mathcal{C} = \{ p \in \mathbb{P} \mid p \mathcal{C} q \text{ for some } q \in \mathbb{Q} \} \enspace.
\] \end{definition} We are especially interested in broadcasts consistent with the process ordering. \begin{definition}[Well-ordered broadcast] A $\mathbb{Q}$-broadcast $u$ is \emph{well-ordered} if $\mathbb{Q}=\mathbb{Q}_\mathcal{C}$ and the maximal process of $\dom(u)$ plays in the last action of $u$. \end{definition} For every trace $u\in A^*$ and process $p\in\mathbb{P}$, $|u|_p$ denotes the number of actions of process $p$ in $u$, i.e., the number of letters in $u$ whose domain contains $p$. \begin{definition}[Broadcast games] \label{defi:bg} Let $N\in\mathbb{N}$. A game $G$ is an \emph{$(N,\mathcal{C})$-broadcast game} if, for every prime play $uv\in A^*$, whenever \[ \forall q\in \dom(v), |v|_q\geq N \] then there exists a prefix $v'\sqsubseteq v$ such that $uv'$ is a well-ordered $\dom(v)_\mathcal{C}$-broadcast. \end{definition} In case $G$ is an $(N,\mathcal{C})$-broadcast game for some $N$ and $\mathcal{C}$, we also say that $G$ is an $N$-broadcast game or simply a broadcast game. The property of being a broadcast game is decidable. \begin{proposition}\label{prop:decidable} It is decidable whether a game $G$ is a broadcast game. If it is, then $G$ is an $N$-broadcast game for some $N\leq\Pi_{p\in\mathbb{P}}|Q_p|$. \end{proposition} \subsection{Decidability} \begin{theorem}\label{theo:main} Whether a broadcast game is winning or not is decidable. \end{theorem} The algorithm consists in enumerating all possible strategies whose plays have length less than some computable bound, and checking whether any of these strategies is winning. This bound is defined by equation~\eqref{eq:bound} below. The bound is quite large, which is not surprising since the problem is non-elementary~\cite{acyclic}. We provide some examples and applications in the next section. The proof is easy to sketch but harder to implement because distributed systems are not so easy to handle. For every subset of processes $\mathbb{Q} \subseteq \mathbb{P}$, we compute inductively a bound $K_\mathbb{Q}$ such that any winning strategy which has a $\mathbb{Q}$-thread of length more than $K_\mathbb{Q}$ can be simplified into a shorter winning strategy, by taking a shortcut associated with a useless thread. With every $\mathbb{Q}\subseteq \mathbb{P}$ we associate a constant $K_{\mathbb{Q}}\in\mathbb{N}$ as follows. According to Ramsey's theorem, for every $m,n\in \mathbb{N}$, there exists a constant $R(m,n)$ such that every undirected complete graph with at least $R(m,n)$ vertices whose edges are labelled with $m$ different colors contains a monochromatic clique of size $n$. Then we define inductively $K_\emptyset=0$ and \begin{equation} \label{eq:bound} K_\mathbb{Q}= |Q| \cdot R\left(2^{|\mathbb{Q}|}, N 2^{|A|} |A| |Q|^{|\mathbb{P}|} 2^{|A|^{\max_{\mathbb{Q}'\subsetneq \mathbb{Q}}K_{\mathbb{Q}'}}}\right)\enspace. \end{equation} The next lemma states that in an $(N,\mathcal{C})$-broadcast game, very long strategies have useless threads. \begin{lemma}\label{lem:useless} Let $\sigma$ be a distributed strategy of an $(N,\mathcal{C})$-broadcast game. Assume that for some $\mathbb{Q}\subseteq \mathbb{P}$, $\sigma$ has a $\mathbb{Q}$-thread of length more than $K_{\mathbb{Q}}$. Then there is a useless thread in $\sigma$. \end{lemma} \begin{proof}[Proof of Theorem~\ref{theo:main}] Let $\sigma$ be a winning strategy of minimal duration. By minimality, according to Lemma~\ref{lem:winning}, strategy $\sigma$ does not contain any useless thread. By Lemma~\ref{lem:useless}, every play of $\sigma$ has length less than $K_\mathbb{P}$.
There is a finite number of distributed strategies with this property, and for each such strategy $\sigma$, there is a simple algorithm that checks whether $\sigma$ is winning or not: look non-deterministically for a losing play consistent with $\sigma$. Thus the existence of a winning strategy is decidable. \qed \end{proof} \section{Examples of broadcast games} In this section we provide several examples of broadcast games. The first example is the class of connectedly communicating games, whose decidability was already known. The second example is the class of acyclic games, whose decidability was already known in the case where all actions are local or binary. The last example is the class of triangulated games. We conclude with a discussion about dynamic broadcast games. \subsection{Connectedly communicating games are broadcast games} Connectedly communicating games have been introduced in~\cite{madhu} under the name of \emph{connectedly communicating processes}, and the authors established the decidability of the MSO theory of the corresponding event structure, which implies that controller synthesis is decidable. A game is \emph{connectedly communicating} if there exists some bound $k\in\mathbb{N}$ such that whenever a process $q$ never plays while process $p$ plays $k$ times, then $p$ and $q$ will stay forever in separate threads. Formally, a game is $k$-communicating for some $k\in\mathbb{N}$ if for every processes $p,q\in \mathbb{P}$ and play $uvw$ in $G$, \[ \label{eq:ccpdef} ( (|v|_p\geq k) \land (|v|_q=0) \land w \text{ prime}) \implies (|w|_p=0 \lor |w|_q=0)\enspace. \] It is quite clear that every $k$-communicating game is a $k$-broadcast game. The process ordering is an arbitrary total order $\mathcal{C}$ on $\mathbb{P}$. Then, for every play $uv$, the hypothesis $\forall q \in\dom(v), |v|_q \geq k$ in the definition of broadcast games implies that no process $q \in\dom(v)$ will ever synchronize with a process $p\not \in \dom(v)$ after $uv$, thus $uv$ itself is a (well-ordered) $\dom(v)_\mathcal{C}$-broadcast. Thus every connectedly communicating game is a $k$-broadcast game. \subsection{DAG games are $1$-broadcast games} An acyclic game as defined in~\cite{acyclic} is a game where processes are arranged as a tree and actions are either local or synchronize a father and its son. Formally, the processes are arranged as a tree $T_\mathbb{P}=(\mathbb{P},E_\mathbb{P})$, and each action is either a local action whose domain is a singleton or a binary synchronizing action such that $\dom(a) = \{p,q\}$ and $(p,q)\in E_\mathbb{P}$, i.e., $q$ is the father of $p$ in the process tree. We extend the definition of~\cite{acyclic} to the case of non-binary actions and we relax the assumption about the tree structure into the existence of a process ordering $\leq_\mathbb{P}$ on $\mathbb{P}$ such that for every action $a\in A$ and processes $p_0,p_1,p_2\in \mathbb{P}$, \begin{equation}\label{eq:acyclic} (p_0\in \dom(a) \land p_1\in\dom(a) \land p_0 \leq_\mathbb{P} p_2) \implies (p_1 \leq_\mathbb{P} p_2 \lor p_2 \in \dom(a)) \enspace. \end{equation} This condition holds in particular in the case where there is a tree structure $T_\mathbb{P}=(\mathbb{P},E_\mathbb{P})$ on $\mathbb{P}$, $\leq_\mathbb{P}$ is the ancestor relation, and the domain of every action is a connected subset of $T_\mathbb{P}$. Indeed, in this case if $p_0 \leq_\mathbb{P} p_2$ then either $p_2\in\dom(a)$ or $p_2$ is an ancestor of all processes in $\dom(a)$, thus in particular $p_1\leq_\mathbb{P} p_2$.
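For instance, assume the processes $\mathbb{P}=\{r,p,q,s\}$ are arranged as a tree with root $r$, children $p$ and $q$ of $r$, and $s$ a child of $p$, and let $\leq_\mathbb{P}$ be the ancestor relation. An action $b$ with $\dom(b)=\{p,s\}$ has a connected domain and satisfies~\eqref{eq:acyclic}, while an action $a$ with $\dom(a)=\{s,q\}$ violates it: taking $p_0=s$, $p_1=q$ and $p_2=p$, we have $p_0\leq_\mathbb{P} p_2$ but neither $p_1 \leq_\mathbb{P} p_2$ nor $p_2\in\dom(a)$.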
This relaxation also shows that four-player games ($\mathbb{P}=\{1,2,3,4\}$) where all actions are allowed except those with supports $\{1,4\}$, $\{1,3\}$ and $\{2,4\}$ are decidable. Every DAG game is a $(1,\mathcal{C})$-broadcast game, whenever property~\eqref{eq:acyclic} is satisfied. Let $uv$ be a prime play. We shall find a prefix $v' \sqsubseteq v$ such that $uv'$ is a $\dom(v)_\mathcal{C}$-broadcast. Let $p$ be the maximal process in $\dom(v)$ and $v'$ the shortest prime prefix of $v$ whose last action is an action of $p$. Then we show that $uv'$ is a $\dom(v)_\mathcal{C}$-broadcast. Let $uv'w$ be a play and $c$ be a maximal action of $uv'w$ such that $\dom(c) \cap \dom(v)_\mathcal{C}\neq \emptyset$ and $\dom(c) \not \subseteq \dom(v)_\mathcal{C}$. Let $p_0 \in \dom(c) \cap \dom(v)_\mathcal{C}$ and $p_1 \in \dom(c) \setminus \dom(v)_\mathcal{C}$. Then $p$ is the $\leq_\mathbb{P}$-maximum of $\dom(v)_\mathcal{C}$, thus $p_0 \leq_\mathbb{P} p$ and $\neg(p_1 \leq_\mathbb{P} p)$. Thus according to~\eqref{eq:acyclic} we get $p\in\dom(c)$. As a consequence, $ uv' \sqsubseteq \view_{c}(uv'w). $ Thus every DAG game with property~\eqref{eq:acyclic} is decidable. \subsection{Triangulated games and three player games} Any $3$-player game $G$ with processes $\{1,2,3\}$ is a $1$-broadcast game, with the order $1 \mathcal{C} 2 \mathcal{C} 3$. We show that any such game $G$ is a $(1,\mathcal{C})$-broadcast game. Let $uv$ be a prime trace. We shall find a prefix $v'\sqsubseteq v$ such that $uv'$ is a well-ordered $\dom(v)_\mathcal{C}$-broadcast. The case where $\dom(v)$ is a singleton is obvious. In the remaining case we select any prime prefix $v'$ of $v$ whose last action $a$ is binary or ternary. This is a $\dom(a)$-broadcast because all players in $\dom(a)$ can immediately observe $uv'$. Consequently, all $3$-player games are $1$-broadcast games and are decidable. A triangulated game is a game where processes are arranged as an undirected graph $G_\mathbb{P}=(\mathbb{P},E_\mathbb{P})$ such that all simple cycles in the graph have length $3$, and moreover we assume that \begin{equation}\label{eq:triang} \forall a\in A, \dom(a) \text{ is connected in } G_\mathbb{P}. \end{equation} This definition is inspired by~\cite{diekert1996note}. If we shrink all triangles of a triangulated game then we get a tree, thus we can fix a root of this tree and attribute a depth to these triangles. Then we order processes with respect to the average depth of the triangles they belong to, and there is a natural notion of descendants, which we do not detail here. As in an acyclic game, every action of a process $p$ is broadcast to its descendants. The reason is that every synchronisation of a descendant with a non-descendant process should include $p$, according to hypothesis~\eqref{eq:triang} and by absence of simple cycles of length $4$. Consequently, all triangulated games with property~\eqref{eq:triang} are $1$-broadcast games and are decidable. \subsection{Dynamic broadcast games and strategies} The condition for being a broadcast game is \emph{dynamic}, in opposition to \emph{static} restrictions on the architecture of processes or the dependency alphabet: a broadcast game may well behave for some time as a ring or another architecture whose decidability is unknown, as long as it performs a broadcast from time to time. Actually, the requirement of the existence of broadcasts could probably be put on strategies rather than on the games themselves.
\section*{Conclusion} We have presented a theorem and proof techniques that unify several known decidability results for distributed games, and presented new examples of distributed games for which the existence of a winning strategy is decidable. Probably the distributed synthesis problem is still decidable if the hypothesis about the regular occurrences of broadcasts is not put on the game itself but rather used to restrict the class of strategies used by the players. The decidability of distributed synthesis in the general case is still open to our knowledge, even in the simple case of ring games with five processes. Another intriguing open problem is the case of \emph{weakly $k$-connectedly communicating} plants. In such a plant, whenever two processes both play $k$ times in a row without hearing from each other, they will never hear from each other again. Although this assumption seems only slightly lighter than the \emph{$k$-connectedly communicating} one, we do not know how to use the techniques of this paper to solve the controller synthesis problem for such plants. \section*{Acknowledgements} We thank Blaise Genest for interesting discussions on the topic. \bibliographystyle{splncs03} \section*{History of the paper} The author believes it may be useful to the reviewers of a conference paper to have access to the history of previous submissions, if any, as well as to get some insight on the previous reviews. An earlier version of this paper was submitted to LICS 2016. The feedback of the reviewers was very precious to improve the paper. Since then the notion of broadcast game has been simplified, as well as the treatment of examples. Also more comments have been added in the paper and the most formal proofs are now pushed to the appendix. Below are the conclusions of the reviews for LICS 2016. \subsection{Reviewer1} "To my opinion, the paper makes an interesting contribution. Although I doubt whether that the full class of broadcast games is useful in practice, it covers important subclasses and the arguments for the decidability are elegant. " \subsection{Reviewer2} "This looks like a significant piece of work and very well suited to LICS. I did not have time to check all the proof details, but the proof of the main result is included in the paper and appears to have been carefully worked out. The argument involves essentially a type of pumping lemma for strategies in Zielonka automata that can be used to pump down subtraces that meet a certain condition - this may be a contribution of the work in its own right. Decidability of synthesis follows from the fact that any trace over a given length must contain a pumpable subtrace, so that it suffices to consider strategies of a given length. My main reason for not giving a higher score is that the writing appears in places to have been hasty, with many typos, and the presentation overall is a bit unbalanced. In places, like Lemma 1, the author goes out of his way to give the reader a gentle introduction by going into detail about some simple results about Zielonka automata. However, once the main contributions of the paper are introduced, the presentation is quite technical, and very little by way of intuition is provided. In particular, definitions 7 and 8, which are at the core of the contribution, are quite complex, but we are giving nothing by way of explanation or intuition for what they are saying.
It is clear from the introduction that the author does have intuitions concerning these definitions: they should be revisited here, and some very simple examples presented to explain how the definitions relate to the intuitions. " \subsection{Reviewer3} "I think that the results are exciting. Even though algorithmically this does not lead to effective computation of winning and is probably worse than previous solution the unification of the previous results in this form is very nice. However, the level of writing is not sufficient for the paper to be accepted to LICS. The paper is written at a level that makes it very hard to follow. Definitions are given without further explanations and the intuition of the writer has to be taken up from a sequence of definitions and equations." \fi \newpage \section*{Appendix} \section{Elementary properties of traces} Not all properties of the concatenation operator and the prefix relation on words are preserved on traces; however, the following are: \begin{align} \label{eq:prefantisym} &\forall u,v\in A^*, ((u\sqsubseteq v) \land (v\sqsubseteq u) \implies u=v)\enspace,\\ \label{eq:prefcancel} &\forall u,v,w\in A^*, (uv=uw) \implies (v=w)\enspace,\\ & \label{eq:preftrace} \forall u,v,w\in A^*, (uv\sqsubseteq uw)\implies(v\sqsubseteq w)\enspace. \end{align} The following lemma lists some basic properties of traces used in the proofs. \begin{lemma} For every trace $u,v,x\in A^*$ and $a\in A$ and $B\subseteq A$,\begin{align} \label{eq:indepcarac} &(u ~\mathbb{I}~ B) \iff (\view_B(u) = \epsilon) \\ \label{eq:decomp2} &(x\sqsubseteq uv) \implies\exists x_0\sqsubseteq u,\exists x_1\sqsubseteq v, x =x_0x_1\\ &\label{eq:decomp} (x\sqsubseteq uv) \implies\exists x_0,x_1,x_2,x_3\in A^*,\\ \notag& \hspace{1cm}(x= x_0x_1) \land (u = x_0x_2) \land (v=x_1x_3) \land (x_2~\mathbb{I}~ x_1)\\ &\label{eq:primsuff} \text{$uv$ is $B$-prime $\implies v$ is $B$-prime}\\ &\label{eq:primconcat2} \text{$u$ and $v$ are $B$-prime $\implies uv$ is $B$-prime}\\ &\label{eq:primsuff2} \text{If } ua \text{ is prime}, (av \text{ is $B$-prime } \iff uav \text{ is $B$-prime })\\ \label{eq:seeletter} &(u \text{ is $B$-prime } \land \neg (a~\mathbb{I}~ u)) \implies (au \text{ is $B$-prime }) \\ \label{eq:primsuff4} &(u\sqsubseteq \view_B(uv))\iff (\view_B(uv) = u\view_B(v))\\ \label{eq:primsuff3} &(uw\sqsubseteq \view_B(uv))\implies (w \sqsubseteq \view_B(v))\\ \label{eq:viewview2} &\text{$\view_B(\view_B(u)) = \view_B(u)$}\\ \label{eq:viewview} &\text{$\view_B(uv) = \view_B(u\view_B(v))$}\\ & \notag \text{If } ua \text{ is prime},\\ &\hspace{.5cm} \label{eq:primconcat} uav \sqsubseteq \view_B(uavw) \iff av \sqsubseteq \view_B(avw) \end{align} \end{lemma} \begin{proof} The equivalence~\eqref{eq:indepcarac} is immediate from the definition of $\view_B$. Equation~\eqref{eq:decomp2} is a corollary of~\eqref{eq:decomp}, which is well-known, see~\cite{thebook} for example. It can be proved by induction on $|x|$. We prove~\eqref{eq:primsuff}. If the last letter of a word $v'\in v$ is not in $B$, then the same holds for every $u'v'$ where $u'\in u$, thus $uv$ is not $B$-prime since $u'v'\in uv$. We prove~\eqref{eq:primconcat2}. Assume both $u$ and $v$ are $B$-prime. Every linearization of $uv$ is a shuffle of a linearization of $u$ and a linearization of $v$, thus it terminates with a letter in $B$. Hence $uv$ is $B$-prime. We prove~\eqref{eq:primsuff2}. Assume $ua$ prime. The converse implication follows from~\eqref{eq:primsuff}. Assume $av$ is $B$-prime.
We prove that $uav$ is $B$-prime by induction on $|u|$. If $|u|=0$ then $u=\epsilon$ and $uav=av$ is $B$-prime by hypothesis. For the inductive step, let $n\in \mathbb{N}$ and assume that $u'av$ is $B$-prime for all $u'$ such that $|u'|\leq n$. Let $u$ be such that $|u|=n+1$; we prove that $uav$ is $B$-prime. Since $|u|=n+1$, there exist $b\in A$ and $u'\in A^*$ such that $u=bu'$ and $|u'|=n$. Using~\eqref{eq:primsuff} and the induction hypothesis, we know that $u'av$ is $B$-prime. By definition of a trace, for any trace $w$, \begin{equation} \label{eq:commute} bw =\{ xbz \mid x,z \text{ words on $A$ }, xz \in w, b~\mathbb{I}~ x\}\enspace. \end{equation} Let $y$ be a linearization of $uav=bu'av$; we prove that the last letter of $y$ is in $B$. According to~\eqref{eq:commute}, $y$ factorizes as $y=xbz$ with $xz\in u'av$ and $x~\mathbb{I}~ b$. Since $xz\in u'av$ and $u'av$ is $B$-prime, if $z$ is not empty then it ends with a letter in $B$, and so does $y$. Assume now that $z$ is empty; then $y=xb$ with $x\in u'av$ and $x~\mathbb{I}~ b$. Since $y\in bu'av$, we have $A(u'a)\subseteq A(y)$ and $A(v)\subseteq A(y)$. Since $A(y)=A(x)\cup \{b\}$ and $x~\mathbb{I}~ b$, every letter of $u'a$ and $av$ commutes with $b$, thus $bu'a=u'ab$ and $bv = vb$. Since $bu'a=ua$ is prime, $bu'a=u'ab$ implies $a=b$. Since $bv=vb$, we get $av=va$, and since $av$ is $B$-prime, $a=b\in B$. Finally $b\in B$, and since $y=xb$ the last letter of $y$ is in $B$, which terminates the proof of the inductive step, and the proof of~\eqref{eq:primsuff2}. We prove~\eqref{eq:seeletter} by contradiction. Assume $au$ is not $B$-prime; then there exist a word $v'$ and a letter $c\not\in B$ such that $v'c\in au$. Let $u'\in u$; then $au'\in au$ and $v'c \approx au'$, thus $A(v')\cup \{c\} = A(u')\cup \{a\}$. If $a\not\in A(v')$ then $a=c$, and $v'c \approx au'$ implies $a~\mathbb{I}~ u$, which is false by hypothesis. Thus $a\in A(v')$. Let $w'$ be the longest prefix of $v'$ which does not contain $a$, and let $x'$ be the suffix of $v'$ such that $v'=w'ax'$. Then $au' \approx w'ax'c$ and $a\not \in A(w')$, thus $w'~\mathbb{I}~ a$. Then $w'ax'c \approx aw'x'c$, thus $aw'x'c\approx au'$, hence $w'x'c \approx u'$ and $w'x'c\in u$. Since $c\not \in B$, this contradicts the hypothesis that $u$ is $B$-prime. We prove~\eqref{eq:primsuff4}. The converse implication in~\eqref{eq:primsuff4} is obvious, so it is enough to prove the direct implication. Assume $u\sqsubseteq \view_B(uv)$. According to~\eqref{eq:prefantisym}, it is enough to prove both $\view_B(uv) \sqsubseteq u\view_B(v)$ and $u\view_B(v)\sqsubseteq \view_B(uv)$. We start with $u\view_B(v)\sqsubseteq \view_B(uv)$. Since $u\sqsubseteq \view_B(uv)$, we have $\view_B(uv)=uw$ for some $w\in A^*$, and $uv=uww'$ for some $w'~\mathbb{I}~ B$. Then $v=ww'$ according to~\eqref{eq:prefcancel}, and since $w'~\mathbb{I}~ B$, we get $\view_B(v)\sqsubseteq w$, thus $u\view_B(v)\sqsubseteq uw=\view_B(uv)$, and we obtain the first prefix relation. Now we prove the converse prefix relation. Since $\view_B(v)\sqsubseteq w$, by definition of $\view_B$ there exists $w''\in A^*$ such that $w=\view_B(v) w''$ and $w''~\mathbb{I}~ B$. Then $uv = u\view_B(v) w''w'$ and $w''w'~\mathbb{I}~ B$, thus by definition of $\view_B$, $\view_B(uv) \sqsubseteq u\view_B(v)$. By definition of $w$ this implies $uw \sqsubseteq u\view_B(v)$, thus according to~\eqref{eq:preftrace}, $w\sqsubseteq \view_B(v)$. Finally $w=\view_B(v)$ and $\view_B(uv)=uw=u\view_B(v)$, which terminates the proof of~\eqref{eq:primsuff4}.
Equation~\eqref{eq:primsuff3} is a direct corollary of~\eqref{eq:primsuff4}: assume $uw\sqsubseteq \view_B(uv)$; in particular $u\sqsubseteq \view_B(uv)$, so according to~\eqref{eq:primsuff4}, $\view_B(uv)=u\view_B(v)$. Then $uw\sqsubseteq u\view_B(v)$, thus $w\sqsubseteq \view_B(v)$ according to~\eqref{eq:preftrace}. We prove~\eqref{eq:viewview2}. Since $\view_B(\view_B(u))\sqsubseteq \view_B(u)$, according to~\eqref{eq:prefantisym} it is enough to prove $\view_B(u)\sqsubseteq\view_B(\view_B(u))$. By definition of $\view_B$, $u=\view_B(u)u'$ with $u'~\mathbb{I}~ B$ and $\view_B(u)=\view_B(\view_B(u))u''$ with $u''~\mathbb{I}~ B$. Thus $u = \view_B(\view_B(u))u''u'$ with $u''u'~\mathbb{I}~ B$, hence by definition of $\view_B$, $\view_B(u)\sqsubseteq\view_B(\view_B(u))$. We prove~\eqref{eq:viewview}. Since $u\view_B(v)\sqsubseteq uv$, we have $\view_B(u\view_B(v))\sqsubseteq \view_B(uv)$, and according to~\eqref{eq:prefantisym} it is enough to prove $\view_B(uv) \sqsubseteq \view_B(u\view_B(v))$. By definition of $\view_B(v)$ there exists $v'~\mathbb{I}~ B$ such that $v=\view_B(v)v'$; then $uv=u\view_B(v)v'$, thus $\view_B(uv)\sqsubseteq u\view_B(v)$. According to~\eqref{eq:viewview2}, $\view_B(\view_B(uv)) = \view_B(uv)$, thus $\view_B(uv)\sqsubseteq \view_B(u\view_B(v))$, which terminates the proof of~\eqref{eq:viewview}. We prove~\eqref{eq:primconcat}. Assume $ua$ is prime. The direct implication is immediate using~\eqref{eq:primsuff3}. For the converse implication, assume $av \sqsubseteq \view_B(avw)$. Since $\view_B(uavw)=\view_B(uav\view_B(w))$, without loss of generality we can assume $w=\view_B(w)$, thus $\view_B(avw)=avw$, and we can replace $v$ with $vw$ and assume $w=\epsilon$. Then $\view_B(av)=av$, thus according to~\eqref{eq:primsuff4}, $v=\view_B(v)$ and $a=\view_B(a)$. Then $uav$ factorizes as $uav=w_0w_1$ with $w_0=\view_B(uav)$ and $w_1~\mathbb{I}~ B$. Then according to~\eqref{eq:decomp2}, $ua=u_0u_1$ with $u_0\sqsubseteq w_0$ and $u_1\sqsubseteq w_1$, say $w_0=u_0z_0$ and $w_1=u_1z_1$. Since $ua$ is $a$-prime, either $u_1=\epsilon$ or $u_1$ is $a$-prime. If $u_1=\epsilon$, then $w_0=uaz_0$, thus $ua\sqsubseteq\view_B(uav)$, and according to~\eqref{eq:primsuff4}, $\view_B(uav)=u\view_B(av)=uav$, so the proof is done. Otherwise, $u_1$ is $a$-prime but $w_1~\mathbb{I}~ B$, thus $a~\mathbb{I}~ B$, a contradiction with $a=\view_B(a)$. This terminates the proof of~\eqref{eq:primconcat}. \qed \end{proof} \section{Taking shortcuts} {\noindent {\bf Proposition~\ref{prop:equivbroadcast}} A prime play $u$ is a $\mathbb{Q}$-broadcast if and only if for every play $uv$ such that $v$ is prime, \begin{equation} \label{eq:defbroad2} (uv \text{ is prime})\text{ or } (v~\mathbb{I}~ \mathbb{Q}) \text{ or } (\dom(v) \subseteq \mathbb{Q}) \enspace. \end{equation} } \begin{proof} For the direct implication, assume that $u$ is a $\mathbb{Q}$-broadcast, $v$ is prime and $uv$ is not prime. Let $b$ be the last action of $v$. Then $u \not \sqsubseteq \view_b(uv)$. Thus, by definition of a broadcast, either $\dom(b) \subseteq \mathbb{Q}$ or $\dom(b) \cap \mathbb{Q} = \emptyset$. By induction, the same holds for every letter $b$ of $v$. Since $v$ is prime, either $\dom(v) \subseteq \mathbb{Q}$ or $\dom(v)\cap \mathbb{Q} = \emptyset$.
This terminates the proof of the direct implication. Conversely, assume that for every $uv$ such that $v$ is prime, condition~\eqref{eq:defbroad2} holds. Let $a$ be a maximal action of $uv$. If $uv$ is prime then $u\sqsubseteq uv = \view_a(uv)$, thus the right-hand side of the implication~\eqref{eq:defbroad1} in the definition of broadcasts is satisfied. If $uv$ is not prime then according to~\eqref{eq:defbroad2}, either $v~\mathbb{I}~\mathbb{Q}$ or $\dom(v) \subseteq \mathbb{Q}$. In the first case, in particular $\dom(a) \cap \mathbb{Q}=\emptyset$. In the second case, in particular $\dom(a) \subseteq \mathbb{Q}$. In both cases the implication~\eqref{eq:defbroad1} in the definition of broadcasts is satisfied, because its premise is false. In all cases the condition defining a $\mathbb{Q}$-broadcast is satisfied. \qed \end{proof} The first clause in the disjunction~\eqref{eq:defbroad2} can be reformulated in several ways. \begin{proposition} \label{prop:broadcarac} Let $B\subseteq A$ and $a,b\in A$ and $u,v\in A^*$ such that $u$ is $a$-prime and $v$ is $b$-prime. Then the following conditions are equivalent: \begin{align*} (uv \text{ is $b$-prime}) \iff & \neg (a ~\mathbb{I}~ v) \\ \iff & a\sqsubseteq\view_b(av)\\ \iff & u\sqsubseteq\view_b(uv)\enspace. \end{align*} \end{proposition} \begin{proof} We prove the implications one by one from the bottom to the top, and finally the implication from the very top to the very bottom. Assume $u\sqsubseteq\view_b(uv)$; then according to~\eqref{eq:primconcat}, $a\sqsubseteq\view_b(av)$. Assume $a~\mathbb{I}~ v$; then in particular $a~\mathbb{I}~ b$, and $\view_b(av)=\view_b(v)=v$ because $v$ is $b$-prime, thus $a\not\sqsubseteq\view_b(av)$. Assume $\neg(a~\mathbb{I}~ v)$. Then $av$ is $b$-prime according to~\eqref{eq:seeletter}, thus $uv$ is $b$-prime according to~\eqref{eq:primsuff2}. Assume $uv$ is $b$-prime; then $uv = \view_b(uv)$, thus $u\sqsubseteq \view_b(uv)$. \qed \end{proof} {\noindent {\bf Lemma~\ref{lem:uselessdistrib}} \emph{ Let $(x,y)$ be a useless thread in a distributed strategy $\sigma$. Then the $(x,y,\sigma)$-shortcut $\sigma_{x,y}$ is a distributed strategy. }} \begin{proof} We denote by $\tau=\sigma_{x,y}=\sigma\circ \phi_{x,y}$ the $(x,y,\sigma)$-shortcut. To prove that $\tau$ is a distributed strategy, we take any process $p\in \mathbb{P}$ and $u\in A^*$ and prove that \[ \tau_p(u) = \tau_p(\view_p(u))\enspace. \] When $x\sqsubseteq u$, write $u=xv$; by definition of the shortcut $\tau$, $\tau_p(u)=\tau_p(xv)=\sigma_p(xyv)$, and since $\sigma$ is a distributed strategy, $\sigma_p(xyv)=\sigma_p(\view_p(xyv))$; thus in this case it is enough to prove: \begin{equation} \label{todo1} \sigma_p(\view_p(xyv)) = \tau_p(\view_p(xv)). \end{equation} We distinguish between three cases. First case: assume $x \not\sqsubseteq u$ and $x \not\sqsubseteq \view_p(u)$. Then $\tau_p(u) = \sigma_p(u)=\sigma_p(\view_p(u))=\tau_p(\view_p(u))$, where the first and third equalities hold by definition of a shortcut, and the second equality holds because $\sigma$ is a distributed strategy. Thus the required equality holds in the first case. Second case: assume $x \sqsubseteq \view_p(u)$. Since $\view_p(u)\sqsubseteq u$, this implies $x\sqsubseteq u$, hence there exists $w\in A^*$ such that $u=xw$. We first prove \begin{equation}\label{eqz0} \view_p(xyw)=xy\view_p(w)\enspace. \end{equation} Since $x\sqsubseteq \view_p(xw)$, \eqref{eq:primsuff4} implies \begin{equation} \label{eqz10} \view_p(xw)=x\view_p(w)\enspace.
\end{equation} Since $(x,y)$ is a useless thread, according to~\eqref{xbprime}, both $x$ and $xy$ are $b$-prime. Since moreover $\view_p(xw)=x\view_p(w)$, we can apply~\eqref{eq:primconcat} twice and get first $\view_p(bw)=b\view_p(w)$ and then~\eqref{eqz0}. Now that~\eqref{eqz0} is proved, we can conclude the second case: \begin{align} \sigma_p(\view_p(xyw)) \label{eqz3} &= \sigma_p(xy\view_p(w))\\ \label{eqz4} &= \tau_p(x\view_p(w))\\ \label{eqz5} &= \tau_p(\view_p(xw))\enspace, \end{align} where~\eqref{eqz3} comes from~\eqref{eqz0}, \eqref{eqz4} holds by definition of shortcuts and $\tau$, and \eqref{eqz5} comes from~\eqref{eqz10}. Thus~\eqref{todo1} holds in the second case. We are now left with the third and last case: \begin{equation}\label{eq:hypostrat} x \not\sqsubseteq \view_p(u) \land x\sqsubseteq u, \end{equation} which we assume until the end of the proof. Then $u=xv$ for some $v\in A^*$. We first take care of the special case where $\view_p(v)=\epsilon$. Then $v ~\mathbb{I}~ p$ according to~\eqref{eq:indepcarac}, thus $\view_p(xyv)=\view_p(xy)$. Hence, \begin{align} \sigma_p(\view_p(xyv)) \label{eqw2}&=\sigma_p(\view_p(xy))\\ \label{eqw3}& = \sigma_p(xy) \\ \label{eqw4}&= \tau_p(x)\\ \label{eqw5}& = \sigma_p(x)\\ \label{eqw6}& = \sigma_p(\view_p(x))\\ \label{eqw8}& = \sigma_p(\view_p(xv))\\ \label{eqw7}& = \tau_p(\view_p(xv)) \enspace, \end{align} where~\eqref{eqw2} and~\eqref{eqw8} hold because $v~\mathbb{I}~ p$, \eqref{eqw3} and~\eqref{eqw6} hold because $\sigma$ is a distributed strategy, \eqref{eqw4} holds by definition of $\tau$, \eqref{eqw5} holds because $(x,y)$ is a useless thread, according to~\eqref{sigmacoincide}, and \eqref{eqw7} holds by definition of $\tau$ and because by hypothesis $x\not\sqsubseteq\view_p(u)$. This shows that~\eqref{todo1} holds when $\view_p(v)=\epsilon$. Now assume that $\view_p(v)\neq\epsilon$ (and we keep assuming~\eqref{eq:hypostrat} as well). Since $(x,y)$ is a useless thread in $\sigma$, according to~\eqref{xbroadcast}, $x$ is a $\mathbb{Q}$-broadcast in $\sigma$. We apply the characterization of broadcasts given by Proposition~\ref{prop:equivbroadcast} to $x\view_p(v)$; this is allowed because $\view_p(v)$ is prime, and $x\view_p(v)$ is a $\sigma$-play because it is a prefix of the $\sigma$-play $xv$. Thus, according to Proposition~\ref{prop:broadcarac}, one of the three following properties holds: \begin{align} \label{eqa1}&x \sqsubseteq \view_p(x\view_p(v))\\ \label{eqa2}&\text{or } \dom(\view_p(v))\subseteq \mathbb{Q}\\ \label{eqa3}&\text{or } \dom(\view_p(v)) ~\mathbb{I}~ \mathbb{Q}\enspace. \end{align} Since $x \not\sqsubseteq \view_p(x\view_p(v))$ by hypothesis, \eqref{eqa1} is not possible and we are left with the two other cases~\eqref{eqa2} and~\eqref{eqa3}. We assume first that~\eqref{eqa3} holds. Since $\view_p(v)\neq\epsilon$, this implies that $p\not \in \mathbb{Q}$. Since $(x,y)$ is a $\mathbb{Q}$-thread, $\dom(y)\subseteq \mathbb{Q}$, thus $p~\mathbb{I}~ y$.
We can conclude the proof of~\eqref{todo1} in the case where~\eqref{eqa3} holds: \begin{align} \sigma_p(\view_p(xyv)) \label{eq1}&= \sigma_p(\view_p(xy\view_p(v)))\\ \label{eq2}&= \sigma_p(\view_p(x\view_p(v)y))\\ \label{eq3}&= \sigma_p(\view_p(x\view_p(v)))\\ \label{eq4}& =\sigma_p(\view_p(xv))\\ \label{eq6}&= \tau_p(\view_p(xv))\enspace, \end{align} where~\eqref{eq1} and~\eqref{eq4} hold according to~\eqref{eq:viewview}, \eqref{eq2} holds because $\view_p(v) ~\mathbb{I}~ \mathbb{Q}$ and $\dom(y)\subseteq \mathbb{Q}$, \eqref{eq3} holds because $p ~\mathbb{I}~ y$, and~\eqref{eq6} holds by definition of $\tau_p$, since by hypothesis $x\not \sqsubseteq \view_p(u)$ and $u=xv$. This proves~\eqref{todo1} in the case where~\eqref{eqa3} holds. Now we are left with the case where~\eqref{eqa2} holds, i.e. $\dom(\view_p(v))\subseteq \mathbb{Q}$ (and we keep assuming $\view_p(v)\neq\epsilon$ and~\eqref{eq:hypostrat} as well). We first establish \begin{equation}\label{viewpBb} \view_p(v)\in (A\setminus\{b\})^*. \end{equation} Since by hypothesis $x\not \sqsubseteq \view_p(xv)$ and $x$ is $b$-prime, according to~\eqref{eq:primconcat} we get $b\not \sqsubseteq \view_p(bv)$; thus according to~\eqref{eq:seeletter}, $b~\mathbb{I}~ \view_p(v)$, which implies~\eqref{viewpBb}. Since $(x,y)$ is a useless thread in $\sigma$, we can apply~\eqref{sigmacoincide} to $\view_p(v)$, hence \begin{equation}\label{monkey} \sigma_p(x\view_p(v))=\sigma_p(xy\view_p(v))\enspace. \end{equation} Finally, \begin{align} \label{eqb1} \sigma_p(\view_p(xyv))&= \sigma_p(\view_p(xy\view_p(v)))\\ \label{eqb2}&= \sigma_p(xy\view_p(v))\\ \label{eqb3}&=\sigma_p(x\view_p(v))\\ \label{eqb4}&= \sigma_p(\view_p(x\view_p(v)))\\ \label{eqb5}&= \sigma_p(\view_p(xv))\\ \label{eqb6}&= \tau_p(\view_p(xv)), \end{align} where equalities~\eqref{eqb1} and~\eqref{eqb5} hold according to~\eqref{eq:viewview}, equalities~\eqref{eqb2} and~\eqref{eqb4} hold because $\sigma$ is a distributed strategy, \eqref{eqb3} comes from~\eqref{monkey}, and finally~\eqref{eqb6} holds by definition of $\tau$ and because by hypothesis $x\not\sqsubseteq \view_p(xv)$. This terminates the proof of~\eqref{todo1} in the last case. As a consequence, $\tau$ is a distributed strategy. \qed \end{proof} {\noindent {\bf Lemma~\ref{lem:winning}} \emph{Let $(x,y)$ be a useless thread in a winning distributed strategy $\sigma$. Then the $(x,y,\sigma)$-shortcut $\sigma_{x,y}$ is a winning distributed strategy as well, and \begin{equation} \label{eq:taulength} \dur(\sigma_{x,y}) < \dur(\sigma)\enspace. \end{equation} Moreover for every $v\in A^*$, \begin{equation} \label{eq:tauplay} \text{$xv$ is a $\sigma_{x,y}$-play} \iff \text{$xyv$ is a $\sigma$-play}. \end{equation}} } \begin{proof} We denote by $\tau=\sigma_{x,y}=\sigma\circ \phi_{x,y}$ the $(x,y,\sigma)$-shortcut. We first prove property~\eqref{eq:tauplay}. Let $xv\in A^*$ be a $\tau$-play; we prove that $xyv$ is a $\sigma$-play by induction on $v$ (the converse implication is proved by a similar induction, reading the definition of $\tau$ in the other direction). When $v=\epsilon$, then $xy$ is a $\sigma$-play because by hypothesis $(x,y)$ is a thread. For the inductive step, assume $xyv$ is a $\sigma$-play, let $c\in A$ such that $xvc$ is a $\tau$-play, and let us prove that $xyvc$ is a $\sigma$-play. Since $xvc$ is a $\tau$-play, $c\in\tau_p(xv)$ for every $p\in\dom(c)$. Thus by definition of $\tau$, $c\in\sigma_p(xyv)$ for every $p\in\dom(c)$, hence $xyvc$ is a $\sigma$-play by definition of $\sigma$-plays. \medskip Now we prove that $\tau$ is winning. Since $\sigma$ is winning, the set of $\sigma$-plays is finite; let $K$ be the maximal length of a $\sigma$-play.
According to property~\eqref{eq:tauplay} and the definition of $\tau$, every $\tau$-play is either a $\sigma$-play or a subword of a $\sigma$-play, thus $K$ is also an upper bound on the length of $\tau$-plays, hence every maximal $\tau$-play is finite. Let $u$ be a maximal $\tau$-play. If $x\not\sqsubseteq u$ then $u$ is a maximal $\sigma$-play, and since $\sigma$ is winning, $u$ is a winning play. Assume now that $x\sqsubseteq u$ and write $u=xw$. According to~\eqref{eq:tauplay}, since $xw$ is a maximal $\tau$-play, $xyw$ is a maximal $\sigma$-play, and since $\sigma$ is winning, all processes are in a final state in $xyw$. Since $(x,y)$ is a useless thread, \eqref{statescoincide} states that all processes are in the same state in $x$ and $xy$, and since transitions are deterministic, all processes are in the same state in $xw$ and $xyw$. So finally all processes are in a final state in $xw$. Thus $\tau$ is winning. \medskip Now we prove property~\eqref{eq:taulength}. According to~\eqref{eq:tauplay}, the mapping $\phi_{x,y}$ used to define $\tau=\sigma_{x,y}$ in~\eqref{defphi} maps maximal $\tau$-plays to maximal $\sigma$-plays. Moreover, according to~\eqref{eq:prefcancel}, $\phi_{x,y}$ is an injection, and by definition it preserves the length on $\{u\mid x\not \sqsubseteq u\}$ and increases the length by $|y|$ on $\{u\mid x \sqsubseteq u\}$. This shows that $\dur(\sigma)\geq \dur(\tau) + q \cdot |y|$ where $q$ is the number of maximal $\tau$-plays prefixed by $x$. Since $x$ is a $\tau$-play, $q\geq 1$. According to~\eqref{xbprime}, $y\neq\epsilon$, thus we get property~\eqref{eq:taulength}. This terminates the proof of Lemma~\ref{lem:winning}. \qed \end{proof} {\noindent {\bf Lemma~\ref{lem:useless}} \emph{Let $\sigma$ be a distributed strategy of an $(N,\mathcal{C})$-broadcast game. Assume that for some $\mathbb{Q}\subseteq \mathbb{P}$, $\sigma$ has a $\mathbb{Q}$-thread of length more than $K_{\mathbb{Q}}$. Then there is a useless thread in $\sigma$.} } \begin{proof} Without loss of generality, we can choose $\mathbb{Q}_\mathcal{C}$ minimal for the inclusion, so that every $\mathbb{Q}'_\mathcal{C}$-thread in $\sigma$ has length less than $K_{\mathbb{Q}'_\mathcal{C}}$ whenever $\mathbb{Q}'_\mathcal{C} \subsetneq \mathbb{Q}_\mathcal{C}$. By hypothesis there exist $\mathbb{Q}\subseteq \mathbb{P}$ and a $\sigma$-play $uv$ such that $\dom(v) \subseteq \mathbb{Q}$ and $|v| \geq K_\mathbb{Q}$. Without loss of generality we can assume $\mathbb{Q}_\mathcal{C}=\mathbb{Q}$. And w.l.o.g., since there are at most $|\mathbb{Q}|$ maximal events in a trace, we can assume that $uv$ is prime and $|v| \geq \frac{K_\mathbb{Q}}{|\mathbb{Q}|}$. We consider the complete graph $G_c$ with vertices $1,\ldots, |v|$, where the label of the edge $\{i < j\}$ is defined as follows. We denote by $v[i,j]$ the subword of $v$ between positions $i$ and $j$ (both included), and set $ A[i,j]=\alphabet(v[i,j])\subseteq \alphabet(v) $ and $ P[i,j]= \dom(v[i,j])_\mathcal{C}\subseteq \mathbb{Q}_\mathcal{C}. $ Then for $1 \leq i_0 \leq i_1 \leq i_2 \leq |v|$ we set \[ E[i_0,i_1,i_2]=(A[i_0,i_2],a_{i_1},Q_{i_1},L[i_1]) \text{ with } \] \begin{itemize} \item $a_{i_1}$ the letter at position $i_1$ in $v$, \item $Q_{i_1}$ the states of the processes in $uv[1,i_1]$, \item $ L[i_1] = \{ w \in A^* \mid (\dom(w) \subseteq \mathbb{Q}_\mathcal{C}) \land (w ~\mathbb{I}~ \max_\mathcal{C} \dom(v)) \}.$ \end{itemize} Let $N_L$ be the number of possible values of $E[i_0,i_1,i_2]$ and $H=N \cdot N_L$.
Notice that every trace in $L[i_1]$ is a $\mathbb{Q}'$-thread with $\mathbb{Q}' = (\mathbb{Q}\setminus \max_\mathcal{C} \dom(v))$. Since $\mathbb{Q}'_\mathcal{C} \subsetneq \mathbb{Q}_\mathcal{C}$, we can apply the inductive hypothesis: every trace in $L[i_1]$ has length at most $|A|^{K_{\mathbb{Q}'}}$. Thus \[ H = N \cdot N_L \leq N \cdot 2^{|A|}\, |A|\, |Q|^{|\mathbb{P}|}\, 2^{|A|^{K_{\mathbb{Q}'}}}\enspace. \] By definition of $K_\mathbb{Q}$, $|v| \geq R(2^{\mathbb{Q}},H)$. Hence by definition of Ramsey numbers, $G_c$ contains a monochromatic clique $i_1 < i_2 < \ldots < i_{H}$. Let $v=v_0 v_1 v_2 \ldots v_H v_{H+1}$ be the corresponding factorization of $v$, such that $\dom(v_1)=\ldots = \dom(v_{H})$. For every $1\leq i \leq N_L$, every process in $\dom(v_1)$ has played at least $N$ times in $v'_i=v_{iN +1}\ldots v_{iN + N}$. Thus, since the game is a broadcast game, for every $1\leq i \leq N_L$, there exists a prefix $v''_i\sqsubseteq v'_i$ such that \[ \text{$uv_0\ldots v_{iN}v''_i$ is a well-ordered $\dom(v_1)_\mathcal{C}$-broadcast.} \] By definition of $N_L$ we can find two indices $1\leq i < j \leq N_L$ such that $ E[i,|v''_i|,i+|v_i|] = E[j,|v''_j|,j+|v_j|]\enspace. $ Let $B=\alphabet(v'_i)=\alphabet(v'_j)$ and $\mathbb{Q}'=\dom(v_1)=\dom(v'_i)_\mathcal{C}=\dom(v'_j)_\mathcal{C}$ and $x=v_0\ldots v_{i-1}v'_i$ and $y=v_0\ldots v_{j-1}v'_j$. Then both $ux$ and $uy$ are $\mathbb{Q}'$-broadcasts, they end with the same letter, the processes are in the same states, and the local strategies coincide ($L[i]=L[j]$). Thus all conditions are met and $\sigma$ has a useless thread.\qed \end{proof} \section{Broadcast games} {\noindent {\bf Proposition~\ref{prop:decidable}} \emph{ It is decidable whether a game $G$ is a broadcast game. If it is, then $G$ is an $N$-broadcast game for some $N\leq\prod_{p\in\mathbb{P}}|Q_p|$.} } \begin{proof} Let $M=\prod_{p\in\mathbb{P}}|Q_p|$. Let $\mathcal{C}$ be an inductive decomposition of $A$. A standard pumping argument of automata theory shows that the conditions in Definition~\ref{defi:bg} are satisfied for some $N$ if and only if they are satisfied when $N=M$ and both traces $u$ and $v$ have length less than $M$. Let $b \in A$ and $\mathbb{Q}\subseteq \mathbb{P}$. Let $I_b=\{a\in A\mid a ~\mathbb{I}~ b\}$ and $I_\mathbb{Q}=\{a\in A\mid a ~\mathbb{I}~ \mathbb{Q}\}$. Then a $b$-prime play $u$ is a $\mathbb{Q}$-broadcast if and only if there does not exist a prime trace $v\in A^*$ such that $uv$ is a play and \begin{align} \label{eqr1} &(v \not \in I_b^*)\\ \label{eqr2} &\land (\dom(v) \not \subseteq \mathbb{Q})\\ \label{eqr3} &\land (v \not \in I_\mathbb{Q}^*)\enspace. \end{align} The conjunction of these three conditions is indeed equivalent to the negation of the characterization of broadcasts given in Proposition~\ref{prop:equivbroadcast}. Indeed, according to Proposition~\ref{prop:broadcarac}, \[ (uv \text{ is prime}) \iff \neg(v~\mathbb{I}~ b) \iff (v \not \in I_b^*). \] Again, a standard pumping argument shows that if there exists $v\in A^*$ which satisfies~\eqref{eqr1}, \eqref{eqr2} and~\eqref{eqr3}, and such that $uv$ is a play, then $v$ can be chosen of length at most $3M$.
Thus, the proposition holds, since whether $G$ is a broadcast game or not can be decided by enumerating all $N\leq M$ and every $\mathbb{Q}\subseteq \mathbb{P}$, and for each of those, enumerating all $u,v$ of length less than $M$, and for those which satisfy the conditions in Definition~\ref{defi:bg}, enumerating all prefixes $v'\sqsubseteq v$ and checking whether~\eqref{eqr1}, \eqref{eqr2} and~\eqref{eqr3} are satisfied with $u$ replaced by $uv'$. If a witness is found then $G$ is not a broadcast game, otherwise $G$ is a broadcast game. \qed \end{proof}
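To give an idea of the orders of magnitude involved in this procedure (a purely illustrative instance, with made-up sizes rather than data from the paper): for a plant with three processes having four local states each, the bound of Proposition~\ref{prop:decidable} gives
\[ M=\prod_{p\in\mathbb{P}}|Q_p| = 4\cdot 4\cdot 4 = 64\enspace, \]
so it suffices to enumerate $N\leq 64$, traces $u,v$ of length less than $64$, and, when checking conditions~\eqref{eqr1}--\eqref{eqr3}, witnesses of length at most $3M=192$.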
\section{Introduction} Let $W$ be a finite Coxeter group of rank $n$ and $h$ its Coxeter number. A formula due to Deligne \cite{deligne} states that the number of factorizations of a Coxeter element as a product of $n$ reflections is \[ \frac{n!}{|W|} h^n. \] The value in the case of the symmetric group is $(n+1)^{n-1}$, and this number is also known to be the number of Cayley trees on $n$ vertices. Chapoton \cite{chapoton} gives another interpretation of Deligne's formula: this number counts the maximal chains in the lattice of noncrossing partitions \cite{armstrong}. Our first goal (in Section~\ref{secncc}) is to prove a one-parameter generalization of this result. A noncrossing chain is a sequence $\hat0 = \pi_0 \lessdot \pi_1 \lessdot \dots \lessdot \pi_n = \hat 1$ in the lattice of noncrossing partitions. Weighting some of the cover relations in these chains with a parameter $q$, we find that the refined enumeration is \[ \frac{n!}{|W|} \prod_{i=1}^n ( d_i + q(h-d_i) ) \] where the $d_i$'s are the degrees of the group. This is done by generalizing a recursion due to Reading~\cite{reading}, and using known results on Fuss-Catalan numbers \cite{armstrong}. Our second goal (in Section~\ref{secclassgen}) is to study the equivalence classes of noncrossing chains, defined as follows. The group $W$ acts naturally on the set partition lattice, and there is an induced action on the set of maximal chains of set partitions. The number of orbits is an integer $K(W)$ that has been calculated in our previous work \cite{josuat}. The subset of noncrossing chains is not stable under this action, but let us say that two noncrossing chains are equivalent if they are in the same orbit. We show that the generating function of each equivalence class has a simple form as a product. Finally (in Section~\ref{sechook}), we show how our results lead to hook-length formulas for trees in types A and B: in type A we recover Postnikov's hook-length formula \cite{postnikov,du}, and in type B we obtain a variant. \section*{Acknowledgement} We thank the anonymous referee who provided the proof of Proposition~\ref{standard2} (which in the previous version of the article was proved only for the infinite families, and for some of the exceptional cases via a computer). \section{Definitions} Let $S=\{s_1,\dots,s_n\}$ be the set of simple generators of $W$, and $T$ the set of reflections. Let $V$ be the standard geometric representation of $W$, i.e. an $n$-dimensional Euclidean space such that each $t\in T$ is an orthogonal reflection through the hyperplane ${\rm Fix}(t) = \{ v \in V \,:\, t(v)=v \}$. These hyperplanes are called the {\it reflecting hyperplanes}. In particular, the $H_i = {\rm Fix}(s_i)$ are called the {\it simple hyperplanes}. \begin{defi} Let $\mathcal{P}(W)$ denote the set of (generalized) set partitions, i.e. linear subspaces of $V$ that are an intersection of reflecting hyperplanes. It is partially ordered by reverse inclusion (i.e. $\pi\leq\rho$ if $\rho \subseteq \pi$ as linear subspaces). Let $\mathcal{M}(W)$ denote the set of maximal chains of $\mathcal{P}(W)$. \end{defi} For each $\pi \in \mathcal{P}(W)$, we define the {\it stabilizer} and {\it pointwise stabilizer} as, respectively: \begin{align*} \stab(\pi) &= \big\{ w\in W \, : \, w(\pi)=\pi \big\}, \\ \stab^*(\pi) &= \big\{ w\in W \, : \, \forall x \in \pi, \, w(x)=x \big\}. \end{align*} In the classical case, an interval partition is a set partition where each block is a set of consecutive integers, for example $123|4|56$.
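For instance (a routine unfolding of the definition), the interval partitions for $n=3$ are exactly
\[ 1|2|3, \qquad 12|3, \qquad 1|23, \qquad 123, \]
so there are $2^{n-1}=4$ of them, one for each subset of the set of gaps between consecutive integers.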
In the present context, there is a natural generalization (which might have been considered in previous work, with different terminology). \begin{defi} An element $\pi \in \mathcal{P}(W)$ is an {\it interval partition} if it is an intersection of simple hyperplanes. Let $\mathcal{P}^I(W) \subseteq \mathcal{P}(W)$ denote the set of interval partitions, and $\mathcal{M}^I(W) \subset \mathcal{M}(W)$ denote the set of maximal chains in $\mathcal{P}^I(W)$. \end{defi} The set $\mathcal{P}^I(W)$ is a sublattice of $\mathcal{P}(W)$ and is isomorphic to a boolean lattice. It follows that $\mathcal{M}^I(W)$ has cardinality $n!$. The coatoms of $\mathcal{P}^I(W)$ are exactly the lines $L_1,\dots,L_n$ defined by: \begin{equation} \label{defli} L_i = \bigcap\limits_{ \substack{ 1\leq j \leq n \\[1mm] j\neq i } } H_j. \end{equation} Let $W_{(i)}$ denote the (standard maximal parabolic) subgroup of $W$ generated by the $s_j$ with $j \neq i$. Then $W_{(i)} = \stab^*(L_i)$. We will need the following fact (see \cite[Proposition~3.3]{josuat}), where $w_0$ denotes the longest element of $W$ (with respect to the simple generators $s_i$ and the associated length function). \begin{prop} \label{wli} Each line $L\in\mathcal{P}(W)$ can be written $w(L_i)$ for some $w\in W$ and $1\leq i \leq n$. If $w\in W$ and $i\neq j$, then $w(L_i)=L_j$ implies $w_0(L_i)=L_j$. \end{prop} A consequence is the following: \begin{prop} Each orbit $O\in\mathcal{M}(W)/W$ contains an element of $\mathcal{M}^I(W)$. \end{prop} \begin{proof} Let $C\in O$. Using Proposition~\ref{wli}, there exists $w\in W$ such that the coatom $L$ in the chain $w(C)$ is an interval partition, i.e. $L$ is one of the $L_i$ previously defined. At this point we can make an induction on the rank. Let us sketch how the induction works, using ideas present in \cite{josuat}. There is a natural bijection between $\mathcal{M}(W_{(i)})$ and the chains in $\mathcal{M}(W)$ having $L_i$ as coatom. This bijection sends $\mathcal{M}^I(W_{(i)})$ to the chains in $\mathcal{M}^I(W)$ having $L_i$ as coatom. By induction, there is $u\in W_{(i)}$ such that $uw(C)\in\mathcal{M}^I(W)$, whence the result. \end{proof} Let us motivate the next definition by some considerations in the ``classical'' case. Let $\pi_1,\pi_2,\pi_3$ be the noncrossing partitions represented in Figure~\ref{setpart}, from left to right. Here, each partition is represented by drawing an arc between two consecutive elements of each block. Both $\pi_2$ and $\pi_3$ are covered by $\pi_1$, and more precisely they are obtained from $\pi_1$ by splitting the block $\{1,2,5,7\}$. But we can make one distinction: $\pi_2$ is obtained by removing one arc from $\pi_1$, and its two blocks $\{1,2\}$ and $\{5,7\}$ form an interval partition of the block $\{1,2,5,7\}$ of $\pi_1$. This is not the case for $\pi_3$.
\begin{figure}[h!tp] \psset{unit=3mm} \begin{pspicture}(1,0)(7,3) \psdots(1,0)(2,0)(3,0)(4,0)(5,0)(6,0)(7,0) \rput(1,-0.8){\small 1} \rput(2,-0.8){\small 2} \rput(3,-0.8){\small 3} \rput(4,-0.8){\small 4} \rput(5,-0.8){\small 5} \rput(6,-0.8){\small 6} \rput(7,-0.8){\small 7} \psarc(1.5,0){0.5}{0}{180} \psarc(3.5,0){0.5}{0}{180} \psarc(3.5,0){1.5}{0}{180} \psarc(6,0){1}{0}{180} \end{pspicture} \hspace{1cm} \begin{pspicture}(1,0)(7,2) \psdots(1,0)(2,0)(3,0)(4,0)(5,0)(6,0)(7,0) \rput(1,-0.8){\small 1} \rput(2,-0.8){\small 2} \rput(3,-0.8){\small 3} \rput(4,-0.8){\small 4} \rput(5,-0.8){\small 5} \rput(6,-0.8){\small 6} \rput(7,-0.8){\small 7} \psarc(1.5,0){0.5}{0}{180} \psarc(3.5,0){0.5}{0}{180} \psarc(6,0){1}{0}{180} \end{pspicture} \hspace{1cm} \begin{pspicture}(1,0)(7,2) \psdots(1,0)(2,0)(3,0)(4,0)(5,0)(6,0)(7,0) \rput(1,-0.8){\small 1} \rput(2,-0.8){\small 2} \rput(3,-0.8){\small 3} \rput(4,-0.8){\small 4} \rput(5,-0.8){\small 5} \rput(6,-0.8){\small 6} \rput(7,-0.8){\small 7} \psarc(3.5,0){0.5}{0}{180} \psarc(4,0){3}{0}{180} \psarc(3.5,0){1.5}{0}{180} \end{pspicture} \caption{Noncrossing partitions. \label{setpart} } \end{figure} To generalize this distinction, consider the group $\stab^*(\pi_1) \subset \mathfrak{S}_7$. It has an irreducible factor $\mathfrak{S}_4$ acting on the block $\{1,2,5,7\}$. The simple roots of $\mathfrak{S}_7$ are $e_1-e_2,\dots, e_6-e_7$ where $(e_i)_{1\leq i \leq 7}$ is the standard basis of $\mathbb{R}^7$. The ones of the irreducible factor $\mathfrak{S}_4$ of $\stab^*(\pi_1)$ are $e_1-e_2$, $e_2-e_5$, $e_5-e_7$. It can be seen that the simple roots of $\stab^*(\pi_2)$ are included in the ones of $\stab^*(\pi_1)$, but this is not the case for $\pi_3$. Let us turn to the general case. Let $\Phi$ be a root system of $W$ (in the sense of Coxeter groups, see \cite{humphreys}), and let $\Phi^+$ be a choice of positive roots. For each $\pi\in\mathcal{P}(W)$, the group $\stab^*(\pi)$ is a reflection subgroup of $W$, and its set of roots is $\Phi \cap \pi^{\perp}$. We will always take $\Phi^+\cap\pi^{\perp}$ as a natural choice of positive roots, and accordingly $\stab^*(\pi)$ has a natural set of simple roots and simple generators. In this setting, we have the following: \begin{defi} Let $\pi_1,\pi_2 \in \mathcal{P}(W)$; we denote $\pi_2 \sqsubseteq \pi_1$ and say that $\pi_2$ is an {\it interval refinement} of $\pi_1$ if the simple roots of $\stab^*(\pi_2)$ are included in the simple roots of $\stab^*(\pi_1)$. \end{defi} Note that $\pi_2 \sqsubseteq \pi_1$ implies $\pi_1\subseteq \pi_2$, i.e. $\pi_2 \leq \pi_1$ in the lattice $\mathcal{P}(W)$. Also, interval partitions are exactly the interval refinements of the maximal partition. Some preliminary definitions are needed before turning to noncrossing partitions. \begin{defi} Let $T\subset W$ be the set of reflections. A {\it reduced $T$-word} of $w$ is a factorization $w=t_1\dots t_k$ where $t_1,\dots,t_k \in T$ and $k$ is minimal. Let $u,v\in W$; the {\it absolute order} is defined by the condition that $u <_{abs} v$ if some reduced $T$-word of $u$ is a subword of some reduced $T$-word of $v$. \end{defi} \begin{defi} If $\sigma\in\mathfrak{S}_n$, we call $c=s_{\sigma(1)}\dots s_{\sigma(n)}$ a {\it standard Coxeter element} of $W$ with respect to $S$. Any element conjugate in $W$ to a standard Coxeter element is called a {\it Coxeter element}. \end{defi} This might differ from the terminology used in other references, but we need here some properties of the standard Coxeter elements that are not true in general.
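To illustrate the distinction (a hand computation, with the convention that permutations are composed from right to left), take $W=\mathfrak{S}_4$ with $s_1=(1\,2)$, $s_2=(2\,3)$, $s_3=(3\,4)$: the $3!$ orderings of $S$ give only four distinct standard Coxeter elements,
\[ s_1s_2s_3=(1\,2\,3\,4),\quad s_3s_2s_1=(1\,4\,3\,2),\quad s_2s_1s_3=s_2s_3s_1=(1\,3\,4\,2),\quad s_1s_3s_2=s_3s_1s_2=(1\,2\,4\,3), \]
while the two remaining $4$-cycles $(1\,3\,2\,4)$ and $(1\,4\,2\,3)$ are Coxeter elements that are not standard.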
In what follows, we always assume that $c$ is a standard Coxeter element. \begin{defi} A set partition $\pi\in \mathcal{P}(W)$ is {\it noncrossing} with respect to $c$ if $\pi = \fix(w) $ for some $w\in W$ such that $w <_{abs} c$. This $w$ is actually unique and will be denoted $\underline\pi$ (see \cite[Theorem~1]{brady}). Let $\mathcal{P}^{NC}(W,c) \subset \mathcal{P}(W) $ denote the subset of noncrossing partitions with respect to $c$, and $\mathcal{M}^{NC}(W,c) \subset \mathcal{M}(W)$ denote the set of maximal chains of $\mathcal{P}^{NC}(W,c)$. If $\pi\in \mathcal{P}^{NC}(W,c)$, then $\underline \pi$ is the Coxeter element of a unique parabolic subgroup of $W$ that we denote $W_{(\underline\pi)}$ or $W_{(\pi)}$ (although this interferes with the notation $W_{(s)}$ for maximal standard parabolic subgroups, there should be no confusion). \end{defi} Note in particular that $\fix(\underline{\pi})=\pi$. We refer to \cite{armstrong} for more on the subject of noncrossing partitions. In general, $\mathcal{P}^{NC}(W,c)$ is not stable under the action of $W$. But from the invariance of the absolute order under conjugation, we can see that $\mathcal{P}^{NC}(W,c)$ is stable under the action of $c$. \begin{rema} Noncrossing partitions are usually defined as a subset of $W$, but here it is natural to have the inclusion $\mathcal{P}^{NC}(W,c) \subset \mathcal{P}(W) $. These two points of view are equivalent under the correspondence $\underline\pi \leftrightarrow \pi$, and we will also allow ourselves to identify noncrossing partitions with a subset of $W$. For example, if $u,v\in W$ are noncrossing, the notion of interval refinement $u\sqsubseteq v$ is well defined, and $u\in W$ is called an interval partition if it is so as a noncrossing partition. \end{rema} \begin{prop} We have $\mathcal{P}^I(W) \subset \mathcal{P}^{NC}(W,c)$. Let $\pi_1\in\mathcal{P}^{NC}(W,c)$ and $\pi_2\in\mathcal{P}(W)$ with $\pi_2 \sqsubseteq \pi_1$; then $\pi_2\in\mathcal{P}^{NC}(W,c)$. \end{prop} \begin{proof} The maximal partition is noncrossing since $\{0\} = \fix(c)$, so the first point follows from the second one. To prove the second point, we need Proposition~\ref{standard2} from the next section. Let $r_1,\dots,r_k$ be the reflections associated with the simple roots of $\pi_1^{\perp}$, and we can assume there is $j\leq k$ such that $r_1,\dots,r_j$ are the reflections associated with the simple roots of $\pi_2^{\perp}$. Since $\pi_1$ is noncrossing, there is $u\in W$ with $u<_{abs} c$ and $\fix(u)=\pi_1$. It is known that $u$ is a Coxeter element of the subgroup $\stab^*(\pi_1)\subset W$. But Proposition~\ref{standard2} shows more: it is a standard Coxeter element, so there is $\sigma \in \mathfrak{S}_k$ such that $u=r_{\sigma(1)} \dots r_{\sigma(k)}$. Let $v$ be obtained from this factorization by keeping only the factors $r_1,\dots,r_j$. Then, we have $v <_{abs} u <_{abs} c$ and $\fix(v)= \pi_2 $, so $\pi_2$ is noncrossing. \end{proof} \begin{rema} It is interesting to note that similar results hold for {\it nonnesting partitions} in the sense of Postnikov (defined only in the crystallographic case). A set partition $\pi \in \mathcal{P}(W)$ is nonnesting when the simple roots of $\stab^*(\pi)$ form an antichain in the poset of positive roots. A subset of an antichain being itself an antichain, if $\pi_2 \sqsubseteq \pi_1 $ and $\pi_1$ is nonnesting, then $\pi_2$ is nonnesting. Any interval partition is nonnesting, since the simple roots form an antichain.
Note also that the intuition from the ``classical'' case is clear: it is impossible to create a crossing or a nesting by removing arcs. \end{rema} \section{Chains of noncrossing partitions} \label{secncc} \begin{defi} For any chain $\Pi=(\pi_0,\dots,\pi_n)\in\mathcal{M}^{NC}(W,c)$, let ${\rm nir}(\Pi)$ be the number of $i$ such that $\pi_i$ is not an interval refinement of $\pi_{i+1}$. Let \[ M(W,q) = \sum_{ \Pi \in \mathcal{M}^{NC}(W,c) } q^{ {\rm nir} (\Pi) }. \] \end{defi} It is not {\it a priori} obvious that $M(W,q)$ does not depend on the choice of the standard Coxeter element $c$. This will be proved below. The coatoms of the lattice $\mathcal{P}^{NC}(W,c)$ are exactly the products $ct$ for $t\in T$. Since $T$ is stable by conjugation, the set $cT$ of coatoms is stable under conjugation by $c$. An interesting property of standard Coxeter elements is that this action has good properties (see Propositions~\ref{standard1} and~\ref{standard2}), similar to those of a bipartite Coxeter element obtained by Steinberg \cite{steinberg}. In what follows, an orbit for the action of $c$ will be called a {\it $c$-orbit}. Note that the action of $c$ becomes conjugation when we see noncrossing partitions as elements of $W$, i.e. $\underline{ c(\pi) } = c \underline\pi c^{-1}$ if $\pi\in\mathcal{P}^{NC}(W,c)$. \begin{prop} \label{standard1} Let $h$ be the Coxeter number of $W$ (i.e. the order of $c$ in $W$). For any $t\in T$, the $c$-orbit of $ct$ satisfies one of the following conditions: \begin{itemize} \item It contains $h$ distinct elements, and exactly 2 interval partitions $L_i$ and $L_j$, related by $L_i=w_0(L_j)$. \item Or it contains $\frac h2$ distinct elements, and exactly 1 interval partition $L_i$, satisfying $w_0(L_i)=L_i$. Moreover, $c^{h/2}$ restricted to $L_i$ is $-1$ (i.e. $c^{h/2} \notin W_{(i)}$). \end{itemize} \end{prop} The full proof is in Appendix~\ref{appen}, but let us give some comments. A standard Coxeter element $c=s_{\sigma(1)}\dots s_{\sigma(n)}$ is called {\it bipartite} if there is $j$ such that $s_{\sigma(1)} , \dots , s_{\sigma(j)}$ are pairwise commuting, and $s_{\sigma(j+1)},\dots, s_{\sigma(n)}$ too. Steinberg \cite{steinberg} proved that for a bipartite Coxeter element $c$, the $c$-orbit of a reflection contains either $h$ elements and 2 simple reflections, or $\frac h2$ elements and 1 simple reflection. If $h$ is even, another property of the bipartite Coxeter element is $c^{h/2} = w_0 $. What we have is a variant that holds for any standard Coxeter element. It is natural to expect that our result can be seen as a consequence of Steinberg's, but we have been unable to realize this in a uniform way. Since the standard Coxeter element $c$ is conjugate to a bipartite Coxeter element, and the bijection $t\mapsto ct$ from $T$ to $cT$ commutes with $c$-conjugation, we see that the $c$-orbit of $ct$ contains either $h$ or $\frac h2$ elements. In the case where $w_0$ is central, we can easily complete the proof of Proposition~\ref{standard1}. It is known that in this case, $h$ is even and $c^{h/2} = w_0 = -1 $, which acts trivially on $\mathcal{P}(W)$ (see \cite[Section 3.19]{humphreys}). So every orbit has $\frac h2$ elements. Proposition~\ref{wli} shows that there is at most one interval partition in each orbit, and the equality $\#T = \frac {nh}2 $ shows that there is exactly one interval partition in each orbit. See Appendix~\ref{appen} for the other cases. \begin{rema} \label{permutefactor} Suppose $h$ is even and let $L_i$ be such that $c^{h/2}(L_i)=L_i$.
As mentioned above, we have $c^{h/2}=w_0$ when $c$ is a bipartite Coxeter element. In the general case, since $w_0$ and $c^{h/2}$ are both in $\stab(L_i) \setminus \stab^*(L_i)$, we have $w_0c^{h/2} \in W_{(i)}$. From the properties of $x\mapsto w_0 x w_0$, one can deduce that the map $x\mapsto c^{h/2} x c^{h/2}$ permutes the irreducible factors of $W_{(i)}$ in the same way as $x\mapsto w_0 x w_0$. This will be needed in the sequel. \end{rema} It is known that parabolic Coxeter elements can be characterized with the absolute order, see \cite[Lemma 1.4.3]{bessis}, so that $ct$ is a Coxeter element of $W_{(ct)}$. The point of the next proposition is that it is actually a standard Coxeter element. \begin{prop} \label{standard2} For any $t\in T$, $ct$ is a standard Coxeter element of the parabolic subgroup $W_{(ct)}$ for the natural choice of simple generators. \end{prop} \begin{proof} The elements $ct$ ($t\in T$) are the coatoms of $\mathcal{P}^{NC}(W,c)$. By an immediate induction, the proposition implies (and is therefore equivalent to) the stronger fact that $\underline\pi$ is a standard Coxeter element of $\stab^*(\pi)$ for each $\pi\in\mathcal{P}^{NC}(W,c)$. The proof of this has been provided by an anonymous referee, and relies on results by Reading~\cite{reading2}. More specifically, the result follows from \cite[Theorem~6.1]{reading2}. A consequence of this theorem is that a noncrossing partition $\underline \pi$ is a product of its so-called {\it cover reflections}. Besides, \cite[Lemma~1.3]{reading2} states that these cover reflections are the simple generators of a parabolic subgroup. \end{proof} We are now ready to show how $M(W,q)$ can be computed inductively, and in particular that it does not depend on the choice of a standard Coxeter element. \begin{prop} \label{proprecmwq} If $W$ is irreducible, we have: \begin{equation} \label{recmwq} M(W,q) = \frac{2+q(h-2)}{2} \sum_{s\in S} M( W_{(s)} , q ). \end{equation} \end{prop} \begin{proof} For each $\Pi = (\pi_0,\dots,\pi_n) \in \mathcal{M}^{NC}(W,c)$, let $\Pi' = (\pi_0,\dots,\pi_{n-1}) $. The coatom of $\Pi$ is $\pi_{n-1}=ct$ for some $t\in T$, and the set of such $\Pi$ with $ct$ as coatom is in bijection with $\mathcal{M}^{NC}(W_{(ct)},ct)$ via the map $\Pi \mapsto \Pi'$. Moreover, ${\rm nir}(\Pi) = {\rm nir}(\Pi') $ if $ct \sqsubseteq c $ (i.e. $ct\in \mathcal{P}^I(W)$) and ${\rm nir}(\Pi) = {\rm nir}(\Pi') + 1 $ otherwise. So, distinguishing the chains in $\mathcal{M}^{NC}(W,c)$ according to their coatoms gives: \begin{equation} \label{ind1} M(W,q) = \sum_{t\in T} q^{ \chi [ \; ct \notin \mathcal{P}^I(W) \; ] } M( W_{(ct)} , q ). \end{equation} Note that to write this equation, we need to use Proposition~\ref{standard2}. While it should be clear from the definition that the generating function of the chains $ (\pi_0,\dots,\pi_{n-1}) \in \mathcal{M}^{NC}(W_{(ct)},ct) $ with respect to the statistic ${\rm nir}$ is $M( W_{(ct)} , q )$, this quantity was only defined with respect to a standard Coxeter element. Since $ct$ is indeed a standard Coxeter element of $W_{(ct)}$, we get the term $M( W_{(ct)} , q )$, which we may assume is already known by induction. Let $O\subset T$ be an orbit under conjugation by $c$. If $t_1,t_2\in O$, then $W_{(ct_1)}$ and $W_{(ct_2)}$ are conjugate in $W$, so they are isomorphic and $M( W_{(ct_1)} , q )=M( W_{(ct_2)} , q )$.
If $cO = \{ co \; : \; o\in O \}$ contains $h/2$ elements and 1 interval partition $L_i$, we get \begin{equation} \label{ind2} \sum_{t\in O} q^{ \chi [ \; ct \notin \mathcal{P}^I(W) \; ] } M( W_{(ct)} , q ) = (1+q(\tfrac h2 - 1) ) M(W_{(i)},q). \end{equation} If it contains $h$ elements and 2 interval partitions $L_i$ and $L_j$, then \[ \sum_{t\in O} q^{ \chi [ \; ct \notin \mathcal{P}^I(W) \; ] } M( W_{(ct)} , q ) = (2+q(h - 2) ) M(W_{(i)},q), \] and since the previous equation is true with $i$ replaced by $j$, we also have \begin{equation} \label{ind3} \sum_{t\in O} q^{ \chi [ \; ct \notin \mathcal{P}^I(W) \; ] } M( W_{(ct)} , q ) = \frac{2+q(h - 2)}2 ( M(W_{(i)},q) + M(W_{(j)},q) ). \end{equation} Now, we can split the sum in the right-hand side of \eqref{ind1} to group together the $t\in T$ that are in the same orbit, and using Equations~\eqref{ind2} and \eqref{ind3}, we get the desired formula for $M(W,q)$. \end{proof} Besides, in the reducible case it is straightforward to show that \begin{equation} \label{recmwq2} M(W_1\times W_2,q) = \binom{m+n}{m} M(W_1,q)\times M(W_2,q) \end{equation} if the respective ranks of $W_1$ and $W_2$ are $m$ and $n$. Equations~\eqref{recmwq} and~\eqref{recmwq2} can be used to compute $M(W,q)$ by induction for any $W$, with the initial value $M(A_1,q)=1$. This recursion makes it possible to draw a link with the Fuss-Catalan numbers ${\rm Cat}^{(m)}(W)$ (see \cite[Chapter 5]{armstrong}). These numbers can be defined in terms of the degrees of the group $d_1,\dots,d_n$ and the Coxeter number $h=d_n$ by \[ {\rm Cat}^{(m)}(W) = \frac{1}{|W|} \prod_{i=1}^n (hm + d_i). \] Chapoton \cite{chapoton} showed that ${\rm Cat}^{(m)}(W)$ is the number of multichains $\pi_1\leq\dots \leq \pi_m$ in $\mathcal{P}^{NC}(W,c)$, i.e. ${\rm Cat}^{(m)}(W)=Z(W,m+1)$ where $Z(W,m)$ is the zeta polynomial of $\mathcal{P}^{NC}(W,c)$. Fomin and Reading \cite{fomin} introduced the so-called generalized cluster complex $\Delta^m(\Phi)$, and showed that its number of maximal simplices is ${\rm Cat}^{(m)}(W)$ (where $\Phi$ is the root system of $W$). Using this generalized cluster complex, they obtain in \cite[Proposition 8.3]{fomin} that \begin{equation} \label{recfomin} {\rm Cat}^{(m)}(W) = \frac{(m-1)h+2}{2n} \sum_{s\in S} {\rm Cat}^{(m)}(W_{(s)}) \end{equation} in the irreducible case. Besides, in the reducible case we have \begin{equation} \label{recfomin2} {\rm Cat}^{(m)}(W_1\times W_2) = {\rm Cat}^{(m)}(W_1) \times {\rm Cat}^{(m)}(W_2). \end{equation} Comparing the recursions \eqref{recmwq}, \eqref{recmwq2} and \eqref{recfomin}, \eqref{recfomin2} shows that \[ M(W,q) = n! (1-q)^n Z\big(W,\tfrac{1}{1-q}\big), \] where we use the zeta polynomial rather than writing ``${\rm Cat}^{(\frac{q}{1-q})}(W)$'' because it is generally assumed that $m\in\mathbb{N}$ when we write ${\rm Cat}^{(m)}(W)$. Then, the formula for ${\rm Cat}^{(m)}(W)$ in terms of the degrees proves the proposition below (note that the particular case $q=1$ is the result by Chapoton mentioned in the introduction): \begin{prop} \[ M(W,q) = \frac{n!}{|W|} \prod_{i=1}^n \big( d_i + q(h-d_i) \big). \] \end{prop} It is also possible to obtain this formula by solving the recursion \eqref{recmwq} case by case. We will not give the details, since lengthy calculations are needed for the differential equations arising in the case of the infinite families.
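Before turning to this, let us record two immediate sanity checks of the product formula. At $q=0$ it gives $M(W,0)=\frac{n!}{|W|}\prod_{i=1}^n d_i = n!$, consistent with the fact that the chains with ${\rm nir}=0$ are exactly the $n!$ chains of interval partitions. And in the small case $W=A_3$ (degrees $2,3,4$, $h=4$, $|W|=24$):
\[ M(A_3,q) = \frac{3!}{24}\,(2+2q)(3+q)\big(4+q(4-4)\big) = (2+2q)(3+q), \]
in agreement with the product $\prod_{i=1}^{n-1}(i+1+q(n-i))$ computed below for type $A$.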
Let us just present the case of the group $A_n$, where we get that the series $A(z)=\sum_{n\geq 0} M(A_n,q) \frac{z^n}{n!}$ satisfies the differential equation \[ A' = A^2 + \tfrac{qz}{2} (A^2)'. \] After multiplying the equation by $A^{q-2}$, it can be rewritten \[ \bigg(\frac{A^{q-1}}{q-1}\bigg)' = (zA^q)'. \] After checking the constant term, we arrive at the functional equation $A^{q-1} = 1 + (q-1)zA^q$. It would be possible to extract the coefficients of $A$ with the Lagrange inversion formula. Another method is to use results about Fuss-Catalan numbers in type A. It is known that ${\rm Cat}^{(m-1)}(A_{n-1}) = \frac{1}{mn+1}\binom{mn+1}{n}$, which is the number of complete $m$-ary trees with $n$ internal vertices, so that $F = 1+ \sum_{n\geq 1} {\rm Cat}^{(m-1)}(A_{n-1}) z^n $ satisfies $F=1+zF^m$. The equation for $A$ can be rewritten \[ A^{1-q} = 1 + z(1-q)A. \] So, comparing the functional equations shows $F(z) = A(\frac{z}{1-q} )^{1-q} $ if $m=\frac{1}{1-q}$. This is also $F(z)=1+zA(\frac{z}{1-q})$. Taking the coefficient of $z^{n+1}$, we obtain: \[ \frac{ 1 }{ \frac{n+1}{1-q} +1 } \binom{ \frac{n+1}{1-q}+1 }{ n+1 } = \frac{1}{(1-q)^n n!} M(A_n,q), \] hence \[ M(A_n,q) = \frac{n! (1-q)^n}{ \frac{n+1}{1-q} +1 } \binom{ \frac{n+1}{1-q}+1 }{ n+1 } = \prod_{i=1}^{n-1} ( i+1 + q(n-i) ). \] \section{Generating functions of equivalence classes and hook formulas} \label{secclassgen} \begin{defi} For any $\Pi \in \mathcal{M}^{NC}(W,c)$, let $[\Pi]$ denote its equivalence class for the $W$-action: \[ [\Pi] = \{ w(\Pi) \; : \; w\in W \} \cap \mathcal{M}^{NC}(W,c). \] We also define the class generating function: \[ M([\Pi],q) = \sum_{\Omega\in [\Pi]} q^{{\rm nir}(\Omega)}. \] \end{defi} These classes partition the set $\mathcal{M}^{NC}(W,c)$, so that we have \begin{equation} \label{classeq} M(W,q) = \sum_{[\Pi]} M([\Pi],q) \end{equation} where we sum over all distinct equivalence classes. We need some definitions before giving the formula for $M([\Pi],q)$. Let $\tau \lessdot \pi$ be a cover relation in $\mathcal{P}^{NC}(W,c)$. The group $W_{(\pi)}$ can be decomposed into irreducible factors (that can be thought of as ``blocks'' of the set partition $\pi$). There is only one of these factors where $\underline \tau$ and $ \underline \pi$ differ, as can be seen from the factorization of the poset $\mathcal{P}(W_{(\pi)})$ induced by the factorization of $W_{(\pi)}$. \begin{defi} With $\tau$ and $\pi$ as above, let $h(\tau,\pi)$ be the Coxeter number of the irreducible factor of $W_{(\pi)} $ where $\underline\tau$ and $\underline\pi$ differ. \end{defi} \begin{defi} Let $g(\tau,\pi)$ be the minimal $g>0$ such that $\underline\pi^g \, \underline \tau \, \underline \pi ^{-g} = \underline\tau $ and the map $x\mapsto \underline\pi^g x \underline\pi^{-g}$ stabilizes each irreducible factor of $W_{(\tau)}$. \end{defi} Note that by examining the irreducible factors of $W_{(\pi)}$, we can see that we have $\underline\pi^{h(\tau,\pi)} \, \underline \tau \, \underline \pi ^{-h(\tau,\pi)} = \underline\tau $. From $\underline\pi^g \, \underline \tau \, \underline \pi ^{-g} = \underline\tau $ and Proposition~\ref{standard2}, we have either $g(\tau,\pi) = h(\tau,\pi)$ or $g(\tau,\pi) = \frac12 h(\tau,\pi)$. Note also that when $h(\tau,\pi)$ is even, as noted in Remark~\ref{permutefactor}, we know that the map $x\mapsto \underline\pi^{\frac 12 h(\tau,\pi)} x \underline\pi^{-\frac 12 h(\tau,\pi)}$ permutes the irreducible factors of $W_{(\tau)}$.
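To illustrate these definitions in the classical case (this anticipates the type~$A$ computation of Section~\ref{sechook}): if $\pi$ is obtained from $\tau$ by merging two blocks of respective sizes $a$ and $b$, the relevant irreducible factor of $W_{(\pi)}$ is a symmetric group $\mathfrak{S}_{a+b}$, so that
\[ h(\tau,\pi)=a+b, \qquad g(\tau,\pi)=\begin{cases} a+b & \text{if } a>1 \text{ or } b>1,\\ 1=\tfrac12 h(\tau,\pi) & \text{if } a=b=1. \end{cases} \]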
\begin{prop} \label{classgen} Let $ \Pi = (\pi_0,\dots,\pi_n) \in\mathcal{M}^{NC}(W,c) $, let $h_i=h(\pi_{i-1},\pi_i)$ and $g_i=g(\pi_{i-1},\pi_i)$ for $2\leq i \leq n$. Then we have: \[ M( [\Pi] , q ) = \prod_{i=2}^{n} \Big( \frac{2g_i}{h_i} + q \Big(g_i - \frac{2g_i}{h_i} \Big) \Big). \] \end{prop} The proof is rather similar to that of Proposition~\ref{proprecmwq}. We need a few lemmas. \begin{lemm} \label{classlem1} If $\Omega = (\omega_0,\dots,\omega_n ) \in [\Pi]$, there is $k\geq 0$ such that $\omega_{n-1} = c^k (\pi_{n-1} )$. \end{lemm} \begin{proof} Let $L_i$ (respectively, $L_j$) be an interval partition in the $c$-orbit of $\omega_{n-1}$ (respectively, $\pi_{n-1}$). The fact that these exist follows from Proposition~\ref{standard1}. If $L_i=L_j$, the $c$-orbits are the same and this ends the proof. Suppose now that $L_i\neq L_j$. Since $\Omega\in[\Pi]$, there is $w\in W$ such that $w(L_i)=L_j$, so Proposition~\ref{wli} shows that $w_0(L_i)=L_j$. Then, Proposition~\ref{standard1} shows that $L_i$ and $L_j$ are in the same $c$-orbit. So $\omega_{n-1}$ and $\pi_{n-1}$ are in the same $c$-orbit. \end{proof} \begin{lemm} Let $\Omega = (\omega_0,\dots,\omega_n ) \in[\Pi]$, and assume inductively that Proposition~\ref{classgen} is true for the group $W_{(\omega_{n-1})}$. Let $\langle \Omega \rangle$ denote the class of $\Omega$ for the action of $W_{(\omega_{n-1})}$, i.e. \[ \langle \Omega \rangle = \{ w(\Omega) \; : \; w\in W_{(\omega_{n-1})} \} \cap \mathcal{M}^{NC}(W,c). \] Then the generating function of $\langle\Omega\rangle$ is: \begin{equation} \label{genomega} M( \langle\Omega\rangle ,q) = q^{\chi[ \; \omega_{n-1} \notin \mathcal{P}^I(W) \; ]} \prod_{i=2}^{n-1} \Big( \frac{2g_i}{h_i} + q \Big(g_i - \frac{2g_i}{h_i} \Big) \Big). \end{equation} \end{lemm} \begin{proof} Let $\Omega'=(\omega_0,\dots,\omega_{n-1})$. Removing the last element of a chain gives a bijection between $\langle \Omega \rangle$ and \[ [\Omega'] = \{ w( \Omega' ) \, : \, w \in W_{(\omega_{n-1})} \} \cap \mathcal{M}^{NC}( W_{(\omega_{n-1})} , \underline{\omega_{n-1}} ). \] By induction, we can obtain $M([\Omega'],q)$. Since $\Omega \in [\Pi]$, it is straightforward to check that we have $g(\omega_{i-1},\omega_i)=g(\pi_{i-1},\pi_i)$ and $h(\omega_{i-1},\omega_i)=h(\pi_{i-1},\pi_i)$, although we see $\omega_{i-1},\omega_i$ as elements of $\mathcal{P}^{NC}(W_{(\omega_{n-1})}, \underline{\omega_{n-1}})$ and $\pi_{i-1},\pi_i$ as elements of $\mathcal{P}^{NC}(W,c)$. We have $M(\langle\Omega\rangle,q) = q^{\chi[ \; \omega_{n-1} \notin \mathcal{P}^I(W) \; ]} M([\Omega'],q)$, and this gives the formula for $M(\langle\Omega\rangle,q)$. \end{proof} \begin{lemm} The minimal integer $g>0$ such that $\langle \Pi \rangle = \langle c^g(\Pi) \rangle$ is $g_n$. \end{lemm} \begin{proof} This $g$ satisfies $c^g(\pi_{n-1})=\pi_{n-1}$, so that either $g=h_n$ or $g=\frac{h_n}2$. If we are not in the case where $c^{h_n/2}(\pi_{n-1})=\pi_{n-1}$, we have $g=h_n=g_n$. So, suppose $c^{h_n/2}(\pi_{n-1})=\pi_{n-1}$. Consider the factorization of the poset $\mathcal{P}(W_{(\pi_{n-1})})$ induced by the factorization of $W_{(\pi_{n-1})}$ into irreducible factors. From the definition of $g_n$, the action of $c^{g_n}$ stabilizes each factor of the poset, so it is the same action as that of some element $w\in W_{(\pi_{n-1})}$. So $\langle \Pi \rangle = \langle c^{g_n}(\Pi) \rangle$ and this proves $g\leq g_n$. Conversely, suppose that $c^g(\Pi)=w(\Pi)$ for some $w\in W_{(\pi_{n-1})}$. It follows that $c^g$ stabilizes the irreducible factors of $W_{(\pi_{n-1})}$.
Indeed, if the permutation of the factors were nontrivial, it would be possible to distinguish $c^g(\Pi)$ from $w(\Pi)$. So $g_n\leq g$, and finally $g=g_n$. \end{proof} \begin{lemm} The classes $\langle \Omega \rangle$ form a partition of the set $[\Pi]$. A set of representatives is $\{\Pi,c(\Pi),\dots,c^{g_n-1}(\Pi)\}$. \end{lemm} \begin{proof} The first point is clear. From the previous lemma, the elements in the set $\{\Pi,c(\Pi),\dots,c^{g_n-1}(\Pi)\}$ are in distinct classes. It remains to show that the list is exhaustive. Given Lemma~\ref{classlem1}, it suffices to prove that if $\Omega\in[\Pi]$ is such that $\omega_{n-1}=\pi_{n-1}$, then there is $k$ such that $ \langle \Omega \rangle = \langle c^k(\Pi) \rangle $. Let $w\in W$ such that $\Omega = w(\Pi)$. In particular, $w(\pi_{n-1})=\pi_{n-1}$. If $w\in W_{(\pi_{n-1})}$, we have $\langle \Omega \rangle = \langle \Pi \rangle $. Otherwise, $w\in \stab(\pi_{n-1})\setminus\stab^*(\pi_{n-1}) $. Since the class $[\Pi]$ contains a chain of interval partitions, we might as well assume that $\pi_{n-1}$ is an interval partition. It follows from Proposition~\ref{standard1} that $w c^{h/2} \in W_{(\pi_{n-1})} $. So we obtain $ \langle \Omega \rangle = \langle c^{h/2}(\Pi) \rangle $. This completes the proof. \end{proof} We can now prove Proposition~\ref{classgen}. \begin{proof} Since the classes $\langle \Omega \rangle$ form a partition of $[\Pi]$, we have: \[ M([\Pi],q) = \sum_{\langle \Omega \rangle } M(\langle \Omega \rangle ,q ), \] and $M([\Pi],q)$ can be obtained by summing Equation~\eqref{genomega}. From the previous lemma, the number of distinct classes $\langle\Omega\rangle$ is $g_n$. As we have seen above (just before Proposition~\ref{classgen}), either $g_n=h_n$ or $g_n= \frac 12 h_n$, so that $\frac{2g_n}{h_n} $ is an integer. From Proposition~\ref{standard1}, exactly $\frac{2g_n}{h_n}$ of the distinct classes $\langle\Omega\rangle$ are such that their coatom is an interval partition. So, we get \[ \sum_{\langle \Omega \rangle } q^{\chi[ \; \omega_{n-1} \notin \mathcal{P}^I(W) \; ]} = \frac{2g_n}{h_n} + q \Big(g_n - \frac{2g_n}{h_n} \Big), \] and we get the desired formula for $M([\Pi],q)$ by summing Equation~\eqref{genomega} over the classes $\langle \Omega \rangle $. \end{proof} \section{Hook formulas for types A and B} \label{sechook} This section is devoted to explicit combinatorial descriptions in types A and B, where Equation~\eqref{classeq} can be interpreted as a hook-length formula for trees. \begin{defi} \label{defandretrees} Let $\mathcal{A}_n$ denote the set of {\it André trees} on $n$ vertices, i.e. trees such that: \begin{itemize} \item each internal node has either one son or two unordered sons, \item the vertices are labeled with integers from $1$ to $n$, and the labels are decreasing from the root to the leaves. \end{itemize} \end{defi} The 5 elements of $\mathcal{A}_4$ are represented in Figure~\ref{tree5}. These trees were introduced by Foata and Schützenberger \cite[Chapter 5]{foata1}, who proved that $\# \mathcal{A}_n = T_n$ (in fact their definition requires increasing labels instead of decreasing here, but this is clearly equivalent). They were also used by Stanley \cite{stanley2} to prove $K(A_n)=T_n$.
\begin{figure}[h!tp] \pstree[levelsep=6mm] { \Tcircle{\tiny 4} } { \pstree[levelsep=6mm] { \Tcircle{\tiny 3} } { \pstree[levelsep=6mm] { \Tcircle{\tiny 2} } { \Tcircle{\tiny 1} } } } \hspace{1cm} \pstree[levelsep=6mm] { \Tcircle{\tiny 4} } { \pstree[levelsep=5mm,treesep=3mm] { \Tcircle{\tiny 3} } { { \Tcircle{\tiny 2} } { \Tcircle{\tiny 1} } } } \hspace{1cm} \pstree[levelsep=5mm,treesep=3mm] { \Tcircle{\tiny 4} } { { \Tcircle{\tiny 3} } {\pstree[levelsep=6mm,treesep=3mm] { \Tcircle{\tiny 2} } { { \Tcircle{\tiny 1} } } } } \hspace{1cm} \pstree[levelsep=5mm,treesep=3mm] { \Tcircle{\tiny 4} } { { \Tcircle{\tiny 2} } {\pstree[levelsep=6mm,treesep=3mm] { \Tcircle{\tiny 3} } { { \Tcircle{\tiny 1} } } } } \hspace{1cm} \pstree[levelsep=5mm,treesep=3mm] { \Tcircle{\tiny 4} } { { \Tcircle{\tiny 1} } {\pstree[levelsep=6mm,treesep=3mm] { \Tcircle{\tiny 3} } { { \Tcircle{\tiny 2} } } } } \caption{The André trees with 4 vertices. \label{tree5}} \end{figure} Let us describe Stanley's bijection. We see it as a map $\mathcal{M}(A_{n-1}) \to \mathcal{A}_n$ that induces a bijection $\mathcal{M}(A_{n-1}) / A_{n-1} \to \mathcal{A}_n $. We present an example in Figure~\ref{stanmap} and refer to \cite{stanley2} for more details. Suppose that we start from the minimal partition $1|2|3|4|5|6|7$ and at each step, two blocks merge into a larger block. We need 6 steps before arriving at the maximal partition $1234567$. Each vertex $v$ of the tree represents a subset $b$ of $\{1,\dots,n\} $ of cardinality at least $2$ that appears as a block of an element of the chain. This vertex $v$ has label $i$ if the block $b$ appears after the $i$th merging. If $v_1,v_2$ are two vertices and $b_1,b_2$ the corresponding subsets of $\{1,\dots,n\}$, then $v_1$ is below $v_2$ in the tree if $b_1\subset b_2$. In the example of Figure~\ref{stanmap}, the correspondence between blocks and labels is: $46 \to 1$, $15 \to 2$, $37 \to 3$, $3467 \to 4$, $125 \to 5$ , $1234567 \to 6$. \begin{figure}[h!tp] \parbox{4cm}{\small $1234567$ \\ $125|3467$ \\ $15|2|3467$ \\ $15|2|37|46$ \\ $15|2|3|46|7$ \\ $1|2|3|46|5|7$ \\ $1|2|3|4|5|6|7$ } \hspace{1cm} \begin{pspicture}(0,0)(0,-0.7) \pstree[levelsep=5mm,treesep=3mm] { \Tcircle{\tiny 6} } { { \pstree[levelsep=6mm,treesep=8mm] { \Tcircle{\tiny 5} } { { \Tcircle{\tiny 2} } } } {\pstree[levelsep=6mm,treesep=3mm] { \Tcircle{\tiny 4} } { { \Tcircle{\tiny 3} }{ \Tcircle{\tiny 1} } } } } \end{pspicture} \caption{Stanley's bijection. \label{stanmap} } \end{figure} \begin{prop} Let $\Pi\in\mathcal{M}^{NC}(A_{n-1})$, and $T\in\mathcal{A}_n$ its image under Stanley's bijection. Then we have \[ M([\Pi],q) = \prod_{\substack{ v \in T \\ h_v \neq 1 } } ( 2 + q(h_v-1) ), \] where $h_v$ is the hook-length of the vertex $v$, i.e.\ the number of vertices in the subtree rooted at $v$. \end{prop} \begin{proof} Let $2\leq i\leq n$. There are $a>0$ and $b>0$ such that $\pi_i$ is obtained from $\pi_{i-1}$ by merging two blocks of size $a$ and $b$ into one block of size $a+b$. The integer $h_i$ is the Coxeter number of $\mathfrak{S}_{a+b}$, i.e. $h_i=a+b$. If $a>1$ or $b>1$, i.e. one of the two blocks has cardinality at least 2, there is a nontrivial factor $\mathfrak{S}_a$ or $\mathfrak{S}_b$ that needs $a+b$ rotations through the cycle to go back to itself, so that $g_i=a+b$. But if $a=b=1$, we have $g_i=1=\frac{h_i}2$. Let $v$ be the vertex of $T$ with label $i$. From the properties of the bijection, the two subtrees attached to $v$ contain $a-1$ and $b-1$ vertices, so that $h_v=a+b-1$.
So, we obtain: \[ \frac{2g_i}{h_i} + q(g_i - \frac{2g_i}{h_i} ) = \begin{cases} 2 + q(h_v-1) & \text{if } h_v>1, \\ 1 & \text{otherwise.} \end{cases} \] So Proposition~\ref{classgen} specializes as stated above. \end{proof} As a consequence, Equation~\eqref{classeq} gives the following: \begin{theo} \begin{equation} \label{hookA} \prod_{i=1}^{n-1} ( i+1 + q(n-i) ) = \sum_{T \in \mathcal{A}_n } \prod_{\substack{ v \in T \\ h_v \neq 1 } } ( 2 + q(h_v-1) ). \end{equation} \end{theo} For example, for $n=4$, taking the 5 trees in the order of Figure~\ref{tree5}, we get: \begin{align*} (2+3q)(3+2q)(4+q) & = (2+q)(2+2q)(2+3q) + (2+2q)(2+3q) + \\ & \qquad (2+q)(2+3q) + (2+q)(2+3q)+(2+q)(2+3q). \end{align*} Let us now make the connection with previously known results. Let $\mathcal{T}_n$ denote the set of binary plane trees on $n$ vertices, and $\mathcal{T}^\ell_n$ denote the set of pairs $(T,L)$ where $T\in\mathcal{T}_n$ and $L$ is a decreasing labeling of the vertices. It is well-known that the number of such labelings $L$ for a given $T$ is \[ \frac{n!}{ \prod_{v\in T} h_v }. \] Moreover, there is a map $\mathcal{T}^\ell_n \to \mathcal{A}_n $ which consists of ``forgetting'' the notion of left and right among the sons of each internal vertex. It is such that each $T \in \mathcal{A}_n $ has $2^{{\rm in}(T)}$ preimages, where ${\rm in}(T)$ is the number of internal vertices of $T$ (i.e. $v\in T$ such that $h_v>1$). Then, we can rewrite the right-hand side of \eqref{hookA}: \begin{align*} & \sum_{T \in \mathcal{A}_n } \prod_{\substack{ v \in T \\ h_v \neq 1 } } ( 2 + q(h_v-1) ) = \frac{1}{2^n} \sum_{T \in \mathcal{A}_n } 2^{{\rm in}(T)} \prod_{ v \in T } ( 2 + q(h_v-1) ) \\ & = \frac{1}{2^n} \sum_{T \in \mathcal{T}^\ell_n } \prod_{ v \in T } ( 2 + q(h_v-1) ) = \frac{n!}{2^n} \sum_{T \in \mathcal{T}_n } \prod_{ v \in T } \big( \frac{ 2 + q(h_v-1) } {h_v} \big). \end{align*} So we arrive at \[ \prod_{i=1}^{n-1} ( i+1 + q(n-i) ) = \frac{n!}{2^n} \sum_{T \in \mathcal{T}_n } \prod_{ v \in T } ( q + \frac{2-q}{h_v} ). \] The particular case $q=1$ is Postnikov's hook-length formula \cite[Corollary 17.3]{postnikov}, proved in the course of investigating volumes of generalized permutohedra. A one-parameter generalization was conjectured by Lascoux and proved by Du and Liu \cite{du}; it is exactly the previous equation up to the change of variables $(q,2-q)\to(q,1)$. Let us turn to the type B analogue, where we can adapt Stanley's bijection. (Note that a type B analogue of André trees or permutations has been considered by Purtill \cite{purtill}, in relation to type B Springer numbers.) For brevity, the integers $-1$, $-2$, etc. will be denoted by $\bar 1$, $\bar 2$, etc. A set partition of type B is a set partition of $\{\bar n,\dots, \bar 1\}\cup \{1,\dots,n\}$, unchanged under the map $x\to -x$, and such that there is at most one block $b$ such that $b=-b$ (called the 0-block when it exists). For example, $1 \bar 2 5|\bar 1 2 \bar 5 | 3 \bar 3 6 \bar 6 | 4 | \bar 4 \in \mathcal{P}(B_6)$. \begin{defi} A {\it pointed André tree} is an André tree with a distinguished vertex $v\in T$ having 0 or 1 son. Let $\mathcal{A}^*_n$ denote the set of pointed André trees on $n$ vertices. \end{defi} A tree $T\in \mathcal{A}^*_n$ is represented with the convention that the distinguished vertex has a starred label $i^*$. We can create a new tree as follows: increase all labels by 1, then add a new vertex with label $1$ attached to the distinguished vertex.
This is clearly a bijection between $\mathcal{A}^*_n$ and $\mathcal{A}_{n+1}$, showing that $\# \mathcal{A}^*_n = T_{n+1} = K(B_n) $. See Figure~\ref{treebij} for an example. \begin{figure}[h!tp] \pstree[levelsep=6mm] { \Tcircle{\tiny 5} } { \pstree[levelsep=6mm,treesep=1mm] { \Tcircle{\tiny $4^*$ \hspace{-3mm} } } {{ \pstree[levelsep=5mm,treesep=3mm] { \Tcircle{\tiny 3} } {{ \Tcircle{\tiny 2} }{ \Tcircle{\tiny 1}}}}} } \hspace{1cm} \pstree[levelsep=6mm] { \Tcircle{\tiny 6} } { \pstree[levelsep=5mm,treesep=3mm] { \Tcircle{\tiny 5} } {{ \Tcircle{\tiny 1} }{ \pstree[levelsep=5mm,treesep=3mm] { \Tcircle{\tiny 4} } {{ \Tcircle{\tiny 3} }{ \Tcircle{\tiny 2}}}}} } \caption{ The bijection $\mathcal{A}^*_n \to \mathcal{A}_{n+1} $. \label{treebij} } \end{figure} Let $\Pi=(\pi_0,\dots,\pi_n)\in\mathcal{M}(B_n)$. We build a tree $T\in \mathcal{A}_n^*$ by adapting Stanley's map. A vertex in $T$ represents either the 0-block in some $\pi_i$, or a pair of distinct opposite blocks in some $\pi_i$ whose blocks have cardinality at least 2. This vertex has label $i$ if this 0-block, or pair of opposite blocks, appears in $\pi_i$ but not in $\pi_{i-1}$. A vertex $v_1$ is below another vertex $v_2$ in the tree when the blocks represented by $v_1$ are included in the blocks represented by $v_2$. Finally, we have the following rule: the distinguished vertex has label $i$ if and only if $\pi_{i}$ has a 0-block, and $\pi_{i-1}$ has none. See Figure~\ref{stanleyB} for an example. \begin{figure}[h!tp] \parbox{4cm}{\small $1\bar 1 2 \bar 2 3 \bar 3 4 \bar 4 5 \bar 5 6 \bar 6 $ \\ $1\bar 13 \bar 3 |2\bar 456|\bar 24 \bar 5 \bar 6$ \\ $1\bar 13 \bar 3 |25|\bar 2\bar 5 | 4\bar 6 |\bar 46$ \\ $13 | \bar 1 \bar 3 |25|\bar 2\bar 5 | 4\bar 6 |\bar 46$ \\ $1 | \bar 1 | 3 | \bar 3 | 2 5 |\bar 2\bar 5 | 4\bar 6 |\bar 46$ \\ $1 | \bar 1 | 2 | \bar 2 | 3 |\bar 3 | 5 | \bar 5 | 4\bar 6 |\bar 46$ \\ $1 | \bar 1 | 2 | \bar 2 | 3 | \bar 3 | 4 | \bar 4 | 5 | \bar 5 | 6 | \bar 6$ } \hspace{1cm} \begin{pspicture}(0,0)(0,-0.7) \pstree[levelsep=5mm,treesep=3mm] { \Tcircle{\tiny 6} } { {\pstree[levelsep=6mm,treesep=3mm] { \Tcircle{\tiny 5} } { { \Tcircle{\tiny 2} }{ \Tcircle{\tiny 1} } } } { \pstree[levelsep=6mm,treesep=8mm] { \Tcircle{\tiny $4^*$ \hspace{-3mm} } } { { \Tcircle{\tiny 3} } } } } \end{pspicture} \caption{Stanley's bijection adapted to type B. \label{stanleyB} } \end{figure} \begin{prop} Let $\Pi\in\mathcal{M}(B_n)$ and $T\in\mathcal{A}^*_n$ its image under the bijection we have just defined. For any vertex $v$ of the tree $T\in\mathcal{A}^*_n$, we define a factor $\beta(v)$ to be $1+q(h_v-1)$ if $v$ belongs to the minimal path joining the root to the distinguished vertex, and $2+q(h_v-1)$ otherwise. Then we have: \[ M([\Pi],q) = \prod_{\substack{ v\in T \\ h_v \neq 1 }} \beta(v). \] \end{prop} \begin{proof} Let $2\leq i\leq n$, and let $v$ be the vertex with label $i$. Suppose first that $\pi_i$ is obtained from $\pi_{i-1}$ by merging two pairs of distinct opposite blocks into a pair of distinct opposite blocks (such as $25|\bar 2 \bar 5$ and $4 \bar 6|\bar4 6 $ in the example). This is the case where $v$ is not in the minimal path from the root to the distinguished vertex. This means that $W_{(\pi_i)}$ is obtained from $W_{(\pi_{i-1})}$ by replacing a factor $\mathfrak{S}_a\times \mathfrak{S}_b$ with $\mathfrak{S}_{a+b}$. As in the type A case, we get $g_i=h_i=a+b$, and the two subtrees of $v$ contain $a-1$ and $b-1$ vertices, so that $h_v=a+b-1$. This gives $\frac{2g_i}{h_i} + q(g_i - \frac{2g_i}{h_i} ) = \beta(v)$.
Suppose next that $\pi_i$ is obtained from $\pi_{i-1}$ by merging two opposite blocks into a 0-block (such as $13$ and $\bar 1\bar 3$ in the example). This is the case where $v$ is the distinguished vertex. This means that $W_{(\pi_i)}$ is obtained from $W_{(\pi_{i-1})}$ by replacing a factor $\mathfrak{S}_j = A_{j-1}$ with $B_j$, where $j$ is the size of the 0-block, and also the hook-length of $v$. We obtain $h_i=2j$, and $g_i=j$. In this case too, this gives $\frac{2g_i}{h_i} + q(g_i - \frac{2g_i}{h_i} ) = \beta(v)$. Finally, suppose that $\pi_i$ is obtained from $\pi_{i-1}$ by merging a pair of distinct opposite blocks with the 0-block (such as $2\bar 456|\bar 2 4 \bar 5 \bar 6$ in the example). This is the case where $v$ is in the minimal path from the root to the distinguished vertex (but is not the distinguished vertex). This means that $W_{(\pi_i)}$ is obtained from $W_{(\pi_{i-1})}$ by replacing a factor $A_{j-1} \times B_k $ with $B_{j+k}$. Here, $k>0$ is the number of vertices in the subtree of $v$ containing the distinguished vertex, and $j-1\geq 0$ is the number of vertices in the other subtree. We get $h_i=2(j+k)$, $g_i=j+k=h_v$, and $\frac{2g_i}{h_i} + q(g_i - \frac{2g_i}{h_i} ) = \beta(v)$. So Proposition~\ref{classgen} specializes as stated above. \end{proof} So, in the type B case, Equation~\eqref{classeq} gives: \begin{theo} \[ \prod_{i=1}^{n} ( i + q(n-i) ) = \sum_{T \in \mathcal{A}^*_n } \prod_{\substack{ v \in T \\ h_v \neq 1 } } \beta(v). \] \end{theo} For example, let $n=3$. We take the 5 elements of $\mathcal{A}^*_n$ as they appear in Figure~\ref{tree5} after applying the inverse bijection $\mathcal{A}_{n+1}\to \mathcal{A}^*_n$, and we get: \begin{align*} 3(2+q)(1+2q) & = (1+q)(1+2q) + (1+q)(1+2q) + (1+2q) + \\ & \qquad (1+2q) + (2+q)(1+2q). \end{align*} Strictly speaking, the identity in the previous theorem might not be considered a hook-length formula, since $\beta(v)$ does not depend only on the hook-length $h_v$. Still, it is an interesting variant of the type A case in its own right.
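To close this section, here is a computational check of the type A identity \eqref{hookA}. The encoding of the sum over André trees by the same one-son/two-unordered-sons recursion as in the counting sketch above is our own (equivalent to Definition~\ref{defandretrees}); each internal vertex of hook-length $h$ contributes the weight $2+q(h-1)$.
\begin{verbatim}
from math import comb
from functools import lru_cache
import sympy as sp

q = sp.symbols('q')

@lru_cache(maxsize=None)
def tree_sum(n):
    # sum over A_n of prod_{v internal} (2 + q (h_v - 1)); the root has
    # hook-length n and is internal as soon as n >= 2
    if n <= 1:
        return sp.Integer(1)
    two_sons = sum((comb(n - 1, k) * tree_sum(k) * tree_sum(n - 1 - k)
                    for k in range(1, n - 1)), sp.Integer(0))
    return sp.expand((2 + q * (n - 1)) * (tree_sum(n - 1) + two_sons / 2))

for n in range(2, 9):
    lhs = sp.expand(sp.Mul(*[i + 1 + q * (n - i) for i in range(1, n)]))
    assert sp.expand(lhs - tree_sum(n)) == 0
print("type A hook formula verified for n = 2, ..., 8")
\end{verbatim}
The same recursion, enriched with the position of the distinguished vertex and the weights $\beta(v)$, would check the type B identity in the same way.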
\section{Introduction} Coronal mass ejections (CMEs) and solar flares are among the most powerful and impressive manifestations of solar activity \citep[for overviews see, e.g.,][]{kahler1992,schwenn2006}. CMEs and flares may or may not occur together, with the association rate strongly increasing for more energetic events \citep[][]{sheeley1983,yashiro2009}. The question of whether and how the two phenomena are physically linked has been widely debated. The commonly accepted model of a combined CME-flare event is the eruptive flare scenario \citep[e.g., review by][]{priest2002}. A flux rope embedded into a magnetic arcade starts to rise, causing the magnetic field lines that tie the coronal structure to the solar surface to become increasingly stretched, finally forming a vertical current sheet beneath the eruption. If the field lines in the current sheet start to reconnect, the sudden release of magnetic energy powers a solar flare. In addition, the newly reconnected field lines add poloidal magnetic flux to the rising flux rope, and thus sustain the upward propelling force \citep[e.g.][]{vrsnak2008}. In this way, the energy released in magnetic reconnection is supposed to be distributed both to enhance the kinetic energy of the CME flux rope and to drive dynamic processes in the associated flare, such as the generation of shocks, outflow jets, plasma heating, and the acceleration of high-energy particles. Observational evidence for the coupling and correlation between the flare and the CME characteristics has been presented in several studies dealing with large event samples. Most commonly, such studies use proxies for the energetics of flares and CMEs that can be easily derived from observations, such as the GOES soft X-ray peak flux of the flare and the mean plane-of-sky speed of the CME \citep[e.g.][]{moon2002,burkepile2004,vrsnak2005,mahrous2009}. Recently, studies of the full CME acceleration profile have also been performed, reporting a close synchronization of the impulsive CME acceleration phase and the rise phase of the soft X-ray flux of the associated flare in at least 50\% of the events under study \citep{zhang2001,maricic2007,bein2012}. Measurements of the CME acceleration are difficult, since the impulsive acceleration of the eruption often lasts only some tens of minutes \citep[e.g.][]{zhang2004,zhang2006} and takes place close to the Sun at distances $\lesssim 3$~$R_\odot$ \citep[e.g.][]{macqueen1983,stcyr1999,vrsnak2001}. This means that imaging of the low corona at high cadence is required. Recent studies have shown that high-cadence EUV imagery in combination with white-light coronagraphs provides a good means of tracing the onset and early stages of CME eruptions \citep[e.g.][]{gallagher2003,vrsnak2007,temmer2008}. \cite{bein2011} report that in about 70\% of 96 impulsive CMEs that were studied in such combined EUV and white-light imagery, the CME peak acceleration occurred at heights as low as $\lesssim$~0.5~$R_\odot$. Important information on the primary energy release in solar flares can be obtained from hard X-ray (HXR) spectra. Supra-thermal electrons accelerated during the impulsive energy release process precipitate toward the solar surface, where they lose their energy in Coulomb collisions with the ambient plasma, heating it to several million degrees. The heated chromospheric plasma expands into the coronal part of the flare loop, where it causes enhanced soft X-ray emission.
A tiny part $(\sim 10^{-5})$ of the kinetic energy in non-thermal electrons impinging on the chromosphere is radiated away as non-thermal bremsstrahlung in the HXR domain. This HXR radiation is by itself energetically not important, but the spectral characteristics of the radiated bremsstrahlung provide important diagnostics on the energy distribution and the total energy in the flare-accelerated electrons, which contain a large fraction of the total energy released during a flare \cite[e.g.][]{hudson1991,dennis2003}. Whereas many studies characterize the flare evolution via the thermal flare plasma, as observed in the soft X-ray domain (primarily by the GOES satellites), only a few studies compare the information on flare-accelerated electrons contained in HXR data with the associated CME dynamics. \citet{qiu2004} and \citet{jing2005} inferred magnetic reconnection rates from the apparent motion of chromospheric flare ribbons and found that the reconnection rate was temporally correlated with the CME/filament acceleration as well as with the flare HXR emission. \citet{temmer2008,temmer2010} presented detailed case studies of the impulsive acceleration in fast CMEs and the evolution of the HXR flux and spectral characteristics of the associated flare, finding a tight synchronization between the flare HXR peak and the CME acceleration peak. However, a study of the relation between the CME acceleration and the evolution of the associated flare energy release and particle acceleration for a larger event sample is still missing. Such a study can provide insight not only into the temporal correlation but also into the scaling between characteristic parameters of the flare energy release and the CME acceleration. In the present paper, we study a sample of 37 impulsive CME-flare pairs for which the CME acceleration phase could be measured and for which hard X-ray observations of the flare peak were available. The CMEs are observed at high spatial and temporal resolution by the EUV imagers and white-light coronagraphs onboard the Solar Terrestrial Relations Observatory \citep[STEREO;][]{kaiser2008}. Using high-resolution X-ray spectra provided by the Reuven Ramaty High Energy Solar Spectroscopic Imager \citep[RHESSI;][]{lin2002}, we study the characteristics of the accelerated electron spectra as well as the hot flaring plasma. \section{Observations}\label{sec:observations} For the study of the CME kinematics, acceleration and source region characteristics, we used coronal EUV and white-light images provided by STEREO's Sun Earth Connection Coronal and Heliospheric Investigation suite \citep[SECCHI;][]{howard2008}. The SECCHI Extreme Ultraviolet Imager \citep[EUVI;][]{wuelser2004} observes the solar disk and off-limb corona up to a distance of 1.7~$R_\odot$ from Sun center. EUVI delivers filtergrams in four passbands observing plasma at chromospheric and coronal temperatures. We mainly used the 171~{\AA} passband (dominated by emission of Fe~\textsc{ix/x} ions, $T\sim 10^6$~K), and in some cases the 195~{\AA} passband (Fe~\textsc{xii} and Fe~\textsc{xiv} ions; $T\sim 1.5\times 10^6$~K). The nominal time cadence of the 171~{\AA} images is $2.5$~min but can be as high as $\sim75$~s for campaign data. Images taken in the 195~\AA\, passband have a nominal cadence of $10$~min, which was increased to 5~min in 2009. The evolution of the CME further away from the Sun was followed in data from the STEREO COR1 and COR2 coronagraphs \citep{thompson2003}.
COR1 has a field-of-view (FOV) from 1.4 to 4~$R_{\odot}$ from Sun center, and COR2 from 2.5 to 15~$R_{\odot}$. The observing cadence of COR1 is typically 5 minutes (but can be up to 20 minutes); the cadence of COR2 total-brightness images is 30 minutes. The overlapping FOVs of the EUVI, COR1 and COR2 instruments enabled us to identify and follow the same CME structure at high cadence across the observations of the different instruments. Flare observations were provided by the Reuven Ramaty High Energy Solar Spectroscopic Imager \citep[RHESSI;][]{lin2002}, detecting X-ray and $\gamma$-ray emission from the Sun in the energy range 3~keV to 17~MeV. RHESSI is an indirect Fourier imager providing X-ray images at high angular resolution (as good as $\sim2.3''$) and spectroscopy at unsurpassed spectral resolution ($\sim$1~keV below 100~keV). Since our aim is to compare the evolution of the energy release in solar flares to the characteristics of the associated CME dynamics, we searched for events for which the CME acceleration phase was well observed and the flare impulsive phase was covered by RHESSI observations. We took care not to include flares which were partially occulted by the solar limb. In the time period January 2007 to May 2010 (i.e.\ covering the first 3.5 years of STEREO observations), we identified a sample of 37 CME-flare events that fulfilled these requirements. The GOES flare class distribution of the events selected is GOES class M: 3, C: 16, B: 11, and $\leq$GOES A: 7 events.\footnote{We note that the STEREO mission was launched in solar minimum conditions, and thus the strongest events are missing in our sample.} Out of these, 14 events showed appreciable non-thermal hard X-ray emission. The remaining 23 events showed either weak non-thermal X-ray emission or solely thermally produced X-ray emission. We note that our sample has a selection bias towards impulsive CMEs, i.e. CMEs that have a short main acceleration phase. Due to the distinct anticorrelation of the CME acceleration duration and the CME peak acceleration \citep{zhang2006,bein2011}, this also implies that the involved peak acceleration values are high. The reason for this selection bias is twofold. On the one hand, we aimed to select CMEs where the main acceleration profile could be reconstructed. This tends to exclude events with gradual (i.e. long-duration, almost constant) acceleration. On the other hand, we aimed at comparing the CME acceleration curves with RHESSI observations of the main flare phase. Since RHESSI is in a low-Earth orbit (with an orbital period of 96 min), the solar observations are regularly interrupted by eclipses of the satellite. This again tends to exclude gradual, long-duration events. \section{Methods}\label{sec:methods} \subsection{CME kinematics and acceleration} The height-time curves of the selected CMEs were determined by obtaining the position of the leading edge in STEREO EUVI, COR1 and COR2 running difference image sequences. The raw image data were calibrated and processed to improve the visibility of the CME leading edge. First, the images were reduced with the \verb"secchi_prep.pro" routine available in the SSW (SolarSoftWare) tree, which performs the subtraction of the CCD (charge-coupled device) bias, the correction for variable exposure time, and the conversion to physical units. EUVI images were differentially rotated to a common reference time before running difference images were generated.
In the case of faint CMEs, a normalizing-radial-graded filter \citep[][]{morgan2006} was applied. For COR1 and COR2 observations, a pre-event image was subtracted and a sigma filter was applied to enhance the contrast of the faint transient CME structures. For the measurements of the CME evolution, running difference images were constructed by subtracting from each image the image recorded immediately before. If the time cadence of the data was very high or a CME moved very slowly, we instead created difference images from frames taken further apart in time ($\sim$5--10~min for EUVI data, $\sim$10--20~min for COR1 data). The CME kinematics were then derived by following the evolution of the detected CME leading edge along the main propagation direction, starting from the determined CME-flare source region. We developed an algorithm to automatically identify the CME leading edge based on the fact that it appears as a bright front with a sharp intensity drop toward the regions outside the CME \cite[for details see][]{bein2011}. This algorithm works well for clear CME fronts but fails for faint ones, in which case we identified the leading edge by visual inspection. We note that our CME height measurements are not corrected for projection effects. However, we predominantly selected events where the source region is located close to the solar limb, in order to minimize the influence of projection effects. For 60\% of our events, the projected radial distance $r$ from Sun center is $\gtrsim 0.8\,R_\odot$, and for 85\% of events $r\gtrsim 0.6\,R_\odot$. Based on the derived CME height-time curves, the velocity and acceleration profiles can be determined by numerical differentiation of the height-time data. Since errors in the height-time curve are amplified by differentiation, a smoothing and fitting method based on free-knot cubic splines is used. This fitting technique also allowed us to estimate errors in CME velocity and acceleration by propagating the uncertainties of the fitted spline coefficients to the first and second derivative. For details on the data processing, automated CME tracking, spline fitting and error analysis we refer to \citet{bein2011}. We note that the 37 CME-flare pairs under study are a subsample of the 95 CMEs that were studied in \citet{bein2011}. In Figure~\ref{fig1}, we show the CME height-, velocity- and acceleration-time curves for a sample CME-flare event (2010 February 8) together with the GOES and RHESSI X-ray flux evolution of the associated flare. Further examples are shown in Figs.~\ref{fig2} and \ref{fig3-1}. The parameters obtained from the fitted height-time curves and their first and second derivatives are the CME peak velocity $v_\mathrm{max}$, the CME peak acceleration $a_\mathrm{max}$ and the times at which velocity and acceleration reached their maximum ($t_\mathrm{vmax}$ and $t_\mathrm{amax}$). We also determined the acceleration duration $t_\mathrm{acc}$, defined as the time interval $[t_\mathrm{start}, t_\mathrm{end}]$ containing $t_\mathrm{amax}$, where $t_\mathrm{start}$ and $t_\mathrm{end}$ are the times at which the CME acceleration rose above and fell back below $\sim10\%$ of its peak value. In addition, we derived characteristic height parameters, namely the height $h_0$ where the CME was first identified, the height $h_\mathrm{vmax}$ at which the CME velocity reached its maximum, and the height $h_\mathrm{amax}$ at which the CME acceleration reached its maximum. A simplified illustration of how such kinematical parameters can be extracted from a fitted height-time curve is sketched below.
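The following minimal sketch (synthetic data; a simplified stand-in for the free-knot spline fits with full error propagation of \citet{bein2011}) shows how velocity and acceleration profiles and the characteristic times and heights can be derived from a noisy height-time curve by spline smoothing and differentiation.
\begin{verbatim}
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0, 60, 40)                        # time [min], synthetic
h_true = 1.1 + 1.5 / (1 + np.exp(-(t - 25) / 5))  # height [R_sun], toy rise
h_obs = h_true + rng.normal(0, 0.01, t.size)      # noisy measurements

spl = UnivariateSpline(t, h_obs, k=4, s=t.size * 0.01**2)
vel = spl.derivative(1)                           # velocity [R_sun/min]
acc = spl.derivative(2)                           # acceleration

tt = np.linspace(t[0], t[-1], 600)
t_amax = tt[np.argmax(acc(tt))]                   # time of peak acceleration
t_vmax = tt[np.argmax(vel(tt))]                   # time of peak velocity
h_amax = float(spl(t_amax))                       # height at peak acceleration
v_max = vel(t_vmax) * 6.96e5 / 60.0               # R_sun/min -> km/s
print(t_amax, t_vmax, h_amax, v_max)
\end{verbatim}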
The height $h_0$ of the first CME observation provides us with a rough measure of the CME initiation height and thus also with an upper limit for the size of the pre-eruptive structure \cite[cf.][]{bein2011}. \subsection{Flare X-ray spectroscopy} RHESSI X-ray spectra yield information on fast electrons accelerated during the flare process as well as on the thermal flare plasma. For each event, we derived a background-subtracted photon spectrum integrated over 20~s during the hard X-ray peak, i.e.\ the peak of the non-thermal emission, using all RHESSI front detectors except 2 and 7 \citep{smith2002}. In addition, we also derived spectra during the peak of the soft X-ray emission ($\sim$3--12 keV) to better characterize the thermal flare plasma. In Figs.~\ref{fig1}--\ref{fig3-1}, we show the CME kinematics together with the X-ray flux of the associated flare for three CME-flare pairs. A sample RHESSI spectrum observed at the HXR peak of the C6.2 class flare-CME event on 2010 February 8 is shown in Fig.~\ref{fig1a}. Using the OSPEX software \citep[Object Spectral Executive;][]{schwartz2002}, we applied a forward fit to the spectra using either a combination of a non-thermal thick-target bremsstrahlung model (at higher energies) and an isothermal model (at the low-energy end), or solely an isothermal model. The resulting spectral parameters for the thermal fit comprise the emission measure EM and temperature $T$ of the hot flaring plasma, and for the thick-target model the number of accelerated electrons e$^-$, the electron power-law index $\delta$, and the low-energy cut-off $E_c$ of the accelerated electron spectrum. However, the number of electrons and the low-energy cut-off are intrinsically linked, and $E_c$ cannot be determined accurately. Thus, as additional parameters characterizing the strength of the non-thermal emission, we also determined for each power-law spectrum the (fitted) photon flux at 50~keV, $F_{50}$, and the power $P_{20}$ contained in electrons accelerated to kinetic energies $>$20~keV. The obtained flare parameters were then correlated with the parameters characterizing the CME kinematics and dynamics (see Sect.~\ref{sec:vel_acc_profiles}). In order to study the relative timing of the CME acceleration and the flare energy release as evidenced in the evolution of the non-thermal HXR emission, we reconstructed RHESSI light curves at energies above the low-energy cut-off $E_c$ derived from the spectral fits. Based on these light curves we determined the start, peak, and end times as well as the duration of the non-thermal flare emission, which were then compared with the acceleration profile of the associated CME (see Sect.~\ref{sec:timing}). \section{Results}\label{sec:results} \subsection{Correlations of characteristic CME and flare parameters}\label{sec:vel_acc_profiles} Figs.~\ref{fig2a} and~\ref{fig2b} show scatter plots of the CME peak velocity and CME peak acceleration, respectively, against the characteristic flare spectral parameters, namely the emission measure EM, temperature~$T$, number of accelerated electrons~e$^{-}$, power-law index~$\delta$ of the accelerated electron spectrum, photon flux $F_{50}$ at 50~keV, and power $P_{20}$ in accelerated electrons with energies $>$20~keV. Note that the number of data points in the various scatter plots may differ due to the different number of observables available for each CME-flare pair. Our sample covers many weak flares, and not all events show significant non-thermal emission.
Thus, we only consider non-thermal fitting parameters for reasonably well observed power-law spectra which have an electron spectral index $\delta\lesssim 8$ (which applies to 14 events out of the total of 37 under study). In each scatter plot, we also show the regression line and the linear correlation coefficient $c$ for the respective quantities. Except for one (EM vs.\ $a_\mathrm{max}$), all the correlations in Figs.~\ref{fig2a} and~\ref{fig2b} are significant at a level of $95$\% or higher. Both the CME velocity and the CME acceleration show distinct scalings with the X-ray spectral parameters characterizing the non-thermal energy release in the associated flare. The correlation between the CME peak velocity $v_\mathrm{max}$ and the number of flare-accelerated electrons e$^-$ gives a linear correlation coefficient of $c = 0.73$, the relation $v_\mathrm{max}$ vs.\ $F_{50}$ gives $c = 0.78$, and correlating $v_\mathrm{max}$ and $P_{20}$ we obtain $c = 0.80$ (see Fig.~\ref{fig2a}). In Fig.~\ref{fig3}, we plot the CME peak velocity against the product of the non-thermal power in electrons, $P_{20}$, and the duration of the non-thermal HXR emission. This product is a measure of the total kinetic energy contained in electrons accelerated during the flare impulsive phase, and its correlation with the CME peak velocity is very high, $c= 0.85$. We obtain the same result when comparing $v_\mathrm{max}$ with the product of the non-thermal photon flux $F_{50}$ and the HXR duration ($c = 0.85$). Thus, the best observed scaling is that between the flare non-thermal energy and the peak velocity attained by the CME, which is directly linked to its kinetic energy $E = mv^2/2$, where $m$ is the total CME mass. The non-thermal power-law index $\delta$ is found to be anticorrelated with the CME peak velocity, $c = -0.52$, i.e.\ fast CMEs are preferentially associated with flares with harder power-law spectra. We also found a positive scaling of $v_\mathrm{max}$ with the observed duration of the HXR emission, $c = 0.58$, i.e.\ CMEs which reach higher velocities tend to be associated with flares of prolonged electron acceleration. The CME peak acceleration $a_\mathrm{max}$ (Fig.~\ref{fig2b}) reveals correlations with the non-thermal flare parameters comparable to those obtained for $v_\mathrm{max}$ (Fig.~\ref{fig2a}). However, in general the obtained correlation coefficients for $a_\mathrm{max}$ are slightly lower than those for $v_\mathrm{max}$, except for the relation $a_\mathrm{max}$ vs.\ $\delta$, which is stronger, $c = -0.61$. The correlation coefficient of the CME peak acceleration $a_\mathrm{max}$ and the number of flare-accelerated electrons e$^-$ is $c= 0.52$. The non-thermal photon flux $F_{50}$ and the electron power $P_{20}$ show a distinct positive scaling with the CME peak acceleration $a_\mathrm{max}$, with correlation coefficients of $c = 0.77$ and $c = 0.72$, respectively. These results are indicative of a tight coupling between particle acceleration in flares and the associated CME dynamics.
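For reference, statistics of the kind quoted above can be obtained as in the following sketch: Pearson's linear correlation coefficient, the two-sided $p$-value used to judge the 95\% significance level, and the regression-line slope. The arrays below are random placeholders standing in for (log-scaled) event parameters, not the measured sample.
\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
log_P20 = rng.normal(27.0, 1.0, 37)                  # log10 electron power
log_vmax = 0.3 * log_P20 + rng.normal(0, 0.3, 37)    # log10 CME peak speed

r, p = stats.pearsonr(log_P20, log_vmax)
slope, intercept = np.polyfit(log_P20, log_vmax, 1)  # regression line
print(f"c = {r:.2f}, significant at 95%: {p < 0.05}, slope = {slope:.2f}")
\end{verbatim}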
We can speculate that the fact that $a_\mathrm{max}$ scales somewhat better with the electron spectral index $\delta$ than $v_\mathrm{max}$ implies that the CME peak acceleration is more strongly linked to the hardness of the flare-accelerated electron spectrum, and thus to the number of electrons accelerated to high energies, whereas the CME peak velocity is better related to the total number of and energy in flare-accelerated electrons, which are dominated by the low-energy end of the particle distribution. The CME peak velocity and peak acceleration also show a positive scaling with the thermal flare parameters, i.e.\ the emission measure EM and temperature $T$. However, the correlations are significantly smaller than those obtained for the non-thermal flare parameters. The CME peak velocity $v_\mathrm{max}$ and the flare emission measure EM derived at the HXR peak time are weakly correlated, with $c = 0.32$, whereas EM and $a_\mathrm{max}$ are basically uncorrelated ($c = 0.08$). A better scaling is observed for the flare temperature $T$ and the CME velocity and acceleration, with a correlation coefficient of $c = 0.48$ for $v_\mathrm{max}$ vs.\ $T$ and $c= 0.45$ for $a_\mathrm{max}$ vs.\ $T$. However, the emission measure and temperature derived at the peak of the flare SXR emission are most probably a better indicator of the maximum thermal energy reached in the flare. Indeed, the RHESSI EM and $T$ derived at the flare SXR peak show a somewhat better correlation with the CME $v_\mathrm{max}$ and $a_\mathrm{max}$, with the highest correlation coefficient of $c\approx 0.5$ for the relation $v_\mathrm{max}$ vs.\ $T$. Considering the GOES 1--8~{\AA} soft X-ray peak flux as a further indicator of the thermal energy content of the flares, we find that $v_\mathrm{max}$ correlates better with the GOES peak flux ($c= 0.62$) than with the RHESSI $T$ and EM. For the relation between the CME acceleration $a_\mathrm{max}$ and the GOES peak flux, the correlation coefficient is smaller, $c= 0.41$. We also obtained high correlations between the height $h_0$ above the solar surface at which a CME was observed for the first time, which can be interpreted as a measure of the initiation height of the pre-eruptive structure, and the non-thermal flare parameters (see Fig.~\ref{fig4}). The CME initiation height $h_0$ shows a strong positive correlation with the spectral index $\delta$ of flare-accelerated electrons ($c= 0.77$) and a strong anticorrelation with the non-thermal X-ray flux $F_{50}$ ($c = -0.72$). This means that CMEs erupting at low coronal heights, i.e.\ in regions of stronger magnetic fields, are associated with flares in which a larger number of electrons is accelerated to high energies. The other CME height parameters we measured, $h_\mathrm{vmax}$ and $h_\mathrm{amax}$, i.e.\ the heights at which the CME velocity and CME acceleration reached their maximum, respectively, showed only weak or no correlations at all with the derived flare parameters. The highest correlation coefficient was obtained for the relation between $h_\mathrm{vmax}$ and the HXR duration $t_\mathrm{HXR}$ ($c= 0.47$), i.e.\ long-duration events reach their peak velocity further out in the corona. We also compared the CME acceleration duration $t_\mathrm{acc}$ with the RHESSI spectral fit parameters, revealing no distinct relation except a weak correlation between $t_\mathrm{acc}$ and $\delta$ with $c= 0.41$.
Consequently, CMEs with longer acceleration duration (and thus preferentially smaller peak acceleration, see \cite{bein2011}) show some tendency to be accompanied by flares with softer HXR spectra. \subsection{Relative Timing of CME dynamics and Flare Energy Release}\label{sec:timing} The first and second derivatives of the obtained CME height-time curves provided us with the times at which the CME reached its maximum velocity and its maximum acceleration, as well as with the start and end times of the main CME acceleration phase. For each event, we derived the time difference between the start of the CME acceleration and the start of the non-thermal HXR emission of the associated flare, as well as the time difference between the peak of the CME acceleration and the peak of the non-thermal HXR flare emission, which marks the instant of the strongest particle acceleration. In Fig.~\ref{fig7}, we show the distribution of the time lags obtained between the start of the flare HXR emission and the start of the CME acceleration. We find that in 83\% of the events the CME acceleration starts {\it before} the flare HXR emission. The distribution gives a mean of $+6.0\pm 9.0$~min and a median of $+6.0\pm 6.5$~min. Fig.~\ref{fig8} shows the distribution of the time difference between the peak of the flare HXR emission and the peak of the CME acceleration. We find that the maximum CME acceleration $a_\mathrm{max}$ occurs well synchronized with the flare HXR peak. The arithmetic mean of the time lag distribution gives $-1.1\pm 5.7$~min, the median $-1.4 \pm 2.2$~min. In all but one case the time lags lie within the interval $[-10,+10]$~min. In $\sim$75\% of the CME-flare events under study, the flare HXR peak and the CME acceleration peak occur within 5~min of each other -- a time range that corresponds to the typical uncertainties in the obtained CME acceleration peak times (cf.\ the shaded areas in Figs.~\ref{fig1}--\ref{fig3-1}). For comparison, the mean CME acceleration duration in the events under study is about 25~min. In Fig.~\ref{fig9}, we plot the distribution of the time lags between the peak of the flare HXR emission and the time when the CME reached its maximum velocity. We find that the CME velocities always reach their maximum {\it after} the HXR peak. The derived time differences $\Delta t$ range from 2 to 117~min in absolute value, with the median of the distribution at $-16.3\pm 8.5$~min (the negative sign indicating that the velocity maximum follows the HXR peak). \section{Discussion and Conclusions}\label{sec:discussion} We investigated the physical relation between coronal mass ejections and their associated flares using several approaches. On the one hand, we determined the correlation and scaling of various parameters characterizing the CME acceleration with the flare's X-ray spectral parameters, which yield information on accelerated electrons as well as on the state of the thermal flare plasma. On the other hand, we studied the temporal relation between the CME acceleration and the flare energy release as evidenced in the non-thermal HXR radiation. Our results reveal a tight coupling between both phenomena. The CME peak velocity and peak acceleration yield distinct scalings with the flare parameters characterizing the accelerated electron spectra, in terms of the total number e$^-$ of accelerated electrons, the power in electrons $P_{20}$, the HXR flux $F_{50}$ at 50 keV, and the spectral index~$\delta$ of the electron spectra, with correlation coefficients in the range of 0.5 to 0.8 (all significant at least at the 95\% level).
This means that CMEs with higher peak velocity and higher peak acceleration are accompanied by flares in which more electrons are accelerated, and in which a larger fraction of electrons is accelerated to higher energies (as revealed by the harder X-ray power-law spectra). The highest correlation coefficient in this study ($c = 0.85$) was obtained for the relation between the CME peak velocity $v_\mathrm{max}$, which (together with the CME mass) determines the kinetic energy of the CME, and the product $P_{20}\cdot t_\mathrm{HXR}$ of the power in electrons above 20~keV and the duration of the HXR emission, which is a measure of the total energy in flare-accelerated electrons. These findings strongly support the general idea that the acceleration of the CME and the particle acceleration in the associated flare draw their energy from a common source, probably magnetic reconnection occurring in the current sheet behind the erupting structure. In general, the CME peak velocity is somewhat better correlated with the non-thermal flare parameters than the CME peak acceleration. However, there is one exception: the hardness of the accelerated electron spectrum yields a better correlation with the CME peak acceleration ($c \approx -0.6$) than with the CME peak velocity ($c \approx -0.5$), indicating that the spectral hardness of the electrons injected into the flare loops is intimately coupled to the impulsive acceleration process of the rising CME structure. We also found a distinct correlation between the CME initiation height $h_0$ and the spectral index $\delta$ of the flare-accelerated electrons ($c\approx 0.8$), as well as a distinct anticorrelation between $h_0$ and the non-thermal photon flux $F_{50}$ ($c \approx -0.7$). We note that statistical studies of the CME main acceleration found an anticorrelation between the CME peak acceleration and the size and/or height of the pre-eruptive structure, with correlation coefficients of about $c \approx -0.5$ \citep{vrsnak2007,bein2011}. This anticorrelation has been interpreted in terms of the Lorentz force driving the CME eruption and the variation of the coronal magnetic field strength with height: CMEs originating at low coronal heights, i.e.\ in regions of stronger magnetic fields, have larger Lorentz forces available and can thus reach larger acceleration values than CMEs originating from high in the corona, where the magnetic field is weaker due to the (exponentially) decaying gas pressure and the related expansion of the magnetic field lines. Thus, the correlation between the hardness~$\delta$ of the flare electron spectrum and the CME initiation height~$h_0$ might be a secondary effect caused by the anticorrelation between the CME peak acceleration $a_{\rm max}$ and its initiation height~$h_0$. However, the correlation between the flare $\delta$ and the CME $h_0$ ($c \approx 0.8$) is significantly higher than that between the CME $a_{\rm max}$ and the CME $h_0$ \cite[$|c| \approx 0.5$;][]{bein2011}. We also stress that the initiation height $h_0$ is the CME parameter that gives the highest correlation coefficient with the hardness $\delta$ of the accelerated flare electron spectrum. These findings suggest that the height~$h_0$ of the pre-eruptive structure is a decisive parameter for how efficiently the associated flare accelerates electrons to high energies.
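A hypothetical follow-up to the preceding argument (not carried out in the paper, which compares only the plain correlation coefficients) would be a first-order partial correlation of $\delta$ and $h_0$, controlling for $a_\mathrm{max}$, to test whether the $\delta$--$h_0$ link survives once the $a_\mathrm{max}$--$h_0$ anticorrelation is removed. A sketch with synthetic inputs:
\begin{verbatim}
import numpy as np

def partial_corr(x, y, z):
    # r_{xy.z} from the three pairwise Pearson coefficients
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

rng = np.random.default_rng(2)
a_max = rng.lognormal(0.0, 1.0, 14)               # CME peak acceleration
h0 = 0.3 / (1 + a_max) + rng.normal(0, 0.02, 14)  # initiation height
delta = 4 + 5 * h0 + rng.normal(0, 0.5, 14)       # electron spectral index

print(partial_corr(delta, h0, a_max))  # > 0 if the delta-h0 link survives
\end{verbatim}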
The correlation coefficients obtained between the thermal flare parameters and the CME peak velocity and peak acceleration are significantly smaller ($c\lesssim 0.5$) than those obtained for the non-thermal parameters. The fact that both EM and $T$ show lower correlations with the CME $v_\mathrm{max}$ and $a_\mathrm{max}$ can be interpreted as an effect of the thermal flare plasma being only a secondary product of the flare process. The hot coronal flare plasma is generally assumed to be created by chromospheric heating and evaporation induced by the flare-accelerated electrons \citep[e.g.][]{neupert1968,brown1973,veronig2005}, which are a primary product of the flare energy release process. Several previous studies revealed a distinct relation between the CME mean velocities and the associated flares' soft X-ray peak flux measured by GOES, which characterizes the thermal energy content of flares \citep{moon2002,burkepile2004}. We note that these studies covered a larger spread in flare importance by also including X-class events. A sample of $55$ CME-flare pairs in the study of \citet{moon2002} suggests a correlation coefficient of $c\approx 0.5$ for the relation between the time-integrated GOES SXR flux and the CME velocity. \citet{burkepile2004} estimated the kinetic energy of CMEs originating from close to the solar limb and found a higher correlation with the flare soft X-ray peak flux ($c = 0.74$). For our event sample, the correlation coefficient for $v_\mathrm{max}$ vs.\ GOES peak flux lies in between, with $c\approx 0.6$. We can summarize that the results obtained in the present paper for the thermal flare plasma are qualitatively in line with previous studies, and that our findings suggest that the CME peak acceleration and velocity are more strongly coupled to the particle acceleration in the associated flares than to the maximum thermal energy content of the flare plasma. The comparison of the flare HXR flux evolution with the CME main acceleration profile shows that in $\sim 80\%$ of the events under study, the non-thermal flare emission starts {\it after} the CME acceleration, on average delayed by $\approx 6$~min. This finding agrees with investigations of the flare SXR emission in relation to the CME acceleration by \cite{maricic2007} and \cite{bein2012}, who also found that for the majority of the events the CME acceleration starts before the flare SXR emission. Such a delay of the flare start with respect to the start of the main CME acceleration is well in line with the standard flare model, where the rising flux rope stretches the field lines underneath. At a certain instant, magnetic reconnection sets in within the current sheet behind the erupting structure \cite[e.g.\ due to the tearing instability, when the height-to-width ratio exceeds a certain threshold;][]{furth1963}, causing the main flare energy release and the acceleration of high-energy particles. Under these standard flare-CME model assumptions, we can estimate the length of the current sheet at the onset of magnetic reconnection. For 14 flare-CME pairs in our sample, it was possible to derive the current sheet length as the CME height at the onset time of the non-thermal HXR emission (i.e.\ particle acceleration) minus the initial height of the pre-eruptive structure; the distribution is plotted in Figure~\ref{fig10}. The median of the distribution indicates a current sheet length at the onset of magnetic reconnection of $0.03 \pm 0.01 R_{\odot}$, i.e.
$21 \pm 7$~Mm in the events under study. The flare HXR peaks occur well synchronized with the peak of the CME acceleration profile. In 75\% of the cases they occur within $\pm5$~min, i.e.\ within the typical uncertainties in the determination of the CME acceleration peak time. This means that the rate of particle acceleration is highest at the time of the strongest CME acceleration. This finding agrees with the case studies by \citet{temmer2008,temmer2010}, who also found a close synchronization of the flare HXR peak and the CME acceleration peak in well-observed limb events as well as in fast halo CMEs. Other studies used the derivative of the flare SXR light curves to approximate the time evolution of the flare energy release, based on the Neupert effect \cite[e.g.][]{dennis1993,veronig2002}. For example, \citet{zhang2004} reported a close synchronization of the peak of the SXR flux derivative and the time of maximum acceleration in two long-duration CME-flare events. Statistically, 50--75\% of the events reveal a high degree of synchronization of the growth rate of the SXR emission and the CME acceleration, whereas about 25\% show strong deviations between the timing of the CME peak acceleration and the flare impulsive phase \citep{maricic2007,bein2012}. To date, there exist no simulations of coupled CME-flare eruptions which directly incorporate particle acceleration mechanisms to theoretically investigate the coupling between the CME dynamics and the properties of accelerated flare particles. \citet{reeves2006} and \citet{reeves2010} performed MHD simulations of a flux rope eruption that leads to the formation of a large-scale current sheet and a multi-threaded flare beneath the CME, for which they calculated the thermal energy release and the expected rate of the flare SXR emission. They found that in cases where the background magnetic field and/or the magnetic reconnection rate is high, the CME acceleration and the associated thermal flare energy release are synchronized. Slow reconnection rates cause the CME acceleration to peak earlier, whereas for fast reconnection rates the acceleration peak shifts to later times in the eruption. The set of events we studied in this paper includes predominantly impulsive CMEs, characterized by high acceleration rates over a short acceleration duration. This selection, in view of the simulation results of \citet{reeves2010}, may explain why in our sample basically all events show a high synchronization of the peaks of the CME acceleration and the non-thermal flare emission. \acknowledgments This activity has been supported by the European Community Framework Programme~7, High Energy Solar Physics Data in Europe (HESPE), grant agreement no.: 263086, the Austrian Space Applications Programme (ASAP-6 project \#819664 SOLDYN), and the Austrian Science Fund (FWF): V195-N16. \bibliographystyle{apj}
\section*{Methods} \subsection{Algorithmic details.} For a given two-qubit operation $U$, calculating the 15 single-qubit-gate parameters used in the circuit of Figure~1b is facilitated by working in the so-called ``magic'' basis\cite{hillPRL1997,krausPRA2001} given in the main text. Transforming to the magic basis from the two-qubit computational basis $\ket{\highstate\highstate}, \ket{\highstate\lowstate}, \ket{\lowstate\highstate}, \ket{\lowstate\lowstate}$ is accomplished by use of the unitary matrix \begin{equation} \Lambda = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i & 0 & 0 \\ 0 & 0 & i & 1 \\ 0 & 0 & i & -1 \\ 1 &-i & 0 & 0 \end{pmatrix}. \end{equation} The properties of the magic basis rely on the matrices having unit determinant; thus we first strip $U$ of any global phase by dividing it by a fourth root of its determinant, making it a member of SU(4). Global phases exactly vanish in any observable quantity, allowing this modification. In what follows, matrices in the computational basis are denoted by capital letters, and those in the magic basis by lower-case letters; e.g., $m=\Lambda^\dagger M \Lambda$. We first find the three degrees of freedom $\alpha$, $\beta$, $\delta$ that produce the correct local equivalence class. We decompose the circuit in Figure~1b as \begin{equation} U=\left(C \otimes D\right) \cdot V \cdot \left(A \otimes B\right), \label{eq:computationalcircuit} \end{equation} where $V$ determines the local equivalence class and can be generated using the gate operations appearing within the dashed box of Figure~1b. In order to construct $V$, we transform both $V$ and $U$ into the magic basis as $v$ and $u$ and choose $\alpha$, $\beta$, $\delta$ such that the eigenvalues of $v v^\ensuremath{{\textrm T}}$ match those of $u u^\ensuremath{{\textrm T}}$. (We include a global phase $e^{-i \pi/4}$ in $V$ to make it an element of SU(4).) This is done by comparing the analytical form of the eigenvalues of $v v^\ensuremath{{\textrm T}}$ to those of $u u^\ensuremath{{\textrm T}}$. Since $u u^\ensuremath{{\textrm T}}$ is unitary, it has complex eigenvalues of modulus one: $\lambda_j = e^{i \phi_j}$ ($j \in \{1, 2, 3, 4\}$). We find that $\alpha$, $\beta$, and $\delta$ are given by the means of pairs of eigenvalue phases. One possibility is $\alpha = (\phi_1+\phi_2)/2$, $\beta = (\phi_1+\phi_3)/2$, and $\delta = (\phi_2+\phi_3)/2$. Since no ordering of the eigenvalues is required, there are many such combinations that produce members of $U$'s local equivalence class. The proof of this assignment is by explicit calculation of the eigenvalues of $v v^\ensuremath{{\textrm T}}$ and is analogous to that given in ref.~\cite{shendePRA2004}, which treats the controlled-NOT (CNOT) gate rather than our phase gate. Second, we find the four single-qubit rotations $A, B, C, D$ that comprise the remaining 12 degrees of freedom. Note that $v v^\ensuremath{{\textrm T}}$ and $u u^\ensuremath{{\textrm T}}$ are unitary symmetric matrices and therefore have real, orthonormal eigenvectors\cite{krausPRA2001,makhlinQIP2002}. Because they share eigenvalues, it is possible to simultaneously diagonalize them with matrices $k$ and $l$ such that \begin{equation} kvv^\ensuremath{{\textrm T}} k^\ensuremath{{\textrm T}} = luu^\ensuremath{{\textrm T}} l^\ensuremath{{\textrm T}} . \label{eq:kvvkluul} \end{equation} Here, $k$ and $l$ are eigenvector matrices whose columns have been permuted such that equation~(\ref{eq:kvvkluul}) is valid.
They are both members of SO(4) (if necessary, one of the eigenvectors can be negated to change the matrix determinant from -1 to 1). By rearranging equation~(\ref{eq:kvvkluul}), we obtain \begin{equation} I = v^\dagger k^\ensuremath{{\textrm T}} l u u^\ensuremath{{\textrm T}} l^\ensuremath{{\textrm T}} k v^* = v^\dagger k^\ensuremath{{\textrm T}} l u \left(v^\dagger k^\ensuremath{{\textrm T}} l u\right)^\ensuremath{{\textrm T}} \end{equation} ($I$ is the identity matrix), from which we define $m \equiv v^\dagger k^\ensuremath{{\textrm T}} l u$, also in SO(4). We thus have that \begin{equation} u = l^\ensuremath{{\textrm T}} k v m \label{eq:magiccircuit} \end{equation} where $l^\ensuremath{{\textrm T}} k$ and $m$ are both real and in SO(4). Since they are real orthogonal matrices in the magic basis, they represent single-qubit rotations. We transform equation~(\ref{eq:magiccircuit}) into the computational basis and compare it with equation~(\ref{eq:computationalcircuit}) to find \begin{eqnarray} C \otimes D = \Lambda\left(l^\ensuremath{{\textrm T}} k\right)\Lambda^\dagger \\ A \otimes B = \Lambda m \Lambda^\dagger . \end{eqnarray} To finish, we split $A \otimes B$ and $C \otimes D$ into $A, B, C, D \in \textrm{SU(2)}$ and solve for $\theta$, $\phi$, and $\phi_z$ for each.
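The first step of the procedure is easily reproduced numerically. The following sketch (our own illustration, not the authors' code) strips the global phase, transforms to the magic basis, and reads off one admissible choice of $\alpha$, $\beta$, $\delta$ from the eigenvalue phases of $u u^\ensuremath{{\textrm T}}$, here for the CNOT gate:
\begin{verbatim}
import numpy as np

L = np.array([[1,  1j,  0,  0],
              [0,   0, 1j,  1],
              [0,   0, 1j, -1],
              [1, -1j,  0,  0]]) / np.sqrt(2)

def class_angles(U):
    U = U / np.linalg.det(U) ** 0.25   # one fourth root: U is now in SU(4)
    u = L.conj().T @ U @ L             # magic-basis representation
    phases = np.angle(np.linalg.eigvals(u @ u.T))
    # any pairing of the phases is admissible, as noted in the text
    p1, p2, p3, _ = phases
    return (p1 + p2) / 2, (p1 + p3) / 2, (p2 + p3) / 2

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
print(class_angles(CNOT))  # one representative of CNOT's local class
\end{verbatim}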
\section{Introduction} Impulsive systems are of vital importance in most scientific fields. They can be found in applications ranging from biology and population dynamics to economics and engineering. Usually, such situations are modeled by differential equations, to which controls, delays, impulses, and non-linear perturbations are added in order to capture feedback effects or to characterize the system activity. The focus of this article is the non-autonomous non-instantaneous impulsive semi-linear system involving state-delay and non-local conditions, which is motivated by applications such as species populations, nanoscale electronic circuits consisting of single-electron tunneling junctions, and mechanical systems with impacts \cite{Lakshmikantham1989,Samoilenko1995,Yang2001}. In particular, impulses represent sudden deviations of the states at specific times, acting either as instantaneous jumps or over continuous intervals. Our mathematical motivation is to extend existence and uniqueness results for solutions on a finite-dimensional Banach space \cite{Abada2010,Balachandran1996,Muslim2018} to the aforementioned semi-linear system. It is worth highlighting that some existence and controllability results for the impulsive autonomous case have been obtained by \cite{Nieto2010,Pandey2014,Wang2015} and \cite{Leiva2017}. The latter authors pioneered the implementation of Karakostas' fixed-point theorem to prove the existence of solutions of semi-linear equations with instantaneous impulses. Furthermore, techniques from \cite{Bashirov2015} and Rothe's fixed-point theorem have been used in \cite{Guevara2018W,Guevara2018H,Malik2019} to study the approximate and exact controllability of this family of systems with delays, instantaneous impulses, and memory considerations. Important results on autonomous impulsive systems involving delay were developed by \cite{Driver:1977,Hernandez2006,Li2006}. In addition, problems with non-local conditions including impulses can be found in \cite{Leiva2018}. Finally, \cite{Hernandez2013} introduced the class of non-instantaneous impulsive systems, and \cite{Pierri2013} showed the existence of solutions for these systems. Later, \cite{A.Anguraj2016,Agarwal2017b,LeivaZouhair2020} presented relevant studies of models based on non-instantaneous impulsive differential equations. However, to our knowledge, there are no results on the existence of solutions for semi-linear non-autonomous systems that include all of these features simultaneously. This is the focus of our research. This article is structured as follows. Section \ref{System Description} describes the analyzed system and the notation. Section \ref{Preliminary Theory and Hypotheses} deals with preliminary concepts, definitions, and hypotheses used throughout this work. Section \ref{Existence and Uniqueness of Solutions} is devoted to the existence and uniqueness of solutions for the system in light of Karakostas' fixed-point theorem, an extension of the fixed-point theorem due to M. A. Krasnosel’ski{\v\i} developed in \cite{Karakostas2003}. Finally, sections \ref{Example} and \ref{Final Remarks} illustrate these results with an example of the considered system and present conclusions and guidelines for open problems. \section{System Description}\label{System Description} Let $N\in\mathbb{N}$, and denote by $I_N$ the set $\{1,2,\ldots,N\}$.
In this article, the existence and uniqueness of solutions for the following semi-linear non-autonomous system are proved: \begin{equation}\label{eqExist} \begin{cases} z^{\prime}(t) = \boldsymbol{A}(t)z(t) +f(t,z_{t}), &t \in \bigcup\limits_{i=0}^{N}\left(s_{i}, t_{i+1}\right], \\ z(t)= G_{i}(t,z(t)), & t\in(t_{i},s_{i}],\ i\in I_N,\\ z(t)=\phi(t)- g(z_{\theta_1},z_{\theta_2},\dots, z_{\theta_q})(t), & t \in [-r,0], \end{cases} \end{equation} where $s_{i}, t_{i}, \theta_{j}, r \in (0,\tau)$, with $t_{i}\leq s_{i}<t_{i+1}$, $\theta_{j}< \theta_{j+1}$, for $i\in I_N$ and $j\in I_{q}$, $s_{0}=t_{0}=0$, and $t_{N+1}=\theta_{q+1}=\tau$, all fixed real numbers. The system solutions are denoted by $z:\mathcal{J}=[-r,\tau]\longrightarrow \mathbb{R}^n$, and the non-instantaneous impulses are represented by $G_i:(t_i,s_i]\times\mathbb{R}^{n}\longrightarrow \mathbb{R}^n,\ i\in I_N.$ $\boldsymbol{A}$ is a continuous matrix-valued function with $\boldsymbol{A}(t)\in\mathbb{R}^{n\times n}$, $t \in \mathbb{R}$. $z_{t}$ stands for the translated function of $z$ defined by $z_{t}(s) = z(t+s)$, with $s\in [-r,0]$. The function $f :\overline{\mathbb{R}_+}\times PC_{r}([-r,0]; \mathbb{R}^{n}) \longrightarrow \mathbb{R}^n$ represents the non-linear perturbation of the differential equation in the system, where $\overline{\mathbb{R}_+}=[0,+\infty)$, and $g: PC_{r}^q([-r,0]; (\mathbb{R}^{n})^{q}) \longrightarrow PC_{r}([-r,0]; \mathbb{R}^{n})$ describes the non-local conditions. The function \begin{equation}\label{phi} \phi:[-r,0]\longrightarrow\mathbb{R}^n \end{equation} represents the history of the state on the time interval $[-r,0]$. To properly pose system \eqref{eqExist}, the following Banach spaces are considered. Denote by $C(\mathcal{U};\mathbb{R}^n)$ the space of continuous functions on a set $\mathcal{U}\subset \mathbb{R}$. $PC_{r}=PC_{r}([-r,0];\mathbb{R}^n)$ is the space of functions of the form \eqref{phi} that are continuous except on a finite number of points $r_i,\ i\in I_l$, with \ $l \leq N$, where the side limits $\phi(r_i^+),\ \phi(r_i^-)$ exist and $\phi(r_i)=\phi(r_i^-)$, for all $i\in I_l$, endowed with the supremum norm. The natural Banach space for the solutions of system \eqref{eqExist} is defined as: \begin{equation*} \begin{split} PC_{r\tau}=PC_{r\tau}(\mathcal{J};\mathbb{R}^n)= \Big\{z: & \mathcal{J} \longrightarrow \mathbb{R}^{n}\ {\Big |} \ z\big |_{[-r,0]}\in PC_{r},\ z\big |_{[0,\tau]} \in C(\mathcal{J}^{\prime}; \mathbb{R}^{n}),\\ &\text{ there exist }\ z(t_{k}^{+}),\ z(t_{k}^{-}),\ \text{ and }\ z(t_{k})=z(t_{k}^{-}),\ k\in I_N \Big\}, \end{split} \end{equation*} where $\mathcal{J}^{\prime} = [0,\tau] \backslash \{t_1, t_2, \dots, t_{_{N}} \}$, endowed with the norm \begin{equation*} \|z\|=\|z\|_{0} = \sup_{t \in \mathcal{J}} \|z(t)\|_{\mathbb{R}^n},\quad z\in PC_{r\tau}. \end{equation*} The Cartesian product space given by $ \left(\mathbb{R}^n\right)^q=\mathbb{R}^n\times \mathbb{R}^n\times \dots \times \mathbb{R}^n=\displaystyle\prod_{i=1}^q \mathbb{R}^n$ is equipped with the norm $\displaystyle\|z\|_{\left(\mathbb{R}^n\right)^q}=\sum_{i=1}^q \|z_{i}\|_{\mathbb{R}^n}$, for $ z\in \left(\mathbb{R}^n\right)^q.$ The space $PC_{r}^q=PC_{r}^q([-r,0]; (\mathbb{R}^{n})^{q})$ is defined analogously and endowed with the norm \begin{equation*} \|z\|_{PC_{r}^q}=\sup_{t\in [-r,0]}\|z(t)\|_{\left(\mathbb{R}^n\right)^q},\quad z\in PC_{r}^q.
\end{equation*} \section{Preliminary Theory and Hypotheses}\label{Preliminary Theory and Hypotheses} In this section, the evolution operator associated with the corresponding linear system is defined. Since this work can be extended to infinite-dimensional Banach spaces, the properties of the evolution operator are included with a view to that setting, where uniform continuity is lost unless the evolution operator is assumed to be compact. Finally, the system solutions are characterized, and the hypotheses needed to apply Karakostas' fixed-point theorem are presented. Let $\boldsymbol{U}$ be the evolution operator corresponding to system \eqref{eqExist}, \begin{equation}\label{evolutionOp} \boldsymbol{U}(t,s) = \Phi(t)\Phi^{-1}(s),\quad\text{ for all } t, s\in \mathbb{R}, \end{equation} where $\Phi$ is the fundamental matrix solution of the associated linear system \begin{equation}\label{uncontrolled} z^{\prime}(t) = \boldsymbol{A}(t)z(t). \end{equation} Therefore, there exist constants $\widehat{M},\ \omega >0$ and $M \geq 1$ such that \begin{equation*} \|\boldsymbol{U}(t, s)\| \leq \widehat{M} e^{\omega(t-s)}\leq M, \quad 0 \leq s \leq t \leq \tau. \end{equation*} The following proposition exhibits a characterization of the solutions of system \eqref{eqExist} and is based on the works \cite{Leiva2018} and \cite{Pierri2013}. \begin{proposition} \label{characterization} The semi-linear system \eqref{eqExist} has a solution $z\in PC_{r\tau}(\mathcal{J};\mathbb{R}^n)$ if, and only if, \begin{equation} \label{solution} z(t)=\begin{cases} \boldsymbol{U}(t,0)[\phi(0)-g(z_{\theta_1},z_{\theta_2},\dots, z_{\theta_q})(0)]\\ + \displaystyle \int_{0}^{t}\boldsymbol{U}(t,s)f(s,z_{s})ds, & t \in (0,t_{1}],\\ \displaystyle \boldsymbol{U}(t,s_{i})G_{i}(s_{i},z(s_{i})) +\displaystyle \int_{s_{i}}^{t} \boldsymbol{U}(t,s) f(s,z_{s})ds, & t \in\left(s_{i}, t_{i+1}\right],\ i\in I_N,\\ G_{i}(t,z(t)), & t \in (t_{i},s_{i}],\ i\in I_N,\\ \phi(t)- g(z_{\theta_1},z_{\theta_2},\dots, z_{\theta_q})(t), \quad & t\in [-r,0]. \end{cases} \end{equation} \end{proposition} Observe that if a solution $z$ of the form \eqref{solution} is defined on some interval $[-r,p_1)$ and there is no $p_2>p_1$ such that a solution can be defined on $[-r,p_2)$, then $[-r,p_1)$ is called a \textit{maximal interval} of existence. In this work, the following hypotheses are assumed: \begin{enumerate} \item[\textbf{H1}] The following conditions hold: \begin{enumerate} \item[(i)] The function $g$ satisfies $g(0)=0$, and there exists $N_q > 0$ such that, for all $y,\ z \in PC_r^q$ and $t\in [-r,0]$, \begin{equation*} \|g(y)(t)-g(z)(t)\|_{\mathbb{R}^n} \leq N_{q} \|y(t)-z(t)\|_{(\mathbb{R}^n)^q}. \end{equation*} \item[(ii)] There exists a constant $L >0$ such that, for all $i\in I_N$, the functions $G_i$ satisfy $G_i(\cdot,0) = 0,$ and, if $\varphi_1,\varphi_2\in PC_{r\tau}$, for $t\in (t_i,s_i]$, then \begin{equation*} \|G_i(t,\varphi_1(t))-G_i(t,\varphi_2(t))\|_{\mathbb{R}^n}\leq L\|\varphi_1-\varphi_2\|,\quad\text{where}\quad L + N_{q} q < \frac{1}{2}.
\end{equation*} \end{enumerate} \item[\textbf{H2}] The function $f$ satisfies the following conditions: \begin{align*} \|f(t,\varphi_1)-f(t,\varphi_2)\|_{\mathbb{R}^n} &\leq K(\|\varphi_1\|, \|\varphi_2\| )\|\varphi_1-\varphi_2\|,\\ \|f(t,\varphi)\|_{\mathbb{R}^n} &\leq \Psi(\|\varphi\|), \end{align*} where $K: \overline{\mathbb{R}_{+}} \times \overline{\mathbb{R}_{+}} \longrightarrow \overline{\mathbb{R}_{+}}$ and $\Psi: \overline{\mathbb{R}_{+}} \longrightarrow \overline{\mathbb{R}_{+}}$ are continuous and non-decreasing functions of their arguments, and $\varphi, \varphi_1, \varphi_2 \in PC_{r}([-r,0];\mathbb{R}^n)$. \item[\textbf{H3}] The following relations hold for $\tau$ and $\rho >0$: \begin{enumerate} \item[(i)] $ MN_{q} q \left(\|\tilde{\phi}\|+\rho\right)+ M \tau \Psi\left(\|\tilde{\phi}\| +\rho \right) \leq \rho, $ \item[(ii)] $ML\left(\|\tilde{\phi}\|+\rho\right) +\| \alpha \|_{\mathbb{R}^n} + M \tau\Psi\left(\|\tilde{\phi}\|+\rho\right)\leq \rho, $ \item[(iii)] $\displaystyle L\left(\|\tilde{\phi}\|+\rho\right) + \|\beta\|_{\mathbb{R}^n} \leq \rho, $ \end{enumerate} where $\alpha, \beta \in \mathbb{R}^n$ are arbitrarily fixed, and the function $\tilde{\phi}$ is defined as: \begin{equation}\label{phi_tilde} \tilde{\phi}(t)= \begin{cases} \boldsymbol{U}(t,0)\phi(0), & t \in (0, t_1], \\ \alpha, & t \in \bigcup\limits_{i=1}^N (s_i,t_{i+1}],\\ \beta, & t \in\bigcup\limits_{i=1}^N (t_i,s_i],\\ \phi(t), &t \in [-r,0]. \end{cases} \end{equation} \item[\textbf{H4}] The following relations hold for $\tau$ and $\rho >0$: \begin{enumerate} \item[(i)] $M N_{q} q+ M \tau K\left(\|\tilde{\phi}\|+\rho,\|\tilde{\phi}\| +\rho \right)<1,$ \item[(ii)]$\displaystyle ML + M \tau K\left(\|\tilde{\phi}\|+\rho, \|\tilde{\phi}\|+\rho \right) < 1.$ \end{enumerate} \end{enumerate} Theorem \ref{Karakostas} states Karakostas' fixed-point theorem (see \cite{Karakostas2003, Leiva2017}). \begin{theorem}[Karakostas]\label{Karakostas} Let $\mathcal{P}$ and $\mathcal{Q}$ be Banach spaces, $D \subset \mathcal{P}$ a closed and convex subset, and $J:D \longrightarrow \mathcal{Q}$ a continuous compact operator. Let $F: D \times \overline{J(D)}\longrightarrow D$ be a continuous operator such that the family $\left\{F(\cdot, y): y\in \overline{J(D)}\right\}$ is equicontractive. Then, the equation $F(z, J(z))=z$ has a solution in $D$. \end{theorem} \section{Existence and Uniqueness of Solutions}\label{Existence and Uniqueness of Solutions} In this section, the proofs of the existence and uniqueness of the solution for system \eqref{eqExist} are presented. To apply Karakostas' fixed-point theorem, the operators $J$ and $F$ are defined, and the existence of a fixed point of equation \eqref{fp} on a subset of $PC_{r\tau}$ is proved. In this way, the problem of finding a solution of the form \eqref{solution} becomes a fixed-point problem. Consider the following continuous operators \begin{align*} J:PC_{r\tau}(\mathcal{J};\mathbb{R}^n) &\longrightarrow PC_{r\tau}(\mathcal{J};\mathbb{R}^n),\\ F: PC_{r\tau}(\mathcal{J};\mathbb{R}^n) \times PC_{r\tau}&(\mathcal{J};\mathbb{R}^n) \longrightarrow PC_{r\tau}(\mathcal{J};\mathbb{R}^n), \end{align*} and a fixed $\eta \in\mathbb{R}^n$.
For $y,\ z\in PC_{r\tau}$, \begin{equation}\label{OperatorJ} J(y)(t)=\begin{cases} \displaystyle \boldsymbol{U}(t,0)\left[\phi(0)-g(y_{\theta_1},y_{\theta_2},\dots, y_{\theta_q})(0)\right]\\ + \displaystyle\int_{0}^{t}\boldsymbol{U}(t,s)f(s,y_{s})ds, \quad & t \in (0,t_1],\\ \boldsymbol{U}(t,s_i)G_i(s_i,y(s_i)) \\ \displaystyle + \int_{s_i}^{t}\boldsymbol{U}(t,s)f(s,y_{s})ds,\quad & t \in (s_i,t_{i+1}],\ i\in I_N ,\\ \eta, \quad &t \in \bigcup\limits_{i=1}^N (t_i,s_i],\\ \phi(t), \quad &t \in [-r,0], \end{cases} \end{equation} \begin{equation}\label{OperatorF} F(z,y)(t)= \begin{cases} y(t) , \quad &t\in \bigcup\limits_{i=0}^N (s_i,t_{i+1}],\\ G_i (t,z(t)), \quad &t\in (t_i,s_{i}],\ i\in I_N ,\\ \phi(t)-g(z_{\theta_1},z_{\theta_2},\dots, z_{\theta_q})(t), \quad &t \in [-r,0]. \end{cases} \end{equation} From the definitions of $J$ and $F$, solving system \eqref{eqExist} is equivalent to solving the fixed-point equation \begin{equation} \label{fp} F(z,J(z)) = z,\quad z\in PC_{r\tau}. \end{equation} First, it is shown that $J$ is compact and that the set $\left\{F(\cdot, y)\ :\ y \in \overline{J(D_{\rho})}\right\}$ is equicontractive, where \begin{equation}\label{Dset} D_{\rho}=D_{\rho}(\tau, \phi):= \left\{\varphi \in PC_{r\tau}(\mathcal{J};\mathbb{R}^n)\ :\ \|\varphi - \tilde{\phi}\| \leq \rho\right\},\quad \text{ for } \rho >0. \end{equation} This set is closed and convex, and $\tilde{\phi}$ is given by \eqref{phi_tilde}; in this way, the hypotheses of Theorem \ref{Karakostas} will be satisfied. Lemma \ref{lemma} highlights the role of hypotheses \textbf{H1} and \textbf{H2} and how they feed into the main results. Theorems \ref{existence}, \ref{uniqueness} and \ref{prolongation} build on this foundation. \begin{lemma} \label{lemma} Let hypotheses \textbf{H1} and \textbf{H2} be satisfied. Then, the operators $J$ and $F$ satisfy the following assertions: \begin{enumerate} \item[(i)] $J$ is continuous. \item[(ii)] $J$ maps bounded sets onto bounded sets. \item[(iii)] $J$ maps bounded sets onto equicontinuous sets. \item[(iv)] $J$ is a compact operator. \item[(v)] The set $\left\{F(\cdot,y): y\in\overline{J(D_{\rho})}\right\}$ consists of equicontractive operators, with $D_{\rho}$ as in \eqref{Dset}. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate} \item[(i)] \textit{$J$ is continuous.}\vspace{.3cm} Taking $y,\ z \in PC_{r\tau}$, trivially, for $t\in[-r,0]$, \begin{equation*} \|J (z)(t)-J(y)(t)\|_{\mathbb{R}^n} =\|\phi(t)-\phi(t)\|_{\mathbb{R}^n}=0. \end{equation*} Thus, \begin{equation}\label{r0} \sup_{t\in [-r,0]} \|J (z)(t)-J(y)(t)\|_{\mathbb{R}^n}= 0. \end{equation} By \textbf{H1} and \textbf{H2}, for $t \in (0,t_1]$, the following estimate holds: \begin{equation*} \begin{split} \|J(z)(t)-J(y)(t)\|_{\mathbb{R}^n} & \leq M\|g(y_{\theta_1},\dots, y_{\theta_q})(0)-g(z_{\theta_1},\dots, z_{\theta_q})(0)\|_{\mathbb{R}^n}\\ & + M \int_{0}^{t}\|f(s,z_s)-f(s,y_s)\|_{\mathbb{R}^n}ds\\ & \leq MN_{q}\sum_{i=1}^q\|y_{\theta_i}-z_{\theta_i}\|\\ & + M \int_{0}^{t}K\left(\|z_s\|,\|y_s\|\right)\|z_s-y_s\|ds\\ & \leq MN_{q} q \|z-y\|+M t_1 K(\|z\|,\|y\|)\|z-y\|. \end{split} \end{equation*} Taking the supremum, \begin{equation}\label{r1} \sup_{t\in (0,t_1]}\|J(z)(t)-J(y)(t)\| \leq M\left[ N_{q} q+ t_1K(\|z\|,\|y\|)\right]\|z-y\|.
\end{equation} Now, for $i\in I_N$ and $t\in (s_i,t_{i+1}]$, \begin{equation*} \begin{split} \|J(z)(t)-J(y)(t) \|_{\mathbb{R}^n} & \leq M\|G_i(s_i,z(s_i))- G_i(s_i,y(s_i) )\|_{\mathbb{R}^n}\\ & + M \int_{s_i}^{t}\|f(s,z_s)-f(s,y_s)\|_{\mathbb{R}^n}ds\\ & \leq M L \|z(s_i)-y(s_i)\|_{\mathbb{R}^n}\\ & + M \int_{s_i}^{t}K(\|z_s\|,\|y_s\|)\|z_s-y_s\|ds\\ & \leq M L \|z-y\|+M \tau K(\|z\|,\|y\|)\|z-y\|. \end{split} \end{equation*} Thus, \begin{equation*} \sup_{t\in (s_i,t_{i+1}]}\|J(z)(t)-J(y)(t)\| \leq M\left[L + \tau K(\|z\|,\|y\|) \right]\|z-y\|. \end{equation*} Together with \eqref{r0} and \eqref{r1}, and since $J$ is constant on $\bigcup\limits_{i=1}^{N} (t_i,s_i]$, it follows that there exists $N_{y,z}>0$ such that \begin{equation*} \|J(z)-J(y)\|\leq N_{y,z}\|z-y\|. \end{equation*} Hence, $J$ is continuous; in fact, it is Lipschitz continuous on bounded sets.\vspace{.3cm} \item[(ii)] \textit{$J$ maps bounded sets onto bounded sets.}\vspace{.3cm} Let $R >0$ be arbitrary; it suffices to prove that there exists $d >0$ such that, for every $y \in B_{R} =\overline{B_{R}(0)}=\left\{ z \in PC_{r\tau}\ :\ \|z\| \leq R\right\}$, it follows that $\|J(y) \| \leq d$. For $t\in[-r,0]$, we have: \begin{equation*} \|J(y)(t) \|_{\mathbb{R}^n} = \|\phi(t)\|_{\mathbb{R}^n} \leq \|\phi\|=:d_0. \end{equation*} Let $y \in B_{R}$ and $t \in (0,t_1]$. \textbf{H2} yields \begin{equation*} \begin{split} \|J(y)(t) \|_{\mathbb{R}^n} & \leq \left\|\boldsymbol{U}(t,0)\left[\phi(0)-g(y_{\theta_1},\dots, y_{\theta_q})(0)\right]\right\|_{\mathbb{R}^n}\\ &+ \int_{0}^{t}\|\boldsymbol{U}(t,s)f(s,y_s)\|_{\mathbb{R}^n}ds\\ & \leq M \left(\|\phi(0)\|_{\mathbb{R}^n}+ N_{q} q\|y\|\right) + M t_1\Psi(\|y\|)\\ & \leq M\left(\|\phi(0)\|_{\mathbb{R}^n}+ N_{q} qR\right) + M t_1\Psi(R)=:d_1. \end{split} \end{equation*} Similarly, for each $i\in I_N$, if $t\in (s_i,t_{i+1}]$, then \begin{equation*} \begin{split} \|J(y)(t) \|_{\mathbb{R}^n} & \leq \|\boldsymbol{U}(t,s_i)G_i(s_i,y(s_i))\|_{\mathbb{R}^n} + \int_{s_i}^{t}\|\boldsymbol{U}(t,s)f(s,y_s)\|_{\mathbb{R}^n}ds\\ & \leq ML\|y\|+ M (t_{i+1}-s_i)\Psi(\|y\|)\\ & \leq M LR + M \tau \Psi(R)=:d_2. \end{split} \end{equation*} Finally, whenever $t\in (t_i,s_i]$, for $i\in I_N$, it follows that: \begin{equation*} \|J(y)(t)\|_{\mathbb{R}^n}=\|\eta\|_{\mathbb{R}^n}=:d_3. \end{equation*} Taking $d=\displaystyle\max_{0\leq i\leq3}\{d_i\}$, boundedness is proved.\vspace{.3cm} \item[(iii)] \textit{$J$ maps bounded sets onto equicontinuous sets.}\vspace{.3cm} Let $B_{R}$ be as in (ii), and let $y\in B_R$ be arbitrary.
For $0<\nu_1 <\nu_2\leq t_1$, the following estimate holds: \begin{equation}\label{eq3.a} \begin{split} \|J(y)(\nu_2)&-J(y)(\nu_1) \|_{\mathbb{R}^n}\\ &\leq \left\|\left[\boldsymbol{U}(\nu_2,0)-\boldsymbol{U}(\nu_1,0)\right]\left[\phi (0)-g(y_{\theta_1},\ldots,y_{\theta_q})(0)\right]\right\|_{\mathbb{R}^n} \\ &+\left\|\int_{0}^{\nu_2}\boldsymbol{U}(\nu_2,s)f(s,y_s)ds-\int_{0}^{\nu_1}\boldsymbol{U}(\nu_1,s)f(s,y_s)ds\right\|_{\mathbb{R}^n} \\ & \leq \left\|\boldsymbol{U}(\nu_2,0)-\boldsymbol{U}(\nu_1,0)\right\|\left(\| \phi(0)\|_{\mathbb{R}^n}+N_{q}\displaystyle\sum_{i=1}^q \|y_{\theta_i}(0)\| _{\mathbb{R}^n}\right) \\ & + \int_{0}^{\nu_1}\left\|\left(\boldsymbol{U}(\nu_2,s)-\boldsymbol{U}(\nu_1,s)\right)f(s,y_s)\right\|_{\mathbb{R}^n}ds \\ & + \int_{\nu_1}^{\nu_2}\|\boldsymbol{U}(\nu_2,s)f(s,y_s)\|_{\mathbb{R}^n}ds \\ & \leq \left\|\boldsymbol{U}(\nu_2,0)-\boldsymbol{U}(\nu_1,0)\right\| \left(\| \phi(0) \|_{\mathbb{R}^n}+N_{q}qR\right) \\ & + \Psi(R)\int_{0}^{\nu_1}\|\boldsymbol{U}(\nu_2,s)-\boldsymbol{U}(\nu_1,s)\|ds + M\Psi (R)(\nu_2-\nu_1). \end{split} \end{equation} Similarly, for each $i\in I_N$ and every $\nu_1,\nu_2$, with $s_i <\nu_1 <\nu_2 \leq t_{i+1}$, it follows that: \begin{equation} \begin{split} \|J(y)(\nu_2)&-J(y)(\nu_1) \|_{\mathbb{R}^n}\\ & \leq \|\boldsymbol{U}(\nu_2,s_i)-\boldsymbol{U}(\nu_1,s_i)\| \| G_i(s_i, y(s_i)) \|_{\mathbb{R}^n} \\ & + \left\|\int_{s_i}^{\nu_2} \boldsymbol{U}(\nu_2,s)f(s,y_s)ds-\int_{s_i}^{\nu_1}\boldsymbol{U}(\nu_1,s)f(s,y_s)ds\right\|_{\mathbb{R}^n} \\ & \leq \|\boldsymbol{U}(\nu_2,s_i)-\boldsymbol{U}(\nu_1,s_i)\| \, L\| y\| \\ & + \int_{s_i}^{\nu_1}\|\boldsymbol{U}(\nu_2,s)-\boldsymbol{U}(\nu_1,s)\| \Psi(\| y_s \|) ds + M \int_{\nu_1}^{\nu_2} \Psi(\| y_s \|) ds \\ & \leq \|\boldsymbol{U}(\nu_2,s_i)-\boldsymbol{U}(\nu_1,s_i)\| L R \\ &+ \Psi(R)\int_{s_i}^{\nu_1}\|\boldsymbol{U}(\nu_2,s)-\boldsymbol{U}(\nu_1,s)\|ds + M\Psi(R)(\nu_2-\nu_1). \label{eq3.b} \end{split} \end{equation} By \eqref{eq3.a} and \eqref{eq3.b}, the continuity and boundedness of $\boldsymbol{U}(t,s)$ yield that, as $\nu_2$ approaches $\nu_1$, $\|J(y)(\nu_2)-J(y)(\nu_1) \|_{\mathbb{R}^n}$ goes to zero, independently of $y$. Therefore, $J(B_{R})$ is equicontinuous on the set $\bigcup\limits_{i=0}^N (s_i,t_{i+1}]$. In the same fashion, equicontinuity on $[-r,0]$ and $\bigcup\limits_{i=1}^N (t_i,s_i]$ is obtained. Hence, the family of functions $J(B_{R})$ is equicontinuous on the interval $\mathcal{J}\backslash \{t_1,\ldots,t_N\}$.\vspace{.3cm} \item[(iv)] \textit{$J$ is a compact operator.}\vspace{.3cm} Let $B\subset PC_{r\tau}$ be a bounded subset, and let $\left\{\omega_{n}\right\}_{n\in\mathbb{N}}$ be a sequence in $J(B)$. Then, (ii) and (iii) imply that it is uniformly bounded and equicontinuous on $[-r, t_1]$. Note that $\left\{\omega_n|_{[-r,0]} \right\}_{n\in\mathbb{N}} = \{\phi \}$. The Arzelà--Ascoli theorem applied to $\left\{\omega_n|_{[0,t_1]} \right\}_{n\in\mathbb{N}}\subset C\left([0,t_1];\mathbb{R}^n\right)$ implies that there is a uniformly convergent subsequence $\left\{\omega^{1}_{n}\right\}_{n\in\mathbb{N}}$ on $[-r, t_1]$. Consider the sequence $\left\{\omega^{1}_{n}\right\}_{n\in\mathbb{N}}$ on the interval $[s_1, t_2]$. It is uniformly bounded and equicontinuous, and, as before, it has a convergent subsequence $\left\{\omega^{2}_{n}\right\}_{n\in\mathbb{N}}$ on $[s_1, t_2]$.
Therefore, a uniformly convergent subsequence $\left\{\omega^{2}_{n}\right\}_{n\in\mathbb{N}}$ of $\left\{ \omega_n \right\}_{n\in\mathbb{N}}$ on the interval $[-r,t_2]$ is obtained, since each $\omega^2_n$ has the same definition on $[t_1,s_1]$. Continuing this process on the intervals $[t_2,s_2],\,[s_2, t_3],\, [t_3, s_3], \ldots, [s_N, \tau]$, it is concluded that there is a subsequence $\left\{\omega^{N+1}_{n}\right\}_{n\in\mathbb{N}}$ of $\left\{ \omega_n\right\}_{n\in\mathbb{N}}$ that converges uniformly on $[-r, \tau]$. Thus, every sequence in $J(B)$ has a uniformly convergent subsequence, so the set $\overline{J(B)}$ is compact by the sequential characterization of compactness, and $J$ is a compact operator.\vspace{.3cm} \item[(v)] \textit{The set $\left\{F(\cdot, y)\ :\ y \in \overline{J(D_{\rho})}\right\}$ consists of equicontractive operators.}\vspace{.3cm} Let $\rho>0$, $y\in \overline{J(D_{\rho})}$, $\ x,\ z \in PC_{r\tau}$, and $t \in [-r, 0]$. Then, \textbf{H1} yields \begin{equation}\label{eq5.a} \begin{split} \|F(z, y)(t)-F(x, y)&(t) \|_{\mathbb{R}^n}\\ & \leq \left\|g(x_{\theta_1},\dots, x_{\theta_q})(t)-g(z_{\theta_1},\dots, z_{\theta_q})(t)\right\|_{\mathbb{R}^n} \\ & \leq N_{q} q\|z-x\|. \end{split} \end{equation} For each $i\in I_N$ and $t \in (t_i, s_i]$, it follows that: \begin{equation} \label{eq5.b} \begin{split} \|F(z, y)(t)-F(x, y)(t) \|_{\mathbb{R}^n} & \leq \|G_i(t,z(t))-G_i(t,x(t))\|_{\mathbb{R}^n} \\ & \leq L \|z-x \|. \end{split} \end{equation} Moreover, on the intervals $(s_i,t_{i+1}],\ i\in\{ 0\}\cup I_N$, it follows that: \begin{equation}\label{eq5.c} \|F(z, y)(t)-F(x, y)(t)\|_{\mathbb{R}^n} =\|y(t)-y(t)\|_{\mathbb{R}^n}=0. \end{equation} Combining \eqref{eq5.a}-\eqref{eq5.c}, the following estimate holds: \begin{equation*} \|F(z, y)-F(x, y) \| \leq\frac{1}{2}\|z-x \|. \end{equation*} Hence, $F$ is a contraction in its first argument, uniformly in $y \in \overline{J(D_{\rho})}$. \end{enumerate} \end{proof} \begin{theorem}\label{existence} Assume \textbf{H1} - \textbf{H3}. Then, problem \eqref{eqExist} has at least one solution on the interval $\mathcal{J}=[-r,\tau]$. \end{theorem} \begin{proof} For $\rho>0$, let $D_{\rho}$ be as in \eqref{Dset}, and define the operators $\widetilde{J}$ and $\widetilde{F}$ as: \begin{equation*} \widetilde{J}=J\big|_{D_{\rho}}:D_{\rho}\longrightarrow PC_{r\tau}(\mathcal{J};\mathbb{R}^n)\quad\text{and}\quad \widetilde{F}=F\big|_{D_{\rho}\times\overline{\widetilde{J}(D_{\rho})}}:D_{\rho}\times\overline{\widetilde{J}(D_{\rho})}\longrightarrow D_{\rho}. \end{equation*} By Lemma \ref{lemma}, $\widetilde{J}$ is continuous and compact, and the family $\left\{F(\cdot, y)\ :\ y \in \overline{J(D_{\rho})}\right\}$ is equicontractive. The continuity of $\widetilde{F}$ follows analogously. The goal is to prove that, indeed, $\widetilde{F}\left(D_{\rho}, \overline{\widetilde{J}(D_{\rho})}\right) \subset D_{\rho},$ so that the assumptions of Theorem \ref{Karakostas} are satisfied and the desired solution is obtained. Take an arbitrary $z \in D_{\rho}$. For $t\in[-r,0]$, we have \begin{equation} \label{eqT.a} \begin{split} \left\|\widetilde{F}\left(z,\widetilde{J}(z)\right)(t)-\tilde{\phi}(t)\right\|_{\mathbb{R}^n}&= \|g(z_{\theta_1},\ldots,z_{\theta_q})(t)\|_{\mathbb{R}^n} \\ &\leq N_q\sum_{j=1}^q\|z_{\theta_j}(t)\|_{\mathbb{R}^n} \\ &\leq MN_qq\|z\| \\ &\leq MN_qq(\|\tilde{\phi}\|+\rho) \leq \rho.
\end{split} \end{equation} Similarly, for $t \in (0,t_1]$, \begin{equation} \label{eqT.b} \begin{split} \left\|\widetilde{F}\left(z,\widetilde{J}(z)\right)(t)-\tilde{\phi}(t)\right\|_{\mathbb{R}^n} & \leq M \left\|g(z_{\theta_1},\dots, z_{\theta_q})(0)\right\|_{\mathbb{R}^n}\\ & + \int_{0}^{t}\left\|\boldsymbol{U}(t,s)f(s,z_s)\right\|_{\mathbb{R}^n} ds \\ & \leq MN_{q}\sum_{i=1}^q\|z_{\theta_i}\|+ M \int_{0}^{t} \|f(s,z_s)\|ds \\ & \leq MN_{q}q\|z\| + M t_1 \Psi(\|z\|) \\ & \leq MN_{q}q\left(\left\|\tilde{\phi}\right\|+\rho\right)+M \tau \Psi\left(\left\|\tilde{\phi}\right\|+\rho\right) \leq \rho. \end{split} \end{equation} Likewise, for $t \in (s_i,t_{i+1}],\ i\in I_N$, \begin{equation} \label{eqT.c} \begin{split} \left\|\widetilde{F}\left(z,\widetilde{J}(z)\right)(t)-\tilde{\phi}(t)\right\|_{\mathbb{R}^n} & \leq \|\boldsymbol{U}(t,s_i)G_i(s_i,z(s_i))- \alpha \|_{\mathbb{R}^n} \\ & + \int_{s_i}^{t}\left\|\boldsymbol{U}(t,s)f(s,z_s)\right\|_{\mathbb{R}^n} ds \\ & \leq M L\|z\|+\| \alpha\|_{\mathbb{R}^n} + M (t_{i+1}-s_i) \Psi(\|z\|) \\ & \leq M L\left(\left\|\tilde{\phi}\right\|+\rho\right)+ \|\alpha\|_{\mathbb{R}^n} + M \tau \Psi\left(\left\|\tilde{\phi}\right\|+\rho\right)\\ & \leq \rho. \end{split} \end{equation} Additionally, for $i\in I_N$, if $t \in (t_i,s_i]$, then \begin{equation} \label{eqT.d} \begin{split} \left\|\widetilde{F}\left(z,\widetilde{J}(z)\right)(t)-\tilde{\phi}(t)\right\|_{\mathbb{R}^n} & = \|G_i(t,z(t)) -\beta \|_{\mathbb{R}^n} \\ & \leq L\|z\|+\| \beta\|_{\mathbb{R}^n} \\ &\leq L\left(\left\|\tilde{\phi}\right\|+\rho\right) +\| \beta\|_{\mathbb{R}^n} \leq \rho. \end{split} \end{equation} Thus, equations \eqref{eqT.a} through \eqref{eqT.d} give \begin{equation*} \left\|\widetilde{F}\left(z,\widetilde{J}(z)\right)-\tilde{\phi}\right\|=\sup_{t\in\mathcal{J}}\left\|\widetilde{F}\left(z,\widetilde{J}(z)\right)(t)-\tilde{\phi}(t)\right\|_{\mathbb{R}^n}\leq\rho. \end{equation*} Applying Theorem \ref{Karakostas} to $\widetilde{J}$ and $\widetilde{F}$, it follows that the equation $ \widetilde{F}\left(z,\widetilde{J}(z)\right) = z $ has a solution $z\in D_{\rho}\subset PC_{r\tau}$, which, by Proposition \ref{characterization}, is a solution of system \eqref{eqExist}. \end{proof} The following theorem proves the uniqueness of the solution to system \eqref{eqExist}. \begin{theorem}\label{uniqueness} Assuming \textbf{H1} - \textbf{H4}, system \eqref{eqExist} has a unique solution on $\mathcal{J}=[-r, \tau]$. \end{theorem} \begin{proof} Consider two solutions $z_1$ and $z_2$ of \eqref{eqExist}, which satisfy \eqref{solution}. Let $\rho>0$ be such that $z_1,\ z_2\in D_{\rho}$. Then, for $t \in [-r,0]$, the following estimate holds: \begin{equation} \label{eqT2.0} \begin{split} \|z_1(t)-z_2(t)\|_{\mathbb{R}^n} & \leq \left\|g\left(z_{2_{\theta_1}},\dots, z_{2_{\theta_q}}\right)(t)-g\left(z_{1_{\theta_1}},\dots, z_{1_{\theta_q}}\right)(t)\right\|_{\mathbb{R}^n} \\ & \leq N_q q \|z_1-z_2\| \\ & \leq \frac{1}{2}\|z_1-z_2\|.
\end{split} \end{equation} If $t \in (0,t_1]$, \textbf{H2} implies \begin{equation} \label{eqT2.1} \begin{split} \|z_1(t)-z_2(t)\|&_{\mathbb{R}^n} \\ & \leq \left\|\boldsymbol{U}(t,0)\right\|\left\|g\left(z_{2_{\theta_1}},\ldots, z_{2_{\theta_q}}\right)(0)-g\left(z_{1_{\theta_1}},\ldots, z_{1_{\theta_q}}\right)(0)\right\|_{\mathbb{R}^n} \\ & + \int_{0}^{t}\left\|\boldsymbol{U}(t,s)\left( f\left(s,z_{1_{s}}\right)-f(s,z_{2_s})\right)\right\|ds \\ & \leq \left[M N_{q} q+ M t_1K\left(\|z_1\|,\|z_2\|\right)\right]\|z_1-z_2\| \\ &\leq \left[M N_{q} q+ M \tau K\left(\|\tilde{\phi}\|+\rho,\|\tilde{\phi}\| +\rho\right)\right] \|z_1-z_2\|, \end{split} \end{equation} and, for $t \in (s_{i},t_{i+1}]$, $i\in I_N$, \begin{equation} \label{eqT2.2} \begin{split} \|z_1(t)-z_2(t)\|_{\mathbb{R}^n} & \leq \|\boldsymbol{U}(t,s_i)\| \|G_i\left(s_i,z_1(s_i)\right) - G_i\left(s_i,z_2(s_i)\right) \|_{\mathbb{R}^n} \\ & + \int_{s_i}^{t}\left\|\boldsymbol{U}(t,s)\left(f(s,z_{1_{s}})-f(s,z_{2_{s}})\right)\right\|ds \\ & \leq \left[ M L + M (t_{i+1}-s_i)K\left(\|z_1\|, \|z_2\| \right) \right] \|z_1-z_2\| \\ & \leq \left[ M L + M \tau K\left(\left\|\tilde{\phi}\right\|+\rho, \left\|\tilde{\phi}\right\|+\rho \right) \right] \|z_1-z_2\|. \end{split} \end{equation} Lastly, if $t \in (t_i,s_i],\ i\in I_N$, then \begin{equation} \label{eqT2.3} \begin{split} \|z_1(t)-z_2(t)\|_{\mathbb{R}^n} & \leq \left\|G_i(t,z_1(t)) - G_i(t,z_2(t))\right\|_{\mathbb{R}^n} \\ & \leq L \|z_1-z_2\| \\ & \leq \frac{1}{2} \|z_1-z_2\|. \end{split} \end{equation} Therefore, taking the supremum over $t\in\mathcal{J}$ in equations \eqref{eqT2.0}-\eqref{eqT2.3} and using \textbf{H4}, there exists a constant $m$, with $0<m<1$, such that: \begin{equation*} \begin{split} \|z_1-z_2\| & =\sup_{t\in \mathcal{J}}\|z_1(t)-z_2(t)\|_{\mathbb{R}^n}\leq m\|z_1-z_2\|. \end{split} \end{equation*} Hence, $z_1=z_2$. \end{proof} Finally, the next theorem and corollary extend the system solution to $[-r, +\infty)$. \begin{theorem}\label{prolongation} Assume \textbf{H1} - \textbf{H4} are satisfied, and consider the solution $z$ over a maximal interval $[-r,p_1)$. Then, either $p_1= + \infty$, or there exists a sequence $\left\{\tau_n\right\}_{n\in\mathbb{N}}$ converging to $p_1$ such that: \begin{equation}\label{seq} \lim_{n\rightarrow\infty} z(\tau_n) = \tilde{z}\in \partial B_{\left\|\tilde{\phi}\right\|+\rho}\subset \mathbb{R}^n . \end{equation} \end{theorem} \begin{proof} Assume $p_1<+\infty$, and suppose that there exists a neighborhood $V$ of the boundary of $B_{\left\|\tilde{\phi}\right\|+\rho}$ such that if $t\in [p_2,p_1)$, with $s_{N}<p_2<p_1$, then $z(t)\notin V$. Without loss of generality, assume that $ V=B_{\left\|\tilde{\phi}\right\|+\rho} \backslash E$, with $E \subset B_{\left\|\tilde{\phi}\right\|+\rho}$ a closed set and $z(t)\in E$, for $t \in [p_2,p_1)$. Consider $p_2\leq s<t<p_1$. It follows that: \begin{equation*} \begin{split} \|z(t)-z(s)\|_{\mathbb{R}^n} & \leq \left\|\boldsymbol{U}(t,s_N)-\boldsymbol{U}(s,s_N)\right\|\left\|G_N(s_N,z(s_N))\right\|_{\mathbb{R}^n}\\ & + \int_{s}^t\left\|\boldsymbol{U}(t,\xi)\right\|\left\|f(\xi,z_\xi)\right\|_{\mathbb{R}^n} d\xi\\ & + \int_{s_N}^s\left\|\boldsymbol{U}(t,\xi)-\boldsymbol{U}(s,\xi)\right\|\left\|f(\xi,z_\xi)\right\|_{\mathbb{R}^n} d\xi\\ &\leq \left\|\boldsymbol{U}(t,s_N)-\boldsymbol{U}(s,s_N)\right\|L\|z\| + M(t-s)\Psi\left(\left\|\tilde{\phi}\right\|+\rho\right)\\ & + \Psi\left(\left\|\tilde{\phi}\right\|+\rho\right) \int_{s_N}^s \left\|\boldsymbol{U}(t,\xi)-\boldsymbol{U}(s,\xi)\right\| d\xi .
\end{split} \end{equation*} Then, the uniform continuity of the evolution operator yields \begin{equation*} \lim_{s,t\rightarrow p_1^-} \|z(t)-z(s)\|_{\mathbb{R}^n} = 0, \end{equation*} so $z$ satisfies the Cauchy criterion at $p_1$. Thus, there exists $\tilde{z}\in \mathbb{R}^n$ such that $z(p_1^-)=\tilde{z}\in E$, and a solution can be defined at $p_1$ by extending $z$ by continuity, which contradicts the maximality of $[-r,p_1)$. Hence, either $p_1=+\infty$, or a sequence $\{\tau_n\}_{n\in\mathbb{N}}$ exists and fulfills \eqref{seq}. \end{proof} \begin{corollary} \label{Corollary_prol} Under the assumptions of Theorem \ref{prolongation}, suppose that: \begin{equation*} \|f(t,\varphi)\|_{\mathbb{R}^n} \leq h(t)\left(1+\|\varphi (0)\|_{\mathbb{R}^n}\right), \end{equation*} for $\varphi \in PC_{r}$ and $h: \overline{\mathbb{R}_+}\longrightarrow\overline{\mathbb{R}_+}$ continuous. Then, there exists a unique solution to problem \eqref{eqExist} on $[-r,+\infty)$. \end{corollary} \begin{proof} Consider $t\in [s_N, p_1)$. It follows that: \begin{equation*} \begin{split} \|z(t)\|_{\mathbb{R}^n} &\leq \left\|\boldsymbol{U} (t,s_N)\right\|\left\|G_N(s_N, z(s_N))\right\|_{\mathbb{R}^n} +\int_{s_N}^t\left\|\boldsymbol{U}(t,s)\right\|\left\|f(s,z_s)\right\|ds\\ &\leq ML\|z(s_N)\|_{\mathbb{R}^n} + \int_{s_N}^{p_1}M h(s)ds+ \int_{s_N}^{t} M h(s)\|z (s)\|_{{\mathbb{R}^n}} ds . \end{split} \end{equation*} Hence, Gr{\"o}nwall's inequality yields \begin{equation*} \|z(t)\|_{\mathbb{R}^n}\leq M\left(L\|z(s_N)\|_{\mathbb{R}^n} + \int_{s_N}^{p_1} h(s)ds\right)\exp\left( \int_{s_N}^{p_1} M h(s)ds\right). \end{equation*} Thus, the solution remains bounded on $[s_N,p_1)$, and by Theorem \ref{prolongation} it extends to $[-r,+\infty)$, as desired. \end{proof} \section{Example} \label{Example} In this section, particular definitions of the functions $G_i$, $g$ and $f$, $i\in I_N$, exemplify the results of this work. To this end, consider an arbitrary finite-dimensional continuous operator $\boldsymbol{A}$, such that $\boldsymbol{A}(t)$ is an $n\times n$ matrix. Given $N, R \in \mathbb{N}$, the non-linear term, $f:\overline{\mathbb{R}_+} \times PC_r([-r,0];\mathbb{R}^n) \longrightarrow \mathbb{R}^n$, the functions describing the non-instantaneous impulses, $G_i: (t_{i},s_{i}] \times \mathbb{R}^n \longrightarrow \mathbb{R}^n$, and the non-local conditions, $ g:PC_r^q([-r,0];(\mathbb{R}^n)^q)\longrightarrow PC_r([-r,0];\mathbb{R}^n)$, are given as follows, for $z\in PC_{r\tau}$ and $i\in I_N$: \begin{equation*} \begin{split} f(t, \varphi)=\frac{1}{R}\left(\begin{array}{ccc} (\varphi_{1}(-r))^2 \\ (\varphi_2(-r))^2 \\ \vdots\\ (\varphi_n(-r))^2 \end{array}\right),\quad G_i(t,z(t))=\frac{\cos{(s_i)}}{R} \left(\begin{array}{ccc} \sin{(z_{1}(t))} \\ \sin{(z_{2}(t))} \\ \vdots\\ \sin{(z_{n}(t))} \end{array}\right), \end{split} \end{equation*} \begin{equation*} \begin{split} g(\varphi)=\sum_{i=1}^{q} \frac{1}{R} \varphi_i. \end{split} \end{equation*} Clearly, $g$ satisfies $g(0)=0$, and if $t\in [-r,0]$, then \begin{equation*} \begin{split} \|g(y)(t)-g(z)(t)\|_{\mathbb{R}^n} \leq \frac{1}{R} \|y(t)-z(t)\|_{(\mathbb{R}^n)^q}, \quad \text{ for all } y,\ z \in PC_r^q. \end{split} \end{equation*} The functions $G_i$, $i\in I_N$, satisfy $G_i(\cdot,0) = 0,$ and, for any $y,\ z\in PC_{r\tau}$, given $t\in (t_i,s_i]$, \begin{equation*} \begin{split} \left\|G_i(t,y(t))-G_i(t,z(t))\right\|_{\mathbb{R}^n} & \leq \frac{|\cos(s_i)|}{R} \left(\sum_{k=1}^n |\sin(y_k(t))-\sin(z_k(t))|^2\right)^{1/2}\\ & \leq \frac{|\cos(s_i)|}{R}\|y-z\|.
\end{split} \end{equation*} For $R$ sufficiently large, \begin{equation*} \displaystyle \frac{|\cos(s_i)|}{R} + \frac{q}{R} < \frac{1}{2}. \end{equation*} Finally, given $t\geq 0$, $y,\ z\in PC_{r\tau}$, and $\varphi \in PC_{r}$, the function $f$ satisfies \begin{equation*} \begin{split} \|f(t,y_t)-f(t,z_t)\|_{\mathbb{R}^n}& \leq \frac{1}{R}\left(\sum_{k=1}^{n} \ \left(|y_k(t-r)|+|z_k(t-r)|\right)^{2} \right)^{1/2} \|y-z\|\\ &\leq K(\|y\|,\|z\|)\|y-z\|, \end{split} \end{equation*} and \begin{equation*} \|f(t,\varphi)\|_{\mathbb{R}^n}=\frac{1}{R}\left\|\left(\begin{array}{ccc} (\varphi_{1}(-r))^2 \\ (\varphi_2(-r))^2 \\ \vdots\\ (\varphi_n(-r))^2 \end{array}\right)\right\|_{\mathbb{R}^n}\leq\Psi(\|\varphi\|), \end{equation*} where $K$ and $\Psi$ are continuous non-decreasing functions. Hence, hypotheses \textbf{H1} and \textbf{H2} are satisfied. For $R$ sufficiently large, conditions \textbf{H3} and \textbf{H4} are similarly verified. Then, by Theorem \ref{uniqueness} and Corollary \ref{Corollary_prol}, system \eqref{eqExist}, with the foregoing definitions, admits a unique solution on $[-r,+\infty)$. \section{Final Remarks} \label{Final Remarks} In this work, the existence and uniqueness of solutions for semi-linear systems of non-autonomous differential equations with non-instantaneous impulses, delay, and non-local conditions acting simultaneously were proved. The technique was based on Karakostas' fixed-point theorem: the existence problem was transformed into a fixed-point problem for a suitable operator equation, which in turn guided the choice of hypotheses needed to meet the requirements of that theorem. Observe that this work can be generalized to infinite-dimensional Banach spaces. However, proving the equicontinuity of the relevant operator families and the compactness of the main operator must be treated carefully before applying a fixed-point theorem; the strongly continuous evolution family of the non-autonomous system must be assumed compact to ensure uniform continuity away from zero. A different version of the Arzelà--Ascoli theorem must be considered on the corresponding function spaces. Finally, the controllability of these systems is part of our ongoing research. In particular, the exact and approximate controllability of this system can be proven using Rothe's fixed-point theorem \cite{Leiva2014} and the techniques developed in \cite{Bashirov2015}. \bibliographystyle{\mmnbibstyle}
\section{Introduction} A mixture of experts (ME) model \citep{jordan1994hierarchical} provides a flexible framework for expressing the distribution of a response variable conditional on a set of covariates. It expresses the continuous or discrete response variable using a finite mixture model with covariate-dependent component models and mixture weights; the component densities are from the exponential family. ME models have gained widespread use in applications; see the survey in \citet[Section IX]{yuksel2012twenty} for an extensive list of application areas and \citet{gormley2018mixtures} for a concise introduction to ME models. ME models have been extended in many ways. \citet{hunter2012semiparametric} relax the parametric assumption of the component models and propose a semi-parametric inference methodology for mixtures of linear regression models. \citet{wood2002bayesian} and \citet{geweke2007smoothly} allow smoothing spline components and propose a Bayesian inference methodology. \citet{villani2012generalized} extend the component models to density functions outside the exponential family and use Bayesian variable selection in all parts of the model. \citet{rasmussen2002infinite} propose an infinite mixture of Gaussian process experts. \citet{quiroz2013dynamic} allow the component membership indicators to change over time. While an ME model for a conditional density is very flexible, it is still too restrictive for many time series applications where the conditional density tends to change over time. Our application on predicting software faults in continuously upgraded software in Section \ref{subsec:Software-trouble-reports} is a typical industrial example, where the non-standard distribution of the response variable changes over time as the software matures. A flexible probabilistic model which can adapt to changes over time is crucial for online prediction in industrial applications where the predictive distribution can be used for decision making under uncertainty, for example in deciding whether or not to release a software upgrade at any point in time. See also \citet{weigend1995nonlinear} for additional examples, such as the prediction of daily electricity demand. Motivated by this, we extend the class of ME models in \citet{villani2012generalized} to allow the parameters in the mixture components and the mixture weights to change over time. To avoid overfitting, we use a Bayesian regularization approach where the parameters follow random walk prior processes \citep{fahrmeir2010bayesian}. We call this class of models dynamic mixtures of experts, since they extend the dynamic generalized linear models in \citet{west1985dynamic}. Sequential Monte Carlo (SMC) is a popular class of algorithms to estimate the posterior distribution in time-varying parameter models \citep{del2006sequential,doucet2000sequential}. SMC methods have been proposed for standard finite mixture models with static parameters, using a fixed number of components \citep{chopin2002sequential} and an unknown number of components \citep{fearnhead2004particle}. However, ME models are often richly parameterized, with regression coefficients in both the components and the mixture weights. Off-the-shelf SMC methods based on commonly used proposal distributions such as the bootstrap filter will therefore suffer from particle degeneracy. Our main contribution is a fast and efficient SMC algorithm based on a proposal distribution tailored to the class of dynamic ME models.
The proposal exploits the model structure, in which the potentially high-dimensional regression coefficients influence the conditional density only through the low-dimensional linear predictors transformed by the link functions. This makes it possible to combine the linear Bayes method \citep{west1985dynamic} and ideas from the EM algorithm for mixtures to first update the linear predictors using a low-dimensional Laplace approximation and then propagate their updated mean and variance to the high-dimensional regression coefficients; see Section \ref{subsec:Proposal-distribution-based-1}. A key advantage of our procedure is that it is possible to write general computer code where a user can easily add a new model by supplying the first and second derivatives of the component densities with respect to the linear predictors. Inference on the number of components and the discount factor is performed using the log predictive score, a marginal likelihood-based model comparison criterion that is less sensitive to the prior \citep{geweke2007smoothly,villani2009regression}. The rest of the paper is organized as follows. Section \ref{sec:The model} introduces the dynamic mixture of experts model and the prior processes. Section \ref{sec:Inference} presents the SMC algorithm based on a proposal distribution from linear Bayes theory. Section \ref{sec:Applications-and-simulations} presents an industrial application to online prediction of faults in a large-scale software project where allowing parameters to evolve over time considerably improves predictive performance. Section \ref{sec:Applications-and-simulations} also explores the properties of the inference method on simulated data. \section{The dynamic mixture of experts\label{sec:The model}} \subsection{Dynamic mixture of experts} Let $D_{j}=(y_{j},\mathbf{\tilde{x}}_{j})$ represent data from a time-dependent process observed at different time points $j=1,\ldots,J$, where $y_{j}$ denotes the univariate response variable and $\mathbf{\tilde{x}}_{j}=(\tilde{x}_{j}^{(1)},\ldots,\tilde{x}_{j}^{(P)})^{'}$ is a covariate vector of dimension $P$. The $D_{j}$ may contain only one observation, as in standard time series applications, or it may be a data batch containing several observations, as in the software upgrade process described in Section \ref{sec:Applications-and-simulations}. We propose the dynamic mixture of experts model \begin{equation} f_{j}\left(y_{j}|\mathbf{\tilde{x}}_{j},\boldsymbol{\omega}_{j},\boldsymbol{\lambda}_{j}\right)=\sum_{k=1}^{K}\omega_{jk}\left(\mathbf{z}_{j}\right)f_{jk}\left(y_{j}|\lambda_{jk}\left(\mathbf{x}_{j}\right)\right),\label{eq:Dynamic model} \end{equation} for online (real-time) prediction of $y_{j}$ given the value of the covariate $\tilde{\boldsymbol{x}}_{j}$; $\mathbf{z}_{j}$ and $\mathbf{x}_{j}$ are subsets of $\mathbf{\tilde{x}}_{j}$ of dimensions $q$ and $p$, respectively. $\lambda_{jk}(\mathbf{x}_{j})$ and $\omega_{jk}(\mathbf{z}_{j})$, $k=1,\ldots,K$, are the time-varying covariate-dependent parameter and mixture weight functions of the $k^{th}$ component model, respectively, with $\boldsymbol{\lambda}_{j}=(\lambda_{j1}(\mathbf{x}_{j}),\ldots,\lambda_{jK}(\mathbf{x}_{j}))$ and $\boldsymbol{\omega}_{j}=(\omega_{j2}(\mathbf{z}_{j}),\ldots,\omega_{jK}(\mathbf{z}_{j}))$. The covariates in the mixture weights can be distinct from the covariates in the mixture components. The component models depend on the structure of the response variable and are typically density functions from the exponential family, e.g.
Gaussian if $y_{j}$ is continuous, Poisson, binomial or negative binomial for count data, or multinomial if $y_{j}$ is categorical. However, as in \citet{villani2012generalized}, we allow the component models to be any well-behaved density functions, not necessarily limited to the exponential family, and the model parameter may be multidimensional with each of its components connected to the covariates through its own link function. The component model parameters $\lambda_{jk}=\lambda_{jk}(\mathbf{x}_{j})$, $k=1,\ldots,K$ are connected to their linear predictors through a link function $g$ as follows \begin{equation} \eta_{jk}=g\left(\lambda_{jk}\right)=\mathbf{x}_{j}^{'}\boldsymbol{\beta}_{jk},\label{eq:Mixture expected value} \end{equation} where $\mathbf{x}_{j}=(1,x_{j}^{(1)},...,x_{j}^{(p)})'$ and $\boldsymbol{\beta}_{jk}=(\beta_{jk}^{(0)},\ldots,\beta_{jk}^{(p)})^{'}$. For component models with more than one parameter, Eq. \ref{eq:Dynamic model} can be extended by linking each parameter to its own linear predictor; see \citet{villani2012generalized}. Furthermore, the mixture weights depend on the covariate $\boldsymbol{z}_{j}$ through the multinomial logit link function \begin{equation} \omega_{jk}=\frac{\exp\left(\psi_{jk}\right)}{1+\sum_{l=2}^{K}\exp\left(\psi_{jl}\right)},\label{eq: mixture weights link function} \end{equation} with \begin{equation} \psi_{jk}=\mathbf{z}_{j}^{'}\boldsymbol{\theta}_{jk},\,\,\,k=2,\ldots,K\label{eq:mixing weight predictor} \end{equation} where $\mathbf{z}_{j}=(1,z_{j}^{(1)},...,z_{j}^{(q)})'$, and $\boldsymbol{\theta}_{jk}=(\theta_{jk}^{(0)},\ldots,\theta_{jk}^{(q)})^{'}$. In the following, we refer to $\boldsymbol{\beta}_{jk}$ and $\boldsymbol{\theta}_{jk}$ as the regression \emph{coefficients} in the component distributions and mixture weights, respectively, and to $\boldsymbol{\lambda}_{j}=(\lambda_{j,1},\ldots,\lambda_{j,K})$ and $\boldsymbol{\omega}_{j}=(\omega_{j2},\ldots,\omega_{jK})$ as the model \emph{parameters}. To simplify notation, we stack all the regression coefficients at time $j$ into one vector $\boldsymbol{\gamma}_{j}=(\boldsymbol{\beta}_{j}^{'},\boldsymbol{\theta}_{j}^{'})^{'}$, where $\boldsymbol{\beta}_{j}^{'}=(\boldsymbol{\beta}_{j1},\ldots,\boldsymbol{\beta}_{jK})$ and $\boldsymbol{\theta}_{j}=(\boldsymbol{\theta}_{j2},\ldots,\boldsymbol{\theta}_{jK})$, and the linear predictors for all components into $\boldsymbol{\rho}_{j}=(\boldsymbol{\eta}_{j}^{'},\boldsymbol{\psi}_{j}^{'})^{'}$. \subsection{Prior process\label{subsec:Prior-process}} To allow the model parameters to vary over time, we let the regression coefficients follow the random walk process: \begin{equation} \boldsymbol{\gamma}_{j}=\boldsymbol{\gamma}_{j-1}+\boldsymbol{\varepsilon}_{j},\,\,\,\,\,\,\,\,\,\,\,\boldsymbol{\varepsilon}_{j}\sim N\left(0,\,\mathbf{U}_{j}\right),\label{eq:Prior distribution} \end{equation} with a predefined initial distribution $p(\boldsymbol{\gamma}_{0})$. This prior process is equivalent to a Bayesian regularization that penalizes large fluctuations in the regression coefficients, to avoid overfitting and poor predictive performance \citep{fahrmeir2011bayesian,fahrmeir2004penalized}. In some applications it is sufficient to set $\boldsymbol{U}_{j}=\boldsymbol{U}$, which is a special case of the formulation in (\ref{eq:Prior distribution}). However, it is more useful for online inference to let $\boldsymbol{U}_{j}$ change over time. Fully Bayesian inference requires a prior for each $\boldsymbol{U}_{j}$.
Common priors are: i) an inverse-Wishart density for a full matrix $\boldsymbol{U}_{j}$ \citep{gamerman1998markov}, or ii) an inverse-gamma density \citep{fahrmeir2004penalized} or a random walk process \citep{lang2002function} on the elements of a diagonal $\boldsymbol{U}_{j}$. An alternative to placing a prior on each $\boldsymbol{U}_{j}$ is to approximate each $\boldsymbol{U}_{j}$ recursively using the discount factor approach in \citet{west1985dynamic}. Let $\boldsymbol{C}_{j}$ denote the posterior covariance of $\boldsymbol{\gamma}_{j}$ and set $\boldsymbol{U}_{j}=(\alpha^{-1}-1)\boldsymbol{C}_{j-1}$ for a given discount factor $0<\alpha<1$. A value of $\alpha$ close to one shrinks $\boldsymbol{U}_{j}$ towards zero, leading to very little variation in $\boldsymbol{\gamma}_{j}$ over time; a value of $\alpha$ close to zero gives the regression parameters more flexibility and allows the model to adapt to local fluctuations in the parameters, such as change points or level shifts. The discount factor approach has some advantages compared to a fully Bayesian approach. It is computationally much faster, as it avoids extra simulations from the posterior of $\boldsymbol{U}_{j}$. The discount factor conveniently controls the smoothness of the parameter evolution through time with a single parameter, and it allows building static and dynamic models in a unified way just by changing the value of $\alpha$. Following \citet{liu2001combined}, models with $0.95\leq\alpha<1$ are essentially static, and those with $\alpha<0.95$ are dynamic. We will use the discount factor approach in the applications section (Section \ref{sec:Applications-and-simulations}). Our inference methodology also applies to a fully Bayesian approach in which $\boldsymbol{U}_{j}$ is estimated in an additional step. The ideas in Section \ref{subsec:Proposal-distribution-based-1} can be integrated in the framework of particle Markov chain Monte Carlo \citep{andrieu2010particle} and SMC2 \citep{chopin2013smc2} methods, which allow inference in models with both fixed and time-varying (latent) parameters. \section{Inference, prediction and model comparison\label{sec:Inference}} The model described in Section \ref{sec:The model} is a state-space model, which allows us to exploit the vast literature \citep{gordon1993novel,pitt1999filtering,doucet2000sequential,doucet2006efficient,doucet2009tutorial,klaas2012toward} available on sequential Monte Carlo (SMC). SMC methods are particularly appropriate for sampling from the online posterior and real-time predictive distributions. The presented model often has a large number of parameters, and off-the-shelf SMC algorithms with simple proposal distributions, such as the bootstrap filter, will therefore perform poorly. This section describes our proposed algorithm for sampling from the online posterior using a particle filter tailored specifically to the class of dynamic mixture of experts models. We also present the model comparison criterion used to select the number of mixture components and the discount factor.
\subsection{The marginal particle filter approximation of the online posterior distribution} Particle filter algorithms use Bayes' theorem to make prior-to-posterior updates, $p\left(\boldsymbol{\gamma}_{j}|\,D_{1:j-1}\right)\rightarrow p\left(\boldsymbol{\gamma}_{j}|\,D_{1:j}\right)$, sequentially in time using a \emph{prediction step} \begin{equation} p\left(\boldsymbol{\gamma}_{j}|\,D_{1:j-1}\right)=\int p\left(\boldsymbol{\gamma}_{j}|\,\boldsymbol{\gamma}_{j-1}\right)p\left(\boldsymbol{\gamma}_{j-1}|\,D_{1:j-1}\right)d\boldsymbol{\gamma}_{j-1},\label{eq:Prior predictive-1} \end{equation} followed by a \emph{measurement update step} \begin{align} p\left(\boldsymbol{\gamma}_{j}|\,D_{1:j}\right) & \propto f_{j}\left(y_{j}|\mathbf{\tilde{x}}_{j},\boldsymbol{\gamma}_{j}\right)p\left(\boldsymbol{\gamma}_{j}|\,D_{1:j-1}\right),\label{Filtering distribution-1} \end{align} where $f_{j}(\cdot)$ is the response density defined in (\ref{eq:Dynamic model}) and $D_{1:j}$ denotes the data observed until time $j$. Therefore, $p\left(\boldsymbol{\gamma}_{j}|\,D_{1:j-1}\right)$ is the prior at time $j$ in the sense that it is prior to observing the data $D_{j}$. The challenging part of this sequential inference approach is that the integral in (\ref{eq:Prior predictive-1}) is only tractable for linear Gaussian models \citep{west1985dynamic,gordon1993novel}. One way to sample from (\ref{Filtering distribution-1}) is to use particle filter algorithms. These algorithms use a \textit{proposal distribution} $q$ to estimate (\ref{eq:Prior predictive-1}) sequentially by importance sampling \citep{gordon1993novel}, which approximates the target distribution $p$ empirically by the \textit{importance weights}, $w\propto\nicefrac{p}{q}$, computed at a finite sample of \textit{particles} -- parameter values proposed from $q$. Particle filtering is very attractive for real-time prediction; it does not require a scan of the full dataset every time a new observation becomes available. However, a well-known problem with particle methods is that the importance weights of only a few particles tend to be substantially different from zero, leading to very few effective samples. To mitigate this weight degeneracy issue, a resampling step is added, in which particles with low weights are discarded and replaced by copies of the particles with high weights. Various resampling strategies are available in the literature \citep{gordon1993novel,liu1998sequential,carpenter1999improved,fearnhead2003line}, and a comparison of these resampling schemes is provided by \citet{douc2005comparison}. Here, we are interested in the online predictive distribution $p\left(y_{j}|\mathbf{\tilde{x}}_{j},\boldsymbol{y}_{1:j-1}\right)$, which only depends on the filtering density up to time $j-1$ \citep{doucet2000sequential}. We use the marginal particle filter of \citet{klaas2012toward}, which relies on the approximation \begin{equation} \hat{p}\left(\boldsymbol{\gamma}_{j}|\,D_{1:j-1}\right)=\sum_{m=1}^{M}w_{j-1}^{m}p\left(\boldsymbol{\gamma}_{j}|\,\boldsymbol{\gamma}_{j-1}^{m}\right),\label{approximate prior predictive} \end{equation} of the prior in Eq. \ref{eq:Prior predictive-1}; $\{\boldsymbol{\gamma}_{j-1}^{m},w_{j-1}^{m}\}_{m=1}^{M}$ is the particle approximation of $p(\boldsymbol{\gamma}_{j-1}|\,D_{1:j-1})$.
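To make this approximation concrete, the following Python sketch (the function name and interface are illustrative only, not part of our implementation) evaluates $\log\hat{p}\left(\boldsymbol{\gamma}_{j}|\,D_{1:j-1}\right)$ under the Gaussian random walk transition in (\ref{eq:Prior distribution}):

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_mixture_prior(gamma, particles, log_weights, U):
    """Log of the particle approximation to p(gamma_j | D_{1:j-1}): a weighted
    mixture of random-walk transition densities N(gamma_{j-1}^m, U_j) centred
    at the particles from time j - 1."""
    log_comps = np.array([
        multivariate_normal.logpdf(gamma, mean=p, cov=U) for p in particles
    ])
    a = log_weights + log_comps
    a_max = a.max()
    # log-sum-exp for numerical stability when many weights are tiny
    return a_max + np.log(np.exp(a - a_max).sum())
```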
Given new data $D_{j}$, the marginal particle filter uses the proposal distribution \[ q(\boldsymbol{\gamma}_{j}|\,D_{1:j})=\sum_{h=1}^{M}w_{j-1}^{h}q(\boldsymbol{\gamma}_{j}|\,\boldsymbol{\gamma}_{j-1}^{h},\,D_{1:j}) \] to generate the particle approximation $\{\boldsymbol{\gamma}_{j}^{m},w_{j}^{m}\}_{m=1}^{M}$ of $p\left(\boldsymbol{\gamma}_{j}|\,D_{1:j}\right)$. The importance weights \begin{equation} w_{j}^{m}\propto\frac{f_{j}\left(y_{j}|\mathbf{\tilde{x}}_{j},\boldsymbol{\gamma}_{j}^{m}\right)\sum_{h=1}^{M}w_{j-1}^{h}p\left(\boldsymbol{\gamma}_{j}^{m}|\,\boldsymbol{\gamma}_{j-1}^{h}\right)}{q\left(\boldsymbol{\gamma}_{j}^{m}|\,D_{1:j}\right)}\label{Importance weights} \end{equation} depend on the likelihood $f_{j}(y_{j}|\mathbf{\tilde{x}}_{j},\boldsymbol{\gamma}_{j})$, expressed using the covariates and the regression coefficients rather than the model parameters as in (\ref{eq:Dynamic model}). Although the marginal particle filter is appealing, its computational cost is $O(M^{2})$. To reduce the computational cost, in the next section we suggest using linear Bayes methods \citep{west1985dynamic} to construct a proposal that is tailored to the true posterior, which is crucial for particle methods in high-dimensional parameter spaces. \subsection{A computationally fast proposal distribution for high-dimensional marginal particle filters\label{subsec:Proposal-distribution-based-1}} \citet{west1985dynamic} develop a linear Bayes method \citep{goldstein2007bayes} for dynamic generalized linear models with recursions for the posterior mean and covariance over time, making no assumptions on the distributional form of the posterior. \citet{ravines2007efficient} use these recursive moments to design a multi-move proposal for MCMC targeting the joint smoothing posterior in dynamic generalized linear models. We combine the linear Bayes method in \citet{west1985dynamic} with ideas from the EM algorithm \citep[Ch.~9]{bishop2006pattern} to design a proposal distribution $q\left(\boldsymbol{\gamma}_{j}|D_{1:j}\right)$ targeting the filtering density $p(\boldsymbol{\gamma}_{j}|\,D_{1:j})$ in dynamic mixture of experts models. The proposed method allows general mixture components outside of the exponential family with any twice differentiable link function. Similar to Eq. 2.8 in \citet{west1985dynamic}, we can write the joint posterior of the regression coefficients and the linear predictors as \begin{equation} p(\boldsymbol{\gamma}_{j},\boldsymbol{\rho}_{j}\vert D_{1:j})=p(\boldsymbol{\rho}_{j}\vert D_{1:j})p(\boldsymbol{\gamma}_{j}\vert\boldsymbol{\rho}_{j},D_{1:j-1}),\label{eq:posteriorBetaTheta} \end{equation} where we recall that $\boldsymbol{\rho}_{j}=(\boldsymbol{\eta}_{j}^{'},\boldsymbol{\psi}_{j}^{'})^{'}$ contains the linear predictors in all components and mixture weights. The second factor in (\ref{eq:posteriorBetaTheta}) does not condition on $D_{j}$ since $\boldsymbol{\gamma}_{j}$ only enters the likelihood function through the scalar-valued linear predictors, $\eta_{jk}=\mathbf{x}_{j}^{'}\boldsymbol{\beta}_{jk}$, $k=1,\ldots,K$, and $\psi_{jk}=\mathbf{z}_{j}^{'}\boldsymbol{\theta}_{jk}$, $k=2,\ldots,K$. Our proposal is tailored to the posterior $p\left(\boldsymbol{\gamma}_{j}|D_{1:j}\right)$ by the following steps: \begin{enumerate} \item Approximate the prior $p(\boldsymbol{\gamma}_{j}\vert D_{1:j-1})$ using a Gaussian with mean and covariance computed from particles at time $j-1$.
\item Obtain the second factor in (\ref{eq:posteriorBetaTheta}) by conditioning $p(\boldsymbol{\gamma}_{j}\vert D_{1:j-1})$ on the linear restrictions $\boldsymbol{\rho}_{j}$. \item Propose from $q(\boldsymbol{\gamma}_{j}|D_{1:j})=N(\boldsymbol{\mu}_{j},\boldsymbol{H}_{j})$, where $\boldsymbol{\mu}_{j}$ and $\boldsymbol{H}_{j}$ are obtained from the law of iterated expectation and the law of total variance on (\ref{eq:posteriorBetaTheta}) using a Gaussian approximation of $p(\boldsymbol{\rho}_{j}\vert D_{1:j})$. \end{enumerate} To give the details of the three steps, define $\boldsymbol{\eta}_{j}=\boldsymbol{X}_{j}\boldsymbol{\beta}_{j}$ where $\boldsymbol{X}_{j}=I_{K}\otimes\boldsymbol{x}_{j}^{'}$ and $\boldsymbol{\psi}_{j}=\boldsymbol{Z}_{j}\boldsymbol{\theta}_{j}$ where $\boldsymbol{Z}_{j}=I_{K}\otimes\boldsymbol{z}_{j}^{'}$, so that we can compactly write $\boldsymbol{\rho}_{j}=\boldsymbol{W}_{j}\boldsymbol{\gamma}_{j}$ where $\boldsymbol{\gamma}_{j}=(\boldsymbol{\beta}_{j}^{'},\boldsymbol{\theta}_{j}^{'})^{'}$, $\boldsymbol{\rho}_{j}=(\boldsymbol{\eta}_{j}^{'},\boldsymbol{\psi}_{j}^{'})^{'}$ and \[ \boldsymbol{W}_{j}=\left(\begin{array}{cc} I_{K}\otimes\boldsymbol{x}_{j}^{'} & \boldsymbol{0}\\ \boldsymbol{0} & I_{K}\otimes\boldsymbol{z}_{j}^{'} \end{array}\right). \] We can use particles from time step $j-1$ to approximate $\boldsymbol{\gamma}_{j}|D_{1:j-1}\sim N(\boldsymbol{\bar{\gamma}}_{j},\Sigma_{\boldsymbol{\gamma}_{j}})$, where \begin{equation} \boldsymbol{\bar{\gamma}}_{j}=\sum_{m=1}^{M}w_{j-1}^{m}\boldsymbol{\gamma}_{j-1}^{m},\,\,\,\,\,\Sigma_{\boldsymbol{\gamma}_{j}}=\mathbf{U}_{j}+\sum_{m=1}^{M}w_{j-1}^{m}\left(\boldsymbol{\gamma}_{j-1}^{m}-\boldsymbol{\bar{\gamma}}_{j}\right)\left(\boldsymbol{\gamma}_{j-1}^{m}-\boldsymbol{\bar{\gamma}}_{j}\right)^{'},\label{eq: prior moments} \end{equation} and then obtain the mean and covariance of the second factor of (\ref{eq:posteriorBetaTheta}) by conditioning this distribution on the linear constraints $\boldsymbol{\rho}_{j}=\boldsymbol{W}_{j}\boldsymbol{\gamma}_{j}$ \citep[eq. 2.28-2.29]{rue2005gaussian}, yielding \begin{align*} E\left[\boldsymbol{\gamma}_{j}\vert\boldsymbol{\rho}_{j},D_{1:j-1}\right] & =\bar{\boldsymbol{\boldsymbol{\gamma}}}_{j}+\Sigma_{\boldsymbol{\gamma}_{j}\rho_{j}}\Sigma_{\rho_{j}}^{-1}(\boldsymbol{\rho}_{j}-\bar{\boldsymbol{\rho}}_{j}),\\ V\left[\boldsymbol{\gamma}_{j}\vert\boldsymbol{\rho}_{j},D_{1:j-1}\right] & =\Sigma_{\boldsymbol{\gamma}_{j}}-\Sigma_{\boldsymbol{\gamma}_{j}\rho_{j}}\Sigma_{\rho_{j}}^{-1}\Sigma_{\rho_{j}\boldsymbol{\gamma}_{j}}, \end{align*} where $\bar{\boldsymbol{\rho}}_{j}=\boldsymbol{W}_{j}\bar{\boldsymbol{\gamma}}_{j}$, $\Sigma_{\rho_{j}}=\boldsymbol{W}_{j}\Sigma_{\boldsymbol{\gamma}_{j}}\boldsymbol{W}_{j}^{'}$, $\Sigma_{\rho_{j}\boldsymbol{\gamma}_{j}}=\boldsymbol{W}_{j}\Sigma_{\boldsymbol{\gamma}_{j}}$, and $\Sigma_{\boldsymbol{\gamma}_{j}\rho_{j}}=\Sigma_{\boldsymbol{\gamma}_{j}}\boldsymbol{W}_{j}^{'}$.
Now, the proposal is $q\left(\boldsymbol{\gamma}_{j}|D_{1:j}\right)=N\left(\boldsymbol{\mu}_{j},\boldsymbol{H}_{j}\right)$ with moments obtained from applying the law of iterated expectations and the law of total variance to (\ref{eq:posteriorBetaTheta}), \begin{align} \boldsymbol{\mu}_{j} & =E_{\boldsymbol{\rho}_{j}}\left[E\left[\boldsymbol{\gamma}_{j}\vert\boldsymbol{\rho}_{j},D_{1:j-1}\right]\vert D_{1:j}\right]=\bar{\boldsymbol{\gamma}}_{j}+\Sigma_{\boldsymbol{\gamma}_{j}\rho_{j}}\Sigma_{\rho_{j}}^{-1}\left(E_{\boldsymbol{\rho}_{j}}(\boldsymbol{\rho}_{j}\vert D_{1:j})-\bar{\boldsymbol{\rho}}_{j}\right)\label{eq: Posterior mean of reg coef}\\ \boldsymbol{H}_{j} & =E_{\boldsymbol{\rho}_{j}}\left[V\left[\boldsymbol{\gamma}_{j}\vert\boldsymbol{\rho}_{j},D_{1:j-1}\right]\vert D_{1:j}\right]+V_{\boldsymbol{\rho}_{j}}\left[E\left[\boldsymbol{\gamma}_{j}\vert\boldsymbol{\rho}_{j},D_{1:j-1}\right]\vert D_{1:j}\right]\label{eq: Posterior var of reg coef}\\ & =\Sigma_{\boldsymbol{\gamma}_{j}}-\Sigma_{\boldsymbol{\gamma}_{j}\rho_{j}}\left(\Sigma_{\rho_{j}}^{-1}-\Sigma_{\rho_{j}}^{-1}V_{\boldsymbol{\rho}_{j}}(\boldsymbol{\rho}_{j}\vert D_{1:j})\Sigma_{\rho_{j}}^{-1}\right)\Sigma_{\rho_{j}\boldsymbol{\gamma}_{j}}.\nonumber \end{align} It remains to compute $E_{\boldsymbol{\rho}_{j}}(\boldsymbol{\rho}_{j}\vert D_{1:j})$ and $V_{\boldsymbol{\rho}_{j}}(\boldsymbol{\rho}_{j}\vert D_{1:j})$. A second order Taylor expansion of $\log p(\boldsymbol{\rho}_{j}\vert D_{1:j})$ around $\bar{\boldsymbol{\rho}}_{j}$ leads to the following approximations \citep{doucet2000sequential}: \[ V_{\boldsymbol{\rho}_{j}}(\boldsymbol{\rho}_{j}\vert D_{1:j})=\left[-\left.\nabla\nabla_{\boldsymbol{\rho}_{j}}\log p(\boldsymbol{\rho}_{j}\vert D_{1:j})\right|_{\boldsymbol{\rho}_{j}=\bar{\boldsymbol{\rho}}_{j}}\right]^{-1}, \] \begin{equation} E_{\boldsymbol{\rho}_{j}}(\boldsymbol{\rho}_{j}\vert D_{1:j})=\bar{\boldsymbol{\rho}}_{j}+V_{\boldsymbol{\rho}_{j}}(\boldsymbol{\rho}_{j}\vert D_{1:j})\left.\nabla_{\boldsymbol{\rho}_{j}}\log p(\boldsymbol{\rho}_{j}\vert D_{1:j})\right|_{\boldsymbol{\rho}_{j}=\bar{\boldsymbol{\rho}}_{j}}.\label{eq:linear predictor estimator} \end{equation} Letting $\pi_{jk}=\log\omega_{jk}f_{jk}\left(y_{j}|\lambda_{jk}\right)$, the gradient can be computed by direct calculation, \begin{align*} \nabla_{\boldsymbol{\rho}_{j}}\log p(\boldsymbol{\rho}_{j}\vert D_{1:j}) & =\sum_{k=1}^{K}\mathrm{Pr}(s_{j}=k\vert D_{1:j})\nabla_{\boldsymbol{\rho}_{j}}\pi_{jk}-\Sigma_{\rho_{j}}^{-1}(\boldsymbol{\rho}_{j}-\bar{\boldsymbol{\rho}}_{j}), \end{align*} where $\mathrm{Pr}(s_{j}=k\vert D_{1:j})\propto\omega_{jk}f_{jk}(y_{j}\vert D_{1:j-1},\lambda_{jk})$ are the posterior probabilities of the observation $y_{j}$ coming from component $k$ (see \citealp[ch. 9.3]{bishop2006pattern} for similar expressions for the EM algorithm). Similarly, the Hessian is \begin{align*} \nabla\nabla_{\boldsymbol{\rho}_{j}}\log p(\boldsymbol{\rho}_{j}\vert D_{1:j}) & =\sum_{k=1}^{K}\mathrm{Pr}(s_{j}=k\vert D_{1:j})\nabla\nabla_{\boldsymbol{\rho}_{j}}\pi_{jk}-\Sigma_{\rho_{j}}^{-1}. \end{align*} Note that the component parameters $\eta_{jk}$ and $\psi_{jk}$ enter additively in $\log\omega_{jk}f_{jk}\left(y_{j}|\lambda_{jk}\right)$; therefore, their gradients can be computed separately. 
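Continuing the sketch above, the proposal moments $\boldsymbol{\mu}_{j}$ and $\boldsymbol{H}_{j}$ can be assembled from the Taylor-approximated moments of $\boldsymbol{\rho}_{j}$. Here \texttt{grad\_log\_post} and \texttt{hess\_log\_post} are hypothetical callables returning the gradient and Hessian expressions above; none of these names appear in the paper.
\begin{verbatim}
def proposal_moments(mean, cov, W, grad_log_post, hess_log_post):
    """Gaussian proposal moments (mu_j, H_j) via a Laplace-style
    approximation of p(rho_j | D_{1:j}) around rho_bar = W @ mean."""
    rho_bar = W @ mean
    S_rho = W @ cov @ W.T
    S_gr = cov @ W.T
    S_rho_inv = np.linalg.inv(S_rho)
    # Second-order Taylor expansion at rho_bar (one Newton step):
    V_rho = np.linalg.inv(-hess_log_post(rho_bar))    # approx. covariance of rho_j
    E_rho = rho_bar + V_rho @ grad_log_post(rho_bar)  # approx. mean of rho_j
    gain = S_gr @ S_rho_inv
    mu = mean + gain @ (E_rho - rho_bar)
    H = cov - S_gr @ (S_rho_inv - S_rho_inv @ V_rho @ S_rho_inv) @ S_gr.T
    return mu, H
\end{verbatim}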
If the batches $D_{j}$ contain several observations, then $\boldsymbol{\mu}_{j}$ and $\boldsymbol{H}_{j}$ can be computed by iterating the procedure described above over the observations in the batch; see \citet{gamerman1991dynamic} for a similar approach. Starting with the first observation, we proceed through the following iterations: \begin{enumerate} \item Compute $\boldsymbol{\mu}_{j}^{(i)}$ and $\boldsymbol{H}_{j}^{(i)}$ from Eq. \ref{eq: Posterior mean of reg coef} and Eq. \ref{eq: Posterior var of reg coef}. \item Set $\bar{\boldsymbol{\gamma}}_{j}=\boldsymbol{\mu}_{j}^{(i)}$ and $\Sigma_{\boldsymbol{\gamma}_{j}}=\boldsymbol{H}_{j}^{(i)}$. \item Return to step 1 and repeat until the last observation in the batch has been processed. \end{enumerate} \subsection{Model comparison and prediction \label{subsec:Model-comparison-and}} Our model depends on the choice of the number of mixture components $K$ and the discount factor $\alpha$. We propose to infer those quantities using a sequential version of the marginal likelihood \citep{doucet2000sequential}, \begin{equation} p\left(y_{1:J}\right)=p\left(y_{1}\right)\prod_{j=2}^{J}p\left(y_{j}|y_{1:j-1}\right),\label{eq:sequential marginal likelihood-1} \end{equation} where \begin{align} p\left(y_{j}|y_{1:j-1}\right) & =\int f_{j}\left(y_{j}|\mathbf{\tilde{x}}_{j},\boldsymbol{\gamma}_{j}\right)p\left(\boldsymbol{\gamma}_{j}|\,D_{1:j-1}\right)d\boldsymbol{\gamma}_{j}.\label{eq:likelihood estimate-1} \end{align} Given a sample of $M$ particles $\{\boldsymbol{\gamma}_{j-1}^{m}\}_{m=1}^{M}$ and the corresponding importance weights $\{w_{j-1}^{m}\}_{m=1}^{M}$, the predictive distribution (\ref{eq:likelihood estimate-1}) is approximated by the sum \[ \hat{p}\left(\boldsymbol{y}_{j}|\boldsymbol{y}_{1:j-1}\right)=\sum_{m=1}^{M}w_{j-1}^{m}f_{j}\left(y_{j}|\mathbf{\tilde{x}}_{j},\boldsymbol{\gamma}_{j}^{m}\right), \] where the $\boldsymbol{\gamma}_{j}^{m}$ are generated from the transition distribution $p(\boldsymbol{\gamma}_{j}|\boldsymbol{\gamma}_{j-1}^{m})$. However, it is well known that the initial factors in the marginal likelihood (\ref{eq:sequential marginal likelihood-1}) are sensitive to the initial prior for the parameters \citep{villani2009regression}. One can therefore base model comparison on the last $J^{\star}$ time periods only; averaged on the log scale, this is often referred to as the log predictive score: \[ LPS=\frac{1}{J^{*}}\sum_{j=J-J^{*}+1}^{J}\log\hat{p}\left(\boldsymbol{y}_{j}|\boldsymbol{y}_{1:j-1}\right). \] For the models proposed in Section \ref{sec:Applications-and-simulations}, we set $J^{\star}$ to $\nicefrac{J}{2}$ since we assume that the particle approximation to the marginal likelihood should be stable after the first $\nicefrac{J}{2}$ time periods. Computing the $LPS$ for different combinations of the number of mixture components $K$ and the discount factor $\alpha$ makes it possible to select good values for these model specification parameters. \section{Application and simulation study\label{sec:Applications-and-simulations}} \subsection{Predicting faults in large-scale software projects\label{subsec:Software-trouble-reports}} Large-scale industrial software projects are continually upgraded to fix bugs and/or to add new features. The changes made in the source code in each upgrade are measured by code complexity metrics; these include the number of commits (NC), which represents how many changes were made from the previous to the current version, the number of changed modules (CM), and the number of faults corrected (NFC) per line of code. 
The latter is the ratio of the total number of faults corrected and the total number of code lines excluding comments. Furthermore, we have metrics representing the proportion of files written in C++ (CF) and the proportion of files written in Java (JF). The last metric considered is the aggregate file complexity (FC), which is a score calculated based on the number of control flows in the code, e.g. if, for and while statements. Our aim is to build an online prediction model for the number of faults in a planned upgrade release. We use a software trouble reports dataset from a large-scale project at a major telecom company; the dataset contains a history of $1801$ upgrades that were created during a period of $650$ days (roughly $21$ months). The response variable is the number of faults $y_{t}$ reported on the upgrade created at time $t$, and the covariate vector $\mathbf{\tilde{x}}_{t}$ represents the six code complexity metrics. These metrics, excluding CF, JF and NFC, are integers ranging from zero up to six-digit values for some of the metrics. Therefore, to reduce the scale variations, we apply the $\log(1+\tilde{x}_{t})$ transformation to the integer complexity metrics; after this transformation the highest covariate value is no greater than $15$. A common way of modeling time-varying parameters in continuous time models is to partition the time into short consecutive intervals $[\tau_{0},\tau_{1}),\ldots,[\tau_{J-1},\tau_{J})$ where $\tau_{0}=\min(t)<\tau_{1}<\ldots<\tau_{J-1}<\tau_{J}=\max(t)$, and allow the parameters to change between intervals; see e.g. \citet{fahrmeir2011bayesian}. Because time is partitioned into several intervals, the original data is split into a sequence of batches; see Figure \ref{fig:Upgrade-packages-grouped}. \bigskip{} \begin{figure}[H] \subfloat[]{\includegraphics[scale=0.6]{\string"data_process\string".png}} \bigskip{} \subfloat[]{\includegraphics[scale=0.65]{\string"grouped_data\string".png} } \caption{{\small{}(a) The upgrading process. An upgrade (UP) at time $t$ is created by making $x_{t}$ changes on the previous version of the software (created at time $t-1$) and $y_{t}$ faults are reported on the version created at time $t$. (b) Process of grouping upgrades according to intervals partitioning the training time.}\label{fig:Upgrade-packages-grouped}} \end{figure} All data points observed at $t\in[\tau_{j-1},\tau_{j}),\,j=1,\ldots,J$, are collected into the data batch $D_{j}=\left\{ \mathbf{y}_{j},\mathbf{\tilde{X}}_{j}\right\} $ consisting of a vector of response observations $\boldsymbol{y}_{j}=(y_{1j},\ldots,y_{N_{j},j})^{\prime}$ and a set of covariates $\mathbf{\tilde{X}}_{j}=(\mathbf{\tilde{x}}_{1j},\ldots,\tilde{\mathbf{x}}_{N_{j},j})^{\prime}$, where $N_{j}$ is the number of data points in the interval. We use the index $i$ for the temporal order of the data points within each interval. Hence $y_{ij}$ is the value of the response variable for the $i^{th}$ data point in interval $j$, and $\mathbf{\tilde{x}}_{ij}$ is its covariate vector. To model these data, we apply the model (\ref{eq:Dynamic model}) with Poisson regression component models and the initial distribution $\boldsymbol{\gamma}_{0}\sim N(0,\,I)$, where $I$ is the identity matrix; see Appendix \ref{sec:Examples-of-component} for details. The time is partitioned into $30$-day intervals, which leads to $21$ intervals in total. Experimentation with interval lengths of one week, two weeks and three months did not improve the LPS. 
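For illustration, a minimal sketch of this preprocessing follows; the real dataset is proprietary, so the column names and value ranges below are placeholder assumptions.
\begin{verbatim}
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Placeholder stand-in for the proprietary upgrade data.
df = pd.DataFrame({
    "day":    rng.integers(0, 650, 1801),        # creation day of each upgrade
    "NC":     rng.integers(0, 1_000_000, 1801),  # number of commits
    "CM":     rng.integers(0, 100_000, 1801),    # number of changed modules
    "faults": rng.poisson(5.0, 1801),            # response y_t
})

# log(1 + x) transform for the integer-valued complexity metrics.
for col in ["NC", "CM"]:
    df[col] = np.log1p(df[col])

# Partition the 650-day window into 30-day intervals and group into batches D_j.
df["batch"] = df["day"] // 30
batches = [group for _, group in df.groupby("batch", sort=True)]  # D_1, ..., D_J
\end{verbatim}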
Table \ref{tab:Log-predictive-score} compares different fitted models based on their LPS. The table displays various dynamic models, with discount factor $\alpha=0.5$, and their static versions, where $\alpha=0.99$. The models in the table have different variables in the component models and the number of commits (NC) as the only covariate $\mathbf{z}$ in the mixture weights. To select $\mathbf{z}$, we fix the covariates in the component models to $\tilde{X}$ and, starting from $\mathbf{z}=\tilde{X}$, we eliminate variables in $\mathbf{z}$ systematically based on the LPS. \begin{center} \begin{table}[H] {\footnotesize{}\caption{{\small{}LPS for different models fitted to the software trouble reports data.}{\footnotesize{} }{\small{}Results are based on a posterior sample of $20{,}000$ particles}{\footnotesize{}.\label{tab:Log-predictive-score} }} }{\footnotesize\par} \centering{}{\footnotesize{}}% \begin{tabular}{llccccc} \hline \multirow{2}{*}{\textbf{\footnotesize{}Component model}} & \multirow{2}{*}{\textbf{\footnotesize{}Type}} & & \multicolumn{4}{c}{\textbf{\footnotesize{}Number of components}}\tabularnewline \cline{4-7} & & & \textbf{\footnotesize{}1} & \textbf{\footnotesize{}2} & \textbf{\footnotesize{}3} & \textbf{\footnotesize{}4}\tabularnewline \hline \multirow{2}{*}{{\footnotesize{}CM}} & \textbf{\footnotesize{}Dynamic} & & \textbf{\footnotesize{}$\boldsymbol{-211.40}$} & \textbf{\footnotesize{}$\boldsymbol{-160.78}$} & \textbf{\footnotesize{}$\boldsymbol{-160.05}$} & {\footnotesize{}$\boldsymbol{-173.35}$}\tabularnewline \cline{2-7} & {\footnotesize{}Static} & & {\footnotesize{}$-216.88$} & {\footnotesize{}$-189.28$} & {\footnotesize{}$-185.96$} & {\footnotesize{}$-184.99$}\tabularnewline \hline \multirow{2}{*}{{\footnotesize{}CM + FC}} & \textbf{\footnotesize{}Dynamic} & & {\footnotesize{}$\boldsymbol{-211.27}$} & {\footnotesize{}$\boldsymbol{-158.36}$} & {\footnotesize{}$\boldsymbol{-157.28}$} & {\footnotesize{}$\boldsymbol{-158.20}$}\tabularnewline \cline{2-7} & {\footnotesize{}Static} & & {\footnotesize{}$-216.43$} & {\footnotesize{}$-188.83$} & {\footnotesize{}$-189.25$} & {\footnotesize{}$-186.62$}\tabularnewline \hline \multirow{2}{*}{{\footnotesize{}CM+FC+NC}} & \textbf{\footnotesize{}Dynamic} & & {\footnotesize{}$\boldsymbol{-211.34}$} & {\footnotesize{}$\boldsymbol{-159.61}$} & {\footnotesize{}$\boldsymbol{-164.38}$} & {\footnotesize{}$\boldsymbol{-168.09}$}\tabularnewline \cline{2-7} & {\footnotesize{}Static} & & {\footnotesize{}$-217.67$} & {\footnotesize{}$-196.02$} & {\footnotesize{}$-189.36$} & {\footnotesize{}$-185.50$}\tabularnewline \hline \multirow{2}{*}{{\footnotesize{}CM+FC+NC+NFC}} & \textbf{\footnotesize{}Dynamic} & & {\footnotesize{}$\boldsymbol{-211.77}$} & {\footnotesize{}$\boldsymbol{-161.57}$} & {\footnotesize{}$\boldsymbol{-173.17}$} & {\footnotesize{}$\boldsymbol{-178.00}$}\tabularnewline \cline{2-7} & {\footnotesize{}Static} & & {\footnotesize{}$-217.70$} & {\footnotesize{}$-193.96$} & {\footnotesize{}$-189.46$} & {\footnotesize{}$-185.45$}\tabularnewline \hline \multirow{2}{*}{{\footnotesize{}CM+FC+NC+NFC+JF}} & \textbf{\footnotesize{}Dynamic} & & {\footnotesize{}$\boldsymbol{-211.01}$} & {\footnotesize{}$\boldsymbol{-161.34}$} & {\footnotesize{}$\boldsymbol{-167.58}$} & {\footnotesize{}$\boldsymbol{-175.22}$}\tabularnewline \cline{2-7} & {\footnotesize{}Static} & & {\footnotesize{}$-219.38$} & {\footnotesize{}$-194.23$} & {\footnotesize{}$-185.18$} & {\footnotesize{}$-182.80$}\tabularnewline \hline 
\multirow{2}{*}{{\footnotesize{}CM+FC+NC+NFC+JF+CF}} & \textbf{\footnotesize{}Dynamic} & & {\footnotesize{}$\boldsymbol{-210.91}$} & {\footnotesize{}$\boldsymbol{-165.12}$} & {\footnotesize{}$\boldsymbol{-171.83}$} & {\footnotesize{}$\boldsymbol{-176.71}$}\tabularnewline \cline{2-7} & {\footnotesize{}Static} & & {\footnotesize{}$-217.76$} & {\footnotesize{}$-193.95$} & {\footnotesize{}$-187.96$} & {\footnotesize{}$-186.54$}\tabularnewline \hline \end{tabular}{\footnotesize\par} \end{table} \par\end{center} Table \ref{tab:Log-predictive-score} shows that dynamic models outperform static models, with a difference in LPS between $5$ and $8$ for single component models and well above $30$ for several of the multicomponent models. The best static model has an LPS of $-182.80$, compared to $-157.28$ for the best dynamic model. There is a very large jump in LPS when going from one to two components, in particular for the dynamic versions. While two components seem to be sufficient for the dynamic models, the static models require more components. The dynamic model CM+FC with two components is selected for further analysis since adding more complexity gives no significant increase in LPS. Figure \ref{fig:Predictive distribution} displays the predictive distribution for the one- and two-component versions of the selected model at three time points: $j=1$, $j=9$ and $j=20$. The predictive distribution at time point $j$ is constructed using the data batch $D_{j+1}$ as test set. Clearly, the predictive distribution varies over time; there is a very large shift of predictive probability mass toward a smaller number of faults as time evolves. The two-component model adapts well to these dynamic variations in the data and gives very impressive predictions on the test data, while the one-component version does not perform as well, agreeing with the LPS in Table \ref{tab:Log-predictive-score}. To investigate the efficiency of the proposed inference methodology, we fit the selected two-component dynamic Poisson model using a particle filter with $1000$ particles, which is at least an order of magnitude smaller than what can easily be afforded in real applications. We replicate the particle filter independently $100$ times using different seeds. Figure \ref{fig:EvolutionPredDensity} shows that the variability of the predictive distribution over the 100 seeds is small; the figure also includes the predictive distribution from a single run with $100,000$ particles to represent the ground truth. This shows that the proposed method is very efficient and that even a small number of particles gives adequate numerical precision for most applications. \begin{figure}[H] \begin{singlespace} \noindent \begin{centering} \includegraphics[scale=0.85]{predictive_distribution_selected_model} \par\end{centering} \end{singlespace} \caption{{\small{}The evolution of the predictive distribution of the one- and two-component versions of the dynamic CM+FC model fitted to the software fault data at three time points $j=1$, $j=9$, and $j=20$. For each interval $j$, the empirical distribution of the data is computed using the out-of-sample test data $D_{j+1}$.}{\footnotesize{}\label{fig:Predictive distribution}}} \end{figure} \bigskip{} \begin{figure}[H] \begin{centering} \includegraphics[scale=0.85]{prediction_different_time_points} \par\end{centering} \caption{{\small{}Illustrating graphically the efficiency of the proposed inference methodology. 
The shaded area represents the $2.5\%$ and $97.5\%$ quantiles of the predictions from $100$ independent runs of the particle filter with $1000$ particles. The blue curve represents the prediction obtained from a particle filter with $100,000$ particles. The predictions are based on the two-component CM+FC dynamic Poisson model fitted to the software faults data.}{\footnotesize{}\label{fig:EvolutionPredDensity}}} \end{figure} Table \ref{tab:Log-predictive-score} clearly shows that an important part of the time variation in the predictive distribution stems from the time-varying parameters. Figures \ref{fig:Evolution of the parameter} and \ref{fig:evolution parameter mixing wgt} illustrate, respectively, the posterior mean evolution of the regression coefficients, with 95\% highest probability density (HPD) intervals, in the component models and in the mixing weights of the selected two-component Poisson model. The HPD intervals are computed pointwise at each time point. These figures show clear evidence that the parameters in both the component models and the mixing weights change over time. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.8]{parameter_evolution_for_component_densities} \par\end{centering} \caption{{\small{}Online posterior mean and pointwise 95\% HPD intervals for the regression coefficients in the component models of the two-component CM+FC dynamic Poisson model fitted to the software faults data. The columns represent mixture components and the rows represent the log intercept, followed by the coefficients of the covariates.}{\footnotesize{} \label{fig:Evolution of the parameter}}} \end{figure} \begin{figure}[H] \begin{centering} \includegraphics[scale=0.9]{parameter_evolution_mixture_weights} \par\end{centering} \caption{{\small{}Online posterior mean and pointwise 95\% HPD intervals for the regression coefficients in the mixture weights of the two-component CM+FC dynamic Poisson model fitted to the software faults data.}{\footnotesize{}\label{fig:evolution parameter mixing wgt}}} \end{figure} \subsection{Simulation study\label{subsec:Simulations}} This section reports on a simulation experiment which studies the performance of static and dynamic mixture of experts models on data generated from both static and dynamic data generating processes (DGPs). The aim is to fit several models with different numbers of components and discount factors to each DGP and to use the LPS to select the best model, in order to see whether the proposed inference methodology is able to recognize the underlying data generating process. The data generating process is unknown in real applications and the usual strategy in modeling the data is to fit static models. It is therefore interesting to evaluate how the selected model would differ from its static version. We simulate $50$ pairs of training and validation datasets from each of the three data generating processes specified in Table \ref{table:DGPsimulation}. Data are simulated sequentially over $12$ time intervals with $100$ observations within each interval; hence each dataset contains $1200$ observations. For the data generating processes $M_{2}$ and $M_{3}$, each pair of training and validation datasets is simulated using one parameter path randomly generated from the specified random walk process in Table \ref{table:DGPsimulation}. \begin{table}[H] \noindent \centering{}\caption{{\small{}The three data generating processes used in the simulation study. 
The parameter values are obtained from fitting the three models to the software upgrade data with CM as covariate. The covariates are $\mathbf{x}_{ij}=(1,x_{ij})^{\prime}$ and $\mathbf{z}_{ij}=(1,z_{ij})^{\prime}$, with $x_{ij}$ and $z_{ij}$ iid $U(-1,1)$.\medskip{} }} \begin{tabular}{l} \hline \tabularnewline Model $M_{1}$ - \textbf{Static Poisson regression}\tabularnewline $y_{ij}|\mathbf{x}_{ij}\sim\mathrm{Po}\left(\lambda_{ij}\right)$\tabularnewline $\log\lambda_{ij}=\mathbf{x}_{ij}^{\prime}\boldsymbol{\varphi}$\tabularnewline $\boldsymbol{\varphi}=(.11,2.29)$\tabularnewline \tabularnewline Model $M_{2}$ - \textbf{Dynamic Poisson regression}\tabularnewline $y_{ij}|\mathbf{x}_{ij}\sim\mathrm{Po}\left(\lambda_{ij}\right)$\tabularnewline $\log\lambda_{ij}=\mathbf{x}_{ij}^{\prime}\boldsymbol{\vartheta}_{j}$,$\qquad\qquad\boldsymbol{\vartheta}_{j}=\boldsymbol{\vartheta}_{j-1}+e_{j},\,e_{j}\sim\mathrm{N}\left(0,Q\right)$\tabularnewline $\boldsymbol{\vartheta}_{0}=(.11,2.29)$, $Q=\textrm{Diag}(.17,\,.2)$\tabularnewline \tabularnewline Model $M_{3}$ - \textbf{Dynamic mixture of Poisson regression experts}\tabularnewline $y_{ij}|\mathbf{x}_{ij},\mathbf{z}_{ij}\sim\sum_{k=1}^{2}\phi_{ijk}\mathrm{Po}\left(\lambda_{ijk}\right)$\tabularnewline $\log\lambda_{ijk}=\mathbf{x}_{ij}^{\prime}\boldsymbol{\beta}_{jk}$,$\qquad\quad\boldsymbol{\beta}_{jk}=\boldsymbol{\beta}_{j-1,k}+u_{jk},\,u_{jk}\sim\mathrm{N}\left(0,\,U_{k}\right)$\tabularnewline $\phi_{ij,2}=\textrm{logit}^{-1}\left(\mathbf{z}_{ij}^{\prime}\boldsymbol{\theta}_{j}\right)$$\hspace{0.7cm}\boldsymbol{\theta}_{j}=\boldsymbol{\theta}_{j-1}+v_{j},v_{j}\sim\mathrm{N}\left(0,\,V\right)$\tabularnewline $\boldsymbol{\beta}_{1,0}=(1.1,2.17)$, $\boldsymbol{\beta}_{2,0}=(-0.8,1.94)$, $\boldsymbol{\theta}_{0}=(2.63,-4.41)$\tabularnewline $U_{1}=\textrm{Diag}(.08,.15)$, $U_{2}=\textrm{Diag}(.07,.1)$, $V=\textrm{Diag}(.08,.17)\qquad\qquad\qquad\qquad\qquad$\tabularnewline \tabularnewline \hline \end{tabular}\label{table:DGPsimulation} \end{table} The training dataset is used to fit several models with $K=1,2$ and $3$ Poisson components and discount factors $\alpha\in\{.4,.5,.6,.7,.8,.9,.99\}$, giving a total of $21$ models; the best model is chosen by LPS. For each DGP, Figure \ref{fig:Model selection frequency} displays the selection frequency of $K$ and $\alpha$ over all fitted models. For $M_{1}$ and $M_{2}$, the LPS works well and the most frequently selected model is the single-component Poisson model with $\alpha=0.99$ and $\alpha=0.4$, respectively. For $M_{3}$, the most frequently selected model is the three-component mixture model with $\alpha=0.6$, and not the correct two-component mixture model. This slight overestimation is not surprising, as the LPS is often observed to be generous with the number of components in a mixture without a large impact on the final predictive density; see e.g. \citet{villani2012generalized}.\bigskip{} \begin{figure}[H] \begin{centering} \includegraphics[scale=0.6]{model_selection_frequency_heatmap} \par\end{centering} \caption{{\small{}Fitting different models to the data generated from the three DGPs. 
Each panel displays the number of times each model was selected based on the LPS.}{\footnotesize{} \label{fig:Model selection frequency}}} \end{figure} Figure \ref{fig:Comparison-of-the selected and static models} compares the performance of i) the model with $K=K_{\mathrm{opt}}$ and $\alpha=\alpha_{\mathrm{opt}}$, where $K_{\mathrm{opt}}$ and $\alpha_{\mathrm{opt}}$ are the values chosen by the LPS, and ii) the corresponding static model with $K=K_{\mathrm{opt}}$ and $\alpha=0.99$. The figure shows boxplots of the difference in the LPS values on the validation set between the two models. For $M_{1}$, the average LPS difference between the selected and the static models is around zero, which shows that the dynamic model does not overfit on static data. On the other hand, for the two dynamic data generating processes, $M_{2}$ and $M_{3}$, the dynamic model selected in the validation step clearly outperforms the static model, and the difference in LPS increases with the number of components. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.6]{lps_diff_boxplot} \par\end{centering} \caption{{\small{}Boxplot of the difference in LPS between the model selected in the validation step and the corresponding static model. }\label{fig:Comparison-of-the selected and static models}} \end{figure} \section{Conclusion} We introduce a general class of dynamic mixture of experts models for online prediction; the model allows the regression coefficients in each mixture component and in the mixture weights to vary over time. The component models can be essentially any density function, not necessarily limited to the exponential family. We propose an efficient SMC algorithm for sequential inference and online prediction that is tailored to handle the proposed model class with potentially high-dimensional parameter spaces. The algorithm handles models with static and dynamic parameters in a unified way. The model is applied to online prediction of the number of faults in a continuously upgraded large-scale industrial software project. We show that allowing the parameters to evolve over time greatly improves the model's predictive performance. A simulation study documents that the proposed model selection procedure is i) effective in reducing flexibility when data comes from a static single-component model, and ii) able to fit data from multi-component models with time-varying parameters. \bibliographystyle{chicago} \phantomsection\addcontentsline{toc}{section}{\refname}
\title{Paired Image-to-Image Translation Quality\\Assessment Using Multi-Method Fusion} \author{% Stefan Borasinski\\ Cyanapse Limited\\ Brighton, United Kingdom\\ \texttt{[email protected]} \\ \And Esin Yavuz\\ Cyanapse Limited\\ Brighton, United Kingdom\\ \texttt{[email protected]} \\ \And S\'ebastien B\'ehuret\thanks{Corresponding author.}\\ Cyanapse Limited\\ Brighton, United Kingdom\\ \texttt{[email protected]} \\ } \begin{document} \maketitle \begin{abstract} How best to evaluate synthesized images has been a longstanding problem in image-to-image translation, and to date remains largely unresolved. This paper proposes a novel approach that combines signals of image quality between paired source and transformation to predict the latter's similarity with a hypothetical ground truth. We trained a Multi-Method Fusion (MMF) model via an ensemble of gradient-boosted regressors using Image Quality Assessment (IQA) metrics to predict Deep Image Structure and Texture Similarity (DISTS), enabling models to be ranked without the need for ground truth data. Analysis revealed the task to be feature-constrained, introducing a trade-off at inference between metric computation time and prediction accuracy. The MMF model we present offers an efficient way to automate the evaluation of synthesized images, and by extension the image-to-image translation models that generated them. \end{abstract} \section{Introduction} \begin{figure} \begin{center} \includegraphics[width=0.6\linewidth]{Figure1.eps} \end{center} \caption{MMF process for evaluating image-to-image translation results. Perceptual differences between original and translated image pairs are evaluated using multiple IQA methods. The target metric, DISTS, is calculated between the translated image and the corresponding ground truth ({\bf Left}). The MMF model is trained on the collected IQA scores to best predict the DISTS score between an output transformation and its presumed ground truth ({\bf Middle}). At inference, where ground truth images are unavailable, DISTS prediction is used as a weak indicator of transformed image quality to either accept or reject a given transformation ({\bf Right}).} \label{fig:MMF} \end{figure} Image-to-image translations with style transfer enable high-level image modifications that would have been considered impossible in the past \cite{gatys2016image, isola2017image}. Evaluating the outcome, however, is not a trivial task \cite{borji2019pros}. While different sets of neural network weights can lead to high quality images on some subsets of the data domain, they may fail for others, and failure cases can be severe. Efforts have primarily focused on post-hoc analysis, where quantitative and qualitative evaluations of model quality traditionally require a large body of generated samples. Identifying the superior model in general is not sufficient, however. The optimal model needs to be identified on an image-by-image basis at inference. Without an ad-hoc differentiator of transformed image quality, identifying the models that generated useful transformations requires manual intervention. Good indicators of transformed image quality should, at a minimum, be able to filter out failure cases, yielding a pool of top image candidates that could be considered reasonable transformations by humans. 
To this end, Image Quality Assessment (IQA) methods can be used to assess the perceptual quality of images automatically. The motivation for an IQA-led approach to evaluating the quality of live synthesized images over other established methods lies in its ability to serve as a proxy for the human eye when discriminating at the single image level, enabling automated and continuous on-line monitoring of results. By contrast, traditional evaluation methods such as Fr\'echet Inception Distance \cite{heusel2017gans}, Average Precision, and qualitative analysis of pre-deployment results are unsuited to assessing the quality of single images in a live environment. They may indicate over a finite test set which models produce better quality transformations on average, but lack the case-by-case selectivity required when dealing with images in this context. In scenarios where a ground truth dataset is available, for example in paired image-to-image translation during training, full-reference IQA methods offer a simple, fast, and direct means of evaluating perceptual similarity between a synthesized image (the output) and the ground truth (the target). In a production setting, however, where ground truth data is unavailable at the point of inference, direct comparison between the output and a corresponding target is impossible, which further precludes IQA methods from being applied in their traditional sense. Nonetheless, in many cases the original image (the source) contains aspects of image quality instructive to the output image, most notably the distribution of spatial structures. The sensitivity of IQA methods to key image quality features suggests a novel application in image-to-image translation where they can be gainfully redeployed as weak indicators of transformed image quality by comparing the source and its transformation only. We describe a Multi-Method Fusion (MMF) approach to model evaluation in a similar vein to that put forward by Liu et al. \cite{liu2012image}. Whereas Liu et al. looked to find a suitable weighting of IQA methods with the intent of predicting scores from image quality databases, we took advantage of the availability of ground truth data during training in paired image-to-image translation to propose an alternate possibility, one where IQA methods perform the role of annotator. Once a target metric of perceptual similarity has been established, the objective of MMF can simply be reformulated as predicting the target metric score between an output image and its hypothetical ground truth at inference, using IQA scores extracted between source and output images. We selected DISTS \cite{ding2020image} as the target metric due to its textural sensitivities and encouraging correlation with available mean opinion score data \cite{borasinski2020iqa}. To mitigate the unpredictability of images and encompass the broad diversity of naturally occurring distortions in photos, we included a population of IQA methods to improve the likelihood that a mapping between IQA scores and DISTS will be successful. Other perceptually-aligned metrics may be more applicable depending on the specific use case. In their MMF approach, Liu et al. \cite{liu2012image} used support vector regression, which offered superior performance over other methods at the time. 
We instead opted for gradient boosting \cite{friedman2001greedy} using the MLJAR Automated Machine Learning (AutoML) framework \cite{mljar}, which offers more flexibility and, owing to the use of GPUs, much faster computation than the scikit-learn \cite{pedregosa2011scikit} implementation of support vector regression. \section{Related Work} \subsection{Image Quality Assessment} IQA was originally envisioned as a means to objectively evaluate broadcast images with reference to the human visual system \cite{goldmark1940quality, horton1929electrical, jesty1953television}. The field of IQA has since seen a multitude of novel applications \cite{wang2011applications}, and even more so during the last decade as artificial intelligence methods for image generation and manipulation have been shown to produce plausible results. In deep learning, loss functions motivated by perceptual considerations neatly illustrate the tight confluence between IQA and image processing systems. Image denoising and restoration tasks have been successfully addressed by directly optimizing IQA methods \cite{ding2021comparison, zhao2016loss}, and conversely new IQA methods themselves have been adapted from losses that incorporate deep feature representations \cite{ding2020image, zhang2018unreasonable}. In order to act as a proxy for human judgments of image similarity, IQA algorithms aim to detect a certain type of difference between a transformed image and a reference image. A detailed taxonomy of the most widely known IQA methods can be found in \cite{ding2021comparison}. Pixel-based error visibility methods such as mean absolute error (MAE), peak signal-to-noise ratio (PSNR), total variation (TV), normalized Laplacian pyramid distance (NLPD) \cite{laparra2016perceptual}, and most apparent distortion (MAD) \cite{larson2010most} each detect a certain type of distance between pixels. These methods are the most straightforward, but they are also the least correlated with human perception. Structural similarity-based methods are variants of the original structural similarity (SSIM) \cite{wang2004image} algorithm. Structural similarity is inspired by the human visual system, and it has been the most popular choice of metric in IQA. Structural similarity-based methods include SSIM, multi-scale structural similarity (MS-SSIM) \cite{wang2003multiscale}, feature similarity (FSIM) \cite{zhang2011fsim}, gradient magnitude similarity deviation (GMSD) \cite{xue2013gradient}, and visual saliency induced quality index (VSI) \cite{zhang2014vsi}. Information-theoretic methods such as visual information fidelity (VIF) \cite{sheikh2006image} and spatial domain VIF (VIFs) are based on mutual information. Learning-based methods are based on deep neural network representations of images. Benefiting from recent advances in the field, these methods now provide a powerful alternative to SSIM-based methods. Learning-based methods include deep image structure and texture similarity (DISTS) \cite{ding2020image2} and learned perceptual image patch similarity (LPIPS) \cite{zhang2018unreasonable}. \subsection{Multi-Method Fusion} MMF is in essence a classical machine learning task whereby the features are the scores from full-reference image quality metrics. Fusion-based methods build a super-evaluator by fusing individual image quality scores, which are often matched with human opinion scores \cite{liu2012image, ma2019blind}. Collecting human opinion scores is, however, a time-consuming and expensive task. 
Training human assessors is often not trivial, as human judgment is affected by many different factors and is highly variable. Where reference images are available, a more reliable outcome can be achieved by using matching ground truth images. \subsection{Gradient Boosting} Gradient-boosted decision trees \cite{friedman2001greedy} allow fast and efficient prediction for tabular data through the ensembling of multiple weak models to solve supervised learning problems. Unlike other modes of regression, decision tree-based algorithms are able to natively handle the multi-collinear nature of features calculated from the same pair of images, eliminating the need for further pre-processing. Recent variants such as CatBoost \cite{dorogush2018catboost}, LightGBM \cite{ke2017lightgbm} and XGBoost \cite{chen2016xgboost} improve speed and accuracy, each using a different ensembling strategy. We combined these three methods to diversify the learning process. \subsection{Automated Machine Learning} In recent years, AutoML has become a popular approach to perform feature engineering, design model architectures and optimize hyperparameters with state-of-the-art algorithms. To address the inherently high-dimensional feature and parametric spaces of our MMF implementation and to facilitate the model development cycle, we opted for the use of an AutoML framework. Among the popular AutoML frameworks, MLJAR offers a flexible training pipeline capable of achieving high performance on various datasets. It combines K-Means centers and Golden Features search with advanced feature selection to enrich datasets, Hill-Climbing to fine-tune final models, and Greedy Search over base models to compute ensembles. We therefore used the \textsc{mljar-supervised} \cite{mljar} Python package to build our MMF model. \section{Proposed Method} Image-to-image translation involves mapping an image in domain $\mathcal{X}$ into domain $\mathcal{Y}$ by a mapping function $F : \mathcal{X} \rightarrow \mathcal{Y}$. In paired image-to-image translation, a ground truth image $y \in \mathcal{Y}$ is available for each source image $x \in \mathcal{X}$ at training time. The perceptual distance function $E = D(F(x),y)$ indicates the level of similarity between translated images and their ground truth targets. Our goal is to train an MMF model to estimate a perceptual distance function $\hat{E} = D(F(\hat{x}),\hat{y})$ and further assess the success of the transformation by $F$ for a novel image $\hat{x}$ and its hypothetical and unavailable ground truth $\hat{y}$. Our approach is illustrated in Figure \ref{fig:MMF}. We propose an MMF strategy that combines various IQA metrics that assess the similarity between source images $x_{n}$ and their translated outputs $F(x_{n})$. The similarity between $F(x_{n})$ and the corresponding ground truth targets $y_{n}$ is measured by calculating the value of the perceptual distance function $E_{n}$, as quantified by the DISTS score (Fig. \ref{fig:MMF}, Left). Lower DISTS scores indicate more similarity between the translated output and the ground truth images. IQA methods give different scores for each source image $x$ as a result of the nature of the differences between $x$, $y$ and $F(x)$. The trained MMF model leverages these differences, as reflected by IQA scores, by combining them and finding optimal weights using a supervised learning strategy (Fig. \ref{fig:MMF}, Middle) to estimate $\hat{E}$ between a novel source image $\hat{x}$ and its presumed ground truth $\hat{y}$ during inference (Fig. 
\ref{fig:MMF}, Right). As in \cite{liu2012image}, the rationale behind MMF in image-to-image translation is to combine a collection of complementary yet diverse signifiers of transformed image quality into a singular, more powerful evaluator. \section{Experiments} Our MMF process collects and integrates an ensemble of up to 14 IQA methods, which were used with their default trained hyperparameters where applicable. Toward this, 12 metrics were adapted or directly imported without further modification from the \textsc{scikit-image} \cite{scikit-image} and \textsc{IQA-pytorch} \cite{ding2021comparison} Python packages. These include the PSNR, SSIM, MS-SSIM, FSIM, VSI, GMSD, NLPD, MAD, VIF, VIFs, LPIPS and DISTS metrics. The last two metrics, TV and MAE, were reimplemented in NumPy \cite{harris2020array}. TV was reimplemented as a full reference metric, converted into a ratio (TV Ratio) by dividing the sum total variation in the input image by that in the output transformation. MAE was calculated as the mean of the absolute difference between individual pixel values of two images; a sketch of both reimplementations is given below. MAD was not included in the MMF model for any task due to its time complexity. As the number of features is restricted by the total time to compute IQA metrics at inference, one immediate way to improve the predictive accuracy of the regression algorithm is to augment the dataset by computing K-Means centers and including linear combinations of original features, dubbed Golden Features \cite{slezak1999classification}. To further sharpen predictive accuracy, the MMF model not only fuses the aforementioned IQA methods themselves, but also ensembles weak evaluators. To this end, a family of gradient tree boosting algorithms, LightGBM, CatBoost and XGBoost, were entered into the training cycle. We used the MLJAR AutoML framework in the 'compete' mode to augment the data, train and combine the models under the regimen described below. To enrich the dataset, K-Means centers were computed with optional scaling as needed. The information about the distance to K-Means centers and the center number was added to each sample and the best performing models were trained with this data. For Golden Features, original feature pairs were either subtracted, added, multiplied or turned into ratios to create new features. To select the combinations most capable of improving predictions, all possible unique feature pairs were examined. Independent decision trees were trained on the task with a maximum depth of 3 on a single combination of features at a time, using a subsample of 5,000 points split equally between the train and test sets. Test mean squared error was calculated for each new feature and the top features were entered into the primary dataset in order to retrain the best performing models. The framework's feature selection algorithm works as follows. Random features were inserted into the data and the permutation-based feature importance was computed. Features with importance lower than that of the random features were removed. The models were then retrained using the selected features only. Multiple models were trained for each of the LightGBM, CatBoost and XGBoost algorithms. For each algorithm, a single base model was trained using default hyperparameter settings, and multiple additional models were trained using randomly sampled hyperparameters. 
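As referenced above, a minimal NumPy sketch of the two reimplemented metrics follows; the function names are our own, and since the paper does not specify the exact TV discretization, the anisotropic variant below is one plausible reading rather than the authors' implementation.
\begin{verbatim}
import numpy as np

def total_variation(img):
    """Sum of absolute differences between neighboring pixels,
    an anisotropic discretization of total variation."""
    img = img.astype(np.float64)
    return (np.abs(np.diff(img, axis=0)).sum() +
            np.abs(np.diff(img, axis=1)).sum())

def tv_ratio(source, translated):
    """Full-reference TV Ratio: total variation of the input image
    divided by that of the output transformation."""
    return total_variation(source) / total_variation(translated)

def mae(a, b):
    """Mean of the absolute difference between individual pixel values."""
    return np.abs(a.astype(np.float64) - b.astype(np.float64)).mean()
\end{verbatim}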
Validation was performed automatically using the most suitable scheme with the root-mean-square error metric, including 5- to 10-fold cross-validation and 80/20 train/test hold-out validation. To fine-tune models, the framework performed a local search over hyperparameters via hill-climbing. The framework additionally trained stacked models using out-of-fold predictions to extend the training data, and boost-on-errors models where sample weights were boosted on the errors from the best models. The greedy search algorithm outlined in \cite{caruana2004ensemble} combined the best performing models, consisting of stacked, boost-on-errors and ensembled LightGBM, CatBoost and XGBoost models, some of which incorporated K-Means centers or Golden Features. To reduce overfitting, the ensembling algorithm initialized non-empty sorted ensembles, selected base models with replacement, and further bagged base models during selection. It is the final stacked ensemble produced by the framework that constitutes the MMF model presented in this paper. We demonstrate the results on two image-to-image translation tasks. The first task is base color extraction from synthetic images of consumer goods via style transfer using a variant of Pix2Pix \cite{isola2017image}. The synthetic dataset was provided by ZEG.ai\footnote {ZEG.ai Ltd. (London, United Kingdom).} (commercially restricted) and includes pairs of synthetic bottle images under complex and albedo-like diffuse-only lighting conditions. For this dataset we show the error visibility maps and the MMF evaluation. The second task involves day to night transformations using a pre-trained Pix2Pix network \cite{isola2017image}. We used the dataset \cite{laffont2014transient} and trained model weights publicly available at \url{https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix}. For this dataset we show the MMF evaluation and samples of transformations with predicted scores in the best, average and worst DISTS score ranges. All the analysis and benchmarks are performed on raw neural network outputs, which correspond to 256x256-pixel RGB images. \subsection{Error Visibility Maps} \begin{figure*} \begin{center} \includegraphics[width=0.9\linewidth]{Figure2.eps} \end{center} \caption{An example original and ground truth image pair from the test dataset for the base color extraction task {\bf (Left)}, and corresponding metric error visibility maps for SSIM, MAE and TV Ratio {\bf (Right)}. For illustrative and comparative purposes, SSIM error signals have been remapped to $1 - SSIM$ and clamped between [0, 1], and TV Ratio has been inverted. While SSIM operates on grayscale images, MAE and TV Ratio operate on RGB images. Images were therefore converted into grayscale to compare the magnitude of pixelwise error as indicated by gray intensity, giving scores of SSIM $= 0.970$, MAE $= 0.00867$ and TV Ratio $= 1.0146$. MAE and TV Ratio were also calculated on RGB images, giving scores of MAE$_{RGB} = 2.649$ and TV Ratio$_{RGB} = 1.020$. Lower remapped SSIM scores, MAE scores and inverted TV Ratio scores suggest better transformations.} \label{fig:maps} \end{figure*} The differences between the types of errors picked up by IQA methods can be seen in the error visibility maps that represent the per-pixel metric scores. Figure \ref{fig:maps} shows an example original and ground truth image pair (Fig. \ref{fig:maps}, Left) and corresponding error visibility maps (Fig. 
\ref{fig:maps}, Right) for three different IQA methods (SSIM, MAE, TV Ratio) for the base color extraction task. SSIM, which is sensitive to changes in luminance and structure, penalizes the removal of lighting from the label as well as the loss in detail caused by flattening the bottom edge of the cap. Compared to the other methods, error signals extracted by SSIM are less finely localized due to local patch averaging, producing blurry error outlines. The MAE map primarily captures changes in chromaticity, and does not provide information on the structure of an image. By contrast, TV captures absolute differences in gradient magnitude across neighboring pixels and informs on the local structure of an image, highlighting areas that have become smoother, and leading to an accentuation of borders. A decrease in the output TV is expected for successful transformations for this task due to lighting smoothing in the output transformation, leading in turn to an increase of the TV Ratio. \subsection{IQA Metrics Benchmarks} \setlength{\tabcolsep}{0.1em} \begin{table} \small \begin{center} \begin{tabular}{|l|c|c|c|c|} \hline Method & Wall-Clock Time (ms) & Original vs. Ground Truth & Original vs. Translated & Translated vs. Ground Truth\\ \hline\hline PSNR $\uparrow$ & 0.315 $\pm$ 0.018 & 24.474 $\pm$ 4.127 & 27.918 $\pm$ 4.569 & 23.724 $\pm$ 3.793\\ TV Ratio $\uparrow$ & 0.850 $\pm$ 0.026 & 1.071 $\pm$ 0.030 & 0.960 $\pm$ 0.050 & 0.896 $\pm$ 0.052\\ MAE $\downarrow$ & 0.448 $\pm$ 0.065 & 16.504 $\pm$ 7.375 & 120.831 $\pm$ 48.809 & 122.223 $\pm$ 45.918\\ SSIM $\uparrow$ & 1.018 $\pm$ 0.094 & 0.943 $\pm$ 0.024 & 0.955 $\pm$ 0.040 & 0.917 $\pm$ 0.043\\ MS-SSIM $\uparrow$ & 3.597 $\pm$ 0.128 & 0.866 $\pm$ 0.124 & 0.948 $\pm$ 0.076 & 0.873 $\pm$ 0.119\\ FSIM $\uparrow$ & 121.504 $\pm$ 3.215 & 0.945 $\pm$ 0.026 & 0.970 $\pm$ 0.020 & 0.932 $\pm$ 0.026\\ VSI $\uparrow$ & 15.862 $\pm$ 0.633 & 0.966 $\pm$ 0.019 & 0.990 $\pm$ 0.008 & 0.966 $\pm$ 0.017\\ GMSD $\downarrow$ & 0.989 $\pm$ 0.038 & 0.130 $\pm$ 0.036 & 0.049 $\pm$ 0.027 & 0.124 $\pm$ 0.033\\ NLPD $\downarrow$ & 5.384 $\pm$ 0.110 & 0.304 $\pm$ 0.121 & 0.198 $\pm$ 0.092 & 0.319 $\pm$ 0.117\\ MAD $\downarrow$ & 179.336 $\pm$ 3.582 & 110.802 $\pm$ 21.870 & 83.518 $\pm$ 25.858 & 127.658 $\pm$ 19.980\\ VIF $\uparrow$ & 74.309 $\pm$ 2.624 & 0.780 $\pm$ 0.060 & 0.676 $\pm$ 0.106 & 0.585 $\pm$ 0.085\\ VIFs $\uparrow$ & 6.159 $\pm$ 0.134 & 0.820 $\pm$ 0.048 & 0.689 $\pm$ 0.102 & 0.598 $\pm$ 0.081\\ LPIPS $\downarrow$ & 28.924 $\pm$ 0.864 & 0.051 $\pm$ 0.017 & 0.062 $\pm$ 0.046 & 0.100 $\pm$ 0.046\\ DISTS $\downarrow$ & 28.860 $\pm$ 0.751 & 0.095 $\pm$ 0.026 & 0.062 $\pm$ 0.032 & 0.117 $\pm$ 0.030\\ \hline \end{tabular} \bigskip \caption{Average metric scores $\pm$ standard deviation (n=50) and computation wall-clock time for the base color extraction task. The vertical arrows represent the direction of variation of each metric score, with upward arrows indicating increased similarity with larger values, and downward arrows indicating increased errors with larger values. Original vs. Ground Truth scores indicate the zone of viability for a successful transformation. Original vs. Translated scores show how close on average transformations approach that zone. Translated vs. Ground Truth scores indicate the average similarity between transformation and target.} \label{table:benchmark} \end{center} \end{table} In order to evaluate the performance of each IQA metric function, we ran several benchmarks. 
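For concreteness, one hedged sketch of how a single training row for the MMF model can be assembled from such per-pair scores, reusing the \texttt{mae} and \texttt{tv\_ratio} helpers above (the metric set is abbreviated, and \texttt{dists\_fn} stands in for any DISTS implementation; none of these names come from the paper):
\begin{verbatim}
# Abbreviated metric set; in the paper most metrics come from
# scikit-image and IQA-pytorch rather than hand-rolled functions.
metrics = {"mae": mae, "tv_ratio": tv_ratio}

def mmf_row(source, translated, ground_truth, dists_fn):
    """One training example: features from (source, translated),
    label from DISTS(translated, ground_truth)."""
    features = {name: fn(source, translated) for name, fn in metrics.items()}
    target = dists_fn(translated, ground_truth)  # available at training time only
    return features, target
\end{verbatim}
At inference the same feature extraction runs without a ground truth image, and the trained regressor supplies the predicted DISTS score.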
Individual metric scores provide a baseline to assess whether the calculated score for a given sample pair is in a reasonable range, which can in turn be used as an additional factor to predict the transformation quality. It is the role of the trained MMF model to weight the information provided by these metrics to appropriately predict the similarity between an output image and its hypothetical ground truth. For the base color extraction task, results are shown in Table \ref{table:benchmark}. Each metric was calculated 50 times over a total of 707 unique combinations of original, translated and ground truth images. Calculation times were then averaged for each metric. The 14 IQA metrics could be calculated in a combined wall-clock time of less than a third of a second per image, with the option of further reductions by tweaking their parameters if necessary. For the day to night translation task, results are shown in Table \ref{table:benchmark2}. Likewise, metrics were calculated a total of 50 times between the original, translated and ground truth images, then averaged. The MAD metric was excluded owing to its computational expense. For metrics where higher values indicate increased similarity, as represented by upward arrows (the logic is reversed for error metrics marked with downward arrows, such as DISTS), high metric scores between an original image and its translation can be a result of very weak changes, which generally indicate an unsuccessful translation. By contrast, low metric scores suggest that the structure or the texture in the translated image is drastically different compared to that of the original image. \begin{table} \small \begin{center} \begin{tabular}{|l|c|c|c|} \hline Method & Original vs. Ground Truth & Original vs. Translated & Translated vs. Ground Truth\\ \hline\hline PSNR $\uparrow$ & 9.169 $\pm$ 2.380 & 9.451 $\pm$ 1.879 & 13.668 $\pm$ 3.563\\ TV Ratio $\uparrow$ & 1.038 $\pm$ 0.221 & 0.988 $\pm$ 0.090 & 1.047 $\pm$ 0.139\\ MAE $\downarrow$ & 103.997 $\pm$ 17.072 & 98.121 $\pm$ 15.766 & 113.335 $\pm$ 38.449\\ SSIM $\uparrow$ & 0.365 $\pm$ 0.126 & 0.391 $\pm$ 0.071 & 0.435 $\pm$ 0.120\\ MS-SSIM $\uparrow$ & 0.264 $\pm$ 0.140 & 0.295 $\pm$ 0.104 & 0.309 $\pm$ 0.170\\ FSIM $\uparrow$ & 0.641 $\pm$ 0.056 & 0.646 $\pm$ 0.053 & 0.695 $\pm$ 0.073\\ VSI $\uparrow$ & 0.854 $\pm$ 0.032 & 0.865 $\pm$ 0.028 & 0.877 $\pm$ 0.041\\ GMSD $\downarrow$ & 0.264 $\pm$ 0.032 & 0.263 $\pm$ 0.038 & 0.245 $\pm$ 0.045\\ NLPD $\downarrow$ & 0.923 $\pm$ 0.120 & 0.871 $\pm$ 0.163 & 0.803 $\pm$ 0.207\\ MAD $\downarrow$ & N/A & N/A & N/A\\ VIF $\uparrow$ & 0.056 $\pm$ 0.031 & 0.043 $\pm$ 0.013 & 0.044 $\pm$ 0.019\\ VIFs $\uparrow$ & 0.097 $\pm$ 0.047 & 0.094 $\pm$ 0.034 & 0.051 $\pm$ 0.028\\ LPIPS $\downarrow$ & 0.595 $\pm$ 0.083 & 0.672 $\pm$ 0.045 & 0.644 $\pm$ 0.054\\ DISTS $\downarrow$ & 0.325 $\pm$ 0.073 & 0.404 $\pm$ 0.050 & 0.362 $\pm$ 0.051\\ \hline \end{tabular} \bigskip \caption{Average metric scores $\pm$ standard deviation (n=50) for the day to night translation task. MAD was omitted due to time complexity. For vertical arrows, see the caption of Table \ref{table:benchmark}.} \label{table:benchmark2} \end{center} \end{table} \subsection{MMF Model Evaluation} \begin{figure} \begin{center} \includegraphics[width=0.5\linewidth]{Figure3.eps} \end{center} \caption{Correlation between actual and predicted DISTS. 
The 100x100-bin heatmaps show the points contributing to the $r^{2}$ score in relation to the $y=x$ line. For the base color extraction task, $r^{2}=0.666$ ({\bf Top}), and for the day to night translation task, $r^{2}=0.722$ ({\bf Bottom}).} \label{fig:corr} \end{figure} The MMF model consisted of a trio of gradient-boosted regression algorithms, LightGBM, CatBoost and XGBoost, that were trained and ensembled on image quality scores between the input and the output, to predict similarity quantified by DISTS between the output and the ground truth. The correlation between predicted and actual DISTS is shown in Figure \ref{fig:corr} for the base color extraction and day to night translation tasks. For the base color extraction task (Fig. \ref{fig:corr}, Top), the complete training dataset for the MMF model was created by applying each of the 13 metrics (all metrics except MAD) to 31,108 input and output transformation pairs generated from 44 image-to-image translation models over a total of 707 unique input images, and 707 input and ground truth pairs, for a total of 31,815 image pairs. The inclusion of input and ground truth pairs ensured that the model was trained on metric scores calculated from perfect transformations. DISTS was then calculated for each output transformation and ground truth pair. From the 707 source images, 10\% were split into a test set and the remainder used for 5-fold cross-validation. Since the sole alteration in the task is textural, similarity between the input and ground truth images remained high in general. The bulk of test instances recorded an actual DISTS score of between 0.075 and 0.250, with variation within that range in both directions contributing to the central mass of the heatmap. A handful of instances at the lower and upper end of actual DISTS deviate most notably from the $y=x$ line, attributable to a tendency to fit scores within this median range. On average, transformations scored a DISTS value of 0.117 with a standard deviation of 0.030. The ensembled MMF model was able to predict DISTS to a mean absolute error of 0.0153, resulting in an $r^{2}$ score of 0.666. Transformations that scored lower showed a general qualitative superiority over those that scored higher. For the day to night translation task (Fig. \ref{fig:corr}, Bottom), the complete training dataset comprised 2,287 input and output transformation pairs, and 51 input and ground truth pairs, for a total of 2,338 image pairs. The input and ground truth images were added to match the ground truth pairing ratio of 1/45 from the base color extraction training dataset. Unlike in the previous task, each source image has multiple feasible ground truth transformations, meaning that the DISTS score varies for the same source image across multiple pairs. Nevertheless, target DISTS scores could be successfully predicted and used for ranking the resulting transformations. Compared to a possible ground truth, the average day to night transformation scored a DISTS similarity of 0.362 with a standard deviation of 0.051. DISTS scores were predicted with a mean absolute error of 0.0250, resulting in an $r^{2}$ score of 0.722. Image translations with best, average and worst examples of predicted DISTS are shown in Figure \ref{fig:day2night}. Based on the MMF results, day to night translations tend to work better for certain source images than others (Fig. 
For the day to night translation task (Fig. \ref{fig:corr}, Bottom), the complete training dataset comprised 2,287 input and output transformation pairs, and 51 input and ground truth pairs, for a total of 2,338 image pairs. The input and ground truth images were added to match the ground truth pairing ratio of 1/45 from the base color extraction training dataset. Unlike in the previous task, each source image has multiple feasible ground truth transformations, meaning that the DISTS score varies for the same source image across multiple pairs. Nevertheless, target DISTS scores could be successfully predicted and used for ranking the resulting transformations. Compared to a possible ground truth, the average day to night transformation scored a DISTS similarity of 0.362 with a standard deviation of 0.051. DISTS scores were predicted with a mean absolute error of 0.0250, resulting in an $r^{2}$ score of 0.722. Image translations with best, average and worst examples of predicted DISTS are shown in Figure \ref{fig:day2night}. Based on the MMF results, day to night translations tend to work better for certain source images than others (Fig. \ref{fig:day2night}, Top), which is possibly related to content complexity and the fact that the day to night model favors safe translations where fine details are not easily perceptually detectable. Images in the worst DISTS band (Fig. \ref{fig:day2night}, Bottom) are often heavily distorted and can be discarded off-hand. Outcomes scoring around average for the dataset show some degree of distortion (Fig. \ref{fig:day2night}, Middle) and likely require further investigation before being disregarded. Predicted and actual DISTS scores for five samples in the best, average and worst score bands are included in Table S1. Corresponding images are shown in Figures S1, S2 and S3, respectively.
\begin{figure*} \begin{center} \includegraphics[width=0.8\linewidth]{Figure4.eps} \end{center} \caption{Samples with best (lowest, {\bf Top}), average ({\bf Middle}) and worst (highest, {\bf Bottom}) DISTS scores for the day to night translation task.} \label{fig:day2night} \end{figure*}
\section{Discussion}
In an extension to the MMF solution adopted by Liu et al. \cite{liu2012image}, who applied support vector regression to ensemble IQA methods, we found that a weighted combination of gradient-boosted regression algorithms produced the best predictive accuracy. In training the MMF model, the task appeared to be feature-constrained, as removing a subset of IQA methods increased the mean absolute error. Metric calculation time thus represents the major limiting factor for MMF in a live system. Whilst the MMF model cannot address all aspects of transformed image quality, combined IQA methods can indicate whether a transformation falls within the zone of viability for it to be considered successful. The IQA methods used in this study do not comment on whether specific facets of the transformation are accurate, for example whether colors have been transformed correctly. Per-pixel errors can be detected by AI-based methods where it is possible to investigate and visualize individual filters, or combine filters to achieve a single score. A better understanding of this type of error requires more in-depth analysis, which is beyond the scope of this work. In the absence of a fitting ad-hoc method of quantitative evaluation in image-to-image translation, MMF represents a reasonable and highly practical solution where paired datasets are available during training.
The choice of target metric, DISTS, was informed by two key factors. In the base color extraction task, the image-to-image mapping repaints the focal object in its base colors, demanding a consistency between textures in output and ground truth images. DISTS is the first metric of its kind to explicitly accommodate textural regularities in its calculation, and as such offers clear utility in assessing the extent to which they appear similar. DISTS offers competitive performance on standard IQA databases and robustness to mild geometric distortions, lending it specialist status in evaluating GAN-generated images. Furthermore, in a separate study of transformation outcomes, we observed that it was the metric maximally correlated with mean opinion score \cite{borasinski2020iqa}. There are a couple of considerations when interpreting the MMF results. Where a lower DISTS score would normally suggest a better transformation, it may also indicate that only a slight transformation has taken place and the output image is close to both the input and ground truth images.
In this situation, IQA error signals between the input and output images are expected to be low. On the other hand, a higher DISTS score may indicate the presence of compromising distortions to transformed image quality, which often indicate deviations from the image domain or the presence of heavy artifacts. Automatically filtering out failure cases via DISTS thresholding represents an intuitive way to leverage the MMF model in many use cases.
Whilst this paper presented DISTS as the target of MMF, for extended application the choice of which metric to predict should be adapted to the idiosyncrasies of the mapping at hand. Domain-relevant knowledge regarding the prerequisite features of a successful transformation may help inform which metrics can act as useful differentiators of perceptual quality. Likewise, collecting limited human evaluation data in order to calculate correlation coefficients for a range of metrics can assist in this decision. In some cases, it may be viable to train multiple MMF models to predict complementary metrics. For example, one target metric may be more sensitive to the semantic content of the transformed image, i.e. using a measure of pixelwise error to assess whether color has been appropriately transposed, where another metric such as SSIM is then concerned with the integrity of spatial structures.
The effective contribution of linear recombinations of select features suggests that the MMF model may be primarily constrained by the number of features available for the task. In turn, the limiting factor on available features is their combined computation time. With this, users need to decide whether they are more interested in model speed or model accuracy. If two further time-intensive methods, FSIM and VIF, are removed from the calculation, reducing the combined computation time to less than a tenth of a second per image, the mean absolute error increases from 0.0153 to 0.0156 and $r^{2}$ drops from 0.666 to 0.641. Conversely, the inclusion of newly available methods, at a time cost, stands to improve the accuracy of future MMF models.
As a technique, many of MMF's constituent methods inform about the statistical properties of an image pair: whether structures remain intact, whether textures are consistent between images, whether there are global changes in luminance, and so on. Depending on the methods chosen, MMF can only offer limited comment, if any, on a transformation's wider semantics, for example when inferring a bottle's true color for an albedo transformation. For many metrics, the difference between correct and incorrect shades may be marginal to non-existent in their given score. For this reason, metrics such as mean absolute error, which promote pixel values closer to the original, are valuable in this respect. In more extreme transformations, such as in the Pix2Pix day to night model, there is no metric which can evaluate whether a source of light is appropriately positioned or not within the wider context of an image. For this reason, it is suggested that MMF be used as a filtering tool to reject outright those transformations that are unviable in any sense due to excessive distortion, to better direct attention resources to those with a reasonable chance of success.
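The DISTS-thresholding filter suggested above can be sketched in a few lines. This is a hedged illustration rather than the deployed system: the \texttt{extract\_iqa\_features} helper and the threshold value are assumptions, and \texttt{predict\_mmf} refers to the ensemble sketch given earlier.
\begin{verbatim}
import numpy as np

def filter_translations(pairs, models, extract_iqa_features,
                        threshold=0.42):
    """Keep only translations whose predicted DISTS is below threshold.

    pairs: list of (input_image, output_image) tuples.
    extract_iqa_features: assumed helper mapping an image pair to the
    13 IQA scores; the threshold is illustrative, not tuned here.
    """
    kept = []
    for inp, out in pairs:
        feats = np.asarray(extract_iqa_features(inp, out)).reshape(1, -1)
        score = predict_mmf(models, feats)[0]  # see the earlier sketch
        if score < threshold:
            kept.append((inp, out, score))
    return kept
\end{verbatim}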
\section{Conclusion}
We presented a multi-method fusion approach to assess the quality of transformations that are generated by image-to-image translation models where reference ground truth data is available during training. We demonstrated that the MMF model, which was trained on a collection of IQA metrics between input and transformed images, can successfully predict the target metric score (in our case, DISTS) between a transformed image and its presumed ground truth. This score can then be used to predict the quality of the output during inference, where ground truth data is unavailable.
\begin{contrib} Stefan Borasinski conducted the research, performed numerical experiments, and contributed to the initial version of the manuscript. Esin Yavuz designed the study, supervised the research, curated the data for numerical experiments, and wrote the manuscript. S\'ebastien B\'ehuret supervised the study and research, set up the infrastructure for numerical experiments, and wrote and revised the manuscript. \end{contrib}
\begin{ack} This work was supported by the UK's innovation agency (Innovate UK Smart Grants February 2019, Project Number 34279, {\it Automated Texture Generation for E-commerce}) and Cyanapse Limited (Brighton, United Kingdom) in collaboration with ZEG.ai Ltd. (London, United Kingdom). \end{ack}
\bibliographystyle{plainnat}
\section*{Tables} \newpage
\begin{table} \small \begin{center} \begin{tabular}{l c c c} & Sample No. & Predicted DISTS & Measured DISTS \\ \\ \hline \\ Best 5 & 1 & 0.2523 & 0.2750\\ & 2 & 0.2547 & 0.2798\\ & 3 & 0.2941 & 0.2966\\ & 4 & 0.2960 & 0.2965\\ & 5 & 0.2969 & 0.3132\\ \\ \hline \\ Average 5 & 1 & 0.3697 & 0.3623\\ & 2 & 0.3699 & 0.3921\\ & 3 & 0.3700 & 0.3529\\ & 4 & 0.3700 & 0.3965\\ & 5 & 0.3719 & 0.3510\\ \\ \hline \\ Worst 5 & 1 & 0.4373 & 0.4427\\ & 2 & 0.4362 & 0.4535\\ & 3 & 0.4340 & 0.5060\\ & 4 & 0.4210 & 0.4483\\ & 5 & 0.4190 & 0.4764\\ \\ \hline \end{tabular} \bigskip \caption{Five best (lowest), average, and worst (highest) predicted DISTS scores for the samples from the day to night translation test dataset. Corresponding example images are shown in Figures S1, S2 and S3, respectively.} \label{table:sm_dists} \end{center} \end{table}
\begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{FigureS1.eps} \end{center} \caption{Samples with the best (lowest) DISTS scores.} \label{fig:sm_best} \end{figure}
\begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{FigureS2.eps} \end{center} \caption{Samples with average DISTS scores.} \label{fig:sm_mid} \end{figure}
\begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{FigureS3.eps} \end{center} \caption{Samples with the worst (highest) DISTS scores.} \label{fig:sm_worst} \end{figure}
\end{document}
\section{Introduction}
\subsection{Image and Video Object Detection in general}
\subsubsection{History of image object detection}
Before the advent of Deep Learning, image detection was carried out using classic Machine Learning methods like Discriminative Models, e.g. Logistic Regression or Support Vector Machines, or Generative Models, e.g. Gaussian Mixture Models. Bag of Visual Words (BOVW) was another technique that was used for the same purpose. The use of Deep Learning methods for image detection was restrained by two factors: first, the dearth of large amounts of labeled data, and second, the available computational power. In 2009, Jia Deng et al. presented a dataset named ImageNet\cite{b13} containing millions of images labeled into different categories. A breakthrough paper published in the same year by Rajat Raina et al.\cite{b46} showed that using GPUs for deep learning can provide the much-needed computational power. Hence, the door to using deep learning for image classification and detection was opened, and the first landmark paper in this domain was published in 2012 by Alex Krizhevsky et al.\cite{b14} \newline
\subsubsection{Types of image object detectors}
Image object detectors can be categorized into two types, single-stage and two-stage image object detectors. A two-stage pipeline first generates region proposals, which are then classified and refined\cite{b17}, while a single-stage method, which is often more efficient but less accurate, directly regresses on bounding boxes and classes\cite{b18}\cite{b19}. \newline
\subsubsection{Why is video object detection harder?}
\begin{itemize}
\item The large number of individual frames within the videos
\item The occurrence of motion blur between different frames
\item The quality of video datasets is often not ideal
\item The objects to be detected can get partially occluded
\item The objects can appear in unconventional poses
\end{itemize}
\subsection{Recurrent Neural Networks in general}
\subsubsection{Distinction from non-recurrent Neural Networks}
Non-recurrent Neural Networks process single inputs, for example a single image. Recurrent Neural Networks process sequences of data, e.g. multiple video frames. The core concept that enables sequence processing in Recurrent Neural Networks is parameter sharing across different parts of the model. Parameter sharing can be achieved through cycles in the architecture. \cite{b11} \newline
\subsubsection{Common Types of Recurrent Neural Networks}
Most of the papers described in this work use two common Recurrent Neural Network approaches. The first one is the LSTM, first introduced in \cite{b18} in 1997. The key feature of LSTMs is a set of gates (forget gate, external input gate, output gate) that control the cell state and the hidden state of the LSTM \cite{b11}. \newline The second type of Recurrent Neural Network is the Gated Recurrent Unit (GRU). The main difference to LSTMs is that GRUs consist of a single gated unit which simultaneously controls the forgetting factor and the decision to update the inner cell state. \cite{b11}
\section{Feature-based Video Object Detection}
First, we want to introduce feature-based Video Object Detection methods. As defined, for example, in [1-2], feature-based Video Object Detection methods fuse detectors with features from multiple frames, integrating them into the video detection. In most papers considered in this work this integration is done by recurrent units.
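To make this fusion pattern concrete before the individual papers are discussed, the following minimal PyTorch sketch flattens per-frame feature maps from a toy backbone and fuses them with a GRU. The layer sizes, the toy backbone and all names are illustrative assumptions, not any specific published architecture.
\begin{verbatim}
import torch
import torch.nn as nn

class RecurrentFeatureFusion(nn.Module):
    def __init__(self, feat_channels=64, hidden=256):
        super().__init__()
        # Toy feature extractor standing in for an SSD backbone.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_channels, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.gru = nn.GRU(feat_channels * 8 * 8, hidden, batch_first=True)

    def forward(self, frames):  # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))  # (b*t, C, 8, 8)
        feats = feats.flatten(1).view(b, t, -1)      # (b, t, C*64)
        fused, _ = self.gru(feats)                   # temporal context
        return fused[:, -1]  # fused features for the last frame

fused = RecurrentFeatureFusion()(torch.randn(2, 4, 3, 128, 128))
\end{verbatim}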
\subsection{Recurrent Multi-frame Single Shot Detector for Video Object Detection [1]}
In \cite{b1} and \cite{b12}, Broad, Jones and Lee pursue the idea of designing a multi-frame video object detector by inserting a fusion layer into a classical Single Shot Detector (SSD) meta-architecture. Based on this idea, their research focuses on two main areas: on the one hand they investigate different fusion methods, and on the other hand they try several SSD meta-architectures [20-2.1.1; 20-3.6.1]. They test their approaches on the KITTI dataset \cite{b21} for autonomous driving, and their model improves upon a state-of-the-art SSD model by 2.7 \% mAP. Finally, they evaluate their best approach on the Caltech Pedestrians dataset \cite{b22} and find similar improvements of 5 \% in mAP. \newline SSDs in general consist of two main components. The first is a so-called feature extractor, which takes an image as input and outputs feature maps. The other component is a detection head, which creates bounding boxes and class probabilities out of the feature maps. [1-3] Broad, Jones and Lee insert a fusion layer in between those two components. As fusion techniques they test simple element-wise operations (e.g. add or max), simple concatenations of feature maps, and recurrent layers [1-3]. Their experiments show that the recurrent layers perform best because, in contrast to the other two methods, they add new parameters to the network for learning temporal context. In addition, they observe that recurrent units do not slow down the computational speed significantly (only 5 fps slower than the baseline SSD) [1-4.1]. As a recurrent unit they use the GRU, because they observe that the results are similar to those obtained with LSTMs, but GRUs are faster to train [1-3.1.1]. Their final architecture is shown in Figure 1.
\begin{figure} [h] \includegraphics[width=\columnwidth]{RMFSSD} \caption{Architecture of the Recurrent Multi-frame Single Shot Detector [1]. The feature extractor generates feature maps for four frames and feeds those feature maps into the GRUs (Gated Recurrent Units). The detection head uses the final feature maps, which are created by the GRU with respect to the temporal context, to create bounding boxes and class probabilities.} \end{figure}
In addition, Broad, Jones and Lee test different types of SSDs as a baseline for their architecture. For all baseline SSDs the mAP is higher in comparison to the non-recurrent models; the mAP increases by 1.4 to 4.4 percentage points on the KITTI dataset. The best mAP is achieved using SqueezeDet+ \cite{b23} as the baseline SSD network [1-4.2]. \newline Finally, Broad, Jones and Lee use the Caltech Pedestrians dataset to explore the effect of the number of prior frames and the frame rate. They compare the single-frame SSD model, the RMf-SSD using the three prior frames, and the RMf-SSD model using the frames t-2, t-4 and t-6. The improvement is 3 percentage points higher when using the frames t-2, t-4 and t-6 in comparison to using the three prior frames [1-4.4]. \newline There is little information on the training of the Recurrent Multi-frame Single Shot Detector. They use the SqueezeDet training strategies [23-3.3] and a pre-trained version of the baseline SSD. Finally, they use the SqueezeDet fine-tuning strategy to train the whole network afterwards [12-2]. \newline Overall, Broad, Jones and Lee show three main things. First, they show that including the temporal context improves single-frame SSD methods.
Moreover, they show that recurrent units are a good approach to add the temporal context, and finally they point out that long-term temporal context (frames t-2, t-4, t-6) seems to be more important than short-term temporal context. Finally, they reach an mAP of 83 \% on the KITTI dataset [1-4.2] and a miss rate of 29 \% on the Caltech Pedestrians dataset [1-4.1]. \newline This architecture is comparable to the architectures described in 2-B and 2-D. The main differences are that 2-B uses more than one recurrent unit and 2-D uses different feature extractors for different frames. But all of them feed feature maps of different frames into recurrent units.
\subsection{Mobile Video Object Detection with Temporally Aware Feature Maps [2]}
In \cite{b2}, Liu and Zhu pursue the goal of designing a video object detection architecture which can run in real time (15 fps) on low-powered mobile and embedded devices. The key of their method is to combine convolutional layers with convolutional LSTMs. They investigate the benefit of adding an LSTM to the baseline SSD, different types of recurrent layers (LSTMs, GRUs and bottleneck LSTMs), different dimensions of their bottleneck LSTMs, and different LSTM placement strategies (single LSTM placement and multiple LSTM placement). They test their model on the ImageNet VID 2015 dataset \cite{b35} and reach an mAP of 54.4 \% while running at 15 fps. \newline In the beginning they simply add one LSTM to their baseline SSD architecture, MobileNet \cite{b24}. They observe that adding the LSTM improves the mAP in comparison to their baseline SSD architecture. Moreover, they find that the greatest improvement comes from adding the LSTM after the 13th convolutional layer of the SSD. [2-4.2] \newline Afterwards, they compare LSTMs, GRUs and bottleneck LSTMs by placing them after the 13th convolutional layer. Bottleneck LSTMs were designed by Liu and Zhu to increase the efficiency of LSTMs. They use convolutions with ReLU as the activation function and feed a bottleneck feature map with fewer input channels than the original feature map into the LSTM. They come to the conclusion that bottleneck LSTMs are more efficient than LSTMs and, in the case of convolutional kernels larger than 1x1, even more efficient than GRUs while attaining comparable performance. [2-3.4; 2-4.2] \newline In addition to the bottleneck LSTMs, Liu and Zhu extend their network with multipliers. They use $\alpha_{base}$, $\alpha_{ssd}$ and $\alpha_{lstm}$ as multipliers to scale the channel dimension of each layer. They find that the accuracy of the model remains near constant down to $\alpha_{lstm}= 0.25 \alpha$. For the other multipliers they use the values $\alpha_{base} = \alpha$ and $\alpha_{ssd} = 0.5 \alpha$. [2-3.3; 2-4.2] \newline As shown in Fig. 2, the final model uses LSTMs after each convolutional layer, because Liu and Zhu observe a slight performance improvement by adding LSTMs after every feature map and nearly no change in computational cost [2-4.2].
\begin{figure} [h] \includegraphics[width=\columnwidth]{Liu_Zhu} \caption{Architecture of Mobile Video Object Detection with temporally aware feature maps \cite{b2}. First, 13 convolutional layers create feature maps. Those feature maps are fed into the convolutional bottleneck LSTMs (faster LSTMs which use convolutions). Afterwards, convolutional layers create new feature maps and feed them again into bottleneck LSTMs.} \end{figure}
On the training strategy and the loss function, Liu and Zhu do not provide any information.
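The following is a hedged sketch of such a convolutional bottleneck LSTM cell: the concatenated input and hidden state are first squeezed to fewer channels, and all gates are computed with convolutions. The gate layout and channel counts are illustrative assumptions, and standard sigmoid/tanh gating is used rather than the exact ReLU-based cell of \cite{b2}.
\begin{verbatim}
import torch
import torch.nn as nn

class BottleneckConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hidden_ch, bottleneck_ch):
        super().__init__()
        # Squeeze input + hidden state to a cheap bottleneck first.
        self.squeeze = nn.Conv2d(in_ch + hidden_ch, bottleneck_ch, 1)
        # One convolution produces all four gates at once.
        self.gates = nn.Conv2d(bottleneck_ch, 4 * hidden_ch, 3, padding=1)

    def forward(self, x, h, c):
        b = torch.relu(self.squeeze(torch.cat([x, h], dim=1)))
        i, f, o, g = torch.chunk(self.gates(b), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = BottleneckConvLSTMCell(in_ch=64, hidden_ch=32, bottleneck_ch=16)
h = torch.zeros(1, 32, 20, 20)
c = torch.zeros(1, 32, 20, 20)
h, c = cell(torch.randn(1, 64, 20, 20), h, c)
\end{verbatim}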
Liu and Zhu confirm the observation from \cite{b1} that adding temporal context information with the help of recurrent units improves the detection quality of the baseline SSDs. Moreover, they show that architectures using recurrent units can still perform in real time (15 fps), even on mobile devices. In comparison to the MobileNet-SSD model they improve the mAP by 4.1 percentage points and perform faster on a Pixel 2 phone. Overall they reach an mAP of 54.4 \% on ImageNet VID. \newline This architecture can especially be compared to the model described in "Looking Fast and Slow", because both of them process on a mobile device and are feature-based architectures. But "Looking Fast and Slow" differs because it does not treat every frame in the same way and proceeds with different feature extractors on different frames.
\subsection{Feature Selective Small Object Detection via Knowledge-based recurrent attentive neural networks \cite{b6}}
[I] The aim of this paper is to develop a video object detector for the purpose of autonomous driving. The network developed in this paper is termed the Knowledge-Based Recurrent Attentive Neural Network (KB-RANN). \newline The main contributions of the authors, Kai Yi et al., are:
\begin{itemize}
\item An attention mechanism that works like human cognition. The attention mechanism can detect the salient features which are important for the object detection problem and propagate them forward. This attention mechanism is based on previous work by Ashish Vaswani et al. \cite{b30}.
\item A domain and intuitive knowledge module which can use the knowledge about traffic signals to produce feature maps.
\item A model which has a good ability to transfer knowledge. The authors obtained good results by evaluating the KB-RANN model, trained on the KITTI dataset \cite{b21}, on the BTSD dataset \cite{b29}.
\end{itemize}
The KB-RANN model is tested on the KITTI and MS COCO17 \cite{b31} datasets, achieving 0.813 mAP and 0.578 mAP respectively on all classes of those datasets. The model performs better than several popular object detection algorithms like Faster R-CNN \cite{b15}, RetinaNet \cite{b32} and SqueezeDet \cite{b33}. The authors successfully compressed and accelerated their proposed model and deployed it to their own self-developed autonomous car. \newline
\begin{figure}[h] \includegraphics[width=\columnwidth]{KB-RANN-architecture} \caption{Architecture of the model proposed in Feature Selective Small Object Detection via Knowledge-based recurrent attentive neural networks} \end{figure}
[III-A] The authors use SqueezeNet\cite{b34} as the feature extractor, because SqueezeNet can provide an AlexNet\cite{b14} level of accuracy with 50 times fewer parameters. There are better performing models like VGGNet\cite{b45} and ResNet\cite{b40}, but these models are computationally expensive. They make a few changes to the SqueezeNet architecture, which include changing the kernel size and also fine-tuning the backend by adding two fire modules at the end. This modified SqueezeNet architecture is termed SqueezeNet+. \newline [III-B] An attention mechanism is used to find the saliency maps from the deep feature maps obtained from SqueezeNet. The attention module takes the input tensor X and outputs the saliency map $\tilde{X\textsubscript{t}}$.
\newline [III-B] An LSTM \cite{b18} is used as a memory mechanism in order to find long-term dependencies between different frames. The characteristics of attention and LSTM are exploited together by fusing them with each other; this fused module is termed RANN. As saliency feature extraction is applied on the original deep feature maps, it is possible that some of the information is lost. Multiple RANNs are cascaded together into a chain to minimize the effect of feature information loss. Moreover, the output H\textsubscript{t} of each LSTM is concatenated with the original feature map and used as an input for the next attention module to refine the saliency feature extraction process. \newline [III-C] Lastly, the authors also add a domain and intuitive knowledge module to the KB-RANN architecture. It is assumed that traffic sign detection is one of the most important aspects of autonomous driving. The major focus of attention of a driver is at the center of vision, and traffic lights are always located at an offset from that central region of people's gaze. With these assumptions in place, domain knowledge about traffic signals is learned from the data itself by constraining the distribution to a 2D Gaussian function and learning the mean and covariance matrices from the data. A reverse Gaussian distribution is computed from the learned Gaussian distribution. The feature maps obtained from this reverse Gaussian distribution are concatenated with the feature maps from RANN.
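A rough sketch of such a knowledge prior is given below: a 2D Gaussian over image coordinates with a learnable mean and (here diagonal) covariance, whose "reverse" map up-weights off-center regions where traffic signs tend to appear. The parameterization details are assumptions for illustration, not the exact published module.
\begin{verbatim}
import torch
import torch.nn as nn

class ReverseGaussianPrior(nn.Module):
    def __init__(self, height, width):
        super().__init__()
        ys, xs = torch.meshgrid(
            torch.linspace(0, 1, height),
            torch.linspace(0, 1, width), indexing="ij")
        self.register_buffer("coords", torch.stack([ys, xs], dim=-1))
        self.mean = nn.Parameter(torch.tensor([0.5, 0.5]))
        self.log_std = nn.Parameter(torch.zeros(2))  # diagonal covariance

    def forward(self):
        d = (self.coords - self.mean) / self.log_std.exp()
        gauss = torch.exp(-0.5 * (d ** 2).sum(-1))  # peak at gaze center
        return 1.0 - gauss / gauss.max()            # reverse: high off-center

prior_map = ReverseGaussianPrior(32, 32)()  # (32, 32) map to concatenate
\end{verbatim}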
\newline [IV-A,B,C] For training and evaluation of the model, the authors use the KITTI and MS COCO17 datasets. They compare the results with other popular object detection models. On the KITTI dataset the KB-RANN achieves an mAP of 0.813, compared to 0.763 for SqueezeDet, 0.702 for Faster R-CNN and 0.601 for RetinaNet. The frames per second (FPS) at which KB-RANN operates is higher than that of the compared models. In order to quantify the accuracy gain from the attention mechanism and the knowledge module, the authors also train and evaluate different architectures, namely KB-RCNN, in which the attention mechanism is replaced with convolutional layers, and RANN, which does not have the knowledge-based module. KB-RANN achieves better accuracy than KB-RCNN and RANN on the KITTI and MS COCO17 datasets, although the FPS achieved by KB-RCNN, due to its convolutional nature, is higher. Lastly, a KB-RANN model with parameters trained on the KITTI dataset is also tested on the BTSD dataset to demonstrate the knowledge transfer capabilities.
\subsection{Looking fast and slow: memory-guided mobile video object detection \cite{b7}}
[1] Mason Liu et al. aim to replicate the capability of a human visual system of obtaining the "gist" of a scene. Using this sparse information, and amalgamating it with more thorough information, a human visual system can effectively detect objects in its field of vision. In light of this, the main contributions of this paper are:
\begin{itemize}
\item The introduction of multiple feature extractors. Some of those feature extractors are very lightweight but can provide a "gist" of the frame, while others can provide a more accurate representation of a frame but at the cost of performance.
\item A memory module which can fuse the outputs of those feature extractors.
\item An adaptive interleaving policy which uses reinforcement learning to decide which feature extractor should be executed.
\item The capability of executing multiple feature extractors asynchronously, so that the lightweight feature extractors do not have to wait for the more expensive feature extractors.
\end{itemize}
The model is evaluated on the ImageNet VID 2015 dataset \cite{b35}, which is augmented with extra data from the ImageNet DET \cite{b35} and MS COCO17 \cite{b31} datasets. The model manages to achieve an mAP of 0.593 at 72.3 FPS on a Pixel 3 mobile device. \newline
\begin{figure}[h] \includegraphics[width=\columnwidth]{looking-fast-and-slow-architecture} \caption{The memory-guided interleaved architecture of \cite{b7}. The frames are passed through the feature extractors to obtain feature maps. The blue MobileNetV2 feature extractors are the lightweight ones, while the red one is computationally more expensive.} \end{figure}
[3-3.1] Let \textit{m} be the number of feature extractors. For the implementation of this paper, the authors constrain $\textit{m}=2$, and let f\textsubscript{1} be the feature extractor which is computationally more expensive, while f\textsubscript{2} is the cheaper one. Both feature extractors use the MobileNetV2 architecture \cite{b36}. f\textsubscript{1} uses a depth multiplier of 1.4 with a $320\times320$ input resolution, while f\textsubscript{2} uses a depth multiplier of 0.35 with a lower $160\times160$ input resolution. \newline [3-3.2] A modified LSTM \cite{b18} is used as a memory mechanism to preserve long-term dependencies. The modifications to the LSTM are responsible for making the memory mechanism faster and also better at preserving long-term dependencies. For a faster memory mechanism, the authors make three modifications. They introduce bottlenecking and add a skip connection between the bottleneck and the output. Lastly, the LSTM states are grouped together and convolutions are applied on each group separately; the resultant states are then concatenated to obtain the final states. The grouped convolutions provide a speed-up. To improve the preservation of long-term dependencies, the LSTM states are only updated when f\textsubscript{1} is run and not f\textsubscript{2}, as feature maps obtained from f\textsubscript{2} are of lower quality compared to f\textsubscript{1}, and updating on them could result in a loss of important state information in the LSTM. \newline SSD-style \cite{b17} detection is applied on the refined feature maps obtained from the LSTM for classification and obtaining bounding boxes. \newline
\begin{figure}[h] \centering \includegraphics[width=0.33\columnwidth]{interleaved-policy} \caption{The interleaving policy is based on the LSTM states} \end{figure}
[3-3.4] The interleaving policy, which defines the feature extractor that should be used next, is based on reinforcement learning. The state space of the reinforcement learning policy network $\pi$ consists of the LSTM states $c\textsubscript{t}$ and $h\textsubscript{t}$, the differences between the states at different time steps, i.e. $c\textsubscript{t}-c\textsubscript{t-1}$ and $h\textsubscript{t}-h\textsubscript{t-1}$, and lastly the action history $\eta\textsubscript{t}$. The action space has length \textit{m}, and the action \textit{a} means that the feature extractor f\textsubscript{\textit{a}} should be run. \newline [3-3.5] The authors observe that, despite the employment of the interleaving policy, real-time detection is limited by the execution of the expensive f\textsubscript{1} feature extractor. They introduce an asynchronous framework for running the feature extractors f\textsubscript{1} and f\textsubscript{2} in parallel. During testing, this asynchronous framework provides better results.
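A much-simplified, fixed-interval version of the interleaving idea is sketched below; the paper instead learns when to run f\textsubscript{1} with the RL policy, which this sketch does not reproduce, and the callable signatures are assumptions.
\begin{verbatim}
def detect_video(frames, f1, f2, memory, detect_head, interval=10):
    """Interleave a heavy extractor f1 with a light extractor f2.

    memory: assumed callable (features, state) -> (refined, new_state).
    The memory state is only refreshed when f1 runs, mirroring the
    rule that low-quality f2 features must not overwrite the LSTM state.
    """
    state, outputs = None, []
    for t, frame in enumerate(frames):
        if t % interval == 0:
            refined, state = memory(f1(frame), state)  # update memory
        else:
            refined, _ = memory(f2(frame), state)      # state kept frozen
        outputs.append(detect_head(refined))
    return outputs
\end{verbatim}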
\newline [4-4.1; 4-4.2] As mentioned before, the authors use the ImageNet VID 2015 dataset for training and evaluation, which has 30 classes in total, along with the addition of extra data from ImageNet DET and MS COCO17, but this extra data is limited to the classes contained within ImageNet VID. All results are reported using a Pixel 3 mobile device. The results are compared with a baseline single-frame detection model, MobilenetV2-SSDLite (mAP: 0.420, FPS: 14.4), an LSTM-based model, MobilenetV2-SSDLite+LSTM (mAP: 0.451, FPS: 14.6), and the state-of-the-art mobile video object detection model of Zhu et al. (mAP: 0.602, FPS: 25.6) \cite{b37}. The proposed model manages to achieve an mAP of 0.593 at 72.3 FPS. The authors also perform evaluations using slight variations of the proposed architecture, i.e. non-interleaved, interleaved only, interleaved+async and interleaved+adaptive+async, in order to test the significance of the different components of the architecture on the end results. The interleaved+adaptive+async variant provides the most balanced end result.
\subsection{Detect to Track and Track to Detect \cite{b8}}
[1] In this paper, Christoph Feichtenhofer, Axel Pinz and Andrew Zisserman aim to perform detection and tracking simultaneously using a fully convolutional network by inferring a "tracklet" over multiple frames. This paper is based on the R-FCN\cite{b19} model and extends it to multiple frames and tracking. The main contributions of this paper are:
\begin{itemize}
\item Computing correlation maps between the feature maps of adjacent frames. These correlation maps are used to estimate the local displacement between frames.
\item Using an ROI-tracking layer to regress bounding boxes over multiple frames.
\item Linking the detections based on tracklets to infer long-term tubes of objects over the course of the video.
\end{itemize}
The proposed model is trained and evaluated on the ImageNet VID dataset \cite{b35}, consisting of 30 classes. The model achieves an overall mAP of 0.82. \newline
\begin{figure}[h] \includegraphics[width=\columnwidth]{D_T-architecture} \caption{Architecture of the model proposed in Detect to Track and Track to Detect} \end{figure}
[3-3.2] The R-FCN\cite{b19} detection process consists of two stages: first, a Region Proposal Network (RPN)\cite{b38} is used to find the candidate regions of interest (ROIs), and then an ROI pooling layer\cite{b39} is used to perform classification of the regions into classes or the background. The input to the ROI pooling layer comes from a convolutional layer put in place at the end of the ResNet-101 feature extractor\cite{b40} with output $x^t_{cls}$. The ROI layer outputs position-sensitive feature maps, and using a softmax layer these position-sensitive feature maps can be converted to class probabilities for each ROI. On a second branch, R-FCN places a convolutional layer at the end of the ResNet-101 feature extractor which outputs $x^t_{reg}$, and this output is again used as an input to an ROI pooling layer which generates the bounding boxes.
\newline [3-3.3] The objective function is given hereby: \begin{equation*} \begin{aligned} L(\{p_{i}\},\{b_{i}\},\{\Delta _{i}\}) = \frac{1}{N}\sum_{i=1}^{N}L_{cls}(p_{i,c^{*}}) \\+ \lambda \frac{1}{N_{fg}}\sum_{i=1}^{N}[c_{i}^{*}>0]L_{reg}(b_{i},b_{i}^{*})\\+ \lambda \frac{1}{N_{tra}}\sum_{i=1}^{N_{tra}}L_{tra}(\Delta _{i}^{t+\tau}, \Delta _{i}^{*, t+\tau}) \end{aligned} \end{equation*} In short, the first term contains $L_{cls}$, the classification cross-entropy loss, the second term contains $L_{reg}$, the bounding-box regression loss, and lastly $L_{tra}$, in the last term, is the tracking loss across two frames. All of the loss terms are normalized. \newline [3-3.4] For each pair of adjacent frames I\textsuperscript{t}, I\textsuperscript{t+$\tau$}, a bounding box regression layer is introduced that performs position-sensitive ROI pooling on the concatenation of the bounding box regression features $x^t_{reg}$, $x^{t+\tau}_{reg}$, which are also stacked with correlation maps, to perform bounding box transformation regression between frames. The correlation maps between the two frames are obtained by computing the correlation between the feature maps of the two frames. Computing correlations between all pairs of features in the feature maps would result in an explosion of dimensionality, so the correlation maps are limited to local neighborhoods. As mentioned before, the correlation maps are stacked with the bounding box feature maps.
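The following small sketch computes such a local correlation between the feature maps of two adjacent frames, restricted to a $(2d{+}1)\times(2d{+}1)$ neighborhood; it is implemented with \texttt{unfold} for clarity rather than speed, and the normalization by channel count is an assumption.
\begin{verbatim}
import torch
import torch.nn.functional as F

def local_correlation(feat_t, feat_tp1, d=4):
    """feat_*: (B, C, H, W); returns (B, (2d+1)**2, H, W)."""
    B, C, H, W = feat_t.shape
    k = 2 * d + 1
    # Gather all displaced neighborhoods of frame t+tau per location.
    patches = F.unfold(feat_tp1, kernel_size=k, padding=d)  # (B, C*k*k, H*W)
    patches = patches.view(B, C, k * k, H, W)
    # Dot product between the center feature and each displaced feature.
    return (feat_t.unsqueeze(2) * patches).sum(dim=1) / C

corr = local_correlation(torch.randn(1, 64, 20, 20),
                         torch.randn(1, 64, 20, 20))
\end{verbatim}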
\newline [4] In practical use, processing all frames of a video is not the most efficient way of detection and tracking, as a lot of information between adjacent frames is redundant; also, due to GPU memory and computational restrictions, only a certain number of frames can be processed on the GPU at the same time. Due to this, the authors employ a technique similar to action localization\cite{b41} to obtain an optimal path through a video. \newline [5-5.1; 5-5.2; 5-5.3] As mentioned above, the proposed model is trained and evaluated on the ImageNet VID dataset. A comparison is made with the R-FCN detector (mAP: 0.742), which the proposed model is based on, the ILSVRC 2015\cite{b42} winner (mAP: 0.738) and the ILSVRC 2016\cite{b43} winner (mAP: 0.762). DT performs the best with an mAP of 0.82. Slight variations of the DT model are also evaluated: firstly, using different feature extractor backbones, ResNet-50 (mAP: 0.767), ResNet-101 (mAP: 0.80) and Inception-v4\cite{b44} (mAP: 0.821), and secondly, using the ResNet-101 backbone but a temporal sampling rate $\tau$ of 10 (mAP: 0.786).
\section{Box-Level-based Video Object Detection}
\begin{figure}[h] \includegraphics[width=\columnwidth]{box-level-basic} \caption{Box-Level-based Video Object Detection} \end{figure}
In box-level methods, bounding boxes and class probabilities are fed into the network and are refined temporally and/or spatially.
\subsection{Context Matters: Refining Object Detection in Video with Recurrent Neural Networks \cite{b4}}
In \cite{b4}, the architecture of Tripathi, Lipton, Belongie and Nguyen consists of two parts: a pseudo-labeler, which assigns labels to all video frames, and a recurrent unit, which refines those pseudo-labels by using the contextual information. Moreover, they describe a training strategy for their architecture and compare their approach to other models on the YouTube-Objects dataset (v2.0) \cite{b25}, which consists of the ten categories airplane, bird, boat, car, cow, dog, horse, mbike and train. Their model reaches an mAP of 68.73 percent, which improves on the strongest image-based baseline for the YouTube-Objects dataset by 7.1\% [4-Abstract]. \newline The final architecture can be found in \textbf{Fig 10}. Tripathi, Lipton, Belongie and Nguyen first train a pretrained YOLO object detection network \cite{b20} as a pseudo-labeler on the YouTube-Objects dataset. As specified in YOLO, they minimize the weighted squared detection loss and optimize classification and localization error simultaneously [4-3].
\begin{figure} [h] \includegraphics[width=\columnwidth]{ContextMatters} \caption{Architecture of Context Matters: Refining Object Detection in Video with Recurrent Neural Networks \cite{b4}. First the "pseudo-labeler" creates bounding boxes and class probabilities for every input frame. Afterwards the GRU (Gated Recurrent Unit) fuses the output of the current and some past frames and refines the bounding boxes and class probabilities.} \end{figure}
After training the pseudo-labeler, they train the Recurrent Neural Network, which takes the pseudo-labels as input and outputs improved predictions. The RNN consists of two GRU layers [4-2]. \newline For training the whole network they use the following loss function to take both accuracy at the target frame and consistency of predictions across adjacent time steps into consideration. They choose the values $\alpha = 0.2$, $\beta = 0.2$ and $\gamma = 0.1$ based on the detection performance on the validation set [4-2.1]: \newline $ loss = d_{loss} + \alpha \cdot s_{loss} + \beta \cdot c_{loss} + \gamma \cdot pc_{loss} $ \newline Tripathi, Lipton, Belongie and Nguyen use the object detection loss ($d_{loss}$) as described in YOLO [26, 4-2.1.1]. The similarity loss ($s_{loss}$) considers the dissimilarity between the pseudo-labels and the prediction at each frame t [4-2.1.2], the category loss ($c_{loss}$) takes wrong class probabilities into consideration [4-2.1.3], and the prediction-consistency loss ($pc_{loss}$) regularizes the model by encouraging smooth predictions across the time steps [4-2.1.4]. \newline During the evaluation they find two possible areas of improvement for their approach. On the one hand, the RNN is not able to recover from wrong predictions made by the pseudo-labeler after they have been fed into the RNN. This is a general disadvantage of box-level methods in comparison to feature-level methods. On the other hand, they observe that their network is not robust to motion [4-3.4]. \newline Tripathi, Lipton, Belongie and Nguyen test their model on the YouTube-Objects dataset. Overall, they outperform the best non-recurrent architecture (DA Yolo) in their comparison by 7\% mAP [4-3.1]. This again confirms the observation that adding recurrency to integrate temporal context improves the detection quality. \newline The architecture presented in "Context Matters: Refining Object Detection in Video with Recurrent Neural Networks" is quite similar to the one used in \cite{b5}. Both use the YOLO network architecture as a baseline and feed its output into a recurrent unit. The main difference is that \cite{b5} feeds the bounding boxes and, in addition, some visual features into the recurrent unit.
\subsection{Optimizing Video Object Detection via a Scale-Time Lattice\cite{b10}}
[1] The aim of this paper is to propose an architecture which is balanced and flexible enough to allow prioritisation of accuracy or performance with minimal effort. The primary contributions of this paper, published by Kai Chen et al.,
are:
\begin{itemize}
\item The Scale-Time Lattice, which provides a rich design space.
\item A detection framework which provides a good accuracy-performance trade-off.
\item A novel keyframe extraction policy, which is based on the ease of detection.
\end{itemize}
\begin{figure}[h] \centering \includegraphics[width=\columnwidth]{scale-time-lattice} \caption{The Scale-Time Lattice has an efficient design for reaching a balance between performance and accuracy} \end{figure}
[3] The Scale-Time Lattice allows coarse detection, both temporally and spatially, and then uses temporal propagation and spatial refinement to go from coarse to fine detection. \newline
\begin{figure}[h] \centering \includegraphics[width=0.5\columnwidth]{pru} \caption{The Propagation and Refinement Units (PRUs) are the constituent elements which the Scale-Time Lattice is made up of} \end{figure}
[4-4.1] The Scale-Time Lattice is composed of structures which connect together to perform the temporal and spatial operations. These structures are called Propagation and Refinement Units (PRUs). PRUs work on the basis of two operators, $F_\tau$ and $F_S$, and by carefully allocating resources to the two operators, a balance between detection accuracy and performance can be achieved. Assuming that two frames are sampled from a video at times $t$ and $t+2\tau$, both at scale $s$, the $F_\tau$ operator tries to model the movement of the bounding boxes from time $t$ to $t+\tau$ and from time $t+2\tau$ to $t+\tau$, irrespective of the scale offset between the scale $s$ and the ground truth frame scale $s+1$. The modeling of the bounding box offset from scale $s$ to $s+1$ is the task of the $F_S$ operator. The figure depicting the PRU gives more information on this. \newline [4-4.1] The objective function is given hereby: \begin{equation*} \begin{aligned} L(\Delta_{F_{\tau}} ,\Delta_{F_{S}} ,\Delta_{F_{\tau}}^{*} ,\Delta_{F_{S}}^{*} ) = \\ \frac{1}{N}\sum_{j=1}^{N}L_{F_{\tau}}(\Delta_{F_{\tau}}^{j} ,\Delta_{F_{\tau}}^{*j}) + \\ \lambda \frac{1}{N}\sum_{j=1}^{N}L_{F_{S}}(\Delta_{F_{S}}^{j} ,\Delta_{F_{S}}^{*j}) \end{aligned} \end{equation*} Here, N is the number of bounding boxes in the batch, while $\Delta_{F_{\tau}}$ and $\Delta_{F_{S}}$ are the network outputs of $F_{\tau}$ and $F_{S}$. $L_{F_{\tau}}$ and $L_{F_{S}}$ are the losses of the temporal propagation and spatial refinement networks, respectively. \newline [4-4.2] The most straightforward keyframe extraction policy is a uniform one, i.e. to take keyframes from the video at uniform intervals, but an intelligent keyframe extraction policy can be used to increase the accuracy of the results. To that effect, the authors introduce an ease-of-detection coefficient e. If, during detection, the value of e falls beneath a certain threshold, the sample rate of the frames from the video is increased. This can happen if there are a lot of objects on the screen or the objects are moving too quickly.
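A toy version of such an adaptive keyframe policy is sketched below; the scalar ease-of-detection callable, the step sizes and the threshold are illustrative assumptions, not the exact published scheme.
\begin{verbatim}
def select_keyframes(num_frames, ease_of,
                     base_step=8, dense_step=2, thresh=0.5):
    """ease_of(t) -> float in [0, 1]; higher means easier to detect."""
    keyframes, t = [], 0
    while t < num_frames:
        keyframes.append(t)
        # Sample more densely when detection at frame t looks hard.
        t += dense_step if ease_of(t) < thresh else base_step
    return keyframes

# Example with a hypothetical difficulty spike mid-clip.
ks = select_keyframes(100, lambda t: 0.2 if 40 <= t <= 60 else 0.9)
\end{verbatim}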
\newline [5-5.2] The proposed model has an mAP of 0.79 at 60 FPS on the ImageNet VID dataset. \newline
\subsection{Spatially Supervised Recurrent Convolutional Neural Networks for Visual Object Tracking \cite{b5}}
In \cite{b5}, Ning, Zhang, Huang, He, Ren and Wang design a combination of box-level and feature-level based video detectors. They use the YOLO\cite{b26} network to create high-level visual features and preliminary location inferences, and feed both into a recurrent unit. They test their approach on the OTB-30 dataset \cite{b28}, compare it with 9 different state-of-the-art trackers, and outperform all of them. Their architecture, called ROLO, is shown in Figure 11. [5-Abstract; 5-4]
\begin{figure} [h] \includegraphics[width=\columnwidth]{ROLO} \caption{Architecture of ROLO \cite{b5}. The architecture feeds feature representations made by the YOLO network, as well as bounding boxes also made by the YOLO network, into a recurrent unit. The recurrent unit uses the temporal context of those inputs to finally output bounding boxes and class probabilities.} \end{figure}
The architecture consists of two main parts: the YOLO\cite{b26} network, which collects visual features and also outputs preliminary location inferences, and the LSTM, which is used as a tracking module [5-2]. \newline They use three phases to train their model: the pre-training phase of convolutional layers for feature learning, the traditional YOLO training phase for object proposal, and the LSTM training phase for object tracking. [5-3] \newline In the pre-training phase, convolutional layers which create feature maps are pretrained on ImageNet data. Afterwards, the YOLO architecture is adopted as the detection module and trained using the traditional YOLO training phase. Lastly, they add LSTMs for tracking. The LSTMs are fed with the feature representations from the convolutional layers, the bounding boxes from the detection module, and the output states of the LSTM from the previous time step. For training they use the Mean Squared Error (MSE) [5-3]: \newline \[ L_{MSE} = \dfrac{1}{n} \sum_{i = 1}^n ||B_{target}-B_{pred}||_{2}^2 \] \newline As an alternative, Ning, Zhang, Huang, He, Ren and Wang mention a heatmap as input for the LSTM. For this, the prediction and the visual features are concatenated before being fed to the LSTM. This is helpful for visualizing the intermediate results. [5-3.3] \newline Special to this architecture is that the LSTM performs regression in two folds: there is a regression within one frame, between the location inferences and the high-level features, and there is also regression over the different frames of the sequence, taking the temporal context into account [5-3.4].
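The ROLO-style input to the tracking LSTM, per-frame visual features concatenated with the preliminary box prediction, can be sketched compactly as follows; the dimensions and the single linear box head are illustrative assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class RoloTracker(nn.Module):
    def __init__(self, feat_dim=4096, hidden=512):
        super().__init__()
        # Visual features + 4 box coordinates are fed jointly.
        self.lstm = nn.LSTM(feat_dim + 4, hidden, batch_first=True)
        self.box_head = nn.Linear(hidden, 4)  # (x, y, w, h) regression

    def forward(self, feats, boxes):
        # feats: (B, T, feat_dim), boxes: (B, T, 4) from the detector.
        x = torch.cat([feats, boxes], dim=-1)
        h, _ = self.lstm(x)
        return self.box_head(h)  # refined box for every time step

out = RoloTracker()(torch.randn(2, 6, 4096), torch.rand(2, 6, 4))
\end{verbatim}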
\newline In 2015, Wu, Lim and Yang published a comparison of different online object trackers on their own dataset \cite{b28}. The best 9 architectures (STRUCK [47]; CXT [48]; OAB [49]; CSK [50]; VTD [51]; VTS [52]; LSK [53]; TLD [54]; RS [55]) were used by Ning, Zhang, Huang, He, Ren and Wang for comparison with their own architecture. They use 30 videos with different objects (e.g. car, human, skater, couple) out of the benchmark dataset \cite{b28}. They get the best tracking result in 13 videos and the second best in 3 videos. No other tracker reaches the best result in more than 4 videos. \newline This approach combines box-level and feature-level methods and reaches good results based on the combination of spatial regression between features and location proposals and temporal regression in the LSTM. The paper shows a potential way to combine the best of both methods (feature-based and box-level approaches) presented in the other papers.
\section{Flow-based Object Detection}
\subsection{Definition}
Another type of architecture for Video Object Detection, defined for example in \cite{b3}, comprises architectures which use flow networks to consider the temporal context. The flow network estimates the optical flow, which means it projects the location in the current frame back to an earlier frame.
\subsection{Deep Feature Flow for Video Recognition \cite{b3}}
Zhu, Xiong, Dai, Yuan and Wei develop in \cite{b3} a way to use flow nets to detect objects in video frames. As shown in Figure 12, their architecture consists of three main parts: a network to create visual features, a network to create class probabilities and bounding boxes (respectively the segmentation) out of feature maps, and a network which estimates the optical flow. They test their approach on the Cityscapes dataset \cite{b56}, with scenes for autonomous driving and ground truth for semantic segmentation, and the ImageNet VID dataset \cite{b35}, with ground truth for object detection.
\begin{figure} [h] \includegraphics[width=\columnwidth]{Flow} \caption{Architecture of Deep Feature Flow for Video Recognition \cite{b3}. The feature network $N_{feat}$ extracts visual features for the key frames. The $N_{task}$ network uses the visual features to create bounding boxes and class probabilities. For non-key frames, the flow network F estimates the feature maps based on an image sequence.} \end{figure}
The paper differentiates between key frames and other frames. The network to create visual features, $ N_{feat} $, only processes key frames. It is a fully convolutional network which takes an image as input and outputs feature maps. As the feature network they use a pretrained version of ResNet \cite{b40}. [3-3, 3-4] \newline The second network, $ N_{task} $, performs the recognition task over the feature maps. It runs on every frame. Zhu, Xiong, Dai, Yuan and Wei use an R-FCN \cite{b19} network for the recognition task. [3-3, 3-4] \newline For all non-key frames the feature maps are not constructed by the feature network. Instead, they are propagated by a flow network $ N_{flow} $. Zhu, Xiong, Dai, Yuan and Wei use a state-of-the-art CNN-based FlowNet architecture \cite{b57} as the flow network. The flow estimation is given by [3-3, 3-4]: \newline $ f_{i} = W (f_{k},M_{i->k}, S_{i->k})$ \newline $ f_{k} $ is the key frame's feature map, $ M_{i->k} $ is a two-dimensional flow field which projects the location of an object in the current frame back to its location in the key frame using bilinear interpolation, and $ S_{i->k} $ is a scale field [3-3]. \newline Zhu, Xiong, Dai, Yuan and Wei use a fixed key frame scheduling, and they see a potential improvement in changing the key frame policy. \newline Overall, Zhu, Xiong, Dai, Yuan and Wei show an alternative to recurrent neural networks for integrating the temporal context into object detection. They reach an mIoU of up to 69.2 percent on the Cityscapes dataset and an mAP of 73.9 percent on ImageNet VID. But those results are reached when processing at 5.6 and 4.05 fps, and both drop significantly at higher fps. Some of the recurrent papers in Sections 2 and 3 reach higher mAPs at higher fps.
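A hedged sketch of the warping function $W$ follows: bilinear sampling of the keyframe feature map at flow-displaced locations, followed by a per-channel scale map. The exact normalization conventions are assumptions of this sketch.
\begin{verbatim}
import torch
import torch.nn.functional as F

def warp_features(feat_key, flow, scale):
    """feat_key: (B,C,H,W); flow: (B,2,H,W) in pixels; scale: (B,C,H,W)."""
    B, C, H, W = feat_key.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W),
                            indexing="ij")
    base = torch.stack([xs, ys], dim=0).float().to(feat_key)  # (2,H,W)
    coords = base.unsqueeze(0) + flow           # flow-displaced positions
    # Normalize to [-1, 1] as required by grid_sample.
    coords[:, 0] = 2 * coords[:, 0] / (W - 1) - 1
    coords[:, 1] = 2 * coords[:, 1] / (H - 1) - 1
    grid = coords.permute(0, 2, 3, 1)           # (B,H,W,2), (x,y) order
    warped = F.grid_sample(feat_key, grid, align_corners=True)
    return warped * scale                       # apply the scale field

f_i = warp_features(torch.randn(1, 64, 32, 32),
                    torch.zeros(1, 2, 32, 32),
                    torch.ones(1, 64, 32, 32))
\end{verbatim}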
\section{Comparison of different approaches}
\subsection{KITTI Dataset}
\captionof{table}{Results on KITTI Dataset} \begin{tabular}{ | p{2cm} | p{2em}| p{2em} | p{4em} | p{5em} | } \hline Model & mAP & FPS & Machine & Architecture \\ \hline Recurrent \cite{b1} & 86.0 & 50 & Nvidia TITAN X & Feature-Level \\ \hline Feature Selective \cite{b6} & 81.3 & 30.8 & Nvidia TITAN X & Feature-Level \\ \hline \end{tabular} \newline
As seen in Table I, the architecture presented in "Recurrent Multi-frame Single Shot Detector for Video Object Detection" \cite{b1} outperforms the results of the paper "Feature Selective Small Object Detection via Knowledge-based Recurrent Attentive Neural Network" \cite{b6} with regard to both detection quality (mAP) and computational speed (FPS). \newline The main difference between those two architectures is that \cite{b6} performs on a single frame and uses LSTMs to include the spatial context within this frame. \cite{b1}, instead, processes a sequence of input frames and uses recurrent units to take the temporal context into consideration. \newline That leads to the hypothesis that performing on multiple frames is more beneficial than performing on only one frame, which would mean that temporal context is more important than spatial context.
\subsection{ImageNet Dataset}
\captionof{table}{Results on ImageNet Dataset} \begin{tabular}{ | p{2cm} | p{2em}| p{2em} | p{4em} | p{5em} | } \hline Model & mAP & FPS & Machine & Architecture \\ \hline DT \cite{b8} & 82.0 & 7 & Nvidia TITAN X & Feature-Level \\ \hline DT \cite{b8} & 78.5 & 55 & Nvidia TITAN X & Feature-Level \\ \hline Scale-Time Lattice \cite{b10} & 79.6 & 20 & Nvidia TITAN X & Box-Level \\ \hline Scale-Time Lattice \cite{b10} & 79 & 62 & Nvidia TITAN X & Box-Level \\ \hline DeepFeature Flow \cite{b3} & 73.9 & 3 & - & Flow-Based \\ \hline DeepFeature Flow \cite{b3} & 73.1 & 20.5 & - & Flow-Based \\ \hline Looking Fast and Slow \cite{b7} & 60.7 & 48.8 & Pixel Phone & Feature-Level \\ \hline Object Detection with Temporally-Aware \cite{b2} & 54.4 & 15 & Pixel Phone & Feature-Level \\ \hline \end{tabular} \newline
As seen in Table II, both "Detect to Track and Track to Detect" and "Optimizing Video Object Detection via a Scale-Time Lattice" reach better results than the other papers. Both of them perform on multiple frames in parallel. That leads to the advantage that both of them can use temporal context from the past and also from the future. This doubling of the temporal context information is probably one reason for their comparatively good performance. \newline In addition to the advantage of performing on multiple frames in parallel, "Optimizing Video Object Detection via a Scale-Time Lattice" does not only use the temporal context; it also takes different scales into consideration. This is a possible further reason for the good results. \newline The flow-based approach \cite{b3} has comparatively bad results on ImageNet. We only evaluated one flow-based paper, but the comparatively bad performance could be evidence that flow-based approaches offer little or no benefit in comparison to recurrent approaches.
\newline "Looking Fast and Slow: Memory-Guided Mobile Video Object Detection" and "Mobile Video Object Detection with Temporally-Aware Feature Maps" cannot be compared to the other 3 approaches because they are running on mobile devices, which have pretty less computational power than the GPUs which are used by the other papers. \subsection{Results on COCO Dataset} \captionof{table}{Results on COCO Dataset} \begin{tabular}{ | p{2cm} | p{2em}| p{2em} | p{4em} | p{5em} | } \hline Model & MAP & FPS & Machine & Architecture \\ \hline Feature Selective \cite{b6} & 57.8 & 37.5 & Nvidia TITAN X & Feature-Level \\ \hline \end{tabular} \subsection{Results on YouTube Dataset} \captionof{table}{Results on YT Dataset} \begin{tabular}{ | p{2cm} | p{2em}| p{2em} | p{4em} | p{5em} | } \hline Model & MAP & FPS & Machine & Architecture \\ \hline Context Matters \cite{b4} & 68.73 & - & - & Box-Level \\ \hline \end{tabular} \newline "Context Matters" is the only paper of those which we have done some research on, which uses the YouTube Dataset to test their architecture. Unfortunately the results cannot be directly compared to the other papers. Also Tripathi, Lipton, Belongie and Nguygen only compare their model with non-recurrent ones. \newline \subsection{Results on OTB Challenge Dataset} \captionof{table}{Results on OTB Challenge Dataset} \begin{tabular}{ | p{2cm} | p{3em}| p{2em} | p{4em} | p{4em} | } \hline Model & Success Rate & IoU & FPS & Machine \\ \hline Spatially Supervised \cite{b5} & 0.564 & 0.455 & 20/60 & Nvidia TITAN X \\ \hline \end{tabular} \newline Out of the paper which are mentioned in this paper only "Spatially Supervised Recurrent Convolutional Neural Networks for Visual Object Tracking" uses the OTB Challenge Dataset to evaluate their results. Unfortunately, they are only doing a comparison with non-recurrent approaches. \section{Outro} \subsection{Conclusion} The main conclusion of our research is that temporal context matters. All the papers came to the results that their models, which use more than one frame to detect objects outperform the models with similar baseline architecture which only proceed on single frames. \newline Moreover, we have seen that operating on multiple-frames at the same time is a beneficial approach. It doubles the amount of temporal context information, which leads to higher mAPs. \newline With respect to the computational speed we noticed that the recurrent units should not be too deep. And in addition working only on some keyframes can be beneficial to increase the speed. Therefor a good key-frame policy is needed. \newline For detection quality and also for computational speed it is beneficial to work on different scales. This enables us to use recurrency to take the spatial and the temporal context into consideration. \subsection{Further work} \begin{figure} [h] \includegraphics[width=\columnwidth]{Further_Work} \caption{The proposed architecture} \end{figure} The proposed architecture consists of a region proposal network \cite{b38} that is based on the N-Gram concept in Natural Language Processing. Given a window of N previous frames, the RPN proposes the regions where the object bounding boxes could be detected from within the next frame. The RPN (region proposal network) should be recurrent in nature for detecting the temporal dependencies and it is to be really light weight. The ROIs obtained from RPN are fed into the network to make the predictions. So rather than feeding the whole image, feed only the region proposals made by RPN. 
Lastly, affine transformations can be applied to the output bounding boxes to overlay them on the image.
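As a first step toward implementing this idea, the following sketch shows one possible way to structure the recurrent RPN in PyTorch. It is a minimal illustration only: the layer types and sizes (a single-convolution backbone, a GRU, nine anchors) are our own assumptions and are not prescribed by the architecture above.
\begin{verbatim}
import torch
import torch.nn as nn

class RecurrentRPN(nn.Module):
    """Minimal sketch of the proposed N-gram-style recurrent RPN.
    Layer choices and sizes are illustrative assumptions only."""

    def __init__(self, feat_dim=128, hidden_dim=64, num_anchors=9):
        super().__init__()
        # Lightweight per-frame feature extractor (stand-in for a real backbone).
        self.backbone = nn.Conv2d(3, feat_dim, kernel_size=3,
                                  stride=2, padding=1)
        # Recurrent unit aggregating temporal context over the N-frame window.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Heads predicting objectness scores and box regression deltas.
        self.objectness = nn.Linear(hidden_dim, num_anchors)
        self.box_deltas = nn.Linear(hidden_dim, 4 * num_anchors)

    def forward(self, frames):
        # frames: (batch, N, 3, H, W) -- the window of N previous frames.
        b, n = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1))    # (b*n, feat_dim, H', W')
        feats = feats.mean(dim=(2, 3)).view(b, n, -1)  # global pooling for brevity
        hidden, _ = self.rnn(feats)                    # temporal aggregation
        last = hidden[:, -1]                           # state after the N-th frame
        # Proposals for the *next* frame; only these ROIs go to the detector.
        return self.objectness(last), self.box_deltas(last)
\end{verbatim}
A full implementation would predict spatially anchored proposal maps instead of the global pooling used here; the sketch only shows how the window of N frames and the recurrent unit fit together.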
\section*{APPENDIX} \section{ADDENDUM TO SECTION~\ref{sec:methods}} \subsection{Problem~\eqref{eq:fair_PCA} May Not Be Well Defined}\label{app:not_well_defined} Let $n=2n'$, $\mathbf{x}_1=\mathbf{x}_2=\ldots=\mathbf{x}_{n'}=\mathbf{0}\in\mathbb{R}^2$, and $\mathbf{x}_{n'+1}, \ldots, \mathbf{x}_{2n'}\in\mathbb{R}^2$ be equidistantly spread on a circle with center~$\mathbf{0}$. Let $z_1=\ldots=z_{n'}=0$, $z_{n'+1}= \ldots =z_{2n'}=1$, and $k=1$. Any projection onto a 1-dimensional linear subspace maps $\mathbf{x}_1,\ldots,\mathbf{x}_{n'}$ to $\mathbf{0}$ and $\mathbf{x}_{n'+1}, \ldots, \mathbf{x}_{2n'}$ onto a line through $\mathbf{0}$ such that half the points of $\mathbf{x}_{n'+1}, \ldots, \mathbf{x}_{2n'}$ lie on one side of $\mathbf{0}$ and the other half lies on the other side of $\mathbf{0}$ (at most two of $\mathbf{x}_{n'+1}, \ldots, \mathbf{x}_{2n'}$ might map to $\mathbf{0}$). The function $h: \mathbb{R} \rightarrow\mathbb{R}$ with $h(x)=\mathds{1}[x\neq 0]$ (almost) perfectly predicts $z_i$ from the projected points, showing that the set $\mathcal{U}$ defined in \eqref{eq:fair_PCA} can be empty if we require $h(\mathbf{U}^\transpose\mathbf{x}_i)$ and $z_i$ to be independent for \emph{all} functions~$h$. The same example shows that $\mathcal{U}$ can be empty if we require $h(\mathbf{U}^\transpose\mathbf{x}_i)$ and $z_i$ to be \emph{uncorrelated} (rather than independent) for all functions~$h$. It also shows that $\mathcal{U}$ can be empty if we require $h(\mathbf{U}^\transpose\mathbf{x}_i)$ and $z_i$ to be independent for all \emph{linear} functions~$h$ (rather than all functions~$h$): for $h: \mathbb{R} \rightarrow\mathbb{R}$ with $h(x)=x$, $h(\mathbf{U}^\transpose\mathbf{x}_i)$ and $z_i$ are clearly dependent. This shows that we have to relax Problem~\eqref{eq:fair_PCA} in two ways in order to arrive at a well-defined problem. \begin{algorithm}[t!]
\caption{Fair PCA (for multiple demographic groups) }\label{alg:fair_PCA_multi_groups} \begin{algorithmic} \STATE {\bfseries Input:} data matrix $\mathbf{X}\in\mathbb{R}^{d\times n}$; demographic attributes~$z_i^{(1)},\ldots,z_i^{(m)}\in\{0,1\}$, $i\in[n]$, where $z_i^{(l)}$ encodes membership of the $i$-th datapoint in the $l$-th group; target dimension $k\in[d-m+1]$ \vspace{1mm} \STATE {\bfseries Output:} a solution $\mathbf{U}$ to the multi-group version of Problem~\eqref{eq:fair_PCA_relaxed} \begin{itemize}[leftmargin=*] \setlength{\itemsep}{-2pt} \item set $\mathbf{Z}\in\mathbb{R}^{n\times m}$ with the $l$-th column of $\mathbf{Z}$ equaling $(z_1^{(l)}-\bar{z}^{(l)},\ldots,z_n^{(l)}-\bar{z}^{(l)})^\transpose$ with $\bar{z}^{(l)}=\frac{1}{n} \sum_{i=1}^n z_i^{(l)}$ \item compute an orthonormal basis of the nullspace of $\mathbf{Z}^\transpose\mathbf{X}^\transpose$ and build matrix~$\mathbf{R}$ comprising the basis vectors as columns \item compute orthonormal eigenvectors, corresponding to the largest $k$ eigenvalues, of $\mathbf{R}^\transpose\mathbf{X}\Xb^\transpose\mathbf{R}$ and build matrix~$\mathbf{\Lambda}$ comprising the eigenvectors as columns \item return $\mathbf{U}=\mathbf{R}\mathbf{\Lambda}$ \end{itemize} \end{algorithmic} \end{algorithm} \section{ADDENDUM TO SECTION~\ref{sec:extensions}} \subsection{Fair PCA for Multiple Demographic Groups}\label{app:multiple_groups} In fair PCA for multiple groups we want to solve \begin{align}\label{eq:fair_PCA_multi_groups_app} \argmax_{\mathbf{U}\in\mathbb{R}^{d\times k}:\, \mathbf{U}^\transpose\mathbf{U}=\mathbf{I}_{k\times k}}\trace(\mathbf{U}^\transpose\mathbf{X}\Xb^\transpose\mathbf{U})\quad \text{subject to}\quad \mathbf{Z}^\transpose\mathbf{X}^\transpose\mathbf{U}=\mathbf{0}, \end{align} where $\mathbf{Z}\in\mathbb{R}^{n\times m}$ and the $l$-th column of $\mathbf{Z}$ equals $(z_1^{(l)}-\bar{z}^{(l)},\ldots,z_n^{(l)}-\bar{z}^{(l)})^\transpose$ with $\bar{z}^{(l)}=\frac{1}{n} \sum_{i=1}^n z_i^{(l)}$ and $z_i^{(l)}=1$ if $\mathbf{x}_i$ belongs to group~$l$ and $z_i^{(l)}=0$ otherwise. Assuming that no group is empty, the rank of $\mathbf{Z}$ is $m-1$ as $\sum_{l=1}^m \mathbf{Z}^{(l)}_i=0$, $i\in[n]$, and in any linear combination of $m-1$ columns of $\mathbf{Z}$ that equals zero, all coefficients must be zero. Hence, $\rank(\mathbf{Z}^\transpose\mathbf{X}^\transpose)\leq \rank(\mathbf{Z}^\transpose)=\rank(\mathbf{Z})=m-1$ and the nullspace of $\mathbf{Z}^\transpose\mathbf{X}^\transpose$ has dimension at least $d-m+1$. Let $\mathbf{R}\in\mathbb{R}^{d\times s}$ with $s\geq d-m+1$ comprise as columns an orthonormal basis of the nullspace of $\mathbf{Z}^\transpose\mathbf{X}^\transpose$. We can then substitute $\mathbf{U}=\mathbf{R}\mathbf{\Lambda}$ for $\mathbf{\Lambda}\in\mathbb{R}^{s\times k}$. The constraint $\mathbf{U}^\transpose\mathbf{U}=\mathbf{I}_{k\times k}$ becomes $\mathbf{\Lambda}^\transpose\mathbf{\Lambda}=\mathbf{I}_{k\times k}$, and the objective $\trace(\mathbf{U}^\transpose\mathbf{X}\Xb^\transpose\mathbf{U})$ becomes $\trace(\mathbf{\Lambda}^\transpose\mathbf{R}^\transpose\mathbf{X}\Xb^\transpose\mathbf{R}\mathbf{\Lambda})$. Hence, we can compute $\mathbf{\Lambda}$ by computing eigenvectors, corresponding to the largest $k$ eigenvalues, of $\mathbf{R}^\transpose\mathbf{X}\Xb^\transpose\mathbf{R}$. This requires $k\leq s$, which is guaranteed to hold for $k\leq d-m+1$.
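For concreteness, the following Python sketch mirrors these steps (centering the group indicators, computing the nullspace basis~$\mathbf{R}$, and taking the top-$k$ eigenvectors of $\mathbf{R}^\transpose\mathbf{X}\Xb^\transpose\mathbf{R}$). It is a minimal illustration using standard linear-algebra routines; the function and variable names are our own and it is not the released implementation.
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

def fair_pca_multi_group(X, G, k):
    """Minimal sketch: X is the (d, n) data matrix, G the (n, m) one-hot
    group-membership matrix, k the target dimension (k <= d - m + 1)."""
    Z = G - G.mean(axis=0)       # center each group-indicator column
    R = null_space(Z.T @ X.T)    # orthonormal basis of null(Z^T X^T)
    M = R.T @ X @ X.T @ R        # objective matrix R^T X X^T R
    _, V = np.linalg.eigh(M)     # eigh returns ascending eigenvalues
    Lam = V[:, -k:]              # eigenvectors of the k largest eigenvalues
    return R @ Lam               # U has orthonormal columns, Z^T X^T U = 0
\end{verbatim}
As with standard PCA, the dominant cost is the eigendecomposition; the nullspace computation amounts to a singular value decomposition of the $m\times d$ matrix $\mathbf{Z}^\transpose\mathbf{X}^\transpose$.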
If $m=2$, then the first and the second column of $\mathbf{Z}$ coincide up to multiplication by $-1$ and the nullspace of $\mathbf{Z}^\transpose\mathbf{X}^\transpose$ is the same as if we removed one of the two columns from $\mathbf{Z}$. This shows that for two groups, fair PCA as presented here is equivalent to fair PCA as presented in Section~\ref{sec:methods}. Finally, the interpretation of fair PCA provided in Section~\ref{sec:methods} also applies to the case of multiple groups: $\mathbf{Z}^\transpose\mathbf{X}^\transpose\mathbf{U} = \mathbf{0}$ is equivalent to \begin{align*} \frac{1}{|\{i:\mathbf{x}_i\in \text{group $l$}\}|}\sum_{i:\, \mathbf{x}_i\,\in \,\text{group $l$}} \mathbf{U}^\transpose\mathbf{x}_i = \frac{1}{|\{i:\mathbf{x}_i\notin \text{group $l$}\}|}\sum_{i:\, \mathbf{x}_i\,\notin\, \text{group $l$}} \mathbf{U}^\transpose\mathbf{x}_i,\quad l=1,\ldots,m, \end{align*} which in turn is equivalent to the projected data's group-conditional means coinciding for all groups. Hence, an analogous version of Proposition~\ref{prop:gaussian_data} holds true for multiple groups. The pseudo code of fair PCA for multiple demographic groups is provided in Algorithm~\ref{alg:fair_PCA_multi_groups}. The pseudo code of fair kernel PCA for multiple demographic groups is provided in Algorithm~\ref{alg:fair_PCA_kernelized}. \begin{algorithm}[t] \caption{Fair Kernel PCA (for multiple demographic groups) }\label{alg:fair_PCA_kernelized} \begin{algorithmic} \STATE {\bfseries Input:} kernel matrix $\mathbf{K}\in\mathbb{R}^{n\times n}$ with $\mathbf{K}_{ij}=k(\mathbf{x}_i,\mathbf{x}_j)$ for some kernel function~$k$; demographic~attributes $z_i^{(1)},\ldots,z_i^{(m)}\in\{0,1\}$, $i\in[n]$, where $z_i^{(l)}$ encodes membership of $\mathbf{x}_i$ in the $l$-th group; target dimension~$k\in[n-m+1]$; \emph{optional:} kernel matrix $\mathbf{\hat{K}}\in\mathbb{R}^{n\times n'}$ with $\mathbf{\hat{K}}_{ij}=k(\mathbf{x}_i,\mathbf{x}'_j)$, $i\in[n],j\in[n']$, for test data~$\mathbf{x}'_1,\ldots,\mathbf{x}'_{n'}$ \vspace{1mm} \STATE {\bfseries Output:} $k$-dimensional representation of the training data~$\mathbf{x}_1,\ldots,\mathbf{x}_n$; \emph{optional:} $k$-dimensional representation of the test data~$\mathbf{x}'_1,\ldots,\mathbf{x}'_{n'}$ \begin{itemize}[leftmargin=*] \setlength{\itemsep}{-2pt} \item set $\mathbf{Z}\in\mathbb{R}^{n\times m}$ with the $l$-th column of $\mathbf{Z}$ equaling $(z_1^{(l)}-\bar{z}^{(l)},\ldots,z_n^{(l)}-\bar{z}^{(l)})^\transpose$ with $\bar{z}^{(l)}=\frac{1}{n} \sum_{i=1}^n z_i^{(l)}$ \item compute an orthonormal basis of the nullspace of $\mathbf{Z}^\transpose\mathbf{K}$ and build matrix~$\mathbf{R}$ comprising the basis vectors as columns \item compute orthonormal eigenvectors, corresponding to the largest $k$ eigenvalues, of the generalized eigenvalue problem $\mathbf{R}^\transpose\mathbf{K}\Kb\mathbf{R}\mathbf{\Lambda}=\mathbf{R}^\transpose\mathbf{K}\mathbf{R}\mathbf{\Lambda}\mathbf{W}$; here, the matrix~$\mathbf{\Lambda}$ comprises the eigenvectors as columns and $\mathbf{W}$ is a diagonal matrix containing the eigenvalues \item return $\mathbf{\Lambda}^\transpose\mathbf{R}^\transpose\mathbf{K}$ as the representation of the training data; \emph{optional:} return $\mathbf{\Lambda}^\transpose\mathbf{R}^\transpose\mathbf{\hat{K}}$ as the representation of the test data \end{itemize} \end{algorithmic} \end{algorithm} \section{ADDENDUM TO SECTION~\ref{sec:experiments}} \subsection{Implementation Details}\label{app:implementation_details} \paragraph{General details}
\begin{itemize} \item \textbf{Solving generalized eigenvalue problem for fair kernel PCA:} Fair kernel PCA requires solving a generalized eigenvalue problem of the form $\mathbf{A}\mathbf{x}=\lambda \mathbf{B}\mathbf{x}$ for square matrices~$\mathbf{A}$ and $\mathbf{B}$ that are given as input. In fair kernel PCA, $\mathbf{B}$ is guaranteed to be symmetric positive semi-definite, but not necessarily positive definite (and so is $\mathbf{A}$). We use the function \texttt{eigsh} from SciPy (\url{https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.eigsh.html}) to solve the generalized eigenvalue problem. While \texttt{eigsh} allows for a positive semi-definite $\mathbf{B}$, it requires a parameter~\texttt{sigma} to use the shift-invert mode in this case (this is in contrast to the function~\texttt{eig} in Matlab, which does not require such a parameter and automatically chooses the best algorithm to solve the generalized eigenvalue problem in case of a singular $\mathbf{B}$; cf. \url{https://de.mathworks.com/help/matlab/ref/eig.html}). In order to avoid having to look for an appropriate value of \texttt{sigma} when \texttt{eigsh} would require its specification, we simply add $10^{-5}\cdot \mathbf{I}$ to $\mathbf{B}$, where $\mathbf{I}$ is the identity matrix, to guarantee that $\mathbf{B}$ is positive definite. This is a common practice in the context of kernel methods to avoid numerical instabilities \citep[see, e.g., ][Section 1.2]{williams2000}. \item \textbf{Bandwidth for fair kernel PCA:} When running our proposed fair kernel PCA algorithm with a Gaussian kernel, we set the parameter~$\gamma$ of the kernel function (cf. \url{https://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.rbf_kernel.html#sklearn.metrics.pairwise.rbf_kernel}) to $1/(d\cdot \Var(\text{training data}))$, where $d$ is the dimension of the data (i.e., number of features) and $\Var(\text{training data})$ the variance of the flattened training data array. This value of $\gamma$ is the default value in Scikit-learn's kernel SVM implementation (cf. \url{https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html}). \end{itemize} \paragraph{Details for the experiments of Section~\ref{subsec:experiments_linear_guarding}} We used the experimental setup and code of \citet{Lee2022}. Hence, most implementation details can be found in their paper or code repository. In addition, we provide the following details: \begin{itemize} \item \textbf{Training additional classifiers for evaluating representations:} In addition to the metrics reported by \citet{Lee2022}, we reported the accuracy and $\Delta_{DP}$ of a linear support vector machine (SVM) and a multilayer perceptron (MLP). We trained the linear SVM in Matlab using the function \texttt{fitcsvm} (\url{https://de.mathworks.com/help/stats/fitcsvm.html}) and the MLP using Scikit-learn's \texttt{MLPClassifier} class (\url{https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html}) with all parameters set to the default values (except for \texttt{hidden\_layer\_sizes}, \texttt{max\_iter}, and \texttt{random\_state} for the MLP). \end{itemize} \paragraph{Details for the experiments of Section~\ref{subsec:experiments_bias_mitigation}} \begin{itemize} \item \textbf{Data normalization:} We normalized the data to have zero mean and unit variance on the training data.
\item \textbf{Target dimension for our methods:} As target dimension~$k$ we chose $k=d-1$, where $d$ is the data dimension, for fair PCA and fair kernel PCA, $k=\lfloor d/4\rfloor$ for Fair PCA-S (0.5), and $k=\lfloor d/2\rfloor$ for Fair PCA-S (0.85). \item \textbf{Controlling accuracy vs. fairness trade-off:} For our methods, we deployed the strategy described in Section~\ref{subsec:tradeoff} to trade off accuracy vs. fairness. For the reductions approach of \citet{agarwal_reductions_approach}, we controlled the trade-off by varying the parameter \texttt{difference\_bound} in the classes \texttt{DemographicParity} or \texttt{TruePositiveRateParity}, which implement the fairness constraints. For all methods, we used 11 parameter values for generating the trade-off curves. For our methods, we set the fairness parameter~$\lambda$ of Section~\ref{subsec:tradeoff} to $(i/10)^3$, $i=0,1,\ldots, 10$. For the approach of \citeauthor{agarwal_reductions_approach} we set \texttt{difference\_bound} to 0.001, 0.005, 0.01, 0.015, 0.02, 0.03, 0.05, 0.07, 0.1, 0.15, 0.2. \item \textbf{Regularization parameters:} We trained the logistic regression classifier using Scikit-learn (\url{https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html}) with regularization parameter~$C=1/(2\cdot \text{size of training data}\cdot 0.01)$ and the kernel SVM classifier using Scikit-learn (\url{https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html}) with regularization parameter~$C=1/(2\cdot \text{size of training data}\cdot 0.00005)$. By default, both classifiers are trained with $l_2$-regularization. \end{itemize} \begin{figure}[t] \centering \includegraphics[scale=0.18]{CelebA_attribute_distributions/distribution_Bald.png} \hspace{-3mm} \includegraphics[scale=0.18]{CelebA_attribute_distributions/distribution_No_Beard.png} \hspace{-3mm} \includegraphics[scale=0.18]{CelebA_attribute_distributions/distribution_Eyeglasses.png} \hspace{-3mm} \includegraphics[scale=0.18]{CelebA_attribute_distributions/distribution_Wearing_Hat.png} \hspace{-3mm} \includegraphics[scale=0.18]{CelebA_attribute_distributions/distribution_Mustache.png} \hspace{-3mm} \includegraphics[scale=0.18]{CelebA_attribute_distributions/distribution_Smiling.png} \caption{Distributions of the attributes \emph{bald}, \emph{beard}, \emph{eyeglasses}, \emph{hat}, \emph{mustache}, and \emph{smiling} in the CelebA dataset.} \label{fig:CelebA_distribution_attributes} \end{figure} \subsection{Details about Datasets}\label{app:details_about_datasets} \paragraph{Adult Income dataset \citep{Dua:2019} } The Adult Income dataset is available on the UCI repository \citep{Dua:2019}. Each record comprises 14 features (before one-hot encoding categorical ones) for an individual, such as their education or marital status, and the task is to predict whether an individual makes more than \$50k per year or not (distribution: 23.9\% yes - 76.1\% no). In Section~\ref{subsec:experiments_linear_guarding}, we used the dataset as provided by \citet{Lee2022}. They removed the features ``fnlwgt'' and ``race'', and they subsampled the dataset to comprise 2261 records (cf. Appendix~I.3 in their paper). They used the binary feature ``sex'' as demographic attribute (distribution: 66.8\% male - 33.2\% female).
In our comparison with the method of \citet{agarwal_reductions_approach} presented in Section~\ref{subsec:experiments_bias_mitigation}, we also used ``sex'' as demographic attribute; however, we did not remove any features and we randomly subsampled the dataset to comprise 5000 records for training and 5000 different records for evaluation (i.e., computing a classifier's accuracy and fairness violation). In the runtime comparison of Appendix~\ref{appendix_agarwal_addendum} we used between 1000 and 40000 randomly sampled records for training. \paragraph{Bank Marketing dataset \citep{moro2014,Dua:2019}} The Bank Marketing dataset is available on the UCI repository \citep{Dua:2019}. There are four versions available. We worked with the file \texttt{bank-additional-full.csv}. Each record comprises 20 features (before one-hot encoding categorical ones) for an individual, and the task is to predict whether an individual subscribes to a term deposit or not (distribution: 11.3\% yes - 88.7\% no). We used a person's binarized age (older than 40 vs. not older than 40) as demographic attribute (distribution: 42.3\% older than 40 - 57.7\% not older than 40), and we randomly subsampled the dataset to comprise 5000 records for training and 5000 different records for evaluation. \paragraph{CelebA dataset \citep{liu2015faceattributes}} The CelebA dataset comprises 202599 pictures of faces of celebrities together with 40 binary attribute annotations for each picture. The dataset comes in two versions: one that provides in-the-wild images, which may show not only a person's face but also their upper body, and one that provides aligned-and-cropped images, which only show a person's face. For our experiment, we used the latter. We used one of the \emph{bald}, \emph{beard}, \emph{eyeglasses}, \emph{hat}, \emph{mustache}, or \emph{smiling} annotations as the demographic attribute. The distributions of these attributes can be seen in Figure~\ref{fig:CelebA_distribution_attributes}. \paragraph{COMPAS dataset \citep{angwin2016}} The COMPAS dataset is available on \url{https://github.com/propublica/compas-analysis}. We used the dataset as provided by \citet{Lee2022}. They subsampled the dataset to comprise 2468 datapoints, removed the features ``sex'' and ``c\_charge\_desc'', and used the feature ``Race'' for defining the demographic attribute (cf. Appendix~I.1 in their paper). \paragraph{German Credit \citep{Dua:2019}} The German Credit dataset is available on the UCI repository \citep{Dua:2019}. It comprises 1000 datapoints. We used the dataset as provided by \citet{Lee2022}. They removed the features ``sex'' and ``personal\_status'' and used the feature ``Age'' for defining the demographic attribute (cf. Appendix~I.2 in their paper). \begin{figure}[t] \centering \includegraphics[scale=0.3]{MMD_fair_pca_experiment/Runtime_as_function_of_k.pdf} \caption{The running time of MbF-PCA \citep{Lee2022}, INLP \citep{ravfogel2020}, RLACE \citep{ravfogel2022} and our proposed methods as a function of the target dimension~$k$. The data dimension~$d$ equals 100.
Note the logarithmic~y-axis.} \label{fig:runtime_as_function_of_k} \end{figure} \subsection{Another Runtime Comparison}\label{app:runtime_comparison} Figure~\ref{fig:runtime_as_function_of_k} provides a comparison of the running times of the various methods (except for FPCA, which we have already seen to run extremely slowly in Figure~\ref{fig:MMD_fair_pca_exp_2}) in a setting related to, but different from, the experiments of Section~\ref{subsec:experiments_linear_guarding} on the synthetic data. The data is generated in the same way as in Section~\ref{subsec:experiments_linear_guarding}, but now we vary the target dimension~$k$ and hold the data dimension~$d$ constant at 100. We see that while the running time of our proposed methods only moderately increases with~$k$, the running time of MbF-PCA drastically increases with~$k$. Note that the running time of INLP even decreases with~$k$, which is by design of the method (cf. Section~\ref{sec:related_work}). \subsection{Tables for German Credit and COMPAS}\label{app:tables} Table~\ref{tab:German} and Table~\ref{tab:COMPAS} provide the results of the experiments of Section~\ref{subsec:experiments_linear_guarding} on the real data for the German Credit dataset and the COMPAS dataset, respectively. Note that the dimension of the COMPAS dataset is rather small---in particular, for $k=10$, we do not expect Fair PCA-S (0.5) or Fair PCA-S (0.85) to behave any differently than fair PCA since $l=\max\{k,\lfloor f\cdot d\rfloor\}=k=d-1$ for both $f=0.5$ and $f=0.85$. \begin{table*}[t] \centering \caption{Similar table as Table~\ref{tab:Adult} for the German Credit dataset.} \label{tab:German} \vspace{3mm} \renewcommand{\arraystretch}{1.15} \begin{scriptsize} \begin{tabular}{c|c|cccccccc} \hline \multicolumn{10}{c}{\normalsize{\textbf{German Credit} [$\text{feature dim}=57$, $\Psymb(Y=1)=0.3020$]}}\\ \multirow{2}{*}{$k$} & \multirow{2}{*}{Algorithm} & \multirow{2}{*}{\%Var{\scriptsize ($\uparrow$)}} & \multirow{2}{*}{MMD$^2${\scriptsize ($\downarrow$)}} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} \\ & & & & \multicolumn{2}{c}{Kernel SVM} & \multicolumn{2}{c}{Linear SVM} & \multicolumn{2}{c}{MLP}\\\hline \multirow{11}{*}{2} & PCA & $\mathbf{11.42_{0.45}}$ & $\mathbf{0.147_{0.047}}$ & $\mathbf{76.87_{1.32}}$ & $\mathbf{0.12_{0.06}}$ & $\mathbf{69.8_{1.21}}$ & $\mathbf{0.0_{0.0}}$ & $\mathbf{71.7_{1.83}}$ & $\mathbf{0.09_{0.09}}$ \\ \cdashline{2-10} & FPCA (0.1, 0.01) & $7.43_{0.56}$ & $0.017_{0.009}$ & $72.17_{1.04}$ & $0.03_{0.02}$ & $\mathbf{69.8_{1.21}}$ & $\mathbf{0.0_{0.0}}$ & $70.27_{1.38}$ & $0.01_{0.01}$ \\ & FPCA (0, 0.01) & $7.33_{0.54}$ & $0.015_{0.01}$ & $71.77_{1.52}$ & $0.03_{0.02}$ & $\mathbf{69.8_{1.21}}$ & $\mathbf{0.0_{0.0}}$ & $69.83_{1.49}$ & $\mathbf{0.0_{0.01}}$ \\ & MbF-PCA ($10^{-3}$) & $\mathbf{10.34_{0.57}}$ & $0.019_{0.014}$ & $\mathbf{74.87_{1.92}}$ & $0.04_{0.04}$ & $\mathbf{69.8_{1.21}}$ & $\mathbf{0.0_{0.0}}$ & $\mathbf{71.43_{2.08}}$ & $0.04_{0.05}$ \\ & MbF-PCA ($10^{-6}$) & $9.38_{0.3}$ & $0.016_{0.009}$ & $73.97_{1.59}$ & $0.03_{0.02}$ & $\mathbf{69.8_{1.21}}$ & $\mathbf{0.0_{0.0}}$ & $70.83_{1.66}$ & $0.03_{0.03}$ \\ & INLP & $2.99_{0.39}$ & $\mathbf{0.007_{0.004}} $& $70.93_{1.27}$ & $\mathbf{0.02_{0.02}} $& $\mathbf{69.8_{1.21}}$ & $\mathbf{0.0_{0.0}} $& $70.17_{1.56}$ & $0.01_{0.02}$ \\ & RLACE & $3.62_{0.27}$ & $0.042_{0.027}
$& $71.5_{1.75}$ & $\mathbf{0.02_{0.02}} $& $\mathbf{69.8_{1.21}}$ & $\mathbf{0.0_{0.0}} $& $70.23_{1.5}$ & $0.02_{0.01}$ \\ \cdashline{2-10} & Fair PCA & $\mathbf{10.85_{0.55}}$ & $0.025_{0.016}$ & $\mathbf{75.6_{1.89}}$ & $0.06_{0.05}$ & $\mathbf{69.8_{1.21}}$ & $\mathbf{0.0_{0.0}}$ & $\mathbf{72.03_{1.98}}$ & $0.04_{0.05}$ \\ & Fair Kernel PCA & $n/a_{}$ & $n/a_{}$ & $69.8_{1.21}$ & $\mathbf{0.0_{0.0}}$ & $\mathbf{69.8_{1.21}}$ & $\mathbf{0.0_{0.0}}$ & $69.8_{1.21}$ & $\mathbf{0.0_{0.0}}$ \\ & Fair PCA-S (0.5) & $4.73_{0.43}$ & $\mathbf{0.01_{0.006}}$ & $72.47_{3.14}$ & $0.02_{0.02}$ & $\mathbf{69.8_{1.21}}$ & $\mathbf{0.0_{0.0}}$ & $70.93_{2.21}$ & $0.01_{0.01}$ \\ & Fair PCA-S (0.85) & $7.43_{0.42}$ & $0.018_{0.011}$ & $72.93_{2.05}$ & $0.02_{0.02}$ & $\mathbf{69.8_{1.21}}$ & $\mathbf{0.0_{0.0}}$ & $70.0_{1.83}$ & $0.02_{0.02}$ \\ \hline \multirow{11}{*}{10} & PCA & $\mathbf{38.24_{0.92}}$ & $\mathbf{0.13_{0.018}}$ & $\mathbf{99.93_{0.13}}$ & $\mathbf{0.12_{0.07}}$ & $\mathbf{74.8_{1.93}}$ & $\mathbf{0.15_{0.11}}$ & $\mathbf{96.87_{2.08}}$ & $\mathbf{0.11_{0.08}}$ \\ \cdashline{2-10} & FPCA (0.1, 0.01) & $29.85_{0.82}$ & $0.02_{0.005}$ & $\mathbf{99.93_{0.13}}$ & $0.12_{0.07}$ & $71.13_{2.75}$ & $0.02_{0.03}$ & $\mathbf{96.77_{1.8}}$ & $0.1_{0.07}$ \\ & FPCA (0, 0.01) & $29.74_{0.84}$ & $0.02_{0.005}$ & $\mathbf{99.93_{0.13}}$ & $0.12_{0.07}$ & $70.87_{2.38}$ & $0.02_{0.04}$ & $96.4_{1.77}$ & $0.1_{0.07}$ \\ & MbF-PCA ($10^{-3}$) & $\mathbf{34.07_{1.0}}$ & $0.019_{0.007}$ & $\mathbf{99.93_{0.13}}$ & $0.12_{0.07}$ & $\mathbf{73.7_{2.58}}$ & $0.05_{0.04}$ & $96.67_{1.04}$ & $0.11_{0.06}$ \\ & MbF-PCA ($10^{-6}$) & $16.82_{1.11}$ & $\mathbf{0.011_{0.007}}$ & $94.37_{2.63}$ & $0.12_{0.06}$ & $70.1_{0.63}$ & $\mathbf{0.0_{0.0}}$ & $80.07_{3.52}$ & $\mathbf{0.06_{0.05}}$ \\ & INLP & $15.5_{0.92}$ & $\mathbf{0.011_{0.002}} $& $98.83_{0.79}$ & $\mathbf{0.11_{0.07}} $& $69.8_{1.21}$ & $\mathbf{0.0_{0.0}} $& $94.2_{2.26}$ & $0.1_{0.06}$ \\ & RLACE & $17.24_{0.75}$ & $0.03_{0.023} $& $99.73_{0.29}$ & $0.12_{0.07} $& $70.97_{2.31}$ & $0.02_{0.03} $& $95.43_{2.89}$ & $0.12_{0.07}$ \\ \cdashline{2-10} & Fair PCA & $\mathbf{36.63_{1.04}}$ & $0.022_{0.008}$ & $\mathbf{99.93_{0.13}}$ & $0.12_{0.07}$ & $\mathbf{74.1_{2.23}}$ & $0.05_{0.04}$ & $96.03_{2.26}$ & $0.11_{0.04}$ \\ & Fair Kernel PCA & $n/a_{}$ & $n/a_{}$ & $70.1_{1.18}$ & $\mathbf{0.0_{0.01}}$ & $69.8_{1.21}$ & $\mathbf{0.0_{0.0}}$ & $74.07_{2.43}$ & $\mathbf{0.06_{0.03}}$ \\ & Fair PCA-S (0.5) & $20.51_{0.79}$ & $\mathbf{0.013_{0.006}}$ & $99.87_{0.22}$ & $0.12_{0.08}$ & $71.6_{2.83}$ & $0.02_{0.04}$ & $95.13_{2.52}$ & $0.08_{0.06}$ \\ & Fair PCA-S (0.85) & $28.83_{0.82}$ & $0.018_{0.007}$ & $\mathbf{99.93_{0.13}}$ & $0.12_{0.07}$ & $71.87_{2.62}$ & $0.03_{0.04}$& $\mathbf{96.27_{2.0}}$ & $0.09_{0.07}$ \\ \hline \end{tabular} \end{scriptsize} \end{table*} \begin{table*}[t] \centering \caption{Similar table as Table~\ref{tab:Adult} for the COMPAS dataset. 
} \label{tab:COMPAS} \vspace{3mm} \renewcommand{\arraystretch}{1.15} \begin{scriptsize} \begin{tabular}{c|c|cccccccc} \hline \multicolumn{10}{c}{\normalsize{\textbf{COMPAS} [$\text{feature dim}=11$, $\Psymb(Y=1)=0.4548$]}}\\ \multirow{2}{*}{$k$} & \multirow{2}{*}{Algorithm} & \multirow{2}{*}{\%Var{\scriptsize ($\uparrow$)}} & \multirow{2}{*}{MMD$^2${\scriptsize ($\downarrow$)}} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} \\ & & & & \multicolumn{2}{c}{Kernel SVM} & \multicolumn{2}{c}{Linear SVM} & \multicolumn{2}{c}{MLP}\\\hline \multirow{11}{*}{2} & PCA & $\mathbf{39.28_{4.91}}$ & $\mathbf{0.092_{0.009}}$ & $\mathbf{64.53_{1.38}}$ & $\mathbf{0.29_{0.08}}$ & $\mathbf{56.69_{1.52}}$ & $\mathbf{0.2_{0.09}}$ & $\mathbf{61.77_{2.81}}$ & $\mathbf{0.28_{0.06}}$ \\ \cdashline{2-10} & FPCA (0.1, 0.01) & $\mathbf{35.06_{4.9}}$ & $0.012_{0.007}$ & $61.65_{1.11}$ & $0.1_{0.06}$ & $56.23_{1.19}$ & $0.04_{0.03}$ & $57.61_{1.67}$ & $0.08_{0.04}$ \\ & FPCA (0, 0.01) & $34.43_{4.76}$ & $0.011_{0.006}$ & $60.86_{1.03}$ & $0.11_{0.06}$ & $55.9_{1.26}$ & $0.03_{0.03}$ & $56.9_{1.88}$ & $0.09_{0.03}$ \\ & MbF-PCA ($10^{-3}$) & $34.24_{3.68}$ & $0.006_{0.003}$ & $\mathbf{64.78_{0.96}}$ & $0.12_{0.05}$ & $56.92_{2.7}$ & $0.07_{0.06}$ & $\mathbf{60.53_{1.46}}$ & $0.1_{0.06}$ \\ & MbF-PCA ($10^{-6}$) & $13.52_{2.76}$ & $0.002_{0.002}$ & $58.26_{1.29}$ & $0.03_{0.02}$ & $55.01_{0.9}$ & $0.01_{0.03}$ & $56.15_{1.52}$ & $0.04_{0.04}$ \\ & INLP & $0.42_{1.25}$ & $\mathbf{0.0_{0.0}}$ & $54.95_{1.51}$ & $\mathbf{0.01_{0.02}}$ & $54.52_{0.7}$ & $\mathbf{0.0_{0.0}}$ & $54.95_{1.51}$ & $\mathbf{0.01_{0.02}}$ \\ & RLACE & $19.18_{4.03}$ & $0.008_{0.007} $& $63.36_{1.96}$ & $0.1_{0.06} $& $\mathbf{59.64_{3.04}}$ & $0.06_{0.05} $& $62.16_{2.67}$ & $0.07_{0.04}$\\ \cdashline{2-10} & Fair PCA & $\mathbf{35.56_{4.52}}$ & $0.019_{0.007}$ & $62.82_{0.88}$ & $0.11_{0.08}$ & $54.55_{0.76}$ & $0.03_{0.04}$ & $60.65_{2.29}$ & $0.13_{0.11}$ \\ & Fair Kernel PCA & $n/a_{}$ & $n/a_{}$ & $57.8_{1.82}$ & $\mathbf{0.08_{0.06}}$ & $54.74_{1.21}$ & $\mathbf{0.02_{0.04}}$ & $57.67_{1.57}$ & $\mathbf{0.05_{0.04}}$ \\ & Fair PCA-S (0.5) & $25.11_{5.14}$ & $\mathbf{0.006_{0.004}}$ & $\mathbf{64.1_{1.49}}$ & $0.15_{0.06}$ & $\mathbf{58.08_{2.92}}$ & $0.07_{0.05}$ & $\mathbf{62.65_{1.75}}$ & $0.14_{0.07}$ \\ & Fair PCA-S (0.85) & $35.42_{4.49}$ & $0.027_{0.005}$ & $60.93_{0.6}$ & $0.14_{0.04}$ & $55.43_{0.94}$ & $0.06_{0.08}$ & $56.63_{1.09}$ & $0.2_{0.1}$ \\ \hline \multirow{11}{*}{10} & PCA & $\mathbf{100.0_{0.0}}$ & $\mathbf{0.241_{0.005}}$ & $\mathbf{73.14_{1.16}}$ & $\mathbf{0.21_{0.06}}$ & $\mathbf{64.78_{0.99}}$ & $\mathbf{0.18_{0.05}}$ & $\mathbf{69.81_{1.8}}$ & $\mathbf{0.23_{0.08}}$ \\ \cdashline{2-10} & FPCA (0.1, 0.01) & $87.79_{1.21}$ & $0.015_{0.003}$ & $72.25_{0.88}$ & $\mathbf{0.16_{0.06}}$ & $64.7_{1.67}$ & $0.06_{0.05}$ & $\mathbf{69.73_{1.95}}$ & $0.15_{0.07}$ \\ & FPCA (0, 0.01) & $87.44_{1.28}$ & $0.015_{0.002}$ & $\mathbf{72.32_{0.88}}$ & $\mathbf{0.16_{0.07}}$ & $64.82_{1.64}$ & $\mathbf{0.05_{0.04}}$ & $68.52_{1.25}$ & $0.08_{0.07}$ \\ & MbF-PCA ($10^{-3}$) & $87.75_{1.29}$ & $\mathbf{0.013_{0.002}}$ & $72.19_{0.88}$ & $\mathbf{0.16_{0.06}}$ & $64.97_{1.53}$ & $0.08_{0.04}$ & $68.89_{1.61}$ & $0.11_{0.06}$ \\ & MbF-PCA ($10^{-6}$) & $87.75_{1.29}$ & $\mathbf{0.013_{0.002}}$ & $72.19_{0.88}$ & $\mathbf{0.16_{0.06}}$ & 
$\mathbf{65.01_{1.49}}$ & $0.08_{0.04}$ & $68.14_{1.14}$ & $\mathbf{0.07_{0.05}}$ \\ & INLP & $\mathbf{91.09_{0.88}}$ & $0.034_{0.005}$ & $71.4_{0.9}$ & $0.17_{0.03}$ & $64.93_{0.84}$ & $0.18_{0.04} $& $68.19_{1.46}$ & $0.2_{0.04}$ \\ & RLACE & $87.47_{1.27}$ & $0.015_{0.002} $& $72.29_{0.85}$ & $\mathbf{0.16_{0.06}} $& $64.75_{1.73}$ & $\mathbf{0.05_{0.03}} $& $68.3_{2.07}$ & $0.1_{0.07}$ \\ \cdashline{2-10} & Fair PCA & $\mathbf{87.44_{1.28}}$ & $\mathbf{0.015_{0.002}}$ & $\mathbf{72.32_{0.9}}$ & $\mathbf{0.16_{0.06}}$ & $64.72_{1.69}$ & $\mathbf{0.05_{0.03}}$ & $67.94_{1.43}$ & $\mathbf{0.09_{0.07}}$ \\ & Fair Kernel PCA & $n/a_{}$ & $n/a_{}$ & $65.96_{1.12}$ & $0.26_{0.07}$ & $64.33_{0.8}$ & $\mathbf{0.05_{0.04}}$ & $66.41_{1.03}$ & $0.14_{0.07}$ \\ & Fair PCA-S (0.5) & $\mathbf{87.44_{1.28}}$ & $\mathbf{0.015_{0.002}}$ & $72.31_{0.88}$ & $\mathbf{0.16_{0.06}}$ & $\mathbf{64.75_{1.7}}$ & $\mathbf{0.05_{0.03}}$ & $\mathbf{69.24_{2.02}}$ & $0.12_{0.06}$ \\ & Fair PCA-S (0.85) & $\mathbf{87.44_{1.28}}$ & $\mathbf{0.015_{0.002}}$ & $72.31_{0.88}$ & $\mathbf{0.16_{0.06}}$ & $\mathbf{64.75_{1.7}}$ & $\mathbf{0.05_{0.03}}$ & $\mathbf{69.24_{2.02}}$ & $0.12_{0.06}$ \\ \hline \end{tabular} \end{scriptsize} \end{table*} \clearpage \begin{table*}[t!] \centering \caption{We applied the fair PCA method of \citet{samira2018} to the three real-world datasets considered in Section~\ref{subsec:experiments_linear_guarding}. The tables do not provide the metrics \%Var and MMD$^2$ since the method of \citeauthor{samira2018} is not guaranteed to yield an embedding of the desired target dimension, and hence their method and the methods studied in Section~\ref{subsec:experiments_linear_guarding} are not comparable w.r.t. \%Var and MMD$^2$. } \label{tab:samadi} \vspace{3mm} \renewcommand{\arraystretch}{1.2} \begin{tabular}{c|c|cccccc} \hline \multicolumn{8}{c}{\normalsize{\textbf{Adult Income} [$\text{feature dim}=97$, $\Psymb(Y=1)=0.2489$]}}\\ \multirow{2}{*}{$k$} & \multirow{2}{*}{Algorithm} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} \\ & & \multicolumn{2}{c}{Kernel SVM} & \multicolumn{2}{c}{Linear SVM} & \multicolumn{2}{c}{MLP}\\\hline 2 & Fair PCA of \citet{samira2018} & $81.59_{1.12}$ & $0.14_{0.03}$ & $ 80.8_{1.09}$ & $0.13_{0.04}$ & $82.02_{1.03}$ & $0.18_{0.04}$ \\ \hline 10 & Fair PCA of \citet{samira2018} & $87.78_{0.87}$ & $0.18_{0.02}$ & $83.2_{1.09}$ & $0.13_{0.03}$ & $91.34_{2.04}$ & $0.18_{0.03}$ \\ \hline \hline \multicolumn{8}{c}{\normalsize{\textbf{German Credit} [$\text{feature dim}=57$, $\Psymb(Y=1)=0.3020$]}}\\ \multirow{2}{*}{$k$} & \multirow{2}{*}{Algorithm} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} \\ & & \multicolumn{2}{c}{Kernel SVM} & \multicolumn{2}{c}{Linear SVM} & \multicolumn{2}{c}{MLP}\\\hline 2 & Fair PCA of \citet{samira2018} & $73.73_{1.38}$ & $0.05_{0.03}$ & $ 69.97_{0.82}$ & $0.0_{0.01}$ & $72.97_{1.4}$ & $0.04_{0.02}$ \\ \hline 10 & Fair PCA of \citet{samira2018} & $98.73_{0.7}$ & $0.1_{0.07}$ & $76.8_{1.9}$ & $0.09_{0.07}$ & $98.13_{0.69}$ & $0.12_{0.07}$ \\ \hline \hline \multicolumn{8}{c}{\normalsize{\textbf{COMPAS} [$\text{feature dim}=11$, $\Psymb(Y=1)=0.4548$]}}\\ \multirow{2}{*}{$k$} & 
\multirow{2}{*}{Algorithm} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} & \%Acc{\scriptsize ($\uparrow$)} & $\Delta_{DP}${\scriptsize ($\downarrow$)} \\ & & \multicolumn{2}{c}{Kernel SVM} & \multicolumn{2}{c}{Linear SVM} & \multicolumn{2}{c}{MLP}\\\hline 2 & Fair PCA of \citet{samira2018} & $63.89_{1.86}$ & $0.2_{0.05}$ & $ 57.94_{2.22}$ & $0.12_{0.05}$ & $63.95_{3.98}$ & $0.18_{0.03}$ \\ \hline 10 & Fair PCA of \citet{samira2018} & $73.12_{1.17}$ & $0.21_{0.06}$ & $64.79_{0.96}$ & $0.18_{0.05}$ & $69.46_{1.04}$ & $0.27_{0.09}$ \\ \hline \end{tabular} \end{table*} \subsection{Comparison with \citet{samira2018}}\label{app:comparison_samira} We applied the fair PCA method of \citet{samira2018} to the three real-world datasets considered in Section~\ref{subsec:experiments_linear_guarding}. As discussed in Section~\ref{sec:related_work} and Section~\ref{subsec:experiments_linear_guarding}, the fairness notion underlying the method of \citeauthor{samira2018} is incomparable to our notion of fair PCA. \citeauthor{samira2018} provide theoretical guarantees for an algorithm that relies on solving a semidefinite program (SDP), but then propose to use a multiplicative weight update method for solving the SDP approximately in order to speed up computation. We observed that this can result in embedding dimensions that are much larger than the desired target dimension. We used the code provided by \citeauthor{samira2018} without modifications; in particular, we used the same parameters for the multiplicative weight update algorithm as they used in their experiment on the LFW dataset. Table~\ref{tab:samadi} provides the results. We see that the downstream classifiers trained on the fair PCA representation of \citeauthor{samira2018} have values of accuracy and DP violation roughly similar to those of standard PCA. Clearly, the DP violations are much higher than for our methods or the other competitors. Since the dimension of the fair PCA representation of \citeauthor{samira2018} is not guaranteed to equal the desired target dimension~$k$, we do not report the metrics \%Var and MMD$^2$ in Table~\ref{tab:samadi}. \subsection{Fair PCA Applied to the CelebA Dataset}\label{app:CelebA} Figures~\ref{fig:celeba_experiment_glasses} to~\ref{fig:celeba_experiment_hat} show examples of original CelebA images (top row of each figure) together with the results of applying fair PCA (middle and bottom row) for the various demographic attributes. We see that fair PCA adds something looking like glasses / a mustache / a beard to the faces, making it hard to tell whether an original face features them, and successfully obfuscates the demographic information for these attributes. Still, the projected faces resemble the original ones to a good extent. For the attribute ``smiling'', fair PCA also succeeds in obfuscating the demographic information, but the whole faces become more perturbed and less similar to the original ones. For the attributes ``bald'' and ``hat'', fair PCA appears to fail, and we can tell for all of the faces under consideration that they \emph{do not} feature baldness / a hat. We suspect that the reason for this might be the high diversity of hats or \emph{non-bald} faces (see Figure~\ref{fig:celebA_hat_examples} for some example images). \begin{figure*}[t!]
\centering \includegraphics[scale=0.21]{Bias_mitigation_comparison_AVERAGE/Adult_appendix/tradeoff_logreg_DP_10runs.pdf} \includegraphics[scale=0.21]{Bias_mitigation_comparison_AVERAGE/Adult_appendix/tradeoff_logreg_EO_10runs.pdf} \includegraphics[scale=0.21]{Bias_mitigation_comparison_AVERAGE/Bank_appendix/tradeoff_logreg_DP_10runs.pdf} \includegraphics[scale=0.21]{Bias_mitigation_comparison_AVERAGE/Bank_appendix/tradeoff_logreg_EO_10runs.pdf} \caption{Comparison with the state-of-the-art reductions approach of \citet{agarwal_reductions_approach} when training a logistic regression classifier on the Adult Income dataset (first and second plot) and the Bank Marketing dataset (third and fourth plot). Compared to the plots in Figure~\ref{fig:bias_mitigation_exp}, these plots also show the results for Fair PCA-S and fair kernel PCA.} \label{fig:bias_mitigation_exp_Bank_Marketing} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[scale=0.3]{Bias_mitigation_comparison_AVERAGE/Adult_runtime_comparison/runtime_logreg_DP_10runs.pdf} \hspace{1cm} \includegraphics[scale=0.3]{Bias_mitigation_comparison_AVERAGE/Adult_runtime_comparison/runtime_logreg_EO_10runs_no_legend.pdf} \caption{Runtime comparison between our methods and the method of \citet{agarwal_reductions_approach}. For our methods, the running time includes the time it takes to fit the logistic regression classifier on top of the fair representation.} \label{fig:bias_mitigation_exp_runtime} \end{figure*} \subsection{Comparison with \citet{agarwal_reductions_approach}}\label{appendix_agarwal_addendum} Figure~\ref{fig:bias_mitigation_exp_Bank_Marketing} shows the results of the comparison with the reductions approach of \citet{agarwal_reductions_approach} when training a logistic regression classifier for Fair PCA-S and fair kernel PCA (next to the results for fair PCA and the method of \citeauthor{agarwal_reductions_approach}, which we have already seen in Figure~\ref{fig:bias_mitigation_exp} in Section~\ref{subsec:experiments_bias_mitigation}). We see that Fair PCA-S produces smooth trade-off curves and can achieve lower fairness violation than fair PCA or the method of \citeauthor{agarwal_reductions_approach} in some cases. However, the representation learned by fair kernel PCA only allows for a constant logistic regression classifier (with zero fairness violation and an accuracy equaling the probability of the predominant label---cf. Appendix~\ref{app:details_about_datasets}). The plots of Figure~\ref{fig:bias_mitigation_exp_runtime} show the running time of the various methods as a function of the number of training points on the Adult Income dataset and when training a logistic regression classifier. The curves show the average over the eleven values of the fairness parameter / the parameter~\texttt{difference\_bound} (cf.~Appendix~\ref{app:implementation_details}) and over ten random draws of training data together with the standard deviation as error bars. Note that for our methods the running time includes the time it takes to train the classifier on a representation produced by our method. While none of our methods ever runs for more than 0.5 seconds, the method of \citeauthor{agarwal_reductions_approach}, on average, runs for more than 32 seconds when training with 40000 datapoints and aiming for DP. The plots do not show curves for fair kernel PCA, which we cannot simply apply to this large number of training points due to its cubic running time in the number of datapoints.
We leave it as an interesting question for future work to develop scalable approximation techniques for fair kernel PCA similarly to those that have been developed for standard kernel PCA or other kernel methods \citep[e.g.,][]{williams2000,Kim2005,chin2006}. \clearpage \newcommand{\attr}{glasses} \input{celebA.tex} \vspace{1cm} \renewcommand{\attr}{mustache} \input{celebA.tex} \vspace{1cm} \renewcommand{\attr}{beard} \input{celebA.tex} \vspace{1cm} \renewcommand{\attr}{smiling} \input{celebA.tex} \vspace{1cm} \renewcommand{\attr}{bald} \input{celebA.tex} \vspace{1cm} \renewcommand{\attr}{hat} \input{celebA.tex} \clearpage \begin{figure} \centering \includegraphics[height=1.4cm]{CelebA_experiment/hat_images/000037.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/hat_images/000068.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/hat_images/000074.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/hat_images/000095.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/hat_images/000137.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/hat_images/000138.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/hat_images/000149.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/hat_images/000154.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/hat_images/000166.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/hat_images/000195.jpg} \vspace{3mm} \includegraphics[height=1.4cm]{CelebA_experiment/bald_images/000051.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/bald_images/000079.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/bald_images/000115.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/bald_images/000125.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/bald_images/000134.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/bald_images/000182.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/bald_images/000209.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/bald_images/000226.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/bald_images/000299.jpg} \includegraphics[height=1.4cm]{CelebA_experiment/bald_images/000306.jpg} \caption{Examples of faces in the CelebA dataset that feature a hat (top row) or baldness (bottom row). We can see that the hats are highly diverse. Note that both types of faces are rare in the dataset: only 4.8\% of the faces feature a hat, and only 2.2\% feature baldness (cf. Figure~\ref{fig:CelebA_distribution_attributes}).}\label{fig:celebA_hat_examples} \end{figure} \section{DISCUSSION}\label{sec:discussion} We provided a new derivation of fair PCA, aiming for a fair representation that does not contain demographic information. Our derivation is simple and allows for efficient algorithms based on eigenvector computations similar to standard PCA. Compared to existing methods for fair PCA, our proposed algorithms run much faster while achieving similar results.
In a comparison with a state-of-the-art in-processing bias mitigation method we saw that our algorithms provide a significantly faster alternative \mbox{for~training~fair~classifiers.} \section{EXPERIMENTS}\label{sec:experiments} In this section, we present a number of experiments.\footnote{Code available on \url{https://github.com/amazon-science/fair-pca}.} We first rerun and extend the experiments performed by \citet{Lee2022} in order to compare our algorithms to the existing methods for fair PCA by \citet{olfat2019} and \citet{Lee2022} and to the methods for linear guarding by \citet{ravfogel2020} and \citet{ravfogel2022}. We also apply our version of fair PCA to the CelebA dataset of facial images to illustrate its applicability to large high-dimensional~datasets. We then demonstrate the usefulness of our proposed algorithms as means of bias mitigation and compare their performance to the reductions approach of \citet{agarwal_reductions_approach}, which is the state-of-the-art in-processing method implemented in Fairlearn (\url{https://fairlearn.org/}). Some implementation details and details about datasets are \mbox{provided~in~Appendix~\ref{app:implementation_details}~and~\ref{app:details_about_datasets}.} \setcounter{footnote}{1} \stepcounter{footnote} \begin{figure}[t] \centering \includegraphics[scale=0.19]{MMD_fair_pca_experiment/runtime.pdf} \hspace{2mm} \includegraphics[scale=0.19]{MMD_fair_pca_experiment/Runtime_Highdim.pdf} \caption{The running time of the various methods as a function of the data dimension~$d$. The target dimension~$k$ is 5 independent of $d$. Note the logarithmic y-axes and that the x-axes are different for the two plots.\protect\footnotemark} \label{fig:MMD_fair_pca_exp_2} \end{figure} \input{table_adult.tex} \subsection{Comparison with Existing Methods for Fair PCA }\label{subsec:experiments_linear_guarding} \paragraph{Experiments as in \citet{Lee2022}} We used the code provided by \citet{Lee2022} to rerun their experiments and perform a comparison with our proposed algorithms. We additionally compared to the methods by \citet{ravfogel2020} and \citet{ravfogel2022} using the code provided by those authors, where we set all parameters to their default values except for the maximum number of outer iterations for the second method, which we decreased from 75000 to 10000 to get a somewhat acceptable running time. We extended the experimental evaluation of \citeauthor{Lee2022} by reporting additional metrics, but other than that did not modify their code or experimental setting in any way. In their first experiment (Section 8.2 in their paper), \citet{Lee2022} applied standard PCA, their method (referred to as MbF-PCA) and the method by \citet{olfat2019} (FPCA) to synthetic data sampled from a mixture of two Gaussians of varying dimension~$d$. The two Gaussians correspond to two demographic groups. The target dimension~$k$ is held constant at~$5$. We reran the code of \citeauthor{Lee2022} and additionally applied the methods of \citet{ravfogel2020} (INLP) and \citet{ravfogel2022} (RLACE) and our algorithms for fair PCA, fair kernel PCA with a Gaussian kernel, and the variant of fair PCA that additionally aims to equalize group-conditional covariance matrices (referred to as \mbox{Fair PCA-S}; we set $l=\lfloor 0.85d\rfloor$---cf. Section~\ref{subsec:covariance_extension}).
\citeauthor{Lee2022} reported the fraction of explained variance of the projected data (i.e., $\trace(\mathbf{U}^\transpose\mathbf{X}\Xb^\transpose\mathbf{U})/\trace(\mathbf{X}\Xb^\transpose)$ for the projection defined by $\mathbf{U}$---higher means better approximation of the original data), the squared maximum mean discrepancy (MMD$^2$) based on a Gaussian kernel between the two groups after the projection (lower means the representation is more fair), and the running time of the methods. We additionally report the error of a linear classifier trained to predict the demographic information from the projected data (higher means the representation is more fair; we refer to this metric as linear inseparability). Figure~\ref{fig:MMD_fair_pca_exp_1} and Figure~\ref{fig:MMD_fair_pca_exp_2} show the results, where the boxplots are obtained from considering ten random splits into training and test data and the runtime curves show an average over the ten splits. While standard PCA does best in approximating the original data (variance about 50\%), it does not yield a fair representation, exhibiting high values for MMD$^2$ (more than 0.5) and low values for linear inseparability ($\approx 0$). Our algorithm for fair PCA does worse in approximating the data (variance above 30\%), but drastically reduces the unfairness of standard PCA (MMD$^2$ smaller than 0.07). The other methods yield even lower values for MMD$^2$, but this comes at the cost of a worse approximation of the data. Our variant Fair PCA-S performs similarly to FPCA by \citet{olfat2019}. All methods except standard PCA perform similarly in terms of linear inseparability. The biggest difference is in the methods' running times: while FPCA runs for more than 2000 seconds, RLACE for about 20 seconds, MbF-PCA for about 1.3 seconds, and INLP for about 0.8 seconds when the data dimension~$d$ is as small as 100, none of our algorithms runs for more than 0.5 seconds even when $d=800$. In the latter case, RLACE runs for about 260 seconds, MbF-PCA for about 43 seconds, and INLP for about 1270 seconds. \footnotetext{We ran these experiments on a MacBook Pro with 2.6 GHz 6-Core Intel Core i7 processor and 16 GB 2667 MHz DDR4 memory. MbF-PCA is implemented in Matlab while all other methods are implemented in Python---by running time we mean wall time.} In Appendix~\ref{app:runtime_comparison}, we study the running time of the methods as a function of the target dimension~$k$ and observe that the running time of MbF-PCA drastically increases with~$k$ (about 290 seconds when $d=100$ and $k=50$). This shows that none of the existing methods can be applied when both $d$ and $k$ are large (such as in the experiment on the CelebA dataset below) and provides strong evidence for the benefit of~our~proposed~methods. In their second experiment (Section 8.3 in their paper), \citet{Lee2022} applied standard PCA, MbF-PCA and FPCA to three real-world datasets: Adult Income and German Credit from the UCI repository \citep{Dua:2019}, and COMPAS \citep{angwin2016}. \citeauthor{Lee2022} ran MbF-PCA and FPCA for two different parameter configurations, indicated by the numbers in parentheses after a method's name in the tables below. Similarly, we ran our proposed method Fair PCA-S from Section~\ref{subsec:covariance_extension} for $l=\max\{k,\lfloor 0.5d\rfloor\}$ as well as $l=\max\{k,\lfloor 0.85d\rfloor\}$. \citeauthor{Lee2022} reported the explained variance and MMD$^2$ as above.
Furthermore, they reported the accuracy and the DP violation~$\Delta_{DP}:=|\Psymb(\hat{Y}=1|Z=0)-\Psymb(\hat{Y}=1|Z=1)|$ of a Gaussian kernel support vector machine (SVM) trained to solve a downstream task on the projected data (e.g., for the Adult Income dataset the downstream task is to predict whether a person's income exceeds \$50k or not). We additionally report the accuracy and $\Delta_{DP}$ of a linear SVM and a multilayer perceptron (MLP) with two hidden layers of size 10 and 5, respectively. Table~\ref{tab:Adult} provides the results for Adult Income; the tables for German Credit and COMPAS can be found in Appendix~\ref{app:tables}. The reported results are average results (together with standard deviations in subscript) over ten random splits into training and test data. We see that there is no single best method. Methods that allow for a high downstream accuracy tend to suffer from higher DP violation, and vice versa. The parameters of FPCA and MbF-PCA allow one to trade off accuracy vs. fairness, and so does the parameter~$l$ in Fair PCA-S (note that Fair PCA is equivalent to Fair PCA-S(1.0)). Except on COMPAS, whose data dimension is very small, Fair PCA-S always achieves smaller DP violation than fair PCA for the non-linear classifiers and fair kernel PCA achieves the smallest DP violation, among all methods, for the kernel SVM. Overall, we consider the results for our proposed methods to be similar to those for the existing methods. One of the reviewers asked for a comparison with the method of \citet{samira2018}, which aims to balance the excess reconstruction error of PCA across different demographic groups (cf. Section~\ref{sec:related_work}). We emphasize once more that this fairness notion is incomparable to ours \citep[also see the discussion in Appendix~A of][]{Lee2022}. Still, we provide the results for the method of \citet{samira2018} on the three real-world datasets in Appendix~\ref{app:comparison_samira}. As expected, their method yields much higher DP violations than our methods or~the~other~competitors. \input{figure_bias_mitigation_experiment.tex} \input{figure_celeba_experiment_main} \paragraph{Applying fair PCA to CelebA similarly to \citet{ravfogel2022}} Similarly to \citet{ravfogel2022}, we applied our fair PCA method to the CelebA dataset \citep{liu2015faceattributes} to erase concepts such as ``glasses'' or ``mustache'' from facial images. The CelebA dataset comprises 202599 pictures of faces of celebrities. We rescaled all images to $80\times 80$ grey-scale images and applied our Algorithm~\ref{alg:fair_PCA} to the flattened raw-pixel vectors, using one of the \emph{bald}, \emph{beard}, \emph{eyeglasses}, \emph{hat}, \emph{mustache}, or \emph{smiling} annotations as the demographic attribute. Figure~\ref{fig:celeba_experiment} shows some results for \emph{eyeglasses}; we provide more results, also for the other attributes, and a discussion in Appendix~\ref{app:CelebA}. Due to their high running time, we were not able to apply the methods by \citet{olfat2019}, \citet{Lee2022}, \citet{ravfogel2020}, or \citet{ravfogel2022} to this large and high-dimensional dataset. However, results for the method of \citet{ravfogel2022} for a smaller resolution~can~be~found~in~their~paper. \subsection{Comparison with \citet{agarwal_reductions_approach}}\label{subsec:experiments_bias_mitigation} We compare our proposed algorithms as means of bias mitigation to the state-of-the-art in-processing method of \citet{agarwal_reductions_approach}.
While our algorithms learn a fair representation and perform standard training (without fairness considerations) on top of that representation to learn a fair classifier, the approach of \citeauthor{agarwal_reductions_approach} modifies the training procedure. Concretely, their approach solves a sequence of cost-sensitive classification problems. We apply the various methods to the Adult Income and the Bank Marketing dataset \citep{moro2014}, which are both available on the UCI repository \citep{Dua:2019}. The goal for each method is to produce good accuracy vs. fairness trade-off curves---every point on a trade-off curve corresponds to a specific classifier. Note that the approach of \citeauthor{agarwal_reductions_approach} yields randomized classifiers, which is problematic if a classifier strongly affects humans' lives \citep{cotter_neurips2019}. For our algorithms we deploy the strategy of Section~\ref{subsec:tradeoff} to produce the trade-off curves. Figure~\ref{fig:bias_mitigation_exp} shows the results. All results are average results obtained from considering ten random draws of train and test data (see Appendix~\ref{app:details_about_datasets} for details). The plots show on the y-axis the accuracy of a classifier and on the x-axis its fairness violation, which is $\Delta_{DP}=|\Psymb(\hat{Y}=1|Z=0)-\Psymb(\hat{Y}=1|Z=1)|$ as in Section~\ref{subsec:experiments_linear_guarding} when aiming for DP and $\Delta_{EO}:=|\Psymb(\hat{Y}=1|Z=0, Y=1)-\Psymb(\hat{Y}=1|Z=1, Y=1)|$ when aiming for EO. In the first and the second plot of each row we learn a logistic regression classifier, aiming to satisfy DP or EO. We see that fair PCA produces similar curves as the method by \citeauthor{agarwal_reductions_approach} (note that in the bottom left plot $\Delta_{DP}$ is very small for all classifiers). However, fair PCA runs much faster: including the classifier training, fair PCA runs for 0.04 seconds on average while the method by \citeauthor{agarwal_reductions_approach} runs for 4.6 seconds (see Appendix~\ref{appendix_agarwal_addendum} for details). These plots do not show results for Fair PCA-S and fair kernel PCA since they cannot compete (we provide those results in Appendix~\ref{appendix_agarwal_addendum}). Fair PCA-S and fair kernel PCA can compete when training a kernel SVM classifier though (third and fourth plot of each row). \section{EXTENSIONS \& VARIANTS}\label{sec:extensions} We discuss several extensions and variants of our formulation of fair PCA and our proposed algorithm from Section~\ref{sec:methods}. \subsection{Trading Off Accuracy vs. Fairness}\label{subsec:tradeoff} Requiring an ML model to be fair often leads to a loss in predictive performance. For example, in the case of DP as defined in \eqref{eq:fairness_definitions} it is clear that any fair predictor cannot have perfect accuracy if $\Psymb(Y=1|Z=z_1)\neq \Psymb(Y=1|Z=z_2)$. Hence, it is desirable for bias mitigation methods to have a knob that one can turn to trade off accuracy vs. fairness. 
We introduce such a knob for fair PCA via the following strategy: if $\mathbf{U}_{\text{fair}}\in \mathbb{R}^{d\times k}$ denotes the projection matrix of fair PCA and $\mathbf{U}_{\text{st}}\in \mathbb{R}^{d\times k}$ the one of standard PCA, we concatenate the fair representation $\mathbf{U}^\transpose_{\text{fair}}\mathbf{x}$ of a datapoint~$\mathbf{x}$ with a rescaled version of the standard representation $\mathbf{U}^\transpose_{\text{st}}\mathbf{x}$, that is we consider $(\mathbf{U}^\transpose_{\text{fair}}\mathbf{x};\lambda \cdot \mathbf{U}^\transpose_{\text{st}}\mathbf{x})\in \mathbb{R}^{2k}$ for some $\lambda \in [0,1]$. If $\lambda=0$, this representation contains only the information of fair PCA; if $\lambda=1$, it contains all the information of standard PCA (and hence, potentially, all the demographic information in the data). For $0<\lambda\ll 1$, technically the new representation also contains all the information of standard PCA, but any ML model trained with weight regularization will have trouble exploiting that information and will be approximately fair.\footnote{ To obtain some intuition, consider the following simple scenario: let $k=1$, so that $\mathbf{U}^\transpose_{\text{fair}}\mathbf{x}=:x_{\text{f}}\in\mathbb{R}$ and $\mathbf{U}^\transpose_{\text{st}}\mathbf{x}=:x_{\text{s}}\in\mathbb{R}$, and assume that we train a linear model~$h:\mathbb{R}^2\rightarrow \mathbb{R}$, $h(u,v)=w_1\cdot u+w_2 \cdot v$, parameterized by $w_1$ and $w_2$, on the representation~$(x_{\text{f}};\lambda\cdot x_{\text{s}})$. With weight regularization, $w_1$ and $w_2$ are effectively bounded, and if $\lambda$ is small, $h(x_{\text{f}},\lambda\cdot x_{\text{s}})=w_1\cdot x_{\text{f}}+w_2\cdot\lambda \cdot x_{\text{s}}$ must mainly depend on the fair PCA representation~$x_{\text{f}}$ rather than the standard PCA representation~$x_{\text{s}}$. } There is a risk of redundant information in the concatenated representation~$(\mathbf{U}^\transpose_{\text{fair}}\mathbf{x};\lambda \cdot \mathbf{U}^\transpose_{\text{st}}\mathbf{x})$, which, according to some papers on feature selection \citep[e.g.,][]{Koller1996,Yu2004}, could confuse the learning algorithm applied on top. However, in our experiments in Section~\ref{subsec:experiments_bias_mitigation} this does not seem to be an issue and we see that our proposed strategy provides an effective way to trade~off~accuracy~vs.~\mbox{fairness}. \subsection{Adaptation to Equal Opportunity} Our formulation of fair PCA in Section~\ref{sec:methods} aimed at making the data representation independent of the demographic attribute, thus aiming for demographic parity fairness of arbitrary downstream classifiers. If we instead aim for equality of opportunity fairness of downstream classifiers trained to solve a specific task (coming with ground-truth labels~$y_i$), we apply the procedure only to datapoints~$\mathbf{x}_i$ with $y_i=1$. \subsection{Kernelizing Fair PCA}\label{subsec:kernelized_version} Fair PCA solves \begin{align}\label{eq:fair_PCA_simple} \begin{split} \argmax_{\mathbf{U}\in\mathbb{R}^{d\times k}:\, \mathbf{U}^\transpose\mathbf{U}=\mathbf{I}_{k\times k}}\trace(\mathbf{U}^\transpose\mathbf{X}\Xb^\transpose\mathbf{U})\\ \text{subject to}\quad \mathbf{z}^\transpose\mathbf{X}^\transpose\mathbf{U}=\mathbf{0}. \end{split} \end{align} To kernelize fair PCA, we rewrite~\eqref{eq:fair_PCA_simple} fully in terms of the kernel matrix~$\mathbf{K}=\mathbf{X}^\transpose\mathbf{X}\in\mathbb{R}^{n\times n}$ and avoid using the data matrix~$\mathbf{X}$.
By the representer theorem \citep{bernhard_representer_theorem}, the optimal $\mathbf{U}$ can be written as $\mathbf{U}=\mathbf{X} \mathbf{B}$ for some $\mathbf{B}\in\mathbb{R}^{n\times k}$. The objective $\trace(\mathbf{U}^\transpose\mathbf{X}\Xb^\transpose\mathbf{U})$ then becomes $\trace(\mathbf{B}^\transpose\mathbf{X}^\transpose\mathbf{X}\Xb^\transpose\mathbf{X}\mathbf{B})$, the constraint $\mathbf{U}^\transpose\mathbf{U}=\mathbf{I}_{k\times k}$ becomes $\mathbf{B}^\transpose\mathbf{X}^\transpose\mathbf{X}\mathbf{B}=\mathbf{I}_{k\times k}$, and the constraint $\mathbf{z}^\transpose\mathbf{X}^\transpose\mathbf{U}=\mathbf{0}$ becomes $\mathbf{z}^\transpose\mathbf{X}^\transpose\mathbf{X}\mathbf{B}=\mathbf{0}$. Hence, with $\mathbf{K}=\mathbf{X}^\transpose\mathbf{X}$, \eqref{eq:fair_PCA_simple} is equivalent to \begin{align}\label{fair_PCA_eq_K} \begin{split} \argmax_{\mathbf{B}\in\mathbb{R}^{n\times k}:\, \mathbf{B}^\transpose\mathbf{K}\mathbf{B}=\mathbf{I}_{k\times k}}\trace(\mathbf{B}^\transpose\mathbf{K}\Kb\mathbf{B})\\ \text{subject to}\quad \mathbf{z}^\transpose\mathbf{K}\mathbf{B}=\mathbf{0}. \end{split} \end{align} Let $\mathbf{R}\in\mathbb{R}^{n\times (n-1)}$ comprise as columns an orthonormal basis of the nullspace of $\mathbf{z}^\transpose\mathbf{K}$. With $\mathbf{B}=\mathbf{R}\mathbf{\Lambda}$ for $\mathbf{\Lambda}\in \mathbb{R}^{(n-1)\times k}$, \eqref{fair_PCA_eq_K} is equivalent to \begin{align}\label{fair_PCA_eq_K2} \argmax_{\mathbf{\Lambda}:\, \mathbf{\Lambda}^\transpose\mathbf{R}^\transpose\mathbf{K}\mathbf{R}\mathbf{\Lambda}=\mathbf{I}_{k\times k}}\trace(\mathbf{\Lambda}^\transpose\mathbf{R}^\transpose\mathbf{K}\Kb\mathbf{R}\mathbf{\Lambda}). \end{align} A solution $\mathbf{\Lambda}$ is obtained by filling the columns of $\mathbf{\Lambda}$ with the generalized eigenvectors, corresponding to the largest $k$ eigenvalues, that solve $\mathbf{R}^\transpose\mathbf{K}\Kb\mathbf{R}\mathbf{\Lambda}=\mathbf{R}^\transpose\mathbf{K}\mathbf{R}\mathbf{\Lambda} \mathbf{W}$, where $\mathbf{W}$ is a diagonal matrix containing the eigenvalues \citep{ghojogh2019}. When projecting datapoints onto the linear subspace, we can write $\mathbf{U}^\transpose\mathbf{X}=\mathbf{B}^\transpose\mathbf{X}^\transpose\mathbf{X}=\mathbf{\Lambda}^\transpose\mathbf{R}^\transpose\mathbf{K}$, and hence we have kernelized fair PCA. We provide the pseudo code of kernelized fair PCA in Appendix~\ref{app:multiple_groups}. Its running time is $\mathcal{O}(n^3)$ when given $\mathbf{K}$ as input, which is the same as the running time of standard kernel PCA. \subsection{Multiple Groups}\label{subsec:multiple_groups} We derive fair PCA for multiple demographic groups by means of a one-vs.-all approach: assume that there are $m$ disjoint groups. For every datapoint~$\mathbf{x}_i$ we consider $m$ one-hot demographic attributes~$z_i^{(1)},\ldots,z_i^{(m)}$ with $z_i^{(l)}=1$ if $\mathbf{x}_i$ belongs to group~$l$ and $z_i^{(l)}=0$ otherwise. We now require that for all linear functions $h$, $h(\mathbf{U}^\transpose\mathbf{x}_i)$ and $z_i^{(l)}$ are uncorrelated for all $l\in[m]$. This is equivalent to requiring that $\mathbf{Z}^\transpose\mathbf{X}^\transpose\mathbf{U}=\mathbf{0}$, where $\mathbf{Z}\in\mathbb{R}^{n\times m}$ and the $l$-th column of $\mathbf{Z}$ equals $(z_1^{(l)}-\bar{z}^{(l)},\ldots,z_n^{(l)}-\bar{z}^{(l)})^\transpose$ with $\bar{z}^{(l)}=\frac{1}{n} \sum_{i=1}^n z_i^{(l)}$.
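For concreteness, a minimal sketch of constructing~$\mathbf{Z}$ (the function name and the integer group encoding are our choices, not part of the pseudo code in the appendix):
\begin{verbatim}
import numpy as np

def centered_group_matrix(g, m):
    # g: (n,) integer array with values in {0, ..., m-1}
    # giving group membership; returns Z in R^{n x m}
    n = g.shape[0]
    Z = np.zeros((n, m))
    Z[np.arange(n), g] = 1.0    # one-hot columns z^(l)
    return Z - Z.mean(axis=0)   # subtract the column means zbar^(l)
\end{verbatim}
The columns of $\mathbf{Z}$ then play the role of the vector~$\mathbf{z}$ from the two-group case.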
The resulting optimization problem can be solved analogously to fair PCA for two groups as long as $k\leq d-m+1$, and for $m=2$ the formulation presented here is equivalent to the one of Section~\ref{sec:methods}. The interpretation provided there also holds in an analogous way for multiple groups: fair PCA for multiple groups finds a best-approximating projection such that the projected data's group-conditional means coincide for all groups. We provide details and the pseudo code of fair PCA for multiple groups, also in its kernelized version, in Appendix~\ref{app:multiple_groups}. \subsection{Multiple Demographic Attributes}\label{subsec:multiple_dem_attributes} We can also adapt fair PCA to simultaneously obfuscate demographic information for multiple demographic attributes (e.g., gender \emph{and} race), each of them potentially defining multiple demographic groups: assume that there are $p$ attributes, where the $r$-th attribute defines $m_r$ demographic groups. For $r\in[p]$, let $\mathbf{Z}_r\in\mathbb{R}^{n\times m_r}$ be the matrix~$\mathbf{Z}$ from Section~\ref{subsec:multiple_groups} for the $r$-th attribute. By stacking the matrices~$\mathbf{Z}_r$ to form one matrix $\mathbf{Z}_{\text{comb}}\in\mathbb{R}^{n\times (\sum_r m_r)}$ and replacing the matrix~$\mathbf{Z}$ from Section~\ref{subsec:multiple_groups} or Algorithm~\ref{alg:fair_PCA_multi_groups} with $\mathbf{Z}_{\text{comb}}$, we obtain fair PCA for multiple demographic attributes. The resulting algorithm is guaranteed to successfully terminate if $k\leq d - \sum_{r=1}^p m_r +p$. \subsection{Higher-Order Variant: Equalizing Group-Conditional Covariance Matrices}\label{subsec:covariance_extension} Fair PCA finds a best-approximating projection that equalizes the group-conditional means. It is natural to ask whether one can additionally equalize group-conditional covariances in order to further reduce the discriminability of the projected group-conditional distributions. For one demographic attribute with two demographic groups, this additional constraint would result in the following problem: \begin{align}\label{eq:fair_PCA_with_covariance_constraint} \begin{split} \argmax_{\mathbf{U}\in\mathbb{R}^{d\times k}:\, \mathbf{U}^\transpose\mathbf{U}=\mathbf{I}_{k\times k}}\trace(\mathbf{U}^\transpose\mathbf{X}\Xb^\transpose\mathbf{U})\\ \text{s. t.}\quad \mathbf{z}^\transpose\mathbf{X}^\transpose\mathbf{U}=\mathbf{0}~\wedge~\mathbf{U}^\transpose(\mathbf{\Sigma}_0-\mathbf{\Sigma}_1)\mathbf{U}=\mathbf{0}, \end{split} \end{align} where $\mathbf{z}$ is the vector encoding group-membership as in Section~\ref{sec:methods} and $\mathbf{\Sigma}_0$ and $\mathbf{\Sigma}_1$ are the two group-conditional covariance matrices. Unfortunately, depending on $\mathbf{\Sigma}_0$ and $\mathbf{\Sigma}_1$, this problem may not have a solution (e.g., when the feature variances for one group are much bigger than for the other group and hence $\mathbf{\Sigma}_0-\mathbf{\Sigma}_1$ is positive or negative definite). However, for small $k$ (or large $d$) we can apply a simple strategy to solve \eqref{eq:fair_PCA_with_covariance_constraint} approximately.
After writing $\mathbf{U}=\mathbf{R}\mathbf{\Lambda}$ as in Section~\ref{sec:methods}, the problem becomes \begin{align}\label{eq:fair_PCA_with_covariance_constraint_substituted} \begin{split} \argmax_{\mathbf{\Lambda}\in\mathbb{R}^{(d-1)\times k}:\, \mathbf{\Lambda}^\transpose\mathbf{\Lambda}=\mathbf{I}_{k\times k}}\trace(\mathbf{\Lambda}^\transpose\mathbf{R}^\transpose\mathbf{X}\Xb^\transpose\mathbf{R}\mathbf{\Lambda})\\ \text{subject to}\quad \mathbf{\Lambda}^\transpose\mathbf{R}^\transpose(\mathbf{\Sigma}_0-\mathbf{\Sigma}_1)\mathbf{R}\mathbf{\Lambda}=\mathbf{0}. \end{split} \end{align} For some parameter $l\in \{k,\ldots,d-1\}$, we can compute the $l$~smallest (in magnitude) eigenvalues of $\mathbf{R}^\transpose(\mathbf{\Sigma}_0-\mathbf{\Sigma}_1)\mathbf{R}$ and corresponding orthonormal eigenvectors. Let $\mathbf{Q}\in\mathbb{R}^{(d-1) \times l}$ comprise these eigenvectors as columns. By substituting $\mathbf{\Lambda} = \mathbf{Q}\mathbf{V}$ for $\mathbf{V}\in \mathbb{R}^{l\times k}$ and solving \begin{align*} \argmax_{\mathbf{V}\in\mathbb{R}^{l\times k}:\, \mathbf{V}^\transpose\mathbf{V}=\mathbf{I}_{k\times k}}\trace(\mathbf{V}^\transpose\mathbf{Q}^\transpose\mathbf{R}^\transpose\mathbf{X}\Xb^\transpose\mathbf{R}\mathbf{Q}\mathbf{V}), \end{align*} which just requires computing eigenvectors of $\mathbf{Q}^\transpose\mathbf{R}^\transpose\mathbf{X}\Xb^\transpose\mathbf{R}\mathbf{Q}$, we optimize the objective of Problem~\eqref{eq:fair_PCA_with_covariance_constraint_substituted} while approximately satisfying its constraint. The running time of this procedure is $\mathcal{O}(nd^2+d^3)$ as for standard PCA. The smaller the parameter~$l$, the more we equalize the projected data's group-conditional covariance matrices. For $l=d-1$, our strategy becomes vacuous and coincides with fair PCA as described in Section~\ref{sec:methods}. In our experiments in Section~\ref{sec:experiments} we choose $l=\max\{k,\lfloor 0.5d\rfloor\}$ or $l=\max\{k,\lfloor 0.85d\rfloor\}$ and observe good results. In particular, we see that the variant yields fairer non-linear downstream classifiers than fair PCA from Section~\ref{sec:methods}. An example can be seen in Figure~\ref{fig:example_fair_PCA_same_covariance}: here, the data comes from a mixture of two Gaussians in $\mathbb{R}^{10}$ with highly different covariance matrices and $k=2$. Each Gaussian corresponds to one demographic group. We can see that fair PCA from Section~\ref{sec:methods} fails to obfuscate the demographic information since the group-conditional covariance matrices of the projected data are highly different (just as for the original data), while the variant of this section (with $l=5=\lfloor 0.5d \rfloor$) successfully obfuscates the demographic information. \begin{figure} \centering \includegraphics[scale=0.25]{illustration_same_covariance/illustration_fair_PCA.pdf} \includegraphics[scale=0.25]{illustration_same_covariance/illustration_fair_PCA_eq_covariance.pdf} \caption{Fair PCA as described in Section~\ref{sec:methods} (left), which equalizes the group-conditional means, in comparison to the higher-order variant of Section~\ref{subsec:covariance_extension} (right), which additionally aims to equalize the group-conditional covariance matrices.
Only the higher-order variant completely obfuscates the demographic information (encoded by color: red~vs.~blue).} \label{fig:example_fair_PCA_same_covariance} \end{figure} \section{INTRODUCTION}\label{sec:introduction} Over the last decade, fairness in machine learning \citep{barocas-hardt-narayanan} has become an established field. Numerous definitions of fairness, and algorithms trying to satisfy these, have been proposed. In the context of classification, two of the most prominent fairness notions are demographic parity \citep[DP;][]{kamiran2011} and equality of opportunity \citep[EO;][]{hardt2016equality}. DP requires a classifier's prediction to be independent of a datapoint's demographic attribute (such as a person's gender or race), and EO requires the prediction to be independent of the attribute given that the datapoint's ground-truth label is positive. Formally, in the case of binary classification, \begin{align}\label{eq:fairness_definitions} \begin{split} \text{DP:}~~\Psymb(\hat{Y}=1|Z=z)&=\Psymb(\hat{Y}=1),\\ \text{EO:}~~\Psymb(\hat{Y}=1|Z=z,Y=1)&=\Psymb(\hat{Y}=1|Y=1), \end{split} \end{align} where $\Psymb$ is a probability distribution over random variables~$Y,\hat{Y}\in\{0,1\}$ and $Z\in\mathcal{Z}$, with $Y$ representing the ground-truth label, $\hat{Y}$ representing the classifier's prediction and $Z$ representing the demographic attribute. An appealing approach to satisfy DP or EO is fair representation learning (e.g., \citealp{zemel2013}; see Section~\ref{sec:related_work} for related work): let $X\in\mathcal{X}$ denote a random vector representing features based on which predictions are made. The idea of fair representation learning is to learn a \emph{fair} feature representation $f:\mathcal{X}\rightarrow \mathcal{X}'$~such~that $f(X)$ is (approximately) independent of the demographic attribute~$Z$ (conditioned on $Y=1$ if one aims to satisfy EO). Once a fair representation is found, any model trained on this representation will also be fair. Of course, the representation still needs to contain some information about $X$ in order~to~be~useful. Leaving fairness aside, one of the most prominent methods for representation learning (in its special form of dimensionality reduction) is principal component analysis \citep[PCA; e.g.,][]{shalev2014understanding}. PCA projects the data onto a linear subspace such that the approximation error is minimized. The key idea of our paper is to alter PCA such that it gives a fair representation. This idea is not new: \citet{olfat2019} and \citet{Lee2022} already proposed formulations of fair PCA that aim for the same goal. We discuss the differences between our paper and these works in detail in Section~\ref{sec:related_work}. In short, the differences are twofold: (i)~while the goal is the same, the derivations are different, and we consider our derivation to be simpler and more intuitive. (ii)~The different derivations lead to different algorithms, with our main algorithm being very similar to standard PCA. While our formulation allows for an analytical solution by means of eigenvector computations, the methods by \citeauthor{olfat2019} and \citeauthor{Lee2022} rely on semidefinite programming or manifold optimization. While our algorithm can be implemented in a few lines of code and runs very fast, with the same complexity as standard PCA, their algorithms rely on specialized libraries and suffer from a huge running time.
We believe that because of these advantages our new derivation of fair PCA and our proposed approach add value to the existing literature. \paragraph{Outline} In Section~\ref{sec:methods}, we first review PCA and then derive our formulation of fair PCA. We discuss extensions and variants, including a kernelized version, in Section~\ref{sec:extensions}. We provide a detailed discussion of related work in Section~\ref{sec:related_work} and present extensive experiments in Section~\ref{sec:experiments}. Some details and experiments are deferred to the appendix. \paragraph{Notation} For $n\in\mathbb{N}$, let $[n]=\{1,\ldots,n\}$. We generally denote scalars by non-bold letters, vectors by bold lower-case letters, and matrices by bold upper-case letters. All vectors $\mathbf{x}\in\mathbb{R}^d\equiv\mathbb{R}^{d\times 1}$ are column vectors, except that we use $\mathbf{0}$ to denote both a column vector and a row vector (and also a matrix) of all zeros. Let $\mathbf{x}^\transpose\in\mathbb{R}^{1\times d}$ denote the transpose of~$\mathbf{x}$, a row vector. We denote the Euclidean norm of $\mathbf{x}$ by $\|\mathbf{x}\|_2=\sqrt{\sum_{i} \mathbf{x}_i^2}$. For a matrix~$\mathbf{X}\in\mathbb{R}^{d_1\times d_2}$, let $\mathbf{X}^\transpose\in\mathbb{R}^{d_2\times d_1}$ be its transpose. $\mathbf{I}_{k\times k}$ denotes the identity matrix of size~$k$. For $\mathbf{X}\in\mathbb{R}^{d\times d}$, let $\trace(\mathbf{X})=\sum_{i=1}^d \mathbf{X}_{ii}$. \section{FAIR PCA FOR FAIR REPRESENTATION LEARNING}\label{sec:methods} We first review PCA and then derive our formulation of fair PCA. Our formulation is a relaxation of a strong constraint imposed on the PCA objective. We provide a natural interpretation of the relaxation and show that it is equivalent to the original constraint under a particular data~model. \vspace{2pt} \textbf{PCA}~~ We represent a dataset of $n$ points $\mathbf{x}_1,\ldots,\mathbf{x}_n\in \mathbb{R}^d$ as a matrix $\mathbf{X}\in\mathbb{R}^{d\times n}$, where the $i$-th column equals $\mathbf{x}_i$. Given a target dimension~$k\in[d-1]$, PCA \citep[e.g.,][]{shalev2014understanding} finds a best-approximating projection of the dataset onto a $k$-dimensional linear subspace. That is, PCA finds $\mathbf{U}\in\mathbb{R}^{d\times k}$ solving \begin{align}\label{eq:standard_PCA} \begin{split} &\argmin_{\mathbf{U}\in\mathbb{R}^{d\times k}:\, \mathbf{U}^\transpose\mathbf{U}=\mathbf{I}_{k\times k}}\sum_{i=1}^n\|\mathbf{x}_i-\mathbf{U}\Ub^\transpose\mathbf{x}_i\|_2^2\\ &~~~~\equiv \argmax_{\mathbf{U}\in\mathbb{R}^{d\times k}:\, \mathbf{U}^\transpose\mathbf{U}=\mathbf{I}_{k\times k}} \trace(\mathbf{U}^\transpose\mathbf{X}\Xb^\transpose\mathbf{U}). \end{split} \end{align} $\mathbf{U}^\transpose\mathbf{x}_i\in \mathbb{R}^k$ is the projection of $\mathbf{x}_i$ onto the subspace spanned by the columns of $\mathbf{U}$ viewed as a point in the lower-dimensional space $\mathbb{R}^k$, and $\mathbf{U}\Ub^\transpose\mathbf{x}_i\in \mathbb{R}^d$ is the projection viewed as a point in the original space~$\mathbb{R}^d$. A solution~to~\eqref{eq:standard_PCA} is given by any $\mathbf{U}$ that comprises as columns orthonormal eigenvectors, corresponding to the largest $k$ eigenvalues,~of~$\mathbf{X}\Xb^\transpose$. \vspace{2pt} \textbf{Our formulation of fair PCA}~~ In fair PCA, we aim to remove demographic information when projecting the dataset onto the $k$-dimensional linear subspace.
We look for a best-approximating projection such that the projected data does not contain demographic information anymore: let $z_i\in\{0,1\}$ denote the demographic attribute of datapoint~$\mathbf{x}_i$, which encodes membership in one of two demographic groups (we discuss how to extend our approach to multiple groups in Section~\ref{subsec:multiple_groups} and to multiple attributes in Section~\ref{subsec:multiple_dem_attributes}). Ideally, we would like that no classifier can predict $z_i$ when getting to see only the projection of $\mathbf{x}_i$ onto the $k$-dimensional subspace, that is, we would want to solve \begin{align}\label{eq:fair_PCA} \begin{split} & ~~~~~~~~~~\argmax_{\mathbf{U}\in\mathcal{U}} \trace(\mathbf{U}^\transpose\mathbf{X}\Xb^\transpose\mathbf{U}),\quad\text{where}\\ &\mathcal{U}=\left\{\mathbf{U}\in\mathbb{R}^{d\times k}:\,\mathbf{U}^\transpose\mathbf{U}=\mathbf{I}_{k\times k}~\text{and $\forall h: \mathbb{R}^k\rightarrow \mathbb{R}$,}\right.\\ & ~~~~~~~\left.\text{ $h(\mathbf{U}^\transpose\mathbf{x}_i)$ and $z_i$ are statistically independent}\right\}. \end{split} \end{align} It is not hard to see that for a given target dimension~$k$ the set $\mathcal{U}$ defined in \eqref{eq:fair_PCA} may be empty, and hence Problem~\eqref{eq:fair_PCA} may not be well defined (see Appendix~\ref{app:not_well_defined} for an example). The reason is that linear projections are not flexible enough to always remove all demographic information from a dataset.\footnote{ Also more powerful ``projections'' in methods for learning adversarially fair representations (cf. Section~\ref{sec:related_work}) have been found to fail to remove all demographic information; that is, a sufficiently strong adversary can still predict demographic information from the supposedly fair representation \citep[e.g., ][]{balunovic2022}. } As a remedy, we relax Problem~\eqref{eq:fair_PCA} by expanding the set~$\mathcal{U}$ in two ways: first, rather than preventing arbitrary functions~$h: \mathbb{R}^k\rightarrow \mathbb{R}$ from recovering $z_i$, we restrict our goal to linear functions of the form $h(\mathbf{x})= \mathbf{w}^\transpose\mathbf{x}+b$ (we provide a non-linear kernelized version of fair PCA in Section~\ref{subsec:kernelized_version} and another variant that can deal, to some extent, with non-linear~$h$ in Section~\ref{subsec:covariance_extension}); second, rather than requiring $h(\mathbf{U}^\transpose\mathbf{x}_i)$ and $z_i$ to be independent, we only require the two variables to be uncorrelated, that is, their covariance to be zero. This leaves us with the following problem: \begin{align}\label{eq:fair_PCA_relaxed} \begin{split} & ~~~~~~~~~~\argmax_{\mathbf{U}\in\mathcal{U}'} \trace(\mathbf{U}^\transpose\mathbf{X}\Xb^\transpose\mathbf{U}),\quad\text{where}\\ &\mathcal{U}'=\left\{\mathbf{U}\in\mathbb{R}^{d\times k}:\,\mathbf{U}^\transpose\mathbf{U}=\mathbf{I}_{k\times k}~\text{and $\forall \mathbf{w}\in\mathbb{R}^k,b\in\mathbb{R}$,}\right.\\ & ~~~~~~~~~~~\text{ $\mathbf{w}^\transpose\mathbf{U}^\transpose\mathbf{x}_i+b$ and $z_i$ are uncorrelated, that is} \\ & ~~~~~~~~~~~\left.\Cov(\mathbf{w}^\transpose\mathbf{U}^\transpose\mathbf{x}_i+b,z_i)=0\right\}. \end{split} \end{align} We show that Problem~\eqref{eq:fair_PCA_relaxed} is well defined.
Conveniently, it can be solved analytically similarly to standard PCA: with $\bar{z}=\frac{1}{n} \sum_{i=1}^n z_i$ and $\mathbf{z}=(z_1-\bar{z},\ldots,z_n-\bar{z})^\transpose \in\mathbb{R}^{n}$, \begin{align*} \forall \mathbf{w}\in\mathbb{R}^k,b\in\mathbb{R}: \text{$\mathbf{w}^\transpose\mathbf{U}^\transpose\mathbf{x}_i+b$ and $z_i$ are uncorr.}~\Leftrightarrow\\ \forall \mathbf{w}\in\mathbb{R}^k,b\in\mathbb{R}: \sum_{i=1}^n (z_i-\bar{z})\cdot(\mathbf{w}^\transpose\mathbf{U}^\transpose\mathbf{x}_i+b)=0~\Leftrightarrow\\ \forall \mathbf{w}: \mathbf{w}^\transpose \mathbf{U}^\transpose\mathbf{X}\mathbf{z}=0~\Leftrightarrow~ \mathbf{U}^\transpose\mathbf{X}\mathbf{z}=\mathbf{0}~\Leftrightarrow~ \mathbf{z}^\transpose\mathbf{X}^\transpose\mathbf{U}=\mathbf{0}. \end{align*} We assume that $\mathbf{z}^\transpose\mathbf{X}^\transpose\neq \mathbf{0}$ (otherwise Problem~\eqref{eq:fair_PCA_relaxed} is the same as the standard PCA Problem~\eqref{eq:standard_PCA}). Let $\mathbf{R}\in\mathbb{R}^{d\times (d-1)}$ comprise as columns an orthonormal basis of the nullspace of $\mathbf{z}^\transpose\mathbf{X}^\transpose$. Every $\mathbf{U}\in \mathcal{U}'$ can then be written as $\mathbf{U}=\mathbf{R}\mathbf{\Lambda}$ for $\mathbf{\Lambda}\in\mathbb{R}^{(d-1)\times k}$ with $\mathbf{\Lambda}^\transpose\mathbf{\Lambda}=\mathbf{I}_{k\times k}$, and the objective of \eqref{eq:fair_PCA_relaxed} becomes $\trace(\mathbf{\Lambda}^\transpose\mathbf{R}^\transpose\mathbf{X}\Xb^\transpose\mathbf{R}\mathbf{\Lambda})$, where we now maximize w.r.t.~$\mathbf{\Lambda}$. The latter problem has exactly the form of \eqref{eq:standard_PCA} with $\mathbf{X}\Xb^\transpose$ replaced by $\mathbf{R}^\transpose\mathbf{X}\Xb^\transpose\mathbf{R}$, and we know that a solution is given by orthonormal eigenvectors, corresponding to the largest $k$ eigenvalues, of $\mathbf{R}^\transpose\mathbf{X}\Xb^\transpose\mathbf{R}$. Once we have $\mathbf{\Lambda}$, we obtain a solution~$\mathbf{U}$ of \eqref{eq:fair_PCA_relaxed} by computing~$\mathbf{U}=\mathbf{R}\mathbf{\Lambda}$. We summarize the procedure as our proposed formulation of fair PCA in Algorithm~\ref{alg:fair_PCA}. Its running time is $\mathcal{O}(nd^2+d^3)$, which is the same as the running time of standard PCA. \begin{algorithm}[t!] \caption{Fair PCA (for two demographic groups) }\label{alg:fair_PCA} \begin{algorithmic} \STATE {\bfseries Input:} data matrix $\mathbf{X}\in\mathbb{R}^{d\times n}$; demographic attr.~$z_i\in\{0,1\}$, $i\in[n]$; target dimension $k\in[d-1]$ \vspace{1mm} \STATE {\bfseries Output:} a solution $\mathbf{U}$ to Problem~\eqref{eq:fair_PCA_relaxed} \begin{itemize}[leftmargin=*] \setlength{\itemsep}{-2pt} \item set $\mathbf{z}=(z_1-\bar{z},\ldots,z_n-\bar{z})^\transpose$ with $\bar{z}=\frac{1}{n} \sum_{i=1}^n z_i$ \item compute an orthonormal basis of the nullspace of $\mathbf{z}^\transpose\mathbf{X}^\transpose$ and build matrix~$\mathbf{R}$ comprising the basis vectors as columns \item compute orthonormal eigenvectors, corresponding to the largest $k$ eigenvalues, of $\mathbf{R}^\transpose\mathbf{X}\Xb^\transpose\mathbf{R}$ and build matrix~$\mathbf{\Lambda}$ comprising the eigenvectors as columns \item return $\mathbf{U}=\mathbf{R}\mathbf{\Lambda}$ \end{itemize} \end{algorithmic} \end{algorithm} The derivation above yields a natural interpretation of the relaxed Problem~\eqref{eq:fair_PCA_relaxed}. 
It is easy to see that the condition~$\mathbf{U}^\transpose\mathbf{X}\mathbf{z}=\mathbf{0}$ is equivalent to \begin{align*}% \frac{1}{|\{i:z_i=0\}|}\sum_{i: z_i=0} \mathbf{U}^\transpose\mathbf{x}_i = \frac{1}{|\{i:z_i=1\}|}\sum_{i: z_i=1} \mathbf{U}^\transpose\mathbf{x}_i. \end{align*} Hence, fair PCA finds a best-approximating projection such that the projected data's group-conditional means coincide. This interpretation implies that for a special data-generating model the relaxed Problem~\eqref{eq:fair_PCA_relaxed} solved by fair PCA coincides with Problem~\eqref{eq:fair_PCA}, which we originally wanted~to~solve. \begin{proposition}\label{prop:gaussian_data} If datapoints are sampled from a mixture of two Gaussians with identical covariance \mbox{matrices} and the two Gaussians corresponding to demographic groups, then, in the limit of $n\rightarrow \infty$, \eqref{eq:fair_PCA} and~\eqref{eq:fair_PCA_relaxed}~are~equivalent. \end{proposition} \begin{proof} Let $\mathbf{\mu}_0,\mathbf{\mu}_1\in\mathbb{R}^d$ be the means of the two Gaussians and $\mathbf{\Sigma}\in\mathbb{R}^{d\times d}$ their shared covariance matrix such that datapoints are distributed as $\mathbf{x}|z=l \sim \mathcal{N}(\mu_l,\mathbf{\Sigma})$, $l\in\{0,1\}$. After projecting datapoints onto $\mathbb{R}^k$ using $\mathbf{U}$ we have $\mathbf{U}^\transpose\mathbf{x}|z=l \sim \mathcal{N}(\mathbf{U}^\transpose\mu_l,\mathbf{U}^\transpose\mathbf{\Sigma} \mathbf{U})$. For $\mathbf{U}\in\mathcal{U}'$ as defined in \eqref{eq:fair_PCA_relaxed} the interpretation from above shows that $\mathbf{U}^\transpose\mu_0=\mathbf{U}^\transpose\mu_1$, and hence $h(\mathbf{U}^\transpose\mathbf{x})$ and $z$ are independent for any $h$. \end{proof}
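For concreteness, Algorithm~\ref{alg:fair_PCA} can be implemented in a few lines; the following sketch relies on standard numerical routines (variable names and the use of \texttt{scipy.linalg.null\_space} are our choices):
\begin{verbatim}
import numpy as np
from scipy.linalg import null_space

def fair_pca(X, z, k):
    # X: (d, n) data matrix with datapoints as columns
    # z: (n,) binary demographic attributes; k <= d - 1
    z_c = z - z.mean()                  # centered attribute vector
    R = null_space((X @ z_c)[None, :])  # basis of the nullspace of z^T X^T, (d, d-1)
    M = R.T @ X @ X.T @ R
    w, V = np.linalg.eigh(M)            # eigenvalues in ascending order
    Lam = V[:, -k:]                     # eigenvectors of the k largest eigenvalues
    return R @ Lam                      # U = R Lambda, shape (d, k)
\end{verbatim}
The trade-off knob of Section~\ref{subsec:tradeoff} then amounts to concatenating \texttt{fair\_pca(X, z, k).T @ X} with $\lambda$ times the analogous standard-PCA projection.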
\section{RELATED WORK}\label{sec:related_work} \paragraph{Fairness in machine learning (ML)} Most works study the problem of fair classification \citep[e.g.,][]{zafar2019}, but fairness has also been studied for unsupervised learning tasks \citep[e.g.,][]{chierichetti2017fair}. Two of the most prominent definitions of fairness in classification are demographic parity \citep{kamiran2011} and equal opportunity \citep{hardt2016equality} as introduced in Section~\ref{sec:introduction}. Methods for fair classification are commonly categorized into pre-processing, in-processing, and post-processing methods, depending on the stage of the training pipeline at which they are applied \citep{alessandro2017}. In the following we discuss the works most closely related to our paper, all of which can generally be considered as pre-processing~methods. \paragraph{Fair representation learning} \citet{zemel2013} initiated the study of fair representation learning, where the goal is to learn an intermediate data representation that obfuscates demographic information while encoding other (non-demographic) information as well as possible. Once such a representation is found, any ML model trained on it should not be able to discriminate based on demographic information and hence be demographic parity fair. The approach of \citeauthor{zemel2013} learns prototypes and a probabilistic mapping of datapoints to these prototypes. Since then, numerous methods for fair representation learning have been proposed \citep[e.g.][]{louizos2016,moyer2018invariant,sarhan2020,balunovic2022,oh2022}, many of them formulating the problem as an adversarial game \citep[e.g.][]{edwards2016,Beutel2017DataDA,xie2017controllable,jia2018right,madras2018,raff2018gradientreversal,adel2019,alvi2019a,feng2019,song2019} and some of them adapting their approach to aim for downstream classifiers to be equal opportunity fair \citep[e.g.][]{madras2018,song2019}. In contrast to our proposed approach, none of these techniques allows for an analytical solution and all of them require numerical optimization, which has often been found hard to perform, in particular for the adversarial approaches (cf. \citealp{feng2019}, Sec.~5, or \citealp{oh2022}, Sec.~2.2).
\paragraph{Fair PCA for fair representation learning and other methods for linear guarding} The methods discussed next are all methods for fair representation learning that bear some resemblance to our proposed approach. Most closely related to our work are the papers by \citet{olfat2019}, \citet{Lee2022}, and \citet{Shao2022}. \citet{olfat2019} introduced a notion of fair PCA with the same goal that we are aiming for in our formulation, that is, finding a best-approximating projection such that no linear classifier can predict demographic information from the projected data. They use Pinsker's inequality and an approximation of the group-conditional distributions by two Gaussians to obtain an upper bound on the best linear classifier's accuracy. The upper bound is minimized when the projected data's group-conditional means and covariance matrices coincide. \citeauthor{olfat2019} then formulate a semidefinite program (SDP) to minimize the projection's reconstruction error while satisfying upper bounds on the differences in the projected data's group-conditional means and covariance matrices. This SDP approach has been criticized by \citet[][Section 5.1]{Lee2022} for its high runtime and its relaxation of the rank constraint to a trace constraint, ``yielding sub-optimal outputs in presence of (fairness) constraints, even to substantial order in some cases''. In Section~\ref{sec:experiments} we rerun the experiments of \citeauthor{Lee2022} and also observe that the running time of the method by \citeauthor{olfat2019} is prohibitively high. Furthermore, we consider our derivation of fair PCA to be more intuitive since we do not rely on upper bounds or a Gaussian approximation. Arguing that matching only group-conditional means and covariance matrices of the projected data might be too weak a constraint, \citet{Lee2022} define a version of fair PCA by requiring that the projected data's group-conditional distributions coincide. They use the maximum mean discrepancy to measure the deviation of the group-conditional distributions and a penalty method for manifold optimization to solve the resulting optimization problem. While the method by \citeauthor{Lee2022} runs much faster than the one by \citet{olfat2019}, we find its running time to be significantly higher than the running time of our proposed algorithms; still, in terms of the quality of the data representation our algorithms can compete. \citeauthor{Lee2022} present their algorithm only for two demographic groups and it is unclear whether it can be extended to more than~two~groups. Concurrently with the writing of our paper, \citet{Shao2022} proposed the spectral attribute removal (SAL) algorithm to remove demographic information via a data projection. Their algorithm is based on the observation that a singular value decomposition of the cross-covariance matrix between feature vector~$\mathbf{x}$ and demographic attribute~$z$ yields projections that maximize the covariance of $\mathbf{x}$ and $z$. Although derived differently, it turns out that the SAL algorithm and our fair PCA method are closely related: SAL projects the data onto the subspace spanned by the columns of the matrix~$\mathbf{R}$ in our Algorithm~\ref{alg:fair_PCA}. Hence, for $k=d-1$ the two algorithms project the data onto the same subspace. However, SAL does not allow one to choose an embedding dimension smaller than $d-1$.
While \citeauthor{Shao2022} also provide a kernelized variant of their algorithm, they do not provide the interpretation of matching group-conditional means or any extension to also match group-conditional covariances. There are also papers that propose methods for linear guarding (that is, finding a data representation from which no linear classifier can predict demographic information) that are not related to PCA: \citet{ravfogel2020} iteratively train a linear classifier to predict the demographic attribute and then project the data onto the classifier's nullspace; \citet{haghighatkhah2021} describe a procedure to find a projection such that the projected data is not linearly separable w.r.t. the demographic attribute anymore, but still linearly separable w.r.t. some other binary attributes; \citet{ravfogel2022} formulate the problem of linear guarding as a linear minimax game, where a projection matrix competes against the parameter vector of a linear model. In the case of linear regression this game can be solved analytically, while for logistic regression and other linear models a relaxation of the game is solved via alternating minimization and maximization. \paragraph{Fair PCA for balancing reconstruction error} A very different notion of fair PCA was introduced by \citet{samira2018}, who view PCA as a standalone problem and aim to balance the excess reconstruction error across different demographic groups. This line of work, which is incomparable to our notion of fair PCA and the notions discussed above, has been extended by \citet{samira2019}, \citet{Pelegrina2021} and \citet{KamaniPCA}. \paragraph{Information bottleneck method} As pointed out by one of the reviewers, there might be a close relationship between our formulation of fair PCA and the information bottleneck method \citep{tishby1999}, where the goal is to find a compression of a signal variable $X$ while preserving information about a relevance variable~$Y$. In particular, when $X$ and $Y$ are jointly multivariate Gaussian variables, the optimal projection matrix is obtained by solving an eigenvalue problem involving the cross-covariance matrix~$\mathbf{\Sigma}_{XY}=(\mathbb{E}[(X_i-\mathbb{E}[X_i])(Y_j-\mathbb{E}[Y_j])])_{ij}$ \citep{chechik2005}.
\section{Introduction} Before the first star was born, Li was the third most abundant element in the universe. In the intervening 14~Gyr between then and now, Li has been both created by spallation of carbon, nitrogen, and oxygen nuclei and destroyed by astration in the interiors of stars. In most stars, destruction rates exceeded creation rates. Because stars produce rather than destroy most other elements, Li today is among the least abundant of the elements lighter than Zn. \addtocounter{footnote}{-1} In most metal-poor stars, the abundance of Li is a predictable function of surface temperature. Spectroscopy of large samples of stars in the Milky Way's halo \citep{spi82,gra00} and in metal-poor globular clusters \citep{lin09b,muc11} shows that Li abundances in dwarf stars remain at the same value ($A({\rm Li}) = 2.3$)\footnote{$A({\rm Li}) = 12 + \log [n({\rm Li})/n({\rm H})]$ where $n$ is the number density of atoms.} until the first dredge-up on the subgiant branch. In this surface convection episode, material from deeper, hotter layers of the stars mixes with material at the stellar surface. The deeper layers contain no Li because Li burning is very efficient at $T \ga 2.5 \times 10^6$~K, cool compared to hydrogen burning temperatures. As a result, the first dredge-up depletes the photospheric value of Li by a factor of 15--20 \citep{lin09b}. Another dilution episode occurs at the luminosity function bump in the red giant branch (RGB)\@. Extra mixing processes \citep[e.g.,][]{pal11a} further introduce Li-depleted material to the stellar surface. The abundance of Li in red giants drops drastically as the star evolves beyond the RGB bump. After both dilution episodes, the number density of Li atoms drops to below 10 parts per trillion. Almost all first-ascent red giants with luminosities greater than the RGB bump have $A({\rm Li}) < 1.5$. Some stars exhibit glaring exceptions to the standard picture of Li evolution. For example, excess Li often accompanies $^{13}$C enhancement in CJ stars \citep{hat03}. These stars could have participated in ``hot bottom burning'' \citep[a phrase coined by][]{sca75}, wherein $^7$Li can be synthesized in the star and observed at its surface \citep{cam55}. Specifically, the reaction $^3{\rm He}(\alpha,\gamma)^7{\rm Be}$, part of the pp-II hydrogen burning chain, occurs at temperatures greater than $10^7$~K\@. Li can be produced from Be by electron capture: $^7{\rm Be}(e^- \nu)^7{\rm Li}$. However, the second reaction must occur at $T < 2.5 \times 10^6$~K, or else the Li will be destroyed by proton capture. The proposed Cameron-Fowler (\citeyear{cam71}) mechanism solves the temperature discrepancy by theorizing that $^7$Be can be brought to the surface of the star, where it may capture an electron to create $^7$Li. The stellar surface is cool enough to preserve Li. However, the ongoing convection guarantees that the surface Li atoms do not last long. They quickly return to destructive temperatures. Thus, the surface composition of Li is a balance between its creation by the Cameron-Fowler mechanism and its destruction by convection. Hot bottom burning is effective at producing $^7$Li in asymptotic giant branch (AGB) stars with masses of about 4--7~$M_{\sun}$ \citep{ibe75,sac92}. The convective envelopes of these stars reach layers where $^7$Be is created. However, some less massive giants on both the RGB and AGB have been found to be Li-rich \citep[e.g.,][]{kra99,pal11a,ruc11}. The convective envelopes of these stars do not reach layers with $^7$Be.
Therefore, $^7$Be should not be transported to the surfaces of these stars in the context of the standard model of stellar evolution. If the Cameron-Fowler mechanism is operating in these stars, then it requires ``extra mixing'' or ``cool bottom processing'' \citep{boo95,sac99} to connect the base of the convective envelope to deeper regions of the star that contain Be. At one time, thermohaline convection was considered as a source of the extra mixing \citep{cha07}, but the diffusion was later shown to be too slow to account for the photospheric compositions of red giants \citep{den11,pal11b}. Alternative mixing processes include magnetic buoyancy \citep{bus07,nor08} and rotation \citep{cha10}. Even though low-mass, Li-rich giants are rare, their existence challenges the standard theory of stellar evolution. They have spawned numerous modifications to the standard model. Different explanations depend on the evolutionary state of the star: the RGB bump \citep{cha00}, the AGB \citep{nol03}, or even anywhere along the RGB \citep{sac99}. Furthermore, the composition of the star also influences the strength of extra mixing and therefore the surface abundance of Li \citep{sac99}. Globular clusters and dwarf spheroidal galaxies (dSphs) are excellent places to search for Li-rich giants. First, they offer space densities high enough for efficient observations with multi-object spectrographs. Second, the stars are at a uniform distance, which eases the determination of evolutionary state and Li abundance. \section{Lithium Measurements} \begin{figure} \centering \includegraphics[width=\linewidth]{f1.eps} \caption{Small region of DEIMOS spectra centered on the \ion{Li}{1}~6708 multiplet (dashed vertical line) for each of the 14 dSph giants with detectable Li. The observed spectra (black) have been normalized to have unit continuum. The red curves show the best-fitting synthetic spectra.\label{fig:spectra}} \end{figure} \citet{kir10} obtained spectra of nearly 3000 red giants in eight dSphs with the DEIMOS medium-resolution, multi-object spectrograph \citep{fab03} on the Keck~II telescope. Of these data, 2812 spectra included the spectral region around the \ion{Li}{1} resonance line at 6708~\AA\@. The slit placement of the other stars caused their spectra to terminate redward of 6708~\AA\@. We quantified the signal-to-noise ratios (S/Ns) of the spectra in the vicinity of the Li line by computing the inverse standard deviation of continuum-normalized pixels within 8~\AA\ of the Li line but excluding the 4~\AA\ immediately surrounding the line. We searched for detections of the Li line in the 2054 spectra with ${\rm S/N} > 10~{\rm pixel}^{-1}$ that included the appropriate spectral range. We found 15 spectra with strong Li lines. The sample is random because the stars were not chosen for any property that could predict Li enhancement. One of these stars, star 461 in the Draco dSph, was already known to be Li-rich \citep{dom04}. The other stars belong to five dwarf galaxies: Sculptor, Fornax, Leo~I, Leo~II, and Canes Venatici~I\@. This sample more than doubles the number of known Li-rich, metal-poor (${\rm [Fe/H]} \la -0.7$) red giants. Table~\ref{tab:sample} gives the identities of the 14 newly discovered, Li-rich stars. Because the stars reside in different galaxies, the photometry is not homogeneous. Table~\ref{tab:sample} gives the filter set in which each star was observed. 
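As an illustration of the S/N estimate described above, a minimal sketch (we interpret the windows as $\pm 8$~\AA\ around the line with the central $\pm 2$~\AA\ excluded; the function name and this reading of the window sizes are our own):
\begin{verbatim}
import numpy as np

def snr_per_pixel(wave, flux_norm, li_center=6708.0):
    # continuum pixels: within 8 A of the Li line,
    # excluding the 4 A immediately surrounding it
    near = np.abs(wave - li_center) < 8.0
    line = np.abs(wave - li_center) < 2.0
    return 1.0 / np.std(flux_norm[near & ~line])
\end{verbatim}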
Figure~\ref{fig:spectra} shows the Li-rich stars' spectra around the Li line, and Table~\ref{tab:abundances} gives the previously measured \citep{kir10} temperatures, surface gravities, microturbulent velocities, and metallicities for these stars. We measured the equivalent widths (EWs) of the Li resonance lines by fitting Gaussians. In order to estimate the uncertainty on EW, we resampled the spectra 1000 times. In each realization, we perturbed the flux value of each pixel. The amount of perturbation was sampled from a Gaussian random distribution with a width equal to the measurement uncertainty of the pixel's flux. The EWs of the detected Li lines range from 175 to 694~m\AA. Table~\ref{tab:abundances} gives the EWs. We measured the EWs only to illustrate that these lines are very strong and easily detected. We used spectral synthesis, not the EWs, to quantify the Li abundances. \begin{figure*} \centering \includegraphics[width=\linewidth]{f2.eps} \caption{Color-magnitude diagrams for the dwarf galaxies in which Li-rich giants were detected. Blue points indicate radial velocity members. Red, five-pointed stars indicate the Li-rich stars, which are also radial velocity members. The orange horizontal lines indicate the approximate magnitudes of the RGB bumps. This magnitude was calculated with \citeauthor{fer99}'s (\citeyear{fer99}) formula assuming the average age \citep{orb08} and metallicity \citep{kir11} of the galaxy. The colors and magnitudes of the Li-rich stars indicate that they are low-mass giants more luminous than the RGB bump.\label{fig:cmds}} \end{figure*} The Li-rich stars' positions in color-magnitude diagrams (CMDs, Figure~\ref{fig:cmds}) are consistent with either the RGB or AGB\@. The colors and magnitudes of the two branches are hardly different for the old populations typical of dSphs. Whether the stars in our sample belong to the RGB or AGB, they belong to dwarf galaxies that are too old to host the 4--7~$M_{\odot}$ AGB stars that can produce Li in the standard model of stellar evolution \citep{sac92}. The colors of the Li-rich stars are also much redder than intermediate-mass AGB stars. Therefore, these low-mass stars are anomalous regardless of their evolutionary states. \begin{deluxetable*}{lcccccccc} \tablecolumns{9} \tablewidth{0pt} \tablecaption{Li-Rich Red Giant Sample\label{tab:sample}} \tablehead{\colhead{Star Name} & \colhead{RA (J2000)} & \colhead{Dec (J2000)} & \colhead{Filter 1} & \colhead{Mag 1} & \colhead{Filter 2} & \colhead{Mag 2} & \colhead{Approx. 
$V$} & \colhead{S/N at 6708~\AA\ (pixel$^{-1}$)}} \startdata Scl~1004838 & $00^{\mathrm{h}} 59^{\mathrm{m}} 34 \fs 1$ & $-33 \arcdeg 43 \arcmin 51 \arcsec$ & $ M$ & 19.000 & $T_2$ & 17.622 & 18.746 & 44 \\ Scl~1004861 & $00^{\mathrm{h}} 59^{\mathrm{m}} 34 \fs 2$ & $-33 \arcdeg 43 \arcmin 19 \arcsec$ & $ M$ & 19.272 & $T_2$ & 18.026 & 19.045 & 45 \\ For~55609 & $02^{\mathrm{h}} 39^{\mathrm{m}} 47 \fs 9$ & $-34 \arcdeg 27 \arcmin 47 \arcsec$ & $ B$ & 20.260 & $ R$ & 17.928 & 18.887 & 38 \\ For~60521 & $02^{\mathrm{h}} 39^{\mathrm{m}} 52 \fs 0$ & $-34 \arcdeg 36 \arcmin 31 \arcsec$ & $ B$ & 19.895 & $ R$ & 17.826 & 18.434 & 29 \\ For~90067 & $02^{\mathrm{h}} 40^{\mathrm{m}} 19 \fs 6$ & $-34 \arcdeg 33 \arcmin 42 \arcsec$ & $ B$ & 20.506 & $ R$ & 17.971 & 18.969 & 50 \\ For~100650 & $02^{\mathrm{h}} 40^{\mathrm{m}} 31 \fs 3$ & $-34 \arcdeg 28 \arcmin 52 \arcsec$ & $ B$ & 20.444 & $ R$ & 18.681 & 19.185 & 31 \\ LeoI~71032 & $10^{\mathrm{h}} 08^{\mathrm{m}} 17 \fs 6$ & $+12 \arcdeg 18 \arcmin 19 \arcsec$ & $ M$ & 20.412 & $T_2$ & 18.791 & 19.971 & 34 \\ LeoI~60727 & $10^{\mathrm{h}} 08^{\mathrm{m}} 18 \fs 0$ & $+12 \arcdeg 20 \arcmin 59 \arcsec$ & $ M$ & 20.420 & $T_2$ & 18.619 & 19.941 & 18 \\ LeoI~32266 & $10^{\mathrm{h}} 08^{\mathrm{m}} 30 \fs 1$ & $+12 \arcdeg 17 \arcmin 01 \arcsec$ & $ M$ & 20.575 & $T_2$ & 19.160 & 20.174 & 28 \\ LeoI~21617 & $10^{\mathrm{h}} 08^{\mathrm{m}} 37 \fs 3$ & $+12 \arcdeg 20 \arcmin 12 \arcsec$ & $ M$ & 20.326 & $T_2$ & 18.584 & 19.856 & 21 \\ LeoII~C-7-174 & $11^{\mathrm{h}} 13^{\mathrm{m}} 19 \fs 0$ & $+22 \arcdeg 06 \arcmin 45 \arcsec$ & $ M$ & 20.779 & $T_2$ & 19.573 & 20.477 & 18 \\ LeoII~C-3-146 & $11^{\mathrm{h}} 13^{\mathrm{m}} 36 \fs 2$ & $+22 \arcdeg 08 \arcmin 51 \arcsec$ & $ M$ & 20.489 & $T_2$ & 19.014 & 20.134 & 41 \\ CVnI~195\_195 & $13^{\mathrm{h}} 28^{\mathrm{m}} 27 \fs 6$ & $+33 \arcdeg 36 \arcmin 43 \arcsec$ & $ g$ & 19.571 & $ r$ & 18.667 & 19.044 & 37 \\ CVnI~196\_129 & $13^{\mathrm{h}} 28^{\mathrm{m}} 44 \fs 3$ & $+33 \arcdeg 34 \arcmin 12 \arcsec$ & $ g$ & 19.726 & $ r$ & 18.947 & 19.251 & 26 \\ \enddata \tablerefs{Identifications and photometry are from \citet{wes06} for Sculptor, \citet{ste98} for Fornax, \citet{soh07} for Leo~I and Leo~II, and the Sloan Digital Sky Survey \citep{ade07} for Canes Venatici~I.} \end{deluxetable*} \begin{deluxetable*}{lccccccccc} \tablecolumns{10} \tablewidth{0pt} \tablecaption{Stellar Parameters and Lithium Abundances\label{tab:abundances}} \tablehead{\colhead{Star Name} & \colhead{$T_{\rm eff}$~(K)} & \colhead{$\log g$~(cm~s$^{-2}$)} & \colhead{$\xi$~(km~s$^{-1}$)} & \colhead{[Fe/H]} & \colhead{EW(\ion{Li}{1}~6708)} & \colhead{$A({\rm Li})_{\rm LTE}$} & \colhead{$A({\rm Li})_{\rm NLTE}$} & \colhead{$\sigma_{\rm noise}$} & \colhead{$\sigma_{T_{\rm eff}}$}} \startdata Scl~1004838 & 4564 & 1.49 & 1.79 & $-1.59 \pm 0.11$ & $363 \pm 19$ & $3.32$ & $2.97$ & $0.12$ & $0.24$ \\ Scl~1004861 & 4866 & 1.74 & 1.73 & $-1.70 \pm 0.12$ & $193 \pm 29$ & $2.46$ & $2.37$ & $0.15$ & $0.15$ \\ For~55609 & 3863 & 0.55 & 2.01 & $-0.73 \pm 0.11$ & $694 \pm 24$ & $3.69$ & $3.68$ & $0.10$ & $0.14$ \\ For~60521 & 4193 & 0.69 & 1.98 & $-0.86 \pm 0.11$ & $382 \pm 35$ & $2.11$ & $2.25$ & $0.13$ & $0.20$ \\ For~90067 & 3768 & 0.40 & 2.05 & $-0.68 \pm 0.11$ & $503 \pm 23$ & $2.02$ & $1.76$ & $0.30$ & $0.11$ \\ For~100650 & 4422 & 1.18 & 1.86 & $-0.95 \pm 0.11$ & $492 \pm 28$ & $3.74$ & $3.57$ & $0.14$ & $0.22$ \\ LeoI~71032 & 4410 & 0.90 & 1.93 & $-1.29 \pm 0.11$ & $322 \pm 27$ & $2.50$ & $2.48$ & $0.15$ & $0.23$ \\ LeoI~60727 & 4182 & 
0.72 & 1.97 & $-1.42 \pm 0.12$ & $514 \pm 47$ & $3.49$ & $3.40$ & $0.39$ & $0.19$ \\ LeoI~32266 & 4690 & 1.16 & 1.87 & $-1.35 \pm 0.12$ & $175 \pm 30$ & $2.07$ & $2.15$ & $0.13$ & $0.17$ \\ LeoI~21617 & 4249 & 0.75 & 1.97 & $-1.10 \pm 0.11$ & $546 \pm 52$ & $3.53$ & $3.43$ & $0.37$ & $0.19$ \\ LeoII~C-7-174 & 4981 & 1.57 & 1.77 & $-1.24 \pm 0.12$ & $225 \pm 42$ & $2.92$ & $2.72$ & $0.15$ & $0.16$ \\ LeoII~C-3-146 & 4501 & 1.18 & 1.86 & $-1.40 \pm 0.11$ & $449 \pm 30$ & $3.52$ & $3.26$ & $0.16$ & $0.33$ \\ CVnI~195\_195 & 4286 & 0.66 & 1.99 & $-2.61 \pm 0.12$ & $527 \pm 17$ & $3.98$ & $3.85$ & $0.20$ & $0.28$ \\ CVnI~196\_129 & 4507 & 0.85 & 1.94 & $-2.82 \pm 0.13$ & $380 \pm 36$ & $3.64$ & $3.15$ & $0.23$ & $0.27$ \\ \enddata \end{deluxetable*} We prepared each spectrum by normalizing to the continuum. We divided the observed spectrum by the best-fitting synthetic spectrum determined by \citet{kir10}. We fit a B-spline with a breakpoint spacing of 25~\AA\ to the quotient spectrum, excluding the region between 6705.9~\AA\ and 6709.9~\AA\@. This exclusion made the continuum determination insensitive to the strength of the Li resonance line. We divided the observed spectrum by the spline. We synthesized the spectral region around the Li resonance line with the spectral synthesis code MOOG \citep{sne73} coupled with ATLAS9 model atmospheres \citep{kur93,kir11pasp} in local thermodynamic equilibrium (LTE). We calculated the surface gravity of each star based on its position in the CMD and interpolation in model isochrones \citep{dem04}. The temperatures were based on a combination of photometry and spectroscopy \citep{kir10}. The spectra were synthesized with the multiplet of $^7$Li lines \citep{hob99}. We assumed that all of the Li was in the $^7$Li isotope, as is typical for metal-poor stars \citep{asp06}. The full spectral range synthesized was 6697.9--6717.9~\AA. Although Li is by far the strongest line in a 5~\AA\ window around 6708~\AA, we supplemented the line list with atomic (including \ion{Fe}{1}~6707) and molecular (CN, C$_2$ and MgH) transitions from elements other than Li. We adopted the same line list as \citet{kir08}. We calculated Li abundances and their uncertainties by minimizing $\chi^2$ (Equation~\ref{eq:chisq}) between the observed and synthetic spectra using Levenberg-Marquardt optimization. \begin{equation} \chi^2 = \sum_{\lambda = 6705.9~{\rm \AA}}^{6709.9~{\rm \AA}} \frac{(f(\lambda) - s(\lambda))^2}{\sigma(\lambda)^2} \label{eq:chisq} \end{equation} \noindent In Equation~\ref{eq:chisq}, $f$ represents the continuum-normalized observed spectrum, $s$ represents the synthetic spectrum, and $\sigma^2$ represents the variance of the observed spectrum, propagated through flat fielding and continuum normalization. We repeatedly computed synthetic spectra with varying Li abundances until the $\chi^2$ reached a minimum and changed by less than one part in $10^5$ between iterations. Figure~\ref{fig:spectra} shows the best-fitting synthetic spectra in red. We applied corrections for deviations from LTE using \citeauthor{lin09a}'s (\citeyear{lin09a}) grid. In some cases, we extrapolated beyond the boundaries of the grid ($\log g < 1$ and $T_{\rm eff } < 4000$~K). Table~\ref{tab:abundances} lists the LTE and non-LTE (NLTE) Li abundances for the 14 newly discovered Li-rich giants. We calculated two sources of error: $\sigma_{\rm noise}$ and $\sigma_{T_{\rm eff}}$. 
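Both error terms are defined with respect to the $\chi^2$ of Equation~\ref{eq:chisq}, which amounts to the following computation (a minimal sketch; the array names are our choices):
\begin{verbatim}
import numpy as np

def chi2_li(wave, f_obs, s_syn, var_obs):
    # chi^2 summed over the 6705.9--6709.9 A window around Li 6708
    win = (wave >= 6705.9) & (wave <= 6709.9)
    return np.sum((f_obs[win] - s_syn[win]) ** 2 / var_obs[win])
\end{verbatim}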
The random error on $A({\rm Li})$ from spectral noise, $\sigma_{\rm noise}$, is the amount by which $A({\rm Li})$ can change before $\chi^2$ increases by one. There is also some systematic error from the uncertain effective temperature of the star, the continuum placement, and uncertainties in the transition probabilities. The uncertainty on $T_{\rm eff}$ dominates the systematic error. The approximate uncertainty on $T_{\rm eff}$ for these relatively cool giants is 100~K\@. We determined this systematic error by recalculating the best-fitting value of $A({\rm Li})$ from synthetic spectra with $T_{\rm eff}$ both 100~K above and 100~K below the nominal value of $T_{\rm eff}$ determined by \citet{kir10}. We then corrected $A({\rm Li})$ for NLTE effects. The average of these deviations of $A({\rm Li})$ from the value computed with the unperturbed temperature is $\sigma_{T_{\rm eff}}$. Table~\ref{tab:abundances} gives both $\sigma_{\rm noise}$ and $\sigma_{T_{\rm eff}}$. \section{Discussion} \begin{figure} \centering \includegraphics[width=\linewidth]{f3.eps} \caption{Li abundances as a function of the dSph stars' differences in $V$ magnitude from the predicted magnitude of the RGB bump, calculated with \citeauthor{fer99}'s (\citeyear{fer99}) formula, assuming the average age \citep{orb08} of the dSph and the measured metallicity of the star \citep{kir10}. The errors on $A({\rm Li})$ are given by $\sqrt{\sigma_{\rm noise}^2 + \sigma_{T_{\rm eff}}^2}$. Also shown are Li abundances in the globular cluster NGC~6397 \citep[gray points and upper limits,][]{lin09b}. The dSph giants in our sample are much more Li-enhanced than typical red giants, and many are more Li-rich than the universe's primordial Li abundance \citep{coc12}, indicated by the dashed line. NGC~6397 also contains an unusually Li-rich turn-off star \citep[pink point,][]{koc11}.\label{fig:lin09b}} \end{figure} The Li abundances range from $A({\rm Li})_{\rm{NLTE}} = 1.76$ to $3.85$. The universe's primordial value of Li is $A({\rm Li}) = 2.72$ \citep{coc12}. Eight of the stars in our sample have larger Li abundances. Therefore, these stars have not merely refrained from participating in Li destruction. The Li in these stars must have been created since the Big Bang. Our discovery reasserts that the phenomenon of extreme Li enrichment in giants is not limited to the Milky Way. Furthermore, the phenomenon extends to very metal-poor stars. The two Li-rich stars in Canes Venatici~I have ${\rm [Fe/H]} < -2.6$. Figure~\ref{fig:lin09b} shows $A({\rm Li})$ as a function of the evolutionary state of the star, expressed as a difference in magnitude from the RGB bump in the dSph. The RGB bump magnitude is calculated individually for each star from \citeauthor{fer99}'s (\citeyear{fer99}) formula, assuming the mean age of the dSph \citep{orb08} and the measured metallicity of the star \citep{kir10}. For comparison, Figure~\ref{fig:lin09b} also includes Li abundances in the metal-poor globular cluster NGC~6397 from the main sequence through RGB bump \citep{lin09b}. Incidentally, NGC~6397 contains a Li-rich turn-off star, which is even harder to explain than Li-rich giants \citep{koc11}. Like most other metal-poor, Li-rich giants \citep{ruc11}, all of the stars in our sample are more luminous than the RGB bump. However, our sample is biased toward high luminosities. 
Of the stars we searched, 1764 out of 2054 (86\%) are more luminous than the RGB bump, and the Li line at fixed abundance becomes weaker for decreasing luminosities (higher temperatures). Although our sample does not offer strong statistical evidence that the Li-rich phenomenon occurs exclusively above the RGB bump in metal-poor stars, all of the known, metal-poor, Li-rich giants are consistent with that hypothesis \citep[also see][]{gon09}. Almost all of our sample's stars that are less luminous than the bump reside in galaxies without detections of Li-rich stars. The fraction of strong Li detections in our sample of stars with ${\rm S/N} > 10~{\rm pixel}^{-1}$ above the RGB bump is 15 of 1764 (0.85\%). However, the detectability of Li depends on the stellar temperature, Li abundance, and spectral S/N. Our spectra with lower S/Ns could harbor anomalously large Li lines, but we might have missed them in our visual search. Future work (X.~Fu et al., in preparation) will make a more quantitative determination of the Li-rich fraction of red giants in our sample. The existence of Li-rich red giants and their abundances do not correlate with any measurable parameter. Although \citet{cha00} found that Li-rich giants seem to cluster in the CMD, including at the RGB bump, our sample shows no such clustering. \citet{mon11} and \citet{leb12} found a similar result in the Milky Way disk and bulge. Their samples and ours have the advantage that the stars are in stellar systems with known distances. Therefore, the magnitude distance from the RGB bump does not need to be inferred from spectroscopically determined atmospheric parameters. In addition to positions in the CMD, our stars' temperatures, surface gravities, iron abundances, and [$\alpha$/Fe] abundance ratios are not unusual in any regard with respect to Li-normal stars in the same dwarf galaxies. Although the resolution of our spectra is too low to measure rotation, at least 80\% of metal-poor, Li-rich red giants in another survey \citep{ruc11} exhibit typical rotation velocities. The typicality of metal-poor, Li-rich stars in all regards except Li abundance suggests that these stars are not unusual in terms of their intrinsic properties or external stimuli. Furthermore, the fraction of Li-rich, metal-poor giants in our sample shows that the frequency of the Li-rich phenomenon in dSphs---about 1\%---is roughly the same as in the Milky Way disk, bulge, and halo \citep{bro89,ruc11,mon11,leb12}. The apparent randomness of giants that exhibit large Li abundances restricts extra mixing models. For example, the sudden increase in angular momentum caused by engulfment of a planet could induce extra mixing \citep{den00}. However, the occurrence of planets in the solar neighborhood is far higher around metal-rich stars \citep{fis05}. Assuming that stars in dSphs also obey a correlation between metallicity and the occurrence of hot Jupiters and that the occurrence relation extends to very low metallicities ($-3 < {\rm [Fe/H]} < -0.5$), the stars in our sample are extremely unlikely to host hot Jupiters. In fact, we could find no model in the literature that adequately explains the available observations for Li-rich, low-mass, metal-poor giants (Li enhancements as high as $A({\rm Li}) = 3.9$ even near the tip of the RGB, no concentration in the CMD, weak correlation with rotation). Because Li-rich, metal-poor giants are otherwise ordinary, we suggest that Li enhancement does not arise only in special cases.
Instead, we echo a previous suggestion \citep{del96,gon09} that extra mixing and its associated Li enhancement could be a brief, universal phase of stellar evolution. The lifetime of a 10~Gyr old red giant is 420~Myr, 35~Myr of which is spent above the RGB bump \citep{dot08}. The rate of Li depletion with increasing luminosity in NGC~6397 \citep{lin09b} is roughly $\Delta A({\rm Li}) / \Delta M_V = 1.5$. Just above the RGB bump, red giants brighten by one magnitude in 19~Myr \citep{dot08}. From these derivatives, we infer that the $e$-folding time for Li in the atmosphere of a normal, metal-poor red giant near the RGB bump is about 5~Myr, or 15\% of the lifetime of the red giant above the RGB bump. (One $e$-folding corresponds to a decrease of $\log_{10}e \approx 0.43$ in $A({\rm Li})$, and the depletion rate is $1.5~{\rm dex~mag^{-1}} \times 1~{\rm mag}/19~{\rm Myr} \approx 0.08~{\rm dex~Myr^{-1}}$, giving $0.43/0.08 \approx 5$~Myr.) However, the convection zone is deeper closer to the RGB tip than at the RGB bump, so the Li destruction rate must accelerate. Furthermore, the destruction rate could be even faster in the presence of extra mixing, which brings photospheric material to even hotter temperatures. The accelerated Li destruction could conspire to reduce the observable lifetime of an instantaneously Li-enhanced star to just 1\% of the lifetime above the RGB bump. In this scenario, about 1\% of all red giants above the bump would appear Li-rich. \citet{pal01} postulated that this process happens at the RGB bump in a Li flash that also serves to temporarily increase the luminosity of the red giant, which could explain why Li-rich giants are observed at all luminosities between the bump and the tip of the RGB. However, neither \citet{den04} nor \citet{pal06} could achieve high enough mixing rates in their models to trigger a Li flash. Nonetheless, the idea that Li enhancement is a brief, universal phase of stellar evolution remains attractive in order to explain the lack of correlation with almost any other measurable parameter. \acknowledgments We thank the editor and the anonymous referee for a timely and helpful report. Support for this work was provided by NASA through Hubble Fellowship grant 51256.01 awarded to E.N.K. by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. X.T.F. and P.G. acknowledge support by NSF grant AST 09-37525. X.T.F. and L.D. thank NSFC for support by grant nos. 10973015 and 11061120454. P.G. acknowledges NSF grant AST-1010039. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. {\it Facility:} \facility{Keck:II (DEIMOS)}
\section{Introduction} \label{sec:1} In quantum mechanics, the quantum state of a system completely describes all aspects of the system. The instantaneous state of a quantum system encodes the probabilities of its measurable properties, or ``observables'' (examples of observables include energy, position, momentum and angular momentum). In general, quantum mechanics does not assign deterministic values to observables; instead, it makes predictions about probability distributions, that is, about the probability of obtaining each of the possible outcomes when measuring an observable.\\ We have two mathematical representations of a quantum state: the density matrix $\rho$ and its associated Wigner function $W_\rho$. The density matrix $\rho$, which completely describes a quantum state, is Hermitian, positive and has trace one. It can be finite or infinite dimensional. Equivalently, the corresponding Wigner function $W_\rho:\mathbb{R}^2\rightarrow\mathbb{R}$ may be defined. In general, $W_\rho$ is regarded as a generalized probability density, integrating to one over the whole plane. It does not satisfy all the properties of a proper probability density, as it can, and normally does, go negative for states which have no classical model. It also satisfies certain intrinsic positivity constraints, in the sense that it corresponds to a density matrix. \paragraph{} In this paper we address the problem of estimating the quadratic functional $d^2=\int W_\rho^2$ of the Wigner function of monochromatic light in a cavity prepared in the state $\rho$, using Quantum Homodyne Tomography (QHT\footnote{We refer the interested reader to Artiles \textit{et al.} (2005) \cite{MR2136642} for further details on the physical background}) measurements performed on independent, identical systems. Quantum Homodyne Detection (QHD) was first put into practice by Smithey \textit{et al.} (1993) \cite{SmitA}; we detail this technique in Section~\ref{sec:2.2}.\\ We study the quantity $d^2=\int W_\rho^2$, which is of interest in itself as a physical measure of the purity of a quantum state. It allows us to distinguish pure states from mixed states, as it always equals $\frac{1}{2\pi}$ for pure states. A state is called pure if it cannot be represented as a mixture (convex combination) of other states, i.e., if it is an extreme point of the convex set of states. All other states are called mixed states.\\ The QHD technique yields measurements of the electric and magnetic fields $(p,q)$ of the studied laser for some phase $\Phi$. In the ideal case, we would observe the random variable $(X,\Phi)=(\cos(\Phi)Q+\sin(\Phi)P,\Phi)$, where $\Phi$ is chosen independently of $(Q,P)$, and uniformly in the interval $[0,\pi]$. In this paper we do not consider the ideal data $(X,\Phi)$ but the noisy observations $(Y,\Phi)$, where $Y$ is obtained from $X$ by rescaling and adding an independent Gaussian noise $\xi$. We assume that the unknown function $W_\rho$ belongs to $\mathcal{A}(\alpha,r,L)$, a class of supersmooth functions, where $\alpha>0$, $0<r\leq 2$ and $L>0$ will be defined later. 
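As a concrete illustration of the purity $d^2$ (our example, not part of the original argument), consider the vacuum state, whose Wigner function is the Gaussian $W_0(q,p)=\pi^{-1}e^{-q^2-p^2}$, consistent with the Fourier transforms collected in Table~\ref{tab:3} below. A direct computation recovers the pure-state value:
\begin{equation*}
\int_{\mathbb{R}^2} W_0^2(q,p)\,dq\,dp=\frac{1}{\pi^2}\int_{\mathbb{R}}e^{-2q^2}dq\int_{\mathbb{R}}e^{-2p^2}dp=\frac{1}{\pi^2}\sqrt{\frac{\pi}{2}}\sqrt{\frac{\pi}{2}}=\frac{1}{2\pi}.
\end{equation*}
A mixed state yields a strictly smaller value; for instance, the thermal state of Section~\ref{sec:6} has $d^2=\tanh(\beta/2)/(2\pi)<1/(2\pi)$, which is what makes $d^2$ usable as a purity diagnostic.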
The classes $\mathcal{A}(\alpha,r,L)$ are similar to those of Cavalier (2000) \cite{MR1790012} (for $r=1$, with functions defined on $\mathbb{R}^d$), of Butucea and Tsybakov (2007) \cite{ButuTsyba04} (on $\mathbb{R}$), and of Butucea \textit{et al.} (2007) \cite{ButGutArt05} (on $\mathbb{R}^2$).\paragraph{} The study of quadratic functionals started with Bickel and Ritov (1988) \cite{BickelRitov}, who considered the problem of estimating the integral of the square of a derivative of a probability density function and obtained nonparametric rates. Their results were extended to more general functionals by Birg\'{e} and Massart (1995) \cite{MR1331653}, who established nonparametric lower bounds. The study of general functionals was completed by Kerkyacharian and Picard (1996) \cite{MR1394973}, who obtained minimax rates. Laurent (1996) \cite{MR1394981} gave efficient estimators of some functionals of a density at the parametric rate. The problem of adaptive estimation of general functionals was considered by Tribouley (2000) \cite{MR1772223} in the classical white noise model.\\ In the convolution model, Butucea (2004) \cite{Butu04} estimated a quadratic functional of a density on $\mathbb{R}$ and applied it to the goodness-of-fit test in $L_2$ distance.\\ In our setting, a first difficulty is that we do not deal with a proper probability density function but with a quasi-probability density. Moreover, our problem is a double inverse problem, as we observe the Radon transform of $W_\rho$ (as in positron emission tomography) corrupted by an additive Gaussian noise (a convolution).\\ Inverse problems have been extensively studied in the mathematical literature. In a positron emission tomography (PET) perspective, the problem of estimating a probability density on $\mathbb{R}^2$ from tomographic data $(X_k,\Phi_k)$ was treated by Korostel\"{e}v and Tsybakov (1993) \cite{MR1226450} and Johnstone and Silverman (1990) \cite{MR1041393}. Cavalier (2000) \cite{MR1790012} also considered the PET model and obtained an estimator of a multi-dimensional density function which is asymptotically sharp minimax, i.e., it achieves the optimal rate of convergence and attains the best constant for the minimax risk.\\ The estimation of the Wigner function $W_\rho$ was treated by Gu{\c{t}}{\u{a}} and Artiles (2006) \cite{GutaArt05} in the noise-free case. Our noisy model has been studied in a parametric framework by D'Ariano, and in a nonparametric framework, for the estimation of the Wigner function itself, by Butucea \textit{et al.} (2007) \cite{ButGutArt05}. We propose to estimate the integral of the square of the Wigner function rather than the function itself.\\ Other problems have been considered in the context of tomography: Goldenshluger and Spokoiny (2006) \cite{GoldSpok06} considered the problem of recovering edges of an image from noisy tomographic data in a white noise model and reached nearly optimal rates. Recovering boundaries in models that involve indirect observations in the $d$-dimensional Euclidean space $\mathbb{R}^d$ was discussed recently in Goldenshluger and Zeevi (2006) \cite{GoldZeev06}. We note that a Wigner function cannot have bounded support. \paragraph{} The main contributions of this paper are the following. We propose a method for estimating a quadratic functional of a generalized probability density which may take negative values, from indirect and noisy observations, with a view to detecting pure states and mixed states. 
It is shown that the proposed estimator is optimal or nearly optimal in a minimax sense, depending on the smoothness parameter $r$ of the class $\mathcal{A}(\alpha,r,L)$. Moreover, an adaptive estimator is constructed which attains optimal rates. Another main interest of the estimation of $d^2$ is its application to goodness-of-fit testing in the $\mathbb{L}_2$-norm in quantum statistics. This means that physicists want to test whether they produced a laser in the quantum state $\rho_0$ or something different. This can be done via the Wigner functions as follows: \[\left\lbrace \begin{array}{c l} H_0: & \text{ $W_\rho=W_{\rho_0}$},\\ H_1: & \text{$W_\rho\in\mathcal{A}(\alpha,r,L)$ and $\|W_\rho-W_{\rho_0}\|_2\geq c\cdot\varphi_n$}, \end{array} \right. \] where $\varphi_n$, the testing rate, is a sequence tending to 0 as $n\rightarrow\infty$. One can devise a test statistic based on the estimator of $d^2=\int W_\rho^2$ constructed in this paper. Similarly to Butucea (2004) \cite{Butu04}, we conjecture that the testing rates are of the same order as the nonparametric estimation rates found in this paper. \paragraph{} The rest of the paper is organized as follows. In Section~\ref{sec:3} we formulate the statistical model and introduce notation and properties of the quantities of interest. In Section~\ref{sec:4} we construct an estimator of the quadratic functional of the unknown Wigner function, along with its bias-variance decomposition. Our main theoretical results are presented in Section~\ref{sec:5}. In Section~\ref{sec:6}, we discuss some examples of quantum states. Proofs of the upper and lower bounds are given in Sections~\ref{sec:7},~\ref{sec:8} and~\ref{sec:9}. \section{Preliminaries} \label{sec:2} \subsection{Definition} \label{sec:2.1} We study this problem in a minimax framework. Let $d_n^{2}$ be an estimator of $d^{2}=\int{W_\rho^2}$ based on the indirect noisy observations $(Y_i,\Phi_i)$, $i=1,\ldots,n$, announced above. We measure the accuracy of $d_n^{2}$ by the maximal risk $$\mathcal{R}(d_n^{2};\mathcal{A}(\alpha,r,L))=\sup_{W_\rho\in\mathcal{A}(\alpha,r,L)}E_{\rho}[|d_n^2-d^2|^2]$$ over the class $\mathcal{A}(\alpha,r,L)$. Here $E_{\rho}$ and $P_{\rho}$ denote the expected value and probability when the true underlying quantum state is $\rho$. The minimax risk is defined by $$\mathcal{R}^{*}(\mathcal{A}(\alpha,r,L))=\inf_{\widehat{d}_n^2}\mathcal{R}(\widehat{d_n^{2}};\mathcal{A}(\alpha,r,L))$$ where the infimum is taken over all possible estimators $\widehat{d}_n^2$ of the quadratic functional of the Wigner function $W_{\rho}$.\\ Let $\varphi_n$ be a positive sequence. An estimator $d^2_n$ is \textit{optimal in a minimax sense} \begin{itemize} \item if it satisfies the following \textit{upper bound} \begin{eqnarray} \label{UB} \limsup_{n\to\infty}\varphi_n^{-2}\mathcal{R}(d_n^{2};\mathcal{A}(\alpha,r,L))\leq C_{u}, \end{eqnarray} \item and if the following \textit{lower bound} is satisfied \begin{eqnarray} \label{LB} \liminf_{n\rightarrow\infty}\inf_{\widehat{d}_n^2}\varphi_n^{-2}\mathcal{R}(\widehat{d}_n^2;\mathcal{A}(\alpha,r,L))\geq C_{l}, \end{eqnarray} \end{itemize} where the infimum is taken over all possible estimators $\widehat{d}_n^2$ of the quadratic functional of the Wigner function $W_{\rho}$. Then, $\varphi_n$ is called the \textit{optimal rate in a minimax sense}. Our aim is to find a rate-optimal estimator of $d^{2}$ and to establish the asymptotics of minimax risks for some classes of Wigner functions $\mathcal{A}(\alpha,r,L)$. 
We rely on Butucea \textit{et al.} (2007) \cite{ButGutArt05}, who derived rate-optimal pointwise and adaptive estimators of $W_\rho$ (instead of $\int{W_\rho^2}$ in our case) from indirect noisy observations. \subsection{Quantum Homodyne Tomography} \label{sec:2.2} The theoretical foundation of quantum state reconstruction was outlined by Vogel and Risken (1989) \cite{Vogelrisken} and inspired the first experiments determining the quantum state of a light field, initially with optical pulses, by Smithey \textit{et al.} (1993) \cite{SmitA} and Smithey \textit{et al.} \cite{Smitheybis}.\\ \begin{figure}[htbp!] \begin{center} \includegraphics[trim = 0cm 3cm 0mm 0cm,clip,width=8cm]{noisyQHTsetup.eps} \caption{QHT measurement} \label{fig:1} \end{center} \end{figure} \\ The physicists prepare a monochromatic laser in state $\rho$ in a cavity. In order to study it, one takes measurements by quantum homodyne tomography (QHT). This technique, schematized in Figure~\ref{fig:1}, consists in mixing the laser under study with a reference laser of high intensity $\left|z\right|\gg 1$, called the local oscillator. The resulting beam is then split in two, and two photodetectors measure the intensities of the two beams ($I_1,I_2$). One measures $X$, the difference of the intensities of the two beams, rescaled by the intensity $\left|z\right|$. Thus, for the phase chosen to be $\phi$, the data $(X,\Phi)$ would be obtained in the ideal case. It is well known in the physics literature (see Leonhardt (1997) \cite{Leon97}) that an additive Gaussian noise is mixed with the ideal data $X$, giving, for a known efficiency $\eta$, the data $Y$. \section{Statistical context} \label{sec:3} \subsection{Problem formulation} \label{sec:3.1} In the present paper we estimate the integral of the square of the Wigner function from measurements performed on $n$ identical quantum systems, where the Wigner function is a joint generalized density of two variables $P$ and $Q$, $W_\rho:\mathbb{R}^2\rightarrow\mathbb{R}.$ It may take negative values, but it integrates to one over the whole plane. For further information on the Wigner function, we refer the reader to the paper by Artiles \textit{et al.} \cite{MR2136642}.\\ Our statistical problem can be seen as follows: consider $(X_1,\Phi_1)\ldots(X_n,\Phi_n)$ independent identically distributed random variables with values in $\mathbb{R}\times[0,\pi]$. The probability density of $(X,\Phi)$ equals the \textbf{Radon transform} \textbf{$\Re[W_\rho]$} of the Wigner function with respect to the measure $\lambda/\pi$, where $\lambda$ is the Lebesgue measure on $\mathbb{R}\times[0,\pi]$: \begin{equation} \label{radon} p_\rho(x/\phi):=\Re[W_\rho](x,\phi)=\int_{-\infty}^\infty W_\rho(x\cos\phi+t\sin\phi,x\sin\phi-t\cos\phi)dt, \end{equation} and, conditionally on $\Phi=\phi$, $X$ has density $p_\rho(\cdot/\phi)$. As announced in the introduction, we do not observe the ideal data $(X_\ell,\Phi_\ell)$, $\ell=1,\ldots,n$, but a degraded noisy version $(Y_1,\Phi_1)\ldots(Y_n,\Phi_n)$, \begin{equation} \label{model} Y_\ell:=\sqrt{\eta}X_\ell+\sqrt{(1-\eta)/2}\xi_\ell, \end{equation} with $\xi_\ell$ a standard Gaussian random variable independent of all $(X_k,\Phi_k)$, and $0<\eta<1$ a known parameter. The parameter $\eta$ is called the detection efficiency; $1-\eta$ represents the proportion of photons which are not detected due to various losses in the measurement process. We denote by $p_\rho^\eta(x,\phi)$ the density of $(Y_\ell,\Phi_\ell)$. A short simulation sketch of this observation model is given below. 
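To fix ideas, here is a minimal simulation sketch of model \eqref{model} (our illustration, not part of the original text; the function names and numerical values are arbitrary). It uses the vacuum state, for which $p_\rho(\cdot/\phi)$ is the $\mathcal{N}(0,1/2)$ density for every phase $\phi$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, eta = 10000, 0.9                   # illustrative sample size and efficiency
phi = rng.uniform(0.0, np.pi, n)      # phases, uniform on [0, pi]
x = rng.normal(0.0, np.sqrt(0.5), n)  # ideal data X: vacuum state, N(0, 1/2) at every phase
xi = rng.normal(0.0, 1.0, n)          # standard Gaussian noise
y = np.sqrt(eta) * x + np.sqrt((1.0 - eta) / 2.0) * xi  # noisy observations Y
\end{verbatim}
The pairs $(Y_\ell,\Phi_\ell)$ so produced are exactly the inputs of the estimation procedure of Section~\ref{sec:4}.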
The density $p_\rho^\eta(x,\phi)$ is thus the convolution of the density $\frac{1}{\sqrt{\eta}}p_\rho(\frac{x}{\sqrt{\eta}}/\phi)$ with a centered Gaussian density having variance $(1-\eta)/2$. We assume that the unknown Wigner function $W_\rho$ belongs to a class $\mathcal{A}(\alpha,r,L)$ of infinitely differentiable functions. For $0<r\leq 2$, $\alpha>0$ and $L>0$ define \begin{equation} \label{ens fctnel} \mathcal{A}(\alpha,r,L)=\{W_\rho:\int_{\mathbb{R}^{^2}}|\widetilde{W_\rho}(u,v)|^{2}e^{2\alpha\|(u,v)\|_2^r} du dv\leqslant(2\pi)^{2}L\}, \end{equation} where $\|(u,v)\|_2=\sqrt{u^2+v^2}$ is the Euclidean norm. \subsection{Properties of Wigner functions and remarkable equations} \label{sec:3.2} In this paragraph we state some very useful properties of the Wigner function. \paragraph{\textbf{Fourier transforms}} A remarkable relation links the Fourier transform of the Wigner function to the Fourier transform of its Radon transform. If we denote \begin{eqnarray*} \widetilde{W}_{\rho}(u,v)&:=&\mathcal{F}_2[W_\rho](u,v), \end{eqnarray*} then \begin{eqnarray} \label{fourierp} \widetilde{W}_{\rho}(t\cos\phi,t\sin\phi)&=&\mathcal{F}_1[p_\rho(\cdot/\phi)](t)=E_\rho[e^{itX}], \end{eqnarray} where $\mathcal{F}_2$ and $\mathcal{F}_1$ denote the Fourier transforms with respect to two and one variables, respectively. \paragraph{\textbf{Some remarkable equations}} In Section~\ref{sec:8}, most of the proofs make extensive use of the following equations. Since \begin{eqnarray*} E_\rho[e^{itY}]&=&E_\rho[e^{it\sqrt{\eta}X}]\cdot E_\rho[e^{it\sqrt{\frac{1-\eta}{2}}\xi}], \end{eqnarray*} we have \begin{eqnarray} \label{fourierproun} \mathcal{F}_1[p_\rho^\eta(\cdot/\phi)](t)&=&\mathcal{F}_1\left[\frac{1}{\sqrt{\eta}}p_\rho\left(\frac{\cdot}{\sqrt{\eta}}/\phi\right)\right](t)\cdot\widetilde{N}^\eta(t)\\ \label{fourierprodeux} &=&\mathcal{F}_1[p_\rho(\cdot/\phi)](\sqrt{\eta}t)\cdot\widetilde{N}^\eta(t), \end{eqnarray} where $\widetilde{N}^\eta(t)$ denotes the Fourier transform of $\sqrt{(1-\eta)/2}\,\xi\sim\mathcal{N}(0;(1-\eta)/2)$. Then \begin{equation} \label{fourierbruit} \widetilde{N}^\eta(t):=E_\rho[e^{it\sqrt{(1-\eta)/2}\xi}]=e^{-\frac{t^2}{4}(1-\eta)}. \end{equation} \section{Estimation procedure} \label{sec:4} We are now able to define the estimation procedure for the quadratic functional $d^2=\int W_\rho^2$ of the unknown function $W_\rho$ directly from the data $(Y_{\ell},\phi_{\ell})$. We then evaluate an upper bound of the maximal risk uniformly over all Wigner functions in the class $\mathcal{A}(\alpha,r,L)$. \subsection{Kernel estimator} \label{sec:4.1} Let us define our estimator as a U-statistic of order 2. \begin{defi} Let $(Y_{\ell},\phi_{\ell}),\ell=1,\ldots,n$, be i.i.d data coming from the model \eqref{model}, and $\delta=\delta_n\rightarrow 0$ as $n\rightarrow\infty$. The estimator $d_n^{2}$ can be written \begin{equation} \label{estimateur} d_{n}^{2}:=\frac{1}{(2\pi)^{2}}\frac{1}{n(n-1)}\sum_{k\neq\ell=1}^n\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}e^{itY_k-itY_\ell}d\phi dt. \end{equation} \end{defi} A numerical sketch of this estimator is given below. 
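As an illustration (ours, not from the original text; the function name and grid size are arbitrary), the estimator \eqref{estimateur} is straightforward to evaluate numerically: the $\phi$-integral contributes a factor $\pi$, the $t$-integrand is even, and the double sum collapses through the identity $\sum_{k\neq\ell}e^{it(Y_k-Y_\ell)}=|\sum_k e^{itY_k}|^2-n$.
\begin{verbatim}
import numpy as np

def purity_estimate(y, eta, delta, grid=500):
    """Sketch of the U-statistic d_n^2 estimating d^2 = int W_rho^2."""
    n = len(y)
    t = np.linspace(0.0, 1.0 / (delta * np.sqrt(eta)), grid)
    # sum_{k != l} exp(it(Y_k - Y_l)) = |sum_k exp(itY_k)|^2 - n, real and even in t
    s = np.abs(np.exp(1j * np.outer(t, y)).sum(axis=1)) ** 2 - n
    f = eta * t * np.exp(0.5 * (1.0 - eta) * t ** 2) * s   # integrand on [0, T]
    dt = t[1] - t[0]
    integral = 2.0 * np.sum(0.5 * (f[:-1] + f[1:])) * dt   # trapezoid rule, doubled by symmetry
    # memory is O(grid * n); chunk over t for very large samples
    return np.pi * integral / ((2.0 * np.pi) ** 2 * n * (n - 1))
\end{verbatim}
Applied to the vacuum-state sample simulated in Section~\ref{sec:3.1}, with the bandwidth $\delta=(\eta\log n/(1-\eta))^{-1/2}$ of Theorem~\ref{theo:2}, the output should be close to the pure-state value $1/(2\pi)\approx 0.159$.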
\begin{defi} \label{def} Let $d_n^{2}$ be the estimator defined in \eqref{estimateur}, having bandwidth $\delta>0$. We call the bias and the variance of the estimator, respectively: $$B(d_n^{2}):=|E_\rho[d_n^{2}]-d^{2}|^2\quad \textrm{and}\quad\text{Var}(d_n^{2}):=E_\rho\left[|d_n^{2}-E_\rho[d_n^{2}]|^2\right].$$ \end{defi} \subsection{Bias-variance decomposition} \label{sec:4.2} The following proposition plays an important role in the proof of the upper bound of the risk, as we split the risk into the bias term and the variance term. \begin{prp} \label{prop} Let $(Y_{\ell},\phi_{\ell}),\ell=1,\ldots,n$ be i.i.d data coming from the model \eqref{model} and let $d_{n}^{2}$ be the estimator in \eqref{estimateur} (with $\delta\rightarrow 0$ as $n\rightarrow \infty$) of $d^2$, the quadratic functional of the Wigner function $W_\rho$ lying in the class $\mathcal{A}(\alpha,r,L)$ with $\alpha>0$, $L>0$ and $0<r\leq 2$ defined in \eqref{ens fctnel}. Then, \begin{enumerate} \item for all $0<r\leq 2$ \begin{eqnarray} \label{propa} |E_\rho[d_n^{2}]-d^{2}|^2 &\leq &L^2 e^{-4\alpha/\delta^r}, \end{eqnarray} \item for all $0<r<2$ \begin{eqnarray} \label{propbun} \text{Var}(d_n^{2})\leq\frac{8\eta^2/(1-\eta)^2}{\pi^2 n^2}e^{\frac{1-\eta}{\eta}\frac{1}{\delta^2}}+\frac{8L}{n\pi}\frac{\eta}{1-\eta}e^{\frac{1-\eta}{2\eta}\frac{1}{\delta^2}-\frac{2\alpha}{\delta^r}}, \end{eqnarray} \item for $r=2$ and $\frac{1-\eta}{2\eta}-2\alpha>0$ \begin{eqnarray} \label{propbdeux} \text{Var}(d_n^{2})\leq\frac{8\eta^2/(1-\eta)^2}{\pi^2 n^2}e^{\frac{1-\eta}{\eta}\frac{1}{\delta^2}}+\frac{8L}{n\pi}\frac{\eta}{1-\eta-4\alpha\eta}e^{(\frac{1-\eta}{2\eta}-2\alpha)\frac{1}{\delta^2}}, \end{eqnarray} \item for $r=2$ and $\frac{1-\eta}{2\eta}-2\alpha<0$ \begin{eqnarray} \label{propbtrois} \text{Var}(d_n^{2})\leq\frac{8\eta^2/(1-\eta)^2}{\pi^2 n^2}e^{\frac{1-\eta}{\eta}\frac{1}{\delta^2}}+\frac{1}{n}\cdot\frac{8\eta L}{4\alpha\eta-1+\eta}. \end{eqnarray} \end{enumerate} \end{prp} The proof of this proposition is given in Section~\ref{sec:8}. \section{Main results} \label{sec:5} In this section, the first theorem deals with the nonparametric rates of convergence of our estimator, which is proven optimal or nearly optimal (as we lose a logarithmic factor in the lower bound) in the minimax sense. In the second theorem, our estimator attains the parametric rate $1/n$. \begin{thm} \label{theo:1} Let $(Y_{\ell},\phi_{\ell}),\ell=1,\ldots,n$ be i.i.d data coming from the model \eqref{model}, where the underlying parameter is the Wigner function $W_\rho$ lying in the class $\mathcal{A}(\alpha,r,L)$, $\alpha>0$ and $L>0$. 
Then for $d_{n}^{2}$ defined in \eqref{estimateur}, and according to the definitions given in Section~\ref{sec:2.1}, \begin{enumerate} \item for $0<r<2$, with $\delta:=\delta_{opt}$ the solution of the equation \begin{eqnarray} \label{ftre} \frac{1-\eta}{2\eta}\frac{1}{\delta_{opt}^2}+\frac{2\alpha}{\delta_{opt}^r}=\log n-(\log\log n)^2, \end{eqnarray} we reach the optimal rate $\varphi_n$, with constants $C_u=1$ and $C_l=1/16$ in \eqref{UB} and \eqref{LB}, \begin{eqnarray} \label{vitesse} \varphi_n^2=L^2e^{\frac{-4\alpha}{\delta_{opt}^r}}, \end{eqnarray} \item for $r=2$, $\frac{1-\eta}{2\eta}-2\alpha>0$ and by taking $\delta=\delta^{*}=\left(\frac{\log n}{\frac{1-\eta}{2\eta}+2\alpha}\right)^{-1/2}$, the rate of convergence is nearly optimal, as \begin{eqnarray} \label{vitesse1} \varphi_n^2=n^{\frac{-4\alpha}{\frac{1-\eta}{2\eta}+2\alpha}}, \end{eqnarray} is the rate of convergence in the upper bound \eqref{UB} and \begin{eqnarray} \label{vitesse2} \varphi_n^2=(n\log n )^{\frac{-4\alpha}{\frac{1-\eta}{2\eta}+2\alpha}}, \end{eqnarray} is the rate of convergence in the lower bound \eqref{LB}. \end{enumerate} \end{thm} To prove Theorem~\ref{theo:1}, one has to prove the upper bound (Section~\ref{sec:7}) on the one hand and the lower bound (Section~\ref{sec:9}) on the other, according to the definitions given in Section~\ref{sec:2.1}. \begin{thm} \label{theo:2} Let $(Y_{\ell},\phi_{\ell}),\ell=1,\ldots,n$ be i.i.d data coming from the model \eqref{model}, where the underlying parameter is the Wigner function $W_\rho$ lying in the class $\mathcal{A}(\alpha,r,L)$, $r=2$, $\alpha>0$, $L>0$ and $\frac{1-\eta}{2\eta}-2\alpha<0$. Then for $d_{n}^{2}$ defined in \eqref{estimateur} with $\delta=\delta^*=\left(\frac{\eta\log n}{1-\eta}\right)^{-1/2}$, the rate of convergence is parametric: $\varphi_n^2=\frac{1}{n}$.\\ Moreover, in this case our estimator \eqref{estimateur} is asymptotically normally distributed, $$\sqrt{n}(d^2_n-d^2)\rightarrow\mathcal{N}(0,\mathcal{W}),$$ with asymptotic variance $$\mathcal{W}=\frac{1}{4\pi^{2}}\int\int|t_1||t_2|e^{-\frac{1-\eta}{2\eta}t_1t_2}E[e^{it_1X}]E[e^{it_2X}] E[e^{-i(t_1+t_2)X}] dt_1 dt_2-4(d^2)^2.$$ \end{thm} The proof of Theorem~\ref{theo:2} is given in Section~\ref{sec:7}. \begin{rmq} \label{rmq:1} We can give a more explicit form for the bandwidth, and thus for the bias term, which is asymptotically equivalent to the rate, according to the values of $r$. Let $s_n:=\frac{\log n-(\log\log n)^2}{2a}$, where $a:=\frac{1-\eta}{4\eta}$. We make successive approximations in \eqref{ftre}: starting with $\delta_0$ and plugging it back into \eqref{ftre}, we find $\delta_1$, and successively, for all $k\geq 1$, $\delta_k$. The values are given in Table~\ref{tab:1} and Table~\ref{tab:2}; a numerical sketch of this fixed-point procedure is given after Table~\ref{tab:1}.\\ \begin{table}[htbp!] \caption{Procedure} \label{tab:1} \begin{tabular}{lll} \hline\noalign{\smallskip} $\delta_0$ & $\delta_1$ & for all $k\geq 1$, $\delta_k$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} $=s_n^{-1/2}$ & $=(s_n-\frac{\alpha}{a}\delta_0^{-r})^{-1/2}$ & $=(s_n-\frac{\alpha}{a}\delta_{k-1}^{-r})^{-1/2}$ \\ \noalign{\smallskip}\hline \end{tabular} \end{table}
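The recursion of Table~\ref{tab:1} is a plain fixed-point iteration and is easy to run numerically; the following sketch (ours, with an arbitrary function name and illustrative parameters) computes the successive iterates $\delta_k$.
\begin{verbatim}
import numpy as np

def bandwidth_iterates(n, alpha, r, eta, k_max=10):
    """Iterates delta_k = (s_n - (alpha/a) * delta_{k-1}^(-r))^(-1/2) of Table 1."""
    a = (1.0 - eta) / (4.0 * eta)
    s_n = (np.log(n) - np.log(np.log(n)) ** 2) / (2.0 * a)
    deltas = [s_n ** -0.5]                                  # delta_0
    for _ in range(k_max):
        # assumes n is large enough that the argument stays positive (0 < r < 2)
        deltas.append((s_n - (alpha / a) * deltas[-1] ** (-r)) ** -0.5)
    return deltas
\end{verbatim}
For moderate $n$ the first iterates can over- and undershoot before settling down; Table~\ref{tab:2} indicates which iterate $\delta_k$ suffices for each range of $r$.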
\begin{table}[htbp!] \caption{Rates of convergence} \label{tab:2} \begin{tabular}{lll} \hline\noalign{\smallskip} If $r$ & It is enough & And the \\ belongs to & to choose & rate is \\ \noalign{\smallskip}\hline\noalign{\smallskip} $r\in]0,1]$ & $\delta=\delta_1$ & $L^2e^{\left(-4\alpha s_n^{r/2}+o(1)\right)}$ \\ $r\in]1,4/3]$ & $\delta=\delta_2$ & $L^2e^{\left(-4\alpha s_n^{r/2}+C_1 s_n^{r-1}-o(1)\right)}$ \\ $r\in]\frac{2(k-1)}{k},\frac{2k}{k+1}]$ & $\delta=\delta_k$ & $L^2 e^{\left(-4\alpha s_n^{r/2}+C_1 s_n^{r-1}-\ldots+C_{k-1}s_n^{kr/2-(k-1)}+o(1)\right)}$ \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \end{rmq} In the previous theorems, the bandwidth $\delta_{opt}$ depends on the parameters $\alpha$ and $r$ of the class $\mathcal{A}(\alpha,r,L)$, which may be difficult to evaluate in practice. However, it is possible to construct an adaptive estimator which does not depend on these parameters and which attains the same asymptotic behavior as in Theorem~\ref{theo:1}, provided that these parameters lie in a certain set. Note that the parameter $\eta$ is supposed to be known. Define two sets of parameters \begin{eqnarray*} \Theta_1&=&\{(\alpha,r,L):\alpha>0,L>0,0<r<1\}\\ \Theta_2&=&\{(\alpha,r,L):0<\alpha\leq\alpha_0,L>0,r=1\},\quad\alpha_0>0. \end{eqnarray*} \begin{thm} \label{theo:3} Let $(Y_{\ell},\phi_{\ell}),\ell=1,\ldots,n$ be i.i.d data coming from the model \eqref{model}. For $\delta=\delta^i_{ad},\,i=1,2$, let $d_{\delta,n}^{2}$ be the estimator defined by $$d_{\delta,n}^{2}:=\frac{1}{(2\pi)^{2}}\frac{1}{n(n-1)}\sum_{k\neq\ell=1}^n\int_{|t|\leq\frac{1}{\delta^i_{ad}\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}e^{itY_k-itY_\ell}d\phi dt,$$ with $\delta^1_{ad}=(\frac{2 \eta \log n}{1-\eta}-\sqrt{\frac{2\eta\log n}{1-\eta}})^{-1/2}$ and $\delta^2_{ad}=(\frac{2\eta\log n}{1-\eta}-\frac{4A\eta}{1-\eta}\sqrt{\frac{2\eta\log n}{1-\eta}})^{-1/2}$, $A>\alpha_0$. Then, for all $(\alpha,r,L)\in\Theta_i,$ $i=1,2$, respectively, $$\limsup_{n\rightarrow\infty}\sup_{W_\rho\in\mathcal{A}(\alpha,r,L)}E[|d_{\delta,n}^2-d^2|^2]\varphi_n^{-2}\leq C_i,$$ where $\varphi_n^2$ is the rate defined in \eqref{vitesse} and the constants are respectively $C_1=1$ and $C_2=\exp{(\frac{8A\alpha\eta}{1-\eta}-\frac{8\alpha^2\eta}{1-\eta})}.$ \end{thm} The proof of the adaptive case is given in Section~\ref{sec:7}. \section{Examples} \label{sec:6} Table~\ref{tab:3} shows five examples of pure quantum states and one example of a mixed state which can currently be created in the laboratory. Among the pure states we consider the vacuum state, which is the pure state with zero photons; the single photon state; the coherent state, which characterizes a laser pulse with an average of $N$ photons; the squeezed states, which have Gaussian Wigner functions whose variances in the two directions have a fixed product; and the well-known Schr\"{o}dinger cat state.\\ Note that for pure states, $d^2=1/(2\pi)$. The thermal state is a mixed state describing equilibrium at temperature $1/\beta$, having a Gaussian Wigner function with variance increasing with the temperature. For this mixed state we find $d^2=\frac{\tanh(\beta/2)}{2\pi}$. For these examples of quantum states, the procedure gives fast parametric rates, with $r=2$ and $\frac{1-\eta}{2\eta}-2\alpha<0$. We can easily check that each Wigner function belongs to the class $\mathcal{A}(\alpha,2,L)$ for small enough values of $\alpha$ (see Table~\ref{tab:3}). \begin{table}[htbp!] 
\caption{Examples of quantum states} \label{tab:3} \begin{tabular}{lll} \hline\noalign{\smallskip} State & Fourier transform of Wigner & in the class \\ & function $\widetilde{W_\rho}(u,v)$ & $\mathcal{A}(\alpha,2,L)$ if \\ \noalign{\smallskip}\hline\noalign{\smallskip} Vacuum state & $\exp\left(\frac{-\|(u,v)\|^2_2}{4}\right)$ & $\alpha<1/4$ \\ Single photon state & $\left(1-\frac{\|(u,v)\|^2_2}{2}\right)\exp\left(\frac{-\|(u,v)\|^2_2}{4}\right)$ & $\alpha<1/4$ \\ Schr\"{o}dinger's Cat $X_0>0$ & $\frac{e^{\frac{-\|(u,v)\|^2_2}{4}}}{2(1+e^{-X_0^2})}\left(\cos(2uX_0)+e^{-X_0^2}\cosh (X_0v)\right)$ & $\alpha<1/4$ \\ Coherent state $N\in\mathbb{R}_+$ & $\exp\left(\frac{-\|(u,v)\|^2_2}{4}+i\sqrt{N}v\right)$ & $\alpha<1/4$ \\ Squeezed state $N\in\mathbb{R}_+$, $\xi\in\mathbb{R}$ & $\exp\left(-\frac{u^2}{4}e^{2\xi}-\frac{v^2}{4}e^{-2\xi}+iv\alpha\right)$ & $\alpha<e^{2\xi}/4$ \\ Thermal state $\beta>0$ & $\exp\left(\frac{-\|(u,v)\|^2_2}{4(\tanh(\beta/2))^2}\right)$ & $\alpha<\frac{(\tanh(\beta/2))^2}{4}$ \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \\ Our previous results show that our estimator of the purity attains the parametric rate $1/n$ if $\eta>\frac{1}{1+4\alpha}$. This is not restrictive at all: in practice, physicists usually find $\eta>0.8$, and most often $\eta$ is close to $0.9$--$0.95$. Thus, by choosing $\alpha$ as close to its upper bound (in Table~\ref{tab:3}) as possible, we make sure that our estimator attains the parametric rate. \section{Proof of the upper bounds of theorems} \label{sec:7} \paragraph{\textbf{Sketch of proof of upper bound in Theorem~\ref{theo:1}-\eqref{vitesse}}} For $0<r<2$, by \eqref{propa} and \eqref{propbun}, \begin{eqnarray*} \text{Var}(d_n^{2})&\leq&\frac{8\eta^2/(1-\eta)^2}{\pi^2 n^2}e^{\frac{1-\eta}{\eta}\frac{1}{\delta^2}}+\frac{8L}{n\pi}\frac{\eta}{1-\eta}e^{\frac{1-\eta}{2\eta}\frac{1}{\delta^2}-\frac{2\alpha}{\delta^r}}\\ &=&\frac{C_{V1}}{n^2}e^{\frac{1-\eta}{\eta}\frac{1}{\delta^2}}+\frac{C_{V2}}{n}e^{\frac{1-\eta}{2\eta}\frac{1}{\delta^2}-\frac{2\alpha}{\delta^r}}. \end{eqnarray*} On the one hand, we select the bandwidth $\delta^*$ as $$\delta^*=\arg\inf_{\delta>0}\left\{\frac{C_{V1}}{n^2}e^{\frac{1-\eta}{\eta}\frac{1}{\delta^2}}+\frac{C_{V2}}{n}e^{\frac{1-\eta}{2\eta}\frac{1}{\delta^2}-\frac{2\alpha}{\delta^r}}+C_Be^{-4\alpha/\delta^r}\right\};$$ by taking derivatives, $\delta^*$ is a positive real number satisfying $$\frac{1-\eta}{2\eta}\frac{1}{\delta^{*2}}+\frac{2\alpha}{\delta^{*r}}+\log(\delta^{*r-2})=\log n,$$ and we notice that $B(d^2_n)\sim\delta^{r-2}\text{Var}(d^2_n)$. So the rate of convergence for the upper bound is given by the bias, i.e. $\varphi^2_n=B(d^2_n)(1+o(1))$. On the other hand, we show that by taking $\delta:=\delta_{opt}$, the unique solution of the equation $$\frac{1-\eta}{2\eta}\frac{1}{\delta_{opt}^2}+\frac{2\alpha}{\delta_{opt}^r}=\log n-(\log\log n)^2,$$ we obtain the same results: as for $\delta^*$, the variance is negligible with respect to the bias, since \begin{eqnarray*} \frac{\delta_{opt}^{r-2}}{n}\exp{\left(\frac{1-\eta}{2\eta\delta_{opt}^2}-\frac{2\alpha}{\delta_{opt}^r}\right)} &=& \frac{\delta_{opt}^{r-2}}{n}\exp{\left(\log n-(\log\log n)^2-\frac{4\alpha}{\delta_{opt}^r}\right)}\\ &\leq&\frac{\delta_{opt}^{r-2}}{(\log\log n)^2}\exp{\left(\frac{-4\alpha}{\delta_{opt}^r}\right)}\\ &\leq&\frac{\left(\log n/(2a)\right)^{(2-r)/2}}{(\log\log n)^2}\exp{\left(\frac{-4\alpha}{\delta_{opt}^r}\right)}\\ &=&o(1)\exp{\left(\frac{-4\alpha}{\delta_{opt}^r}\right)}, 
\end{eqnarray*} where $a=\frac{1-\eta}{4\eta}$ as in Remark~\ref{rmq:1}; the last inequalities follow from Lemma 8 of Butucea and Tsybakov \cite{ButuTsyba04}. We note that the variance term with $\delta_{opt}$ is larger than the variance term with $\delta^*$, but both are asymptotically negligible with respect to the bias terms, so this difference does not appear in the main term of the asymptotics. We then conclude that $\varphi_n^2=L^2\exp{\left(\frac{-4\alpha}{\delta_{opt}^r}\right)}(1+o(1)).$ The lower bound is proven in the last section. \paragraph{\textbf{Sketch of proof of upper bound in Theorem~\ref{theo:1}-\eqref{vitesse1}}} For $r=2$ and $\frac{1-\eta}{2\eta}-2\alpha>0$, we have by \eqref{propa} and \eqref{propbdeux}: $$E[|d_n^2-d^2|^2]\leq\frac{8\eta^2/(1-\eta)^2}{\pi^2 n^2}e^{\frac{1-\eta}{\eta}\frac{1}{\delta^2}}+\frac{8L}{n\pi}\frac{\eta}{1-\eta-4\alpha\eta}e^{(\frac{1-\eta}{2\eta}-2\alpha)\frac{1}{\delta^2}} +L^2e^{-4\alpha/\delta^2}.$$ To select the bandwidth, we choose $\delta=\delta^*$ as the solution of $$\delta^*=\arg\inf_{\delta>0}\left\{\frac{C_{V1}}{n^2}e^{\frac{1-\eta}{\eta}\frac{1}{\delta^2}}+\frac{C_{V2}}{n}e^{(\frac{1-\eta}{2\eta}-2\alpha)\frac{1}{\delta^2}}+C_Be^{-4\alpha/\delta^2}\right\}.$$ By taking derivatives, we find that $\delta^*$ is a positive real number satisfying $\frac{1}{\delta^{*2}}=\frac{\log n}{\frac{1-\eta}{2\eta}+2\alpha}$, and we get the rate $\varphi_n^2=n^{\frac{-4\alpha}{\frac{1-\eta}{2\eta}+2\alpha}}.$ The proof of the lower bound is in the last section. \paragraph{\textbf{Proof of the parametric rate in Theorem~\ref{theo:2}}} For $r=2$ and $\frac{1-\eta}{2\eta}-2\alpha<0$ we have, by \eqref{propa} and \eqref{propbtrois}: \begin{eqnarray*} E[|d_n^2-d^2|^2]&\leq&\frac{8\eta^2/(1-\eta)^2}{\pi^2 n^2}e^{\frac{1-\eta}{\eta}\frac{1}{\delta^2}}+\frac{8\eta L}{4\alpha\eta-1+\eta}\cdot\frac{1}{n}+L^2e^{-4\alpha/\delta^2}. \end{eqnarray*} By taking $\frac{1}{\delta^{*2}}=\frac{\eta\log n}{1-\eta}$, we can write \begin{eqnarray*} \sup_{W_\rho\in\mathcal{A}(\alpha,2,L)}E[|d_n^2-d^2|^2]&\leq& C_V\frac{e^{\frac{1-\eta}{\eta}\frac{1}{\delta^2}}}{n^2}+C_Be^{-4\alpha/\delta^2} \leq C_V\frac{1}{n}+C_Bn^{\frac{-4\alpha}{(1-\eta)/\eta}}\\ &\leq &C_V\frac{1}{n}(1+o(1)). \end{eqnarray*} So we find a parametric rate. The proof of the asymptotic normality is in Section~\ref{sec:8.3}. \paragraph{\textbf{Proof of upper bound in Theorem~\ref{theo:3}}} Our proof is based on results of Butucea and Tsybakov \cite{ButuTsyba04}. Recall that $a:=\frac{1-\eta}{4\eta}$. \paragraph{Over the set $\Theta_1$\\} As $0<r/2<1/2$, it is easy to see that $-(\frac{\log n}{2a}-\sqrt{\frac{\log n}{2a}})^{r/2}>-\frac{a}{2\alpha}\sqrt{\frac{\log n}{2a}}$ for $n$ large enough, and thus $\exp\left(-\frac{4\alpha}{(\delta^1_{ad})^r}\right)\geq\exp\left(-2a\sqrt{\frac{\log n}{2a}}\right).$ On the other hand, the first and second variance terms found in \eqref{propbun} are respectively equal, up to constant factors, to \begin{eqnarray*} \frac{1}{n}\exp\left(\frac{2a}{(\delta^1_{ad})^2}-\frac{2\alpha}{(\delta^1_{ad})^r}\right)&=&\exp\left(-2a\sqrt{\frac{\log n}{2a}}-2\alpha\left(\frac{\log n}{2a}-\sqrt{\frac{\log n}{2a}}\right)^{r/2}\right)\\ \frac{1}{n^2}\exp\left(\frac{4a}{(\delta^1_{ad})^2}\right)&=&\exp\left(-4a\sqrt{\frac{\log n}{2a}}\right). 
\end{eqnarray*} Therefore, with the bandwidth $\delta^1_{ad}$, the ratios of the bias term found in \eqref{propa} to the first and second variance terms are bounded from below respectively by $\exp\left(2\alpha\left(\frac{\log n}{2a}-\sqrt{\frac{\log n}{2a}}\right)^{r/2}\right)$ and $\exp\left(2a\sqrt{\frac{\log n}{2a}}\right).$ These expressions tend to $\infty$ as $n\rightarrow\infty$. Thus, the variance terms are asymptotically negligible w.r.t.\ the bias term. It remains to check that the bias term with the bandwidth $\delta^1_{ad}$ is asymptotically bounded by $\varphi_n^2$. For $n$ large enough, \begin{eqnarray*} L^2\exp\left(-\frac{4\alpha}{(\delta^1_{ad})^r}\right)&=&L^2\exp\left(-4\alpha\left(\frac{\log n}{2a}-\sqrt{\frac{\log n}{2a}}\right)^{r/2}\right)\\ &=&L^2\exp\left(-4\alpha(\frac{\log n}{2a})^{r/2}\left(1-(\frac{\log n}{2a})^{-1/2}\right)^{r/2}\right)\\ &\leq&L^2\exp\left(-4\alpha(\frac{\log n}{2a})^{r/2}+c(\frac{\log n}{2a})^{r/2-1/2}\right) \leq\varphi_n^2(1+o(1)). \end{eqnarray*} \paragraph{Over the set $\Theta_2$\\} As $r=1$, a simple calculation shows that $\delta_{opt}=\left(\frac{\log n}{2a}-\frac{\alpha}{a}\sqrt{\frac{\log n}{2a}}\right)^{-1/2}$ is a correct approximation in this case, giving a variance which is negligible with respect to the bias, the latter being of order \begin{eqnarray*} \varphi_n^2&=&L^2\exp\left(-\frac{4\alpha}{(\delta_{opt})}\right)=L^2\exp\left(-4\alpha\left(\frac{\log n}{2a}-\frac{\alpha}{a}\sqrt{\frac{\log n}{2a}}\right)^{1/2}\right)\\ &=&L^2\exp\left(-4\alpha\sqrt{\frac{\log n}{2a}}+\frac{2\alpha^2}{a}\right)(1+o(1)). \end{eqnarray*} As for the estimator with bandwidth $\delta^2_{ad}$, we get \begin{eqnarray*} L^2\exp\left(-\frac{4\alpha}{(\delta^2_{ad})}\right)&=&L^2\exp\left(-4\alpha\sqrt{\frac{\log n}{2a}}+\frac{2A\alpha}{a}\right)(1+o(1))\\ &=& C_2L^2\exp\left(-4\alpha\sqrt{\frac{\log n}{2a}}+\frac{2\alpha^2}{a}\right)(1+o(1)). \end{eqnarray*} Hence the results. \section{Proof of the Proposition 1} \label{sec:8} \paragraph{Main tools} Extensive use is made of formulas \eqref{fourierp}, \eqref{fourierproun}, \eqref{fourierprodeux}, \eqref{fourierbruit} and \eqref{ens fctnel}, and of the Plancherel formula, which gives \begin{eqnarray} \label{distance} d^2:=\int_{\mathbb{R}^2}W_\rho^2(p,q)dpdq =\frac{1}{(2\pi)^{2}}\int_{\mathbb{R}^2} |\widetilde{W}_\rho(u,v)|^2dudv. \end{eqnarray} \subsection{Proof of Proposition 1-\eqref{propa}} \label{sec:8.1} We write $E[\cdot]$ instead of $E_\rho[\cdot]$. Because $Y_k$ and $Y_\ell$ are i.i.d.\ for all $k\neq \ell$, \begin{eqnarray*} E[d_n^{2}]&=&\frac{1}{(2\pi)^{2}}\frac{1}{n(n-1)}\sum_{k\neq\ell=1}^n\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi \eta|t|E\left[e^{\frac{t^2}{2}(1-\eta)}e^{itY_k-itY_\ell}\right]d\phi dt\\ &=&\frac{1}{(2\pi)^{2}}\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}|E\left[e^{itY}\right]|^2d\phi dt\\ &=&\frac{1}{(2\pi)^{2}}\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}|\mathcal{F}[p^{\eta}_\rho(\cdot/\phi)](t)|^2d\phi dt. \end{eqnarray*} Using \eqref{fourierprodeux}, the change of variables $T=t\sqrt{\eta}$, and then the polar coordinates $u=T\cos \phi$, $v=T\sin\phi$, \begin{eqnarray*} E[d_n^{2}] &=&\frac{1}{(2\pi)^{2}}\int_{|T|\leq\frac{1}{\delta}}\int_0^\pi|T||\mathcal{F}[p_\rho(\cdot/\phi)](T)|^2d\phi dT\\ \label{esperance} &=&\frac{1}{(2\pi)^{2}}\int\int_{\|(u,v)\|_2\leq\frac{1}{\delta}}|\widetilde{W}_\rho(u,v)|^2dudv. 
\end{eqnarray*} Combining \eqref{distance} and \eqref{esperance}, and defining $w:=(u,v)$, \begin{eqnarray*} |E[d_n^{2}]-d^{2}|&=&|\frac{1}{(2\pi)^{2}}\int_{|t|>\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}|E[e^{itY}]|^2d\phi dt|\\ &=&\frac{1}{(2\pi)^{2}}|\int_{\mathbb{R}^2}\mathbb{I}_{\|w\|_2>1/\delta}|\widetilde{W}_\rho(w)|^2dw|\\ &\leq& \frac{1}{(2\pi)^{2}}e^{-2\alpha/\delta^r}\int_{\mathbb{R}^2}|\widetilde{W}_\rho(w)|^2e^{2\alpha\|w\|_2^r}dw\leq L e^{-2\alpha/\delta^r}, \end{eqnarray*} as $W_\rho$ belongs to $\mathcal{A}(\alpha,r,L)$. \subsection{Proof of Proposition 1-\eqref{propbun}-\eqref{propbdeux}-\eqref{propbtrois}} \label{sec:8.2} First, we center the variables: \begin{eqnarray*} d_n^{2}-E[d_n^{2}]&=&\frac{1}{4\pi^{2}n(n-1)}\sum_{k\neq\ell=1}^n\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi \eta|t|e^{\frac{t^2}{2}(1-\eta)}\left(e^{itY_k-itY_\ell}-E[e^{itY}]E[e^{-itY}]\right)d\phi dt\\ &=&\frac{1}{4\pi^{2}n(n-1)}\sum_{k\neq\ell=1}^n\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}\left(e^{itY_k}-E[e^{itY}]\right)\\ &&\cdot\left(e^{-itY_\ell}-E[e^{-itY}]\right)d\phi dt+\frac{1}{4\pi^{2}n}\sum_{k=1}^n\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi \eta|t|e^{\frac{t^2}{2}(1-\eta)}\left(e^{itY_k}E[e^{-itY}]\right.\\ &&\left.+e^{-itY_k}E[e^{itY}]-2|E[e^{itY}]|^2\right)d\phi dt. \end{eqnarray*} Define $Z_k(t)=Z_k:=e^{itY_k}-E[e^{itY}]$ and let $\bar{Z}_k$ be its complex conjugate; then \begin{eqnarray*} d_n^{2}-E[d_n^{2}]&=&\frac{1}{(2\pi)^{2}}\left(\frac{1}{n(n-1)}\sum_{k\neq\ell}\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi \eta|t|e^{\frac{t^2}{2}(1-\eta)}Z_k\bar{Z}_\ell d\phi dt\right.\\ &&+\left.\frac{1}{n}\sum_{j}\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}{\left(Z_jE[e^{-itY}]+\bar{Z}_jE[e^{itY}]\right)}d\phi dt\right). \end{eqnarray*} Denoting by $J_1$ and $J_2$ respectively the first and second terms of the previous sum, we then have \begin{equation} \label{variance} Var(d_n^{2})=E[(d_n^{2}-E[d_n^{2}])^2]=E[J_1^2]+E[J_2^2]+2E[J_1J_2]. \end{equation} The cross term vanishes: \begin{eqnarray*} E[J_1J_2]&=&\frac{1}{(2\pi)^{4}}\frac{1}{n^2(n-1)}\sum_{k\neq\ell}\sum_{j}E\left[\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi \eta|t|e^{\frac{t^2}{2}(1-\eta)}Z_k\bar{Z}_{\ell}d\phi dt\right)\right.\\ &&\cdot\left.\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}\left(E[e^{-itY}]Z_j +E[e^{itY}]\bar{Z}_{j}\right)d\phi dt\right)\right]=0, \end{eqnarray*} since $E[Z_j]=0$ for all $j=1,...,n$ and, in each summand, as $k\neq\ell$, at least one of the three indices differs from the other two, so that the corresponding centered factor is independent of the rest. Now consider \begin{eqnarray*} E[J_1^2]&=&\frac{1}{16\pi^{4}n^2(n-1)^2}E\left[\left(\sum_{k\neq\ell}\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi \eta|t|e^{\frac{t^2}{2}(1-\eta)}Z_{k}\bar{Z}_{\ell}d\phi dt\right)^2\right]\\ &=&\frac{1}{16\pi^{4}n^2(n-1)^2}\sum_{k_1\neq\ell_1}\sum_{k_2\neq\ell_2}E\left[\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}Z_{k_1}\bar{Z}_{\ell_1}d\phi dt\right)\right.\\ &&\cdot\left.\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi \eta|t|e^{\frac{t^2}{2}(1-\eta)}Z_{k_2}\bar{Z}_{\ell_2}d\phi dt\right)\right]. \end{eqnarray*} Note that, as soon as one of the indices $k_1$, $\ell_1$, $k_2$, $\ell_2$ differs from all the others, the expected value is $0$. 
Thus, \begin{eqnarray*} E[J_1^2]&=&\frac{1}{16\pi^{4}n^2(n-1)^2}\left(\sum_{k\neq\ell}E\left[\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t| e^{\frac{t^2}{2}(1-\eta)}Z_{k}\bar{Z}_{\ell}d\phi dt\right)^2\right]\right.\\ &&\left.+\sum_{k\neq\ell}E\left[|\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}Z_{k}\bar{Z}_{\ell}d\phi dt|^2\right]\right), \end{eqnarray*} \begin{eqnarray*} E[J_1^2]&=&\frac{1}{16\pi^{4}n(n-1)}\cdot\frac{1}{2}E\left[\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}Z_1\bar{Z}_{2}d\phi dt\right)^2\right.\\ &&\left.+\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}Z_2\bar{Z}_{1}d\phi dt\right)^2\right]\\ &&+\frac{1}{16\pi^{4}n(n-1)}E\left[|\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}Z_1\bar{Z}_{2}d\phi dt|^2\right], \end{eqnarray*} \begin{eqnarray*} E[J_1^2]&=&\frac{1}{16\pi^{4}n(n-1)}E\left[\Re e\left(\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}Z_1\bar{Z}_{2}d\phi dt\right)^2\right)\right]\\ &&+\frac{1}{16\pi^{4}n(n-1)}E\left[|\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}Z_1\bar{Z}_{2}d\phi dt|^2\right]. \end{eqnarray*} Noticing that $|\Re e(z)|\leq |z|$ and using the fact that $|Z_k|\leq 2$, we get \begin{eqnarray} E[J_1^2]&\leq&\frac{1}{8\pi^{4}n(n-1)}E\left[|\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t| e^{\frac{t^2}{2}(1-\eta)}Z_1\bar{Z}_{2}d\phi dt|^2\right]\nonumber\\ &\leq&\frac{2\pi^2}{\pi^{4}n^2}\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\eta|t|e^{\frac{t^2}{2}(1-\eta)}dt\right)^2\nonumber \\ \label{Jun} &\leq &\frac{8\eta^2}{\pi^{2}(1-\eta)^2}\cdot\frac{1}{n^2}\exp\left(\frac{1-\eta}{\eta\delta^2}\right). \end{eqnarray} Since the $(Z_k)_k$ are i.i.d.\ and centered, we then have, by expanding the square: \begin{eqnarray} E[J_2^2]&=&E\left[\frac{1}{(4\pi^2 n)^{2}}\left(\sum_{k=1}^n\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi \eta|t|e^{\frac{t^2}{2}(1-\eta)}2\Re e(E[e^{itY}]\bar{Z}_k)d\phi dt\right)^2\right]\nonumber\\ &=&\frac{1}{16\pi^{4}n}E\left[\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}2\Re e(E[e^{itY}]\bar{Z}_1)d\phi dt\right)^2\right] \nonumber\\ \label{termdom} &=&\frac{1}{4\pi^{4}n}E\left[\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}\Re e(E[e^{itY}]\bar{Z}_1)d\phi dt\right)^2\right]\\ &\leq&\frac{1}{\pi^{4}n}\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}|E[e^{itY}]|d\phi dt\right)^2, \nonumber \end{eqnarray} as $|\Re e(z)|\leq |z|$ and $|\bar{Z}_1|\leq 2$. 
Then, using successively \eqref{fourierproun} and \eqref{fourierbruit}, next \eqref{fourierprodeux}, and the change of variables $T=t\sqrt{\eta}$, \begin{eqnarray*} E[J_2^2]&\leq&\frac{1}{\pi^{4}n}\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}|E[e^{itY}]|d\phi dt\right)^2\\ &=&\frac{1}{\pi^{4}}\frac{1}{n}\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{4}(1-\eta)}\frac{|\textit{F}_1[p_\rho^\eta(./\phi)](t)|}{|\widetilde{N}^\eta(t)|}d\phi dt\right)^2\\ &=&\frac{1}{\pi^{4}}\frac{1}{n}\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{4}(1-\eta)}|\textit{F}_1[p_\rho(./\phi)](\sqrt\eta t)|d\phi dt\right)^2\\ &=&\frac{1}{\pi^{4}}\frac{1}{n}\left(\int_0^\pi \int_{|T|<\frac{1}{\delta}}|T|e^{\frac{T^2}{4\eta}(1-\eta)}|\textit{F}_1[p_\rho(./\phi)](T)|d\phi dT\right)^2. \end{eqnarray*} Then, by \eqref{fourierp} and using the polar coordinates $u=T\cos\phi$, $v=T\sin\phi$, \begin{eqnarray*} E[J_2^2]&=&\frac{1}{\pi^{4}}\frac{1}{n}\left(\int_0^\pi\int_{|T|<\frac{1}{\delta}}|T|e^{\frac{T^2}{4\eta}(1-\eta)}|\widetilde{W}_{\rho}(T\cos\phi,T\sin\phi)|d\phi dT\right)^2\\ &=&\frac{1}{\pi^{4}}\frac{1}{n}\left(\int_{\|(u,v)\|_2\leq 1/\delta}e^{\frac{1-\eta}{4\eta}\|(u,v)\|^2_2}|\widetilde{W}_{\rho}(u,v)|dudv\right)^2. \end{eqnarray*} Defining $z:=(u,v)$ and using the Cauchy--Schwarz inequality and \eqref{ens fctnel}, \begin{eqnarray} E[J_2^2]&=&\frac{1}{\pi^{4}}\frac{1}{n}\left(\int_{\|z\|_2\leq 1/\delta}e^{\frac{1-\eta}{4\eta}\|z\|^2_2}|\widetilde{W}_{\rho}(z)|dz\right)^2\nonumber\\ &\leq&\frac{1}{n\pi^4}(2\pi)^2L\int_{\|z\|_2\leq1/\delta}e^{\frac{1-\eta}{2\eta}\|z\|_2^2-2\alpha\|z\|_2^r}dz\nonumber\\ \label{Jdeux} &\leq&\frac{8L}{n\pi} \int_0^{1/\delta}te^{\frac{1-\eta}{2\eta}t^2-2\alpha t^r}dt. \end{eqnarray} \begin{enumerate} \item For $0<r<2$, according to Lemma 6 of Butucea and Tsybakov \cite{ButuTsyba04} we get: \begin{eqnarray} \label{run} \frac{8L}{n\pi}\int_0^{1/\delta}te^{\frac{1-\eta}{2\eta}t^2-2\alpha t^r}dt\leq\frac{8L}{n\pi}\frac{\eta}{1-\eta}e^{\frac{1-\eta}{2\eta}\frac{1}{\delta^2}-2\alpha\frac{1}{\delta^r}}. \end{eqnarray} The expressions \eqref{Jun} and \eqref{Jdeux}, together with \eqref{run}, conclude the proof of \eqref{propbun}. \item For $r=2$ and $\frac{1-\eta}{2\eta}-2\alpha>0$, according to Lemma 6 of \cite{ButuTsyba04} we get: \begin{eqnarray} \frac{8L}{n\pi}\int_0^{1/\delta}te^{\frac{1-\eta}{2\eta}t^2-2\alpha t^2}dt&\leq&\frac{8L}{n\pi}\cdot\frac{1}{2(\frac{1-\eta}{2\eta}-2\alpha)}e^{(\frac{1-\eta}{2\eta}-2\alpha)\frac{1}{\delta^2}}\nonumber\\ \label{rdeux} &\leq&\frac{8L}{n\pi}\cdot\frac{\eta}{1-\eta-4\alpha\eta}e^{(\frac{1-\eta}{2\eta}-2\alpha)\frac{1}{\delta^2}}. \end{eqnarray} The expressions \eqref{Jun} and \eqref{Jdeux}, together with \eqref{rdeux}, conclude the proof of \eqref{propbdeux}. \item For $r=2$ and $\frac{1-\eta}{2\eta}-2\alpha<0$ we have: \begin{eqnarray} \frac{8L}{n\pi}\int_0^{1/\delta}te^{\frac{1-\eta}{2\eta}t^2-2\alpha t^2}dt&\leq&\frac{4L}{2\alpha-\frac{1-\eta}{2\eta}}\frac{1}{n} \label{rtrois} \leq\frac{8\eta L}{4\alpha\eta-1+\eta}\cdot\frac{1}{n}. \end{eqnarray} The expressions \eqref{Jun} and \eqref{Jdeux}, together with \eqref{rtrois}, conclude the proof of \eqref{propbtrois}. \end{enumerate} \subsection{The asymptotic normality} \label{sec:8.3} Let $r=2$, $\frac{1-\eta}{2\eta}-2\alpha<0$ and $\delta=\delta^*=\left(\frac{\eta\log n}{1-\eta}\right)^{-1/2}$. Write 
$$\sqrt{n}(d_n^2-d^2)=\sqrt{n}(d_n^2-E[d_n^2])+\sqrt{n}\left(E[d_n^2]-d^2\right).$$ The bias part satisfies $\sqrt{n}|E[d_n^2]-d^2|=\sqrt{nB(d_n^2)}\leq\sqrt{n}Le^{-\frac{2\alpha}{(\delta^*)^2}}$, which tends to 0 as $n\rightarrow \infty$ since $\frac{1-\eta}{2\eta}-2\alpha<0$.\\ Moreover, $\sqrt{n}(d_n^2-E[d_n^2])=\sqrt{n}(J_1+J_2)$, where $J_{1,2}$ are centered and were defined in Section~\ref{sec:8.2}. It has been shown in \eqref{rtrois} that the dominating term in the variance $E[(d_n^{2}-E[d_n^{2}])^2]$ is given by $E[J^2_2]$ defined in \eqref{termdom}. That means $nE[J^2_1]=o(1)$ as $n\rightarrow \infty$, and $\sqrt{n}J_1\stackrel{P}{\rightarrow} 0$. Thus, the asymptotic normality is given by the term $\sqrt{n}J_2$.\\ As $J_2=\frac{1}{4\pi^2}\cdot\frac{1}{n}\sum_{j}\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}\left(Z_jE[e^{-itY}]+\bar{Z}_jE[e^{itY}]\right)d\phi dt$ is a normalized sum of i.i.d.\ random variables with finite variance, we can use a classical central limit theorem, and the asymptotic variance is given by the limit of $n E[J_2^2]$. Let us study $\lim_{n\to\infty} nE[J^2_2]$: \begin{eqnarray*} n E[J_2^2]&=&\frac{1}{4\pi^{4}}E\left[\Re e\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi \eta|t|e^{\frac{t^2}{2}(1-\eta)}(E[e^{itY}]e^{-itY}\right.\right.\\ &&\left.\left.-E[e^{itY}]E[e^{-itY}])d\phi dt\right)^2\right]\\ &=&\frac{1}{4\pi^{4}}E\left[\Re e\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}E[e^{itY}]e^{-itY}d\phi dt\right)^2\right]\\ &&-\frac{1}{4\pi^{4}}\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}E[e^{itY}]E[e^{-itY}]d\phi dt\right)^2:=A_1-A_2. \end{eqnarray*} On the one hand, as already proved in Section~\ref{sec:8.1}, $A_2=4\left(E[d_n^2]\right)^2$. Therefore, $\lim_{n\to\infty}A_2=4(d^2)^2=4\left\|W_\rho\right\|^4_2$. On the other hand, \begin{eqnarray*} A_1&=&\frac{1}{4\pi^{4}}E\left[\Re e\left(\int_{|t|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\eta|t|e^{\frac{t^2}{2}(1-\eta)}E[e^{itY}]e^{-itY}d\phi dt\right)^2\right]\\ &=&\frac{1}{4\pi^{4}}\int_{|t_1|\leq\frac{1}{\delta\sqrt{\eta}}}\int_{|t_2|\leq\frac{1}{\delta\sqrt{\eta}}}\int_0^\pi\int_0^\pi\eta^2|t_1||t_2|e^{\frac{t_1^2+t_2^2}{2}(1-\eta)}E[e^{it_1Y}]E[e^{it_2Y}]\\ &&\cdot E[e^{-i(t_1+t_2)Y}]d\phi_1 dt_1d\phi_2 dt_2. \end{eqnarray*} By changing the variables $t_i$ into $t_i/\sqrt{\eta}$, and writing $Y/\sqrt{\eta}=X+\sqrt{\frac{1-\eta}{2\eta}}\,\xi$, we get \begin{eqnarray*} A_1&=&\frac{1}{4\pi^{2}}\int_{|t_1|\leq\frac{1}{\delta}}\int_{|t_2|\leq\frac{1}{\delta}}|t_1||t_2|e^{\frac{1-\eta}{2\eta}(t_1^2+t_2^2)}E[e^{it_1Y/\sqrt{\eta}}]E[e^{it_2Y/\sqrt{\eta}}]\\ &&\cdot E[e^{-i(t_1+t_2)Y/\sqrt{\eta}}]dt_1dt_2\\ &=&\frac{1}{4\pi^{2}}\int_{|t_1|\leq\frac{1}{\delta}}\int_{|t_2|\leq\frac{1}{\delta}}|t_1||t_2|e^{\frac{1-\eta}{2\eta}(t_1^2+t_2^2)}E[e^{it_1(X+\sqrt{\frac{1-\eta}{2\eta}}\xi)}]E[e^{it_2(X+\sqrt{\frac{1-\eta}{2\eta}}\xi)}]\\ &&\cdot E[e^{-i(t_1+t_2)(X+\sqrt{\frac{1-\eta}{2\eta}}\xi)}]dt_1 dt_2. 
\end{eqnarray*} As $X$ and $\xi$ are independent, and since $E[e^{iT\sqrt{\frac{1-\eta}{2\eta}}\xi}]=e^{-T^2\frac{1-\eta}{4\eta}}$, we get \begin{eqnarray*} A_1&=&\frac{1}{4\pi^{2}}\int_{|t_1|\leq\frac{1}{\delta}}\int_{|t_2|\leq\frac{1}{\delta}}|t_1||t_2|e^{\frac{1-\eta}{2\eta}(t_1^2+t_2^2)}e^{-\frac{1-\eta}{4\eta}(t_1^2+t_2^2)}e^{-\frac{1-\eta}{4\eta}(t_1+t_2)^2}E[e^{it_1X}]\\ &&\cdot E[e^{it_2X}] E[e^{-i(t_1+t_2)X}]dt_1 dt_2\\ &=&\frac{1}{4\pi^{2}}\int_{|t_1|\leq\frac{1}{\delta}}\int_{|t_2|\leq\frac{1}{\delta}}|t_1||t_2|e^{-\frac{1-\eta}{2\eta}t_1t_2}E[e^{it_1X}]E[e^{it_2X}] E[e^{-i(t_1+t_2)X}]dt_1 dt_2, \end{eqnarray*} and $\lim_{n\to\infty}A_1=\frac{1}{4\pi^{2}}\int\int|t_1||t_2|e^{-\frac{1-\eta}{2\eta}t_1t_2}E[e^{it_1X}]E[e^{it_2X}] E[e^{-i(t_1+t_2)X}]dt_1 dt_2$.\\ Denoting $\mathcal{W}=\lim_{n\to\infty}(A_1-A_2)$, we get the result. \section{Proofs of lower bounds} \label{sec:9} In this section, we prove the lower bounds of Theorem~\ref{theo:1}. For this we rely on the results of Butucea and Tsybakov \cite{ButuTsyba04}. They show that the problem of bounding the minimax risk from below can be reduced to two functions $W_{\rho_1}$ and $W_{\rho_0}$ depending on a parameter $\tilde{\delta}_n=\tilde{\delta}$ such that $\tilde{\delta}\rightarrow 0$ as $n\rightarrow\infty$. The choice of $\tilde{\delta}$ ensures the existence of the lower bound. For $0<r<2$, the parameter $\tilde{\delta}$ is the unique solution of the equation \begin{eqnarray} \label{equah} \frac{2\alpha}{\tilde{\delta}^r}+\frac{1-\eta}{2\eta\tilde{\delta}^2}=\log n+(\log\log n)^2; \end{eqnarray} notice that it differs from the $\delta$ appearing in the expression of our estimator defined in \eqref{estimateur}. For $r=2$, we take \begin{eqnarray} \label{equahdeux} \tilde{\delta}=\left(\frac{\log (n\log n)}{2(a+\alpha)}\right)^{-1/2},\quad \text{where}\quad a=\frac{1-\eta}{4\eta}. \end{eqnarray} We will use the Wigner functions $W_{\rho_0}$ and $W_{\rho_1}$ built by Butucea \textit{et al.} (2007) \cite{ButGutArt05} in the first preprint version of their paper, as well as certain results coming from this construction. $W_{\rho_0}$ is a fixed function corresponding to the density matrix $\rho_0$, and $W_{\rho_1}$ is of the form $$W_{\rho_1}(z)=W_{\rho_0}(z)+V_{\tilde{\delta}}(z)\quad \text{and}\quad\rho_1=\rho_0 +\tau^{\tilde{\delta}},$$ such that $\rho_1$ is a density matrix (positive and with trace equal to one) with Radon transform $p_1$. Note that the function $V_{\tilde{\delta}}$ is not the Wigner function of a density matrix, but belongs to the linear span of the space of Wigner functions, and its corresponding matrix $\tau^{\tilde{\delta}}$ is in the linear span of density matrices. We detail below the construction of $W_{\rho_{0,1}}$, $\rho_{0,1}$ and $V_{\tilde{\delta}}$, as well as the results that follow from it. As stated above, we will use Lemma 4 of Butucea and Tsybakov \cite{ButuTsyba04}. Let us suppose first of all that the following conditions are satisfied: \begin{equation} \label{hyp1} W_{\rho_1},W_{\rho_0}\in\mathcal{A}(\alpha,r,L), \end{equation} \begin{equation} \label{hyp2} |d_1^2-d_0^2|=|\|W_{\rho_1}\|^2_2-\|W_{\rho_0}\|^2_2|\geq 2\phi_n(1+o(1)),\;n\rightarrow\infty, \end{equation} \begin{equation} \label{hyp3} n\chi^2:=n\int_0^\pi \int\frac{(p_1^\eta(y)-p_0^\eta(y))^2}{p_0^\eta(y)}dyd\phi=o(1),\;n\rightarrow\infty. 
\end{equation} We then reduce the minimax risk to these two functions $W_{\rho_1}$, $W_{\rho_0}$. Denote by $\widehat{d}_n^2$ an arbitrary estimator of $d_\rho^2:=\|W_\rho\|^2_2$; then, for some $0<\tau<1$, \begin{eqnarray*} \inf_{\widehat{d}_n^2}\sup_{W_\rho\in\mathcal{A}(\alpha,r,L)}E[|\widehat{d}_n^2-d_\rho^2|^2] &\geq&\inf_{\widehat{d}_n^2}\frac{1}{2}(E_{\rho_0}[|\widehat{d}_n^2-d_{\rho_0}^2|^2]+E_{\rho_1}[|\widehat{d}_n^2-d_{\rho_1}^2|^2])\\ &\geq&\inf_{\widehat{d}_n^2}\frac{1}{2}(E_{\rho_0}[|\widehat{d}_n^2-d_{\rho_0}^2|^2]\\ &&+(1-\tau)E_{\rho_0}[\mathbb{I}(\frac{dP^{\eta}_{\rho_1}}{dP^{\eta}_{\rho_0}}\geq 1-\tau)|\widehat{d}_n^2-d_{\rho_1}^2|^2])\\ &\geq&\inf_{\widehat{d}_n^2}\frac{1}{2}(1-\tau)(E_{\rho_0}[\mathbb{I}(\frac{dP^{\eta}_{\rho_1}}{dP^{\eta}_{\rho_0}}\geq 1-\tau)(|\widehat{d}_n^2-d_{\rho_0}^2|^2\\ &&+|\widehat{d}_n^2-d_{\rho_1}^2|^2)]). \end{eqnarray*} As $a^2+b^2\geq (a-b)^2/2$ for all real numbers $a$ and $b$, we can get rid of the estimator: the last expression is \begin{eqnarray*} &\geq&\frac{1}{4}(1-\tau)E_{\rho_0}[\mathbb{I}(\frac{dP^{\eta}_{\rho_1}}{dP^{\eta}_{\rho_0}}\geq(1-\tau))|d_{\rho_1}^2-d_{\rho_0}^2|^2]\\ &\geq&(1-\tau)\phi_n^2(1-P_{\rho_0}(\frac{dP^{\eta}_{\rho_1}}{dP^{\eta}_{\rho_0}}-1<-\tau))\\ &\geq&(1-\tau)\phi_n^2(1-\frac{1}{\tau^2}\int(\frac{dP^{\eta}_{\rho_1}}{dP^{\eta}_{\rho_0}}-1)^2dP^{\eta}_{\rho_0}). \end{eqnarray*} Supposing $n\chi^2\leq\tau^4$, the last expression is bounded from below by $(1-\tau)^2\phi_n^2(1+\tau)$. It is thus enough to check \eqref{hyp3} in order to take $\tau=\tau_n\rightarrow 0$ as $n\rightarrow\infty$, and we obtain a lower bound for the minimax risk of order $\phi_n^2(1+o(1))$ for any estimator $\widehat{d}_n^2$. Our proof of the lower bounds is quite similar to that of Butucea \textit{et al.} (2007) \cite{ButGutArt05}. The main difference is the proof of \eqref{hyp2}, as we bound from below not the Wigner function itself but the quadratic functional of the Wigner function. Nevertheless, for the reader's convenience, we reproduce the key proofs to complete the proof of the lower bounds. \subsubsection{The density matrix $\rho_0$} \label{sec:9.0.1} The main difference with the construction in Butucea \textit{et al.} (2007) \cite{ButGutArt05} is that they considered two Wigner functions $W_{\rho_1}$ and $W_{\rho_2}$ with $W_{\rho_1,\rho_2}=W_{\rho_0}\pm V_{\tilde{\delta}}$, while we consider only the Wigner function $W_{\rho_1}=W_{\rho_0}+ V_{\tilde{\delta}}$, because we have to bound from below the quantity $|\|W_{\rho_1}\|^2_2-\|W_{\rho_0}\|^2_2|$ instead of $\|W_{\rho_1}-W_{\rho_0}\|^2_2$, and we must make sure that $\widetilde{W}_{\rho_0}$ and $\widetilde{V}_{\tilde{\delta}}$ are positive functions. In this paragraph we recall some results and lemmas of Butucea \textit{et al.} (2007) \cite{ButGutArt05} about the density matrix $\rho_0$ and its corresponding Wigner function. They constructed a family of density matrices $\rho^{\beta,\xi}$ from which they selected $\rho_0=\rho^{\beta_0,\xi_0}$, with Radon transform $p_{\beta}^{\xi}$ equal to $$p_\beta^\xi(x,\phi):=\int_0^1\frac{f_\beta^\xi(z)}{\sqrt{\pi(1-z^2)}}\exp{\left(-x^2\frac{1-z}{1+z}\right)}dz,$$ where $f_\beta^\xi(z)=\beta(1-z)^\beta/(1-\xi)\mathbb{I}(\xi\leq z\leq 1)$, for some $0<\beta,\xi\leq 1$. The Fourier transform is $$\widetilde{W_\beta^\xi}(w)=\mathcal{F}_1[p_\beta^\xi](\|w\|,\phi)=\int_0^1\frac{f_\beta^\xi(z)}{1-z}\exp\left(-\|w\|^2\frac{1+z}{4(1-z)}\right)dz.$$ Notice that this Fourier transform is positive and that $\widetilde{W_\beta^\xi}(0)=1$. 
The study of the asymptotic behavior of such functions is done in Lemmas~\ref{lm:1} and~\ref{lm:2}. Lemma~\ref{lm:3} establishes that $W_\beta^\xi$ belongs to the class $\mathcal{A}(\alpha,r,L)$ for $\beta>0$ small enough and $\xi$ close to 1. \begin{lm} \label{lm:1} For all $0<\beta,\xi\leq 1$ and $|x|>1$ there exist constants $c, C$ depending on $\beta$ and $\xi$, such that $$c|x|^{-(1+2\beta)}\leq p_\beta^\xi(x,\phi)\leq C|x|^{-(1+2\beta)}.$$ \end{lm} \begin{lm} \label{lm:2} For all $0<\beta,\xi\leq 1$ we have $$\rho^{\beta,\xi}_{n,n}=\frac{\beta}{(1-\xi)^\beta}\Gamma(\beta+1)n^{-(1+\beta)}(1+o(1)),\,\,n\rightarrow\infty.$$ \end{lm} \begin{lm} \label{lm:3} For any $(\alpha,r,L)$ such that $0<r\leq 2$, there exist $0<\beta,\xi\leq 1$ such that $W_\beta^\xi$ belongs to the class $\mathcal{A}(\alpha,r,L)$. \end{lm} For the proofs of these lemmas we refer to Butucea \textit{et al.} (2007) \cite{ButGutArt05}. \subsubsection{Construction of $V_{\tilde{\delta}}$ and asymptotic properties of $\tau^{\tilde{\delta}}$} \label{sec:9.0.2} To use the same construction as Butucea \textit{et al.} (2007) \cite{ButGutArt05}, we define on $\mathbb{R}^2$ the function $V_{\tilde{\delta}}$ whose Fourier transform is $$\mathcal{F}_2[V_{\tilde{\delta}}](w):=\widetilde{V}_{\tilde{\delta}}(w)=J_{\tilde{\delta}}(t)=2\sqrt{rL\pi\alpha}\tilde{\delta}^{(2-r)/2}e^{\alpha/\tilde{\delta}^r}e^{-2\alpha|t|^r}J(|t|^r-\frac{1}{\tilde{\delta}^r}),$$ where $t=\|w\|$, and $J$ is a 3-times continuously differentiable function on $\mathbb{R}$ with its first 3 derivatives uniformly bounded on $\mathbb{R}$, such that for any $\lambda>0$ and any $D>4\lambda$ $$\mathbb{I}(2\lambda\leq u\leq D-2\lambda)\leq J(u)\leq\mathbb{I}(\lambda\leq u\leq D-\lambda),\quad \text{for all}\;u\in\mathbb{R}.$$ We choose $\tilde{\delta}$ as the solution of \eqref{equah} when $0<r<2$, and $\tilde{\delta}$ as in \eqref{equahdeux} when $r=2$. We want $V_{\tilde{\delta}}$ to belong to the linear span of the space of Wigner functions and its corresponding matrix $\tau^{\tilde{\delta}}$ to belong to the linear span of density matrices. For that, we use an important property of Wigner functions: the isometry (up to a constant) between the linear span of density matrices and that of Wigner functions with respect to the $\mathbb{L}_2$-distances, in particular $$\|W_{\rho_2}-W_{\rho_1}\|_2^2:=\int\int|W_{\rho_2}(p,q)-W_{\rho_1}(p,q)|^2dpdq=\frac{1}{2\pi}\|\rho_2-\rho_1\|^2_2,$$ for any $\rho_2$, $\rho_1$. Note that, because the function $V_{\tilde{\delta}}$ is invariant under rotations in the plane, the corresponding matrix has all off-diagonal elements equal to 0, and for the diagonal ones we can use the following formula from Leonhardt (1997) \cite{Leon97}: $$\tau^{\tilde{\delta}}_{nn}=4\pi^2\int^\infty_0 L_n(t^2/2)e^{-t^2/4}tJ_{\tilde{\delta}}(t)dt.$$ As our choice of $V_{\tilde{\delta}}$ is the same as in Butucea \textit{et al.} (2007) \cite{ButGutArt05}, we have the same asymptotic behavior, derived in the following lemma. \begin{lm} \label{lm:4} The matrix $\tau^{\tilde{\delta}}$ has the following asymptotic behavior: $$\tau^{\tilde{\delta_n}}_{nn}=O(n^{-5/4})o_{\tilde{\delta}}(1).$$ \end{lm} For the proof of this lemma we refer to Butucea \textit{et al.} (2007) \cite{ButGutArt05}. It now remains to prove conditions \eqref{hyp1}, \eqref{hyp2} and \eqref{hyp3} to obtain the lower bound.
\subsection{Proof of conditions \eqref{hyp1}, \eqref{hyp2} and \eqref{hyp3}} \label{sec:9.1} \paragraph{\textbf{Proof of \eqref{hyp1}}} From Lemma~\ref{lm:3} we get, for any $\beta$ small enough and $\xi$ close to 1, that the Wigner function $W_\beta^\xi$ belongs to the class $\mathcal{A}(\alpha,r,a^2L)$. Lemmas~\ref{lm:2} and~\ref{lm:4} imply that for any $\beta<1/4$ the diagonal matrix $\rho_1=\rho^{\beta,\xi}+\tau^{\tilde{\delta}}$ is positive with trace one for $\tilde{\delta}$ small enough. Thus there exist $\beta_0$, $\xi_0$ such that the corresponding matrix $\rho_1$ is a density matrix and $W_{\rho_0}=W_{\beta_0}^{\xi_0}\in\mathcal{A}(\alpha,r,a^2L)$. Let us prove that $W_{\rho_1}\in\mathcal{A}(\alpha,r,L)$. By the triangle inequality, \begin{eqnarray*} \|\mathcal{F}_2[W_{\rho_1}]e^{\alpha\|.\|^r}\|_2 &\leq &\|\mathcal{F}_2[W_{\rho_0}]e^{\alpha\|.\|^r}\|_2+\|\mathcal{F}_2[V_{\tilde{\delta}}]e^{\alpha\|.\|^r}\|_2\\ &\leq& 2\pi a\sqrt{L}+\|\mathcal{F}_2[V_{\tilde{\delta}}]e^{\alpha\|.\|^r}\|_2. \end{eqnarray*} Now, by the change of variables $u=t\cos\phi$, $v=t\sin\phi$, \begin{eqnarray*} \|\mathcal{F}_2[V_{\tilde{\delta}}]e^{\alpha\|.\|^r}\|_2^2&=&\int_{\mathbb{R}^2}|\mathcal{F}_2[V_{\tilde{\delta}}](w)|^2e^{2\alpha\|w\|^r}dw\\ &=&\int_0^\pi\int_{\mathbb{R}}|t||\mathcal{F}_2[V_{\tilde{\delta}}](t\cos\phi,t\sin\phi)|^2e^{2\alpha|t|^r}dtd\phi\\ &=&\pi\int_{\mathbb{R}}|t||J_{\tilde{\delta}}(t)|^2e^{2\alpha|t|^r}dt\\ &\leq& 2^2\pi^2L\alpha r\tilde{\delta}^{2-r}e^{2\alpha/\tilde{\delta}^r}2\int^\infty_{(\lambda+\frac{1}{\tilde{\delta}^r})^{1/r}} t e^{-2\alpha t^r}dt\leq 2^2\pi^2Le^{-2\alpha\lambda}. \end{eqnarray*} Thus, it is enough to take $a=1-e^{-\alpha\lambda/2}$ to get $W_{\rho_1}\in\mathcal{A}(\alpha,r,L(1-e^{-\alpha\lambda/2}+e^{-\alpha\lambda})^2) \subset\mathcal{A}(\alpha,r,L).$ \paragraph{\textbf{Proof of \eqref{hyp2}}} Noticing that $\widetilde{W}_{\rho_0}$ and $\widetilde{V}_{\tilde{\delta}}$ are positive functions, we get \begin{eqnarray*} |\|W_{\rho_1}\|^2_2-\|W_{\rho_0}\|^2_2|&\geq &\frac{1}{(2\pi)^2}\int_{\mathbb{R}^2}|\widetilde{V}_{\tilde{\delta}}(w)|^2dw =\frac{\pi}{(2\pi)^2}\int_{\mathbb{R}}|t||J_{\tilde{\delta}}(t)|^2dt\\ &\geq &\frac{1}{(2\pi)^2}2^2\pi^2 L\alpha r\tilde{\delta}^{2-r}e^{2\alpha/\tilde{\delta}^r}2 \int_{(2\lambda+\frac{1}{\tilde{\delta}^r})^{1/r}}^{(D-2\lambda+\frac{1}{\tilde{\delta}^r})^{1/r}}t e^{-4\alpha t^r}dt\\ &=&2 L\alpha r\tilde{\delta}^{2-r}e^{2\alpha/\tilde{\delta}^r} \int_{(2\lambda+\frac{1}{\tilde{\delta}^r})^{1/r}}^{(D-2\lambda+\frac{1}{\tilde{\delta}^r})^{1/r}}t e^{-4\alpha t^r}dt\\ &\geq &\frac{1}{2}Le^{2\alpha /\tilde{\delta}^r}\left(e^{-4\alpha(2\lambda+\frac{1}{\tilde{\delta}^r})}\left(1+o(1)\right)-e^{-4\alpha (D-2\lambda+\frac{1}{\tilde{\delta}^r})}\left(1+o(1)\right)\right)\\ &\geq &\frac{1}{2}Le^{-2\alpha/\tilde{\delta}^r}\left(e^{-8\alpha\lambda}-e^{-4\alpha(D-2\lambda)}\right)\left(1+o(1)\right)\\ &=&2\phi_n\left(e^{-8\alpha\lambda}-e^{-4\alpha(D-2\lambda)}\right)\left(1+o(1)\right) \end{eqnarray*} for $n$ large enough, with $\phi_n=\frac{1}{4}\varphi_n$, where $\varphi_n$ is the rate of convergence defined in \eqref{vitesse}. Note that we obtain lower bounds with $\tilde{\delta}$ the solution of \eqref{equah} in the case $0<r<2$, while for $r=2$ we obtain optimal rates (up to a logarithmic factor) of order $(n\log n)^{-\frac{\alpha}{a+\alpha}}$, with $a$ defined in \eqref{equahdeux}.
\paragraph{\textbf{Proof of \eqref{hyp3}}} Let us now bound $n\chi^2$. From Lemma 6.1 we get that $p_0(x)\geq Cx^{-2}$ for all $|x|\geq 1$. After convolution with the Gaussian density of the noise, the asymptotic decay cannot be faster, so that $$p_0^\eta(y)\geq \frac{C_1}{y^2},\quad\forall\, |y|\geq M,$$ for some fixed $M>0$. Note that $C$ denotes a constant which may change throughout the proof. \begin{eqnarray} n\chi^2&=&n\pi\int\frac{(p_1^\eta(y)-p_0^\eta(y))^2}{p_0^\eta(y)}dy\nonumber\\ \label{equaun} &\leq &Cn\left(C(M)\|p_1^\eta(y)-p_0^\eta(y)\|^2_2+\int_{|y|>M}y^2\left(p_1^\eta(y)-p_0^\eta(y)\right)^2dy\right). \end{eqnarray} For the first term, we have \begin{eqnarray} \|p_1^\eta(y)-p_0^\eta(y)\|^2_2&=&C\int|J_{\tilde{\delta}}(t)|^2e^{-(1-\eta)t^2/(2\eta)}dt\nonumber\\ &\leq &C\tilde{\delta}^{2-r}e^{2\alpha/\tilde{\delta}^r}\int^{(D-\lambda+\frac{1}{\tilde{\delta}^r})^{1/r}}_{(\lambda+\frac{1}{\tilde{\delta}^r})^{1/r}} e^{-(1-\eta)t^2/(2\eta)-4\alpha t^r}dt\nonumber\\ &\leq &C\tilde{\delta}^{3-r}e^{2\alpha/\tilde{\delta}^r}\int^{\infty}_{(\lambda+\frac{1}{\tilde{\delta}^r})^{1/r}} te^{-(1-\eta)t^2/(2\eta)-4\alpha t^r}dt\nonumber\\ \label{equadeux} &\leq &C\tilde{\delta}^{3-r}\exp{\left(-\frac{2\alpha}{\tilde{\delta}^r}-\frac{1-\eta}{2\eta\tilde{\delta}^2}\right)}. \end{eqnarray} Let us now consider the second part of the sum: \begin{eqnarray} \int_{|y|>M} y^2(p_1^\eta(y)-p_0^\eta(y))^2dy&\leq &\int(\frac{\partial}{\partial t}(J_{\tilde{\delta}}(t)e^{-(1-\eta)t^2/(4\eta)}))^2dt\nonumber\\ &\leq &C\tilde{\delta}^{2-r}e^{2\alpha/\tilde{\delta}^r}\int^{\infty}_{(\lambda+\frac{1}{\tilde{\delta}^r})^{1/r}}t^2e^{-(1-\eta)t^2/(2\eta)}e^{-4\alpha t^r}dt\nonumber\\ \label{equatrois} &\leq &C\tilde{\delta}^{1-r}\exp{\left(-\frac{2\alpha}{\tilde{\delta}^r}-\frac{1-\eta}{2\eta\tilde{\delta}^2}\right)}. \end{eqnarray} In the case $0<r<2$, taking $\tilde{\delta}$ as the solution of \eqref{equah}, the expressions in \eqref{equadeux} and \eqref{equatrois} tend to $0$, which together with \eqref{equaun} concludes the proof. For the case $r=2$, we prove a weaker form of \eqref{hyp3}, namely $n\chi^2=O(1)$. With $\tilde{\delta}$ given in \eqref{equahdeux}, the expression in \eqref{equadeux} tends to $0$ while the expression in \eqref{equatrois} stays bounded as $n\rightarrow\infty$; together with \eqref{equaun} this yields the desired result. \bibliographystyle{plain}
\section{Introduction} In recent years, we have invested a large amount of effort to create a versatile data set about the evolution of software projects that combines data from different sources, based on our SmartSHARK platform for replicable and reproducible software repository mining~\cite{Trautsch2017, Trautsch2020}. The core of this approach is to combine all data we generated for different publications in a single database that grows with every publication. This means not only that we add more projects over time, but also that the amount of information for the projects already within the database increases. By now, our database contains the following data: \begin{itemize} \item Data collected from Git, e.g., the commit messages, authors, dates, as well as the changed hunks. The clone of the Git repository at the time of collection is also stored to enable further analysis of the source code. \item Data about the source code \textbf{for each commit} focused on Java, e.g., software metrics (size, complexity, documentation, code clones), static analysis warnings from PMD\footnote{https://pmd.github.io/}, and the number of nodes of each type in the AST of a file. \item Data about code changes, i.e., the detection of change types with ChangeDistiller~\cite{Fluri2007}, as well as refactorings with RefDiff~\cite{Silva2017} and RefactoringMiner~\cite{Tsantalis2018}. \item Data collected from Jira, i.e., the issues, comments, and changes made to issues. \item Data collected from GitHub, i.e., issues, pull requests, and code reviews as part of pull requests. \item Data collected from mailing lists, i.e., all emails from the developer and user mailing lists. \item Links between commits and issues, as well as links between commits and pull requests. \item Manually validated links between commits and bug issues, as well as the type of issues labeled as bug, for 38 projects~\cite{Herbold2019}. \item Manually validated line labels that mark which changes contributed to a bug fix for 23 projects, as well as partial data for five additional projects~\cite{herbold2020largescale}. \item Annotations for commits and changes, i.e., bug fixing changes including their probable inducing changes, whether changes modified Javadocs or inline comments, whether TODOs were added or removed, and whether test code changed or refactorings were detected. \item Travis CI build logs and build status information for all projects that use Travis CI. \end{itemize} The identities of developers are managed in a separate collection that is not shared publicly, unless specifically requested with a description of the purpose. Hence, developers are only identified by the (random) object identifier in the database.
\section{Data Description} This publication describes version 2.1 of the data, which is publicly available.\footnote{Full data: \url{http://141.5.100.155/smartshark_2_1.agz}\\Small version without code entity states, code group states, and clone instances: \url{http://141.5.100.155/smartshark_2_1.agz}\\Please check \url{https://smartshark.github.io/dbreleases/} for mirrors or newer releases.\\DOIs will follow with the official publication.} Older releases are available on our homepage, where we will also post future releases.\footnote{\url{https://smartshark.github.io/dbreleases/}} A description of how to set up the data for local use, as well as an example for accessing the data with Python, is available online.\footnote{https://smartshark.github.io/fordevs/} In the following, we describe the data sources, the tools we used for the data collection, the size and format of the data, and the schema of our database, explain the sampling strategy we used, and list the projects for which data is available. \subsection{Data Sources} The raw data was collected from four different sources. \begin{itemize} \item Version control data is collected directly from a clone of the Git repository. The repositories are retrieved from GitHub.\footnote{\url{https://www.github.com/}} \item Issue tracking data is collected from the Apache Jira\footnote{\url{https://issues.apache.org/jira/}} and GitHub. \item Pull request data is collected from GitHub. \item Continuous integration data is collected from Travis CI.\footnote{\url{https://www.travis-ci.com/}} \end{itemize} All data is publicly available, but the platform providers may require registration to scrape the data. \subsection{Data Collection Tools} Figure~\ref{fig:tools} shows the data collection tools we used. All tools are available on GitHub.\footnote{\url{https://github.com/smartshark/}} The vcsSHARK downloads a clone of the Git repository and collects metadata about commits. The coastSHARK, mecoSHARK, changeSHARK, refSHARK, and rminerSHARK use the clone of the repository to collect software metrics and detect refactorings. The memeSHARK removes duplicate software metrics and reduces the data volume. The travisSHARK collects data from Travis CI and links it to the commits. The prSHARK collects pull requests including reviews and comments from GitHub and links them to commits. The mailingSHARK collects emails from mailing lists. The issueSHARK collects issue tracking data from Jira and GitHub issues. The linkSHARK establishes links between the issues and commits. The labelSHARK uses these links, the textual differences of changes, and changes to code metrics to compute labels for the commits. These labels are used by the inducingSHARK to find the probable changes that are inducing for the labels, e.g., for bugs. The visualSHARK is used for manual validation of data, e.g., of links between commits and issues, issue types, and changes that contribute to bug fixes. This information is used by the labelSHARK and inducingSHARK to improve data that relies on this information, e.g., bug labels for commits. For completeness, we also mention the identitySHARK, which can be used to merge multiple identities of the same person in our data (e.g., different user name, same email). However, this data is not part of our public data set and will only be made available upon request if the desired usage is clearly specified and does not raise any ethical or data privacy related concerns.
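To complement the online example mentioned above, the following minimal Python sketch illustrates direct access to the database with pymongo. It assumes a locally restored copy of the data set named \texttt{smartshark\_2\_1}; the collection names correspond to the schema described in the next sections, while the concrete field names (e.g., \texttt{vcs\_system\_id}, \texttt{revision\_hash}, \texttt{committer\_date}) should be verified against the online schema documentation.
\begin{verbatim}
from pymongo import MongoClient

# Connect to a locally restored copy of the data set
# (database name assumed to be 'smartshark_2_1').
client = MongoClient('localhost', 27017)
db = client['smartshark_2_1']

# Resolve a project and the version control system assigned
# to it via the project id.
project = db['project'].find_one({'name': 'commons-math'})
vcs = db['vcs_system'].find_one({'project_id': project['_id']})

# Iterate over the commit metadata of this repository.
for commit in db['commit'].find({'vcs_system_id': vcs['_id']}).limit(5):
    print(commit['revision_hash'], commit['committer_date'])
\end{verbatim}
The same queries can be expressed more conveniently through the ORM layers for Python and Java mentioned below.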
\begin{figure} \includegraphics[width=\linewidth]{2021-01-sharkTools_cropped} \caption{Overview of the data collection tools. The arrows indicate dependencies between tools. The colors indicate that different data sources are used (blue: Git repository; green: Jira and GitHub issues; light blue: GitHub pull requests; yellow: mailing list archive; orange: Travis CI; grey: manual validation). A mix of colors means that data from different sources is required, as indicated by the dependencies and the colors.} \label{fig:tools} \end{figure} \subsection{Size and Format} The data set currently contains 77 projects; the manual validations are available for a subset of 38 projects. Overall, these projects have 366,322 commits, 163,057 issues, 47,303 pull requests, and 2,987,591 emails. All data is stored in a MongoDB. The size of the complete MongoDB is 1.2 Terabytes. This size drops drastically to about 40 Gigabytes if we omit the collections with code clone data and software metrics. The data is still growing, and additional projects will be made available through subsequent releases of the data set. Drivers for MongoDB are available for many programming languages.\footnote{https://docs.mongodb.com/drivers/} Additionally, we provide Object-Relational Mapping (ORM) layers for Python and Java. \subsection{Overview of the Database Schema} \begin{figure*} \centering \includegraphics[width=\textwidth]{2020-08-smartshark-schema_cropped} \caption{Overview of the database schema and the relationships between the collections.} \label{fig:schema} \end{figure*} We currently have data from four types of repositories: version control systems, issue tracking systems, pull request systems, and mailing lists. Figure~\ref{fig:schema} gives an overview of our database schema. A complete documentation is available online.\footnote{https://smartshark2.informatik.uni-goettingen.de/documentation/} Each project has an entry with its name and id. The software repositories are assigned to projects by their id. The simplest source are the mailing lists: the emails are stored in the \texttt{message} collection. The issue tracking data is stored in three collections: \texttt{issue} stores the data about the issue itself, e.g., the title, description, and type; \texttt{issue\_comment} stores the discussion of the issue; and \texttt{event} stores any update made to the issue, e.g., changes of the type, status, or description. This way, the complete life-cycle including all updates is available in the database. The pull requests are organized similarly, but require seven collections due to the direct relationship to source code and associated code reviews: \texttt{pull\_request} stores the metadata of the pull request, e.g., the title, description, and the associated branches; \texttt{pull\_request\_comment} stores the discussion of the pull request; \texttt{pull\_request\_event} stores any update made to the pull request; \texttt{pull\_request\_file} and \texttt{pull\_request\_commit} store references to files and commits within pull requests; and \texttt{pull\_request\_review} and \texttt{pull\_request\_review\_comment} store the information about code reviews. The version control system data is relatively complex due to the diversity of the data stored. The main collection is \texttt{commit}, which contains the general metadata about the commits, e.g., the author, committer, revision hash, commit message, and time stamp.
Moreover, \texttt{commit} also contains computed data, e.g., labels like bug fixing or links to issues. The \texttt{file\_action} collection groups all changes made to a file in a commit; \texttt{hunk} contains the actual changes, including the diffs. The general information about the history is completed by the \texttt{branch} and \texttt{tag} collections. The \texttt{code\_group\_state} and \texttt{code\_entity\_state} collections contain the results of the static analysis we run on the repository at each commit. Code groups are, e.g., packages; code entities are, e.g., files, classes, and methods. We removed duplicate code entity states, e.g., files where the measurements did not change from one commit to the next. This way, we can reduce the data volume by over 11 Terabytes. To still allow the identification of the code entity states at the time of a commit, the \texttt{commit} collection contains a list of references to the correct code entity states. While the code entity states also contain a link to the commit for which they were measured, this link should be avoided, because users may inadvertently assume that they could find all code entities for a specific commit this way, which is not the case. The \texttt{clone\_instance} collection stores data about code clones. The automatically detected refactorings are stored in the \texttt{refactoring} collection. The \texttt{travis\_build} collection contains the general information about the build, e.g., time stamps and the build status, and the \texttt{travis\_job} collection contains the logs of the individual build jobs. The \texttt{people} collection is not associated with any specific data source. Instead, we map all metadata that contains accounts, names, or email addresses to this collection and store the name, email address, and user name. The \texttt{identity} collection contains lists of people that very likely belong to the same identity. We use our own identity merging algorithm, which is available online.\footnote{\url{https://github.com/smartshark/identitySHARK} (a scientific publication about our algorithm is not yet available)} \subsection{Sampling Strategy and Representativeness} The data contains only projects from the Apache Software Foundation that have Java as the main language. The projects all have between 1,000 and 20,000 commits, i.e., the data does not contain very small or very large projects. The reason for the exclusion of very large projects is the data volume and processing time for the static analysis of each commit. While the sample is not randomly drawn, it should be representative of well-maintained Java projects that have high standards for their development processes, especially with respect to issue tracking. Moreover, the projects cover different kinds of applications, including build systems (ant-ivy), Web applications (e.g., jspwiki), database frameworks (e.g., calcite), big data processing tools (e.g., kylin), and general purpose libraries (commons). \subsection{List of Projects} We have collected the data described above for the following projects. Manually validated data is available for the italicized projects. Travis CI data is available for all bold-faced projects.
activemq, \textit{ant-ivy}, \textit{archiva}, \textbf{bigtop}, \textbf{\textit{calcite}}, \textbf{\textit{cayenne}}, \textbf{\textit{commons-bcel}}, \textbf{\textit{commons-beanutils}}, \textbf{\textit{commons-codec}}, \textbf{\textit{commons-collections}}, \textbf{\textit{commons-compress}}, \textbf{\textit{commons-configuration}}, \textbf{\textit{commons-dbcp}}, \textbf{\textit{commons-digester}}, \textbf{commons-imaging}, \textbf{\textit{commons-io}}, \textbf{\textit{commons-jcs}}, \textbf{\textit{commons-jexl}}, \textbf{\textit{commons-lang}}, \textbf{\textit{commons-math}}, \textbf{\textit{commons-net}}, \textbf{commons-rdf}, \textit{commons-scxml}, \textbf{\textit{commons-validator}}, \textbf{\textit{commons-vfs}}, \textbf{curator}, cxf-fediz, \textit{deltaspike}, derby, directory-fortress-core, directory-kerby, directory-studio, \textit{eagle}, falcon, \textbf{fineract}, \textbf{flume}, \textbf{freemarker}, \textit{giraph}, \textit{gora}, helix, \textbf{httpcomponents-client}, \textbf{httpcomponents-core}, jackrabbit, jena, \textit{jspwiki}, kafka, \textbf{\textit{knox}}, \textbf{\textit{kylin}}, \textit{lens}, \textbf{\textit{mahout}}, \textbf{\textit{manifoldcf}}, maven, mina-sshd, \textbf{nifi}, \textit{nutch}, oozie, openjpa, openwebbeans, \textbf{\textit{opennlp}}, \textbf{\textit{parquet-mr}}, \textbf{pdfbox}, phoenix, pig, \textbf{ranger}, roller, \textbf{samza}, \textit{santuario-java}, \textbf{storm}, \textbf{streams}, \textbf{struts}, \textit{systemml}, \textbf{tez}, \textit{tika}, \textit{wss4j}, xerces2-j, \textbf{xmlgraphics-batik}, \textbf{zeppelin}, zookeeper. The bug-inducing changes are not available for maven, because the project uses multiple issue trackers, which we currently cannot handle.\footnote{The release 2.2 of this data set with more projects and this missing inducing data is scheduled for December 2022. This preprint will then be updated with the final list of projects available for the challenge.} \section{Usage Examples} The SmartSHARK data is versatile and allows different kinds of research. In the past, we have focused mostly on the analysis of bugs, as well as longitudinal analysis of trends within the development history. Below, we list some examples of papers that used (a subset of) this data set. Please note that some of the papers below are still under review and not yet published in their final versions. \begin{itemize} \item We evaluated defect prediction data quality with a focus on SZZ and manual validation~\cite{Herbold2019}. The manuscript describes how we manually validated the links between commits and issues, as well as the issue types, and how we used SmartSHARK to create release-level defect prediction data. \item We evaluated trends of static analysis warnings from PMD and the usage of custom rules for PMD, as well as the impact on defect density~\cite{Trautsch2020b}. \item We evaluated the impact of static source code metrics and static analysis warnings from PMD on just-in-time defect prediction~\cite{Trautsch2020a}. \item We used the manually validated issue type data to train and evaluate issue type prediction models~\cite{Herbold2020a}. \item We provided the data for the modelling of developer behaviour through Hidden Markov Models (HMMs)~\cite{Herbold2019a}. \item We analyzed the tangling within bug fixing commits as well as the capability of researchers to manually identify tangling~\cite{Herbold2020}. \item We conducted an initial evaluation of a cost model for defect prediction~\cite{Herbold2019c}.
\end{itemize} \section{Possible Research Questions} The strength of our data is the capability to reason over data from different information sources. Questions regarding differences between the discussions on mailing lists and within issue trackers can be answered without scraping data from multiple sources. Moreover, the static analysis results and the labeling of changes allow for research into relationships, e.g., between refactorings and self-admitted technical debt or bug fixes. The availability of manually validated data enables the evaluation of the validity of heuristics, as well as the development of improved heuristics, e.g., for the labeling of bug fixes. Moreover, while we already established many links between the data sources, there are more opportunities that could be considered, e.g., the links between the mailing list and commits, or the mailing list and reported issues. Similarly, the links between pull requests and bugs can be explored, e.g., to understand why post-release bugs were not spotted during code review. \section{Limitations} The greatest limitation of the SmartSHARK data is the number of projects for which data is available. The reason for this is the large computational effort required for the static analysis of the Java code of each commit. This limits the external validity not only due to the sample size, but also due to the focus on Java as the programming language. In the future, we plan to overcome this limitation by extending the database with a large set of projects for which we omit the static analysis and, thereby, are able to scale up the number of projects. While this will not support the same research questions, there are many interesting questions that can be answered without a static analysis of the source code for each commit. \section{Conclusion} The SmartSHARK data set provides a rich source of data that enables us to explore research questions that require data from different sources and/or longitudinal data over time. Since all data is stored in a single database, results are easy to reproduce. The data is still growing, and future releases will further extend the data with more projects and additional data sources. \nocite{*} \bibliographystyle{IEEEtran}
\section{I. Introduction} Ultrashort-pulse laser micromachining of materials is attracting growing interest due to the possibility of achieving much higher quality of the laser-processed surfaces as compared to longer laser pulses \cite{Malinauskas.2016}. The enhanced processing quality is largely conditioned by the difference in the laser ablation mechanisms for ultrashort laser pulses (in the range from femtoseconds to a dozen picoseconds) as compared to longer pulses. The difference results from the strong thermal and stress confinements inherent to ultrashort laser pulses \cite{Paltauf.2003}. Although the conditions of stress confinement can generally be achieved at different pulse durations \cite{Paltauf.2003}, for ultrashort laser pulses this effect is more distinct, so that ablation can occur in the form of mechanical fracture and ejection of a layer of the irradiated material (referred to as spallation) as a result of the development of tensile stresses \cite{Zhigilei.1999,Ivanov.2003}. Spallation of laser-irradiated materials from the front target surface, and from the rear surface in the case of films/foils, has been extensively studied both experimentally \cite{Fox.1973,Gilath.1988,Eliezer.1990,Tinten.1998,Tamura.2001,Koch.2005,Savolainen.2011,Puerto.2012} and theoretically \cite{Eliezer.1990,Zhigilei.1999,Ivanov.2003,Zhakhovskii.2008}. In all experiments on the rear-surface spallation of metals, the irradiation spot size was considerably larger than the metal film thickness \cite{Fox.1973,Gilath.1988,Eliezer.1990,Tamura.2001}. It has been proven that the rear-side spallation effect is conditioned by the reflection of the laser-induced shock wave from the rear surface, with the formation of a region with a strain rate sufficiently high for the creation of voids/cracks. For the case of front-surface spallation, which can also manifest itself as swelling, the spalled layers were found to be of submicrometer thickness, with the size dependent on the material properties \cite{Koch.2005,Savolainen.2011,Puerto.2012}. In this paper, we report on front-side delamination of a layer from yttria-stabilized zirconia (YSZ) foil upon femtosecond laser processing of its surface with a low numerical aperture lens. Here we call this effect 'delamination' to underline the difference from the spallation mechanism mentioned above. In our case, delamination takes place from the front side of the target irradiated with multiple laser pulses at fluences $F$ exceeding the surface ablation threshold. The delaminated layer thickness ranges from ten to several dozens of micrometers, depending on the irradiation conditions. It must be underlined that the irradiation spot size on the material surface is much smaller than the thickness of the YSZ samples. We discuss the physical mechanisms of this effect and demonstrate that delamination happens due to a complex interplay of two major phenomena: self-focusing of the laser beam transmitted toward the material bulk and its defocusing by the free-electron plasma generated in the surface layer of the sample. It is noteworthy that Kim et al. \cite{Kim} have recently reported on using femtosecond laser pulses for slicing 4H-SiC wafers by tight beam focusing (NA = 0.8) to a desired depth inside the sample. By using this method, exfoliation of a $\sim$260 $\mu$m 4H-SiC layer was achieved with smaller roughness and material losses as compared to conventional slicing techniques.
The laser irradiation conditions used in the present study differ considerably from those of work \cite{Kim}: the beam was focused on the sample surface with a low numerical aperture lens. However, due to the highly non-linear properties of YSZ ceramic, the observed delamination effect that is analyzed below can have some analogy with that reported in \cite{Kim}. \section{II. Experimental Setup} Experiments were performed with the diode-pumped fiber laser {\it Tangerine} produced by {\it Amplitude Systems}, emitting in the TEM$_{00}$-mode at a wavelength of 1030\,nm and a pulse duration of 290 fs (full width at half maximum, FWHM). The repetition rate is scalable up to 2\,MHz. The laser beam quality factor M$^2$ is about 1.15 and the beam diameter $D_{\textrm{FWHM}}$ is about 1.25\,mm at the laser output. The downstream beam expander by {\it Thorlabs} expands the beam diameter by a factor of 3, to 3.75\,mm. The beam is directed by a set of mirrors to the Galvo-scanner {\it SCANcube 7} by {\it SCANLAB}. At the scanner output, an F-theta lens with a focal length of $f = 63\,$mm (numerical aperture 0.06) by {\it SCANLAB} focuses the beam on the sample surface located on the XYZ stage, with the z-axis parallel to the laser beam. Processed areas were either $0.5\times0.5$ mm$^2$ or $5\times 5$ mm$^2$. The beam waist in the focal plane is $w_0 \approx 7.5\,$\textmu m and the Rayleigh length $z_R\approx$ 149 \textmu m is close to the thickness of the irradiated sample. The yttria-stabilized zirconia (8YSZ) used in these studies is zirconium dioxide with a molar percentage of 8\% yttrium oxide, which is added to stabilize the cubic lattice. We note, however, that recent publications indicate that a complete stabilization at room temperature is not achieved and that there are inclusions of the tetragonal phase, the so-called t$''$ phase \cite{Butz.2011}. The melting point of 8YSZ is about 2700$^\circ$C. Further material data is given in Table \ref{material}. The samples were provided by {\it Forschungszentrum J{\"u}lich GmbH, IEK1} and purchased from {\it KERAFOL}. The dimensions of the samples are 25$\times$25$\times$0.2 mm$^3$. \begin{table} \centering \caption{Physical properties of 8YSZ.} \begin{tabular} { |l|c|c|l|} \hline Characteristics & Formula symbol & Value & Unit \\ \hline Crystal lattice \cite{Selcuk.1997} & - & cubic & - \\ Band gap \cite{Goetsch.2016} & $E_g$ & 5.3 & eV \\ Density\footnote{Manufacturer specification} & $\rho$ & 5950 & kg~m$^{-3}$ \\ Absorption depth at 1030\,nm\footnote{Obtained for virgin samples by measuring transmission and reflectance using \textit{UV-3600plus} by SHIMADZU} & $l_{a}$ & 53 & $\mu$m \\ Reflection coefficient at 1030\,nm$^b$ & $R$ & 0.52 & - \\ Heat capacity \cite{Vaen.2004} ($20^\circ$C) & $c_{p}$ & 500 & J~kg$^{-1}$K$^{-1}$ \\ Heat capacity \cite{Vaen.2004} ($1200^\circ$C) & $c_{p}$ & 670 & J~kg$^{-1}$K$^{-1}$ \\ Thermal conductivity \cite{Vaen.2004} & $\lambda_{th}$ & 2.2 & W~m$^{-1}$K$^{-1}$ \\ Thermal diffusivity ($20^\circ$C) & $\kappa$ & $7.4\cdot 10^{-7}$ & m$^{2}~$s$^{-1}$ \\ \hline \end{tabular} \label{material} \end{table} \section{III. Results and Discussion} \subsection{A. General features of 8YSZ processing} Depending on the processing parameters (laser fluence and overlap of the irradiation spots upon scanning), different modes of sample modification/ablation are observed. We define the ablation threshold as the highest fluence at which no visible modification is seen in white-light interferometry (WLI).
Additionally, we introduce a threshold fluence for the onset of delamination, $F_{th}^\textup{delam}$. This threshold is determined as the lowest investigated fluence at which a layer of delaminated material can be found on top of the laser-processed area, though this layer can cover the processed area only partially. These two thresholds vary slightly from one set of experiments to another, which can be attributed to initial sample defects. They depend on the scanning speed and the pulse repetition rate. As will be shown below, varying the scanning speed can compensate, to a certain extent, for the change in the repetition rate to ensure a similar overlap (OL). However, the delamination threshold and the delaminated layer thicknesses can be slightly different for the same overlaps at different repetition rates, which can be attributed to the heat accumulation effect. Below we give the thresholds as the ranges of fluence in which either ablation or delamination (the ablation or delamination threshold, respectively) starts to be observed in all our experiments. Figure \ref{onset} demonstrates typical scanning electron microscope (SEM) images of laser-processed surfaces when delamination is either not observed or only partially seen. Between the two above-defined thresholds, the processed area does not contain microparticulates and is relatively smooth, as demonstrated in Fig.~\ref{onset}(a). At fluences just above the delamination threshold for a particular overlap, the delaminated layer covers the processed surface only partially, as shown in Fig.~\ref{onset}(b)-(c). It is always attached to the side of the processed area where laser scanning was started. This can be an indication of an accumulation effect, which leads to destruction of the delaminated layer upon continued scanning. The roughness of the unprocessed YSZ surfaces is 0.15 $\mu$m. The roughness of the processed areas in Fig.~\ref{onset}(a) and Fig.~\ref{onset}(b) (in the white-framed area) was 0.5 $\mu$m and 2 $\mu$m, respectively. \begin{figure} [h] \includegraphics[width=16cm]{onset} \caption{SEM images of the laser-processed areas. (a) OL = 22\% and $F=20.1$ J~cm$^{-2}$. No delamination features are observed. (b) OL = 75\% and $F=24.9$ J~cm$^{-2}$. The largest part is clean of any particulates (outlined by the white frame), while the delaminated layer can be seen in the black-framed region. (c) OL = 83\% and $F=8.8$ J~cm$^{-2}$. The delaminated layer covers a substantial part of the processed surface (bottom part of the image) and its boundary in the form of flakes is clearly seen in the top part of the image.} \label{onset} \end{figure} With further increasing fluence or overlap, the area becomes fully covered by the delaminated layer and the thickness of the layer increases, as clearly seen in Fig.~\ref{overview}(a) (areas marked by numbers 20, 19, and 18). The figure presents a cross-sectional image of the processed areas with the delaminated layers. To obtain this image, the sample was cut across the processed areas. It should be noted that, in these cases of delamination, all three surfaces are modified with clear signs of melting: both the top and bottom of the delaminated layer as well as the surface from which delamination occurred. It can also be noticed that the delaminated layer rises somewhat above the initial sample surface. Figure \ref{overview}(b) presents a magnified view of the contact between the delaminated layer and the underlying sample, as well as the edge where the processed and virgin sample areas meet.
The SEM image in Fig.~\ref{overview}(c) shows a free-standing delaminated layer obtained due to a particular breakout at the cutting edge. The perspective is 30$^\circ$-tilted from the upright position. Fig.~\ref{overview}(d) confirms that the bottom surface of this delaminated layer is modified. \begin{figure} \includegraphics[width=12cm]{alltogether} \caption{SEM images of laser-processed areas with different fluences. The overlap (83.1\%) and repetition rate (100 kHz) were constant for all cases shown. (a) Cross-sectional view of the laser-processed sample (30$^\circ$-tilted view). Laser fluences are 7.2 J~cm$^{-2}$, 7.9 J~cm$^{-2}$ and 8.6 J~cm$^{-2}$ (marked as 20, 19 and 18, respectively). (b) Magnified view of the edge of processed area 18 in (a) at its contact with the virgin sample area. Melting and ablation features can be recognized. (c) 30$^\circ$-tilted view of the delaminated layer at a laser fluence of 17.2 J~cm$^{-2}$. (d) Magnified view of (c), showing the contact between the delaminated layer and the underlying sample.} \label{overview} \end{figure} Here we hypothesize that, below the delamination threshold, the observed ablation of the ceramics also proceeds via delamination, but the delaminated layer is too thin. As a result, it is destroyed, most probably mechanically, via cracking and ejection from the processed sample. With increasing thickness, the delaminated layer withstands cracking and remains attached to the sample. This assumption can be verified by inspection of the deposit of the ablation products on a collecting substrate. To do this, a microscope glass substrate was placed slightly aside from the laser beam, still ensuring deposition of the laser plume. The results are shown in Fig.~\ref{deposite_sm1} for an irradiation spot overlap of 92\%. The upper row presents images of the processed area (a) and the glass substrate in bright (b) and dark (c) fields for a laser fluence of 1.6 J~cm$^{-2}$. The delaminated layer is partially destroyed and, consequently, its large fragments are abundantly deposited on the substrates. At the same time, at a laser fluence of 5.7 J~cm$^{-2}$, when the delaminated layer remains completely attached to the sample (d), only small rare particulates can be recognized on the glass substrate. \begin{figure} [h] \includegraphics[width=12cm]{deposite_sm1} \caption{(a)-(c) Images of the laser-processed area at a fluence of 1.6 J~cm$^{-2}$ obtained by optical microscopy: (a) the partly destroyed delaminated layer, (b) the corresponding glass substrate with the deposit of the ablation products in bright field and (c) in dark field. The delaminated layer is partially destroyed and its large fragments are clearly seen on the substrate. (d)-(f) The same for a laser fluence of 5.7 J~cm$^{-2}$, when the whole delaminated layer remains attached to the sample. Only small rare particulates can be recognized on the substrate. OL = 92\% for all images.} \label{deposite_sm1} \end{figure} Below we address this assumption in more detail and provide the most probable physical mechanism of the observed delamination phenomenon. \subsection{B.
Ablation depth and delaminated layer thickness} Based on the above assumption that the visible ablation naturally transitions to delamination with increasing laser fluence (note that the delaminated layer remaining on the sample after processing can be mechanically removed from the processed area), we introduce here a \textit{unified ablation depth} as the difference between the levels of the virgin surface and the bottom of either the ablated (without delamination signs) or the delaminated area. The unified ablation depth was determined by white-light interferometry, using the \textit{Polytec TMS 1200} with Mirau objectives. According to the device specification, the uncertainty in the step measurement varies between 0.18 $\mu$m and 0.1 $\mu$m, depending on the step size. The reproducibility of the method was tested by measuring 15 different areas made on different YSZ samples with the same processing parameters. The standard deviation of the measurements was approximately 0.8 $\mu$m, which reveals a certain variation among the studied YSZ samples. For areas which were not fully covered by the delaminated layer, the depth was measured in the regions free of delamination features. For the processed areas completely covered by the delaminated layer, the corresponding cross-sectional images were analyzed. \begin{table} \centering \caption{Ranges of fluence and peak power above which delamination is observed. The data are presented for several overlaps and two repetition rates, 200 and 100 kHz. The corresponding unified ablation depths as well as the pulse-to-pulse and line-to-line shifts, $\Delta$x and $\Delta$y respectively, are also given.} \begin{tabular} {|c|c|c|c|c|c|c|} \hline Overlap &$\Delta$x &$\Delta$y & $F_{th}^\textup{delam}$ & Peak power & Unified ablation & Repetition rate \\ in \% & in \textmu m & in \textmu m & in J~cm$^{-2}$ & in MW & depth in \textmu m & in kHz\\ \hline 92 &1 &1 & 2.2 - 2.6 & 13 - 15 & 11.2 & 200 \\ 83 &2 &2 & 4.3 - 4.9 & 26 - 30 & 15.7 & 200 \\ 75 &3 &3 & 9.6 - 10.4 & 59 - 63 & 18.4 & 200 \\ \hline 75 &3 &3 & 12.7 - 13.5 & 78 - 82 & 22.8 & 100 \\ 67 &4 &4 & 22.9 - 23.9 & 140 - 146 & 23.1 & 100 \\ \hline \end{tabular} \label{treshold} \end{table} Table~\ref{treshold} outlines the tendencies of the unified ablation depth evolution as a function of the overlap between irradiation spots. The ablation depths are given at the delamination thresholds, indicated as fluence ranges (see the comment above), for two repetition rates, 100 and 200 kHz. The overlaps in the x- and y-directions are always the same, as indicated in the table by the sample shifts $\Delta$x and $\Delta$y between two subsequent pulses along the scanning line and between subsequent scanning lines, respectively. All processed areas here were the same, 0.5$\times$0.5 mm$^2$. Decreasing the overlap requires a higher fluence to observe the delaminated layer. Similarly, when decreasing the repetition rate while preserving the overlap, higher fluences have to be applied to obtain a delaminated layer attached to the processed surface. We note here that $F_{th}^\textup{delam}$ is typically several times higher than the ablation threshold. Thus, for 75\% overlap and 200\,kHz, the ablation threshold was found to be in the range of 2.2 - 2.6\,J~cm$^{-2}$, four times smaller than $F_{th}^\textup{delam}$ at this regime of processing, see Table~\ref{treshold}.
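As a side note, the peak powers listed in Table~\ref{treshold} can be recovered from the fluence column if the fluence is interpreted as the pulse energy divided by the focal spot area $\pi w_0^2$ and the peak power as the pulse energy divided by the pulse duration. The following short Python sketch illustrates this conversion; this interpretation is our reading of the table, not an explicit statement in the text:
\begin{verbatim}
import math

w0 = 7.5e-4    # beam waist in cm
tau = 290e-15  # pulse duration (FWHM) in s

def peak_power_mw(fluence):
    # Pulse energy assumed as E = F*pi*w0^2, peak power as P = E/tau.
    return fluence * math.pi * w0**2 / tau / 1e6

for f in (2.2, 2.6, 4.3, 4.9, 9.6, 10.4, 12.7, 13.5, 22.9, 23.9):
    print(f, round(peak_power_mw(f), 1))
# gives ~13.4-15.8, 26.2-29.9, 58.5-63.4, 77.4-82.3 and
# 139.5-145.6 MW, in line with the tabulated ranges.
\end{verbatim}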
Figure~\ref{thickness} presents the unified ablation depth (see the definition above) as a function of (a) fluence and (b) energy density dose $\Theta$, which is defined as the product of the single-pulse energy and the total number of pulses per area divided by the processed area size. All the results presented were obtained with two repetition rates, 100\,kHz and 200\,kHz, and three overlaps between pulses and lines, 67\%, 75\%, and 83\%. Symbols outlined in Fig.~\ref{thickness}(a) by circles refer to the processing regimes above the delamination thresholds. Based on Fig.~\ref{thickness}, several features of the ablation/delamination process can be highlighted: \begin{itemize} \item[-] The depth increases monotonically with increasing fluence (Fig.~\ref{thickness}(a)) and increasing energy density dose (Fig.~\ref{thickness}(b)). There are no visible peculiarities which would indicate a transition from ``pure" ablation to delamination. \item[-] Increasing the overlap at a constant single-pulse fluence considerably increases the unified ablation depth. \item[-] Doubling the repetition rate tends to slightly increase the unified ablation depth. \item[-] The energy density dose determines the unified ablation depth, as clearly seen in Fig.~\ref{thickness}(b). \item[-] A similar unified ablation depth can be achieved by different parameter sets of overlap and fluence which correspond to the same energy density dose. As will be shown in Section III.D, the fraction of the delaminated layer which remains attached to the processed sample depends on the overlap and the number of scans. \item[-] Interestingly, in all cases the delaminated layer survives on the processed area when the ablation reaches approximately 20 $\mu$m (somewhat smaller for higher overlap and slightly larger for smaller overlap). \end{itemize} \begin{figure} [h] \includegraphics[width=8cm]{thickness} \includegraphics[width=8cm]{Energydose} \caption{(a) Unified ablation depth as a function of laser pulse fluence for different overlaps and repetition rates. Circled data points correspond to fluences for which a delaminated layer could be observed. (b) Unified ablation depth as a function of the energy density dose. No significant difference in the ablation depth could be observed for similar energy doses when varying the parameter set of OL and fluence.} \label{thickness} \end{figure} The last feature indicates that a delaminated layer thinner than $\sim$20 \textmu m experiences cracking, fractures, and is ejected from the processed surface, as proven in Fig.~\ref{deposite_sm1}, where large fragments of ceramics deposited on a collecting substrate are demonstrated for the processing regimes below $F_{th}^\textup{delam}$. Upon reaching a certain depth, the layer becomes able to withstand cracking and remains attached to the sample. Taken together, the outlined features argue for delamination/fracturing of an essentially mechanical nature, whose mechanism will be addressed in Section III.F. \subsection{C. Structure of the delaminated layer} Figure~\ref{ablation} presents typical SEM images of the top surface of the delaminated layer and of the sample beneath it. It is apparent that ablation/modification of the material occurs both on the top of the delaminated layer and at its contact with the rest of the sample. The inset shows an unirradiated area with grain sizes significantly larger than the particles seen on the irradiated surfaces.
Remarkably, signs of the initial grain structure can be recognized in the ablation relief, see Fig.~\ref{ablation}(a). The reason is a high concentration of defects at grain boundaries, which provokes preferential ablation at the boundary sites \cite{Ribeiro1997}. Hence, this image supports considerable ablation from the external surface of the delaminated layer, which must be mediated by laser-induced breakdown and free-electron plasma formation at the sample surface \cite{Stuart.1996}. XRD (x-ray diffraction) measurements were carried out with the \textit{X\`{}Pert Pro} by \textit{PANalytical}, used in Bragg-Brentano geometry. XRD characterization of unprocessed and processed areas, the latter in the regimes with and without delamination, showed no differences, indicating that the processed surfaces, including the delaminated layer, have the same polycrystalline structure as the unprocessed material. \begin{figure} \includegraphics[width=12cm]{ablation} \caption{(a) Typical view of the ablation features on the top of the delaminated layer. (b) Morphology of the sample surface beneath the delaminated layer. The inset shows an unirradiated sample surface with observable grain boundaries. Images obtained by scanning electron microscopy. The overlap between the irradiation spots is 75\%; the laser fluence is 17.2 J~cm$^{-2}$.} \label{ablation} \end{figure} \subsection{D. Repetition rate and heat accumulation} As already indicated in Table~\ref{treshold} and Fig.~\ref{thickness}, a lower repetition rate results in a higher $F_{th}^\textup{delam}$. This can also be seen in Fig.~\ref{rep}. For all four processed areas in the middle gray-scale image, the pulse fluence (11 J~cm$^{-2}$) and overlap (83\%) were the same, and only the repetition rate was changed in this series of laser processing, from 100 kHz for the upper left image to 50, 25, and 10 kHz toward the lower right one. The ablation depths for these processed areas are 40.3, 38.3, 35.9 and 34.8 \textmu m, respectively. The noticeable decrease in the ablation depth with decreasing repetition rate can be attributed to the heat accumulation effect. The heat accumulation in the surface layer of the sample should lead to thermal expansion of the layer and, correspondingly, to a somewhat lower refractive index in the heat-affected zone. This should consequently result in some defocusing of the laser beam fraction penetrating toward the sample bulk. According to the mechanism of delamination proposed below in Section III.F, beam refocusing deep in the sample produces the delamination cut. The surface layer expanded due to the heat accumulation effect can thus shift the cut deeper into the bulk at higher repetition rates. Although heat accumulation can also cause thermal lensing, the absorbed heat is mostly confined in the delaminated layer and its gradient across the beam radius can be insufficient to counterbalance the thermal expansion effect. We suppose here that the ejection of the material due to the self-focusing cut at depth does not occur at each laser pulse but happens periodically as a result of stress accumulation from several pulses, see Section III.F. \begin{figure} \includegraphics[width=12cm]{rep_wli_2} \caption{5$\times$5 mm processed areas irradiated with $F=11$ J~cm$^{-2}$ (photograph in the middle).
Repetition rates were 100 kHz (upper left; the maximum depth of processing, $H$, is 40.3\,\textmu{}m), 50 kHz (upper right, $H$ = 38.3\,\textmu{}m), 25\,kHz (lower left, $H$ = 35.9\,\textmu{}m) and 10\,kHz (lower right, $H$ = 34.8\,\textmu{}m). The decrease in the repetition rate was compensated by the scanning velocity to keep the same overlap of 83\%. The delaminated layers appear as lighter regions within the processed areas. Note that processing was performed horizontally, with gradual shifting of the scanning lines from the bottom to the top in each processed area. The magnified views represent the WLI images. The scale and color map are applicable to all four WLI images.} \label{rep} \end{figure} The delaminated layer is clearly seen as a lighter region within the processed square-shaped area in the images. It is almost completely preserved on the sample, without destruction, at the highest repetition rate of $f=100\,$kHz, while for lower repetition rates the remaining delaminated layer considerably decreases in size (Fig.~\ref{rep}). It is worth mentioning that the light shadows beneath the bottom parts of the processed areas for 50, 25 and 10 kHz originate from redeposition of particulates from the ablation plume, while at 100 kHz, when the delaminated layer is almost completely preserved on the processed area, no visible signs of particulate redeposition are observed. The most plausible explanation is seen again in the heat accumulation effect emerging at higher repetition rates. Indeed, it is known that ceramics usually become less brittle at elevated temperatures \cite{Gogotsi.1978}. Hence, the higher the repetition rate, the higher is the temperature in the delaminated layer and the longer this layer can withstand fracturing upon laser scanning. Note that the delamination of the irradiated layer from the sample confines the absorbed energy within the layer, thus enhancing the heat accumulation effect. The important role of heat accumulation has also been confirmed in the following series of experiments. In this series, the number of pulses per processed area of 0.5$\times$0.5 mm$^2$ and the single-pulse fluence (and hence the energy density dose) were kept constant. For the first two processed areas, all pulses were applied in one scanning run with high overlaps between the irradiation spots, 83\% (0.97 s scanning time) and 75\% (0.52 s scanning time), which correspond to 62500 and 27777 pulses per area, respectively. Two other areas were processed in four scanning runs but with smaller overlaps of the irradiation spots within each scan (67\%, 0.33 s per scan, and 51\%, 0.18 s per scan). As a result, the total numbers of pulses per area were the same as for the first two areas, 62500 and 27777, respectively. However, in the case of fourfold scanning with smaller OL, the energy density dose was four times smaller in each scan as compared to a single scan with high overlap. In addition to the lower heating in each scan, the heat accumulated during one scan has time to partially dissipate before the next scan. As a result, fourfold scanning provides colder conditions of material processing. Figure \ref{62500} shows the WLI images of the processed areas of 0.5$\times$0.5 mm$^2$ with 62500 pulses. The applied single-pulse fluence was 15.4 J~cm$^{-2}$ and the energy density dose was 760 J~cm$^{-2}$ for the processed area, resulting in an ablation depth of $\sim$61.4 $\mu$m for both single and fourfold scanning.
However, for the single run, almost the whole delaminated layer has survived on the sample and only a very small region in the upper part of the processed area was evidently destroyed and ejected from the sample, as seen in Fig.~\ref{62500}(a) (note that laser processing was started at the bottom edge of the image). On the contrary, for fourfold scanning with smaller overlap and time delays between subsequent scanning runs, which should ensure better heat dissipation, only a small part of the delaminated layer has survived on the sample (Fig.~\ref{62500}(b)). These experiments demonstrate that heat accumulation plays an important role in preventing destruction of the delaminated layer. \begin{figure} \includegraphics[width=12cm]{62500} \caption{The areas of 0.5$\times$0.5 mm$^2$ processed with the same number of pulses, 62500. (a) Overlap 83\%, single scanning run. (b) Overlap 67\%, four scanning runs. In both cases, the applied fluence was $F=15.4$ J~cm$^{-2}$ and the energy density dose $\Theta=760$ J~cm$^{-2}$. Laser processing was started from the bottom edge of the image, with scanning lines along the $x$ direction. Images were obtained by WLI.} \label{62500} \end{figure} Figure~\ref{27777} shows the WLI images of the processed areas of 0.5$\times$0.5 mm$^2$ with 27777 pulses at a single-pulse fluence of 20.1 J~cm$^{-2}$, resulting in an energy density dose $\Theta=440$ J~cm$^{-2}$ for the processed area. On the surface processed by a single scanning run with an overlap of 75\%, the delaminated layer is partially preserved, being attached to the edge from which scanning was started (Fig.~\ref{27777}(a)). The ablation depth is 36.1 \textmu m in this case. In the area processed four times with 51\% overlap, the ablation depth is somewhat smaller than at 75\% overlap, 34.6 \textmu m (though comparable given the standard deviation of 0.8 \textmu m of the measurements, see Section III.B), while no signs of the delaminated layer are visible (Fig.~\ref{27777}(b)). It can also be noticed that the delaminated layer rises above the sample surface by about 40 \textmu m. It looks like a flake, attached to the sample at the starting edge of processing and lifted off over the rest of the area, which is plausible due to pushing forces upon the delamination cut/crack formation (Fig.~\ref{27777}(a)). Figures \ref{62500} and \ref{27777} clearly indicate the dominant role of the energy density dose for the ablation depth and of the heat accumulation effect for the stability of the delaminated layer. Although the laser fluence is higher for the smaller number of pulses applied to the same area, the ablation depth and, hence, the delaminated layer thickness are considerably smaller. To explain this and other features of the delamination effect, below we consider the processes taking place upon laser beam coupling to bandgap materials and discuss possible mechanisms and scenarios of the delamination effect. \begin{figure} \includegraphics[width=12cm]{27777} \caption{Same as in Fig.~\ref{62500} for 27777 pulses per 0.5$\times$0.5 mm$^2$ area. (a) Overlap 75\%, single scanning run; (b) overlap 51\%, four scanning runs. The applied fluence was $F=20.1$ J~cm$^{-2}$ and the energy density dose $\Theta=440$ J~cm$^{-2}$. Laser processing was started from the bottom edge of the image, with scanning lines along the x direction. Images were obtained by WLI.} \label{27777} \end{figure} \subsection{F.
Possible mechanism of observed delamination: Counterbalancing between self-focusing and electron plasma anti-waveguiding} Ceramic delamination can be explained by laser beam self-focusing upon propagation in a non-linear optical medium. In non-linear media, the refractive index $n$ depends not only on the frequency of the electromagnetic field but also on the local field intensity of the laser beam $I(r,z,t)$ as $n = n_0 + n_2I(r,z,t)$, where $n_0$ and $n_2$ are the linear and non-linear (Kerr) refractive indices and $r$ and $z$ are respectively the radial and axial coordinates. For transparent crystals and glasses, the value of $n_2$ is typically positive and in the range of $10^{-16}-10^{-14}$~cm$^2$~W$^{-1}$ \cite{Weber1978}. The wave front of powerful laser beams with the intensity increasing toward the axis (e.g., Gaussian as in our case) is distorted during beam propagation in such a non-linear medium, as schematically shown in Fig.~\ref{self-focusing}(a), due to the decreasing phase velocity in higher refractive index regions \cite{Chekalin.2013}. As a result, initially parallel optical rays converge toward the beam axis, culminating in catastrophic collapse at a distance $z_{\text{sf}1}$ after the laser beam enters the medium. The critical laser power for self-focusing, which is derived from the balance between the angles of self-focusing $\theta_{\text{sf}}$ and beam diffraction $\theta_{\text{df}}$, $\theta_{\text{sf}} = \theta_{\text{df}}$, can be evaluated as $P_{cr} \approx 3.72 \lambda_0^2/(8 \pi n_0n_2)$, where $\lambda_0$ is the laser wavelength \cite{Marburger.1975,Couairon.2007}. \begin{figure} \includegraphics[width=15cm]{self-focusing_delamination} \caption{(a) Illustration of laser beam self-focusing in a transparent non-linear solid with a high ionization threshold. The beam is focused on the sample surface. Instead of diverging after the geometrical focus (dash-dotted lines), the beam experiences self-focusing governed by the Kerr effect until it collapses at the distance $z_{\text{sf}1}$, which culminates in the generation of free-electron plasma. The scheme has been adapted from \cite{Chekalin.2013}. (b) For semi-transparent materials like the ceramics considered in this paper, free-electron plasma is already generated in the surface layer (pink surface region), which can lead to melting and ablation of the surface layer. The free electron population counteracts the Kerr effect by adding a negative contribution to the refractive index, see Eq. (\ref{n change}). This ``anti-waveguiding'' effect \cite{Chekalin.2013} is stronger for higher beam energy. As a result, self-focusing is delayed in space and the self-focusing distance $z_{\text{sf}2}$ increases with laser beam power. (c)--(f) Schematics of the ablation/delamination mechanism. At relatively low laser power (but above the self-focusing threshold), the beam collapse happens close to the surface (as in (a)), resulting in fracturing of the region between the collapse spot and the surface (c). In such regimes with scanning (scanning direction is shown by the black arrow), mechanical fracturing with ejection of ceramic fragments is the main mechanism of ablation (d). At high laser power, generation of a dense electron plasma in a thin surface layer leads to partial reflection of laser light and to the ``anti-waveguiding'' effect \cite{Chekalin.2013}.
As a result, the laser beam fraction which is transmitted through the electron plasma layer collapses deep in the target and provides material melting, ablation and fracturing inside the bulk, which is seen as the layer delamination (e). At intermediate beam powers, the layer delamination can transform to layer fracturing upon scanning (f), as seen in Figs.~\ref{rep},~\ref{62500} and~\ref{27777}.} \label{self-focusing} \end{figure} For yttria-stabilized zirconia at a laser wavelength of 1030 nm, $n_0$ = 2.1236 and $n_2$ = 1.184$\times$10$^{-15}$~cm$^2$~W$^{-1}$. Using these data, the critical power for self-focusing can be evaluated as $\sim$0.63 MW, which is more than an order of magnitude smaller than the smallest threshold value of beam power for delamination, see Table \ref{treshold}. For transparent (low-absorbing) media, the propagation depth of the beam until its collapse can be estimated by the empirical expression \cite{Marburger.1975,Couairon.2007} \begin{equation} L_c = \frac{0.367z_R}{\sqrt{[(P_{in}/P_{cr})^{0.5}-0.852]^2-0.0219}}. \label{self-focusing depth} \end{equation} Here $P_{in}$ is the power of the incident laser beam. We suppose that in our experiments the layer is delaminated at the self-focusing depth $L_c$. Assuming as a first approximation that material absorption is insignificant before beam collapse, one can evaluate $L_c \approx$ 11 \textmu m at a fluence of 2.5 J~cm$^{-2}$. Interestingly, this pair of values coincides with the threshold of delamination at high overlap (92\%, see Table \ref{treshold}). We can presume that, at such low laser fluences, only a small fraction of light is absorbed before beam collapse and, hence, the depth of beam collapse is reasonably described by Eq. (\ref{self-focusing depth}). Indeed, for the intrinsic (linear) absorption depth of 53 \textmu m of YSZ ceramics (assuming the absence of non-linear absorption), less than 20\% of the beam energy is absorbed over the distance of 11 \textmu m and the Kerr focus is shifted insignificantly. However, as follows from Eq. (\ref{self-focusing depth}), the self-focusing distance has to move closer to the sample surface with increasing beam power. Thus, for the fluence of 7 J~cm$^{-2}$, $L_c \approx$ 7.4 \textmu m, which is more than 3 times smaller than the unified ablation depth for the overlap of 83\% (Fig.~\ref{thickness}, 200 kHz repetition rate). Below we show that there is no contradiction since, for semi-transparent materials irradiated with loose beam focusing on the surface and at high repetition rates, other effects can contribute to the position of the self-focus. The transient and permanent changes of the refractive index in laser-irradiated materials can be caused by several factors, which include the Kerr effect ($\Delta n_{\text{Kerr}}$), generation of conduction-band electrons ($\Delta n_{\text{CB}}$), heat accumulation ($\Delta n_{\text{th}}$), accumulation of defects ($\Delta n_{\text{def}}$), density change in the heat affected zone ($\Delta n_{\rho}$), and local stress ($\Delta n_{\text{P}}$) \cite{Waxler.1973,Sakakura.2011,Bulgakova.2013}: \begin{equation} \Delta n = \Delta n_{\text{Kerr}} + \Delta n_{\text{CB}} + \Delta n_{\text{th}} + \Delta n_{\text{def}} + \Delta n_{\rho} + \Delta n_{\text{P}}. \label{n change} \end{equation} The contribution of the Kerr effect is positive, resulting in narrowing and, finally, collapse of the laser beam (Fig.~\ref{self-focusing}(a)). Upon beam collapse, a high local intensity is achieved which is sufficient to create free electrons.
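Before discussing the free-electron dynamics, it is useful to make the scaling of Eq.~(\ref{self-focusing depth}) concrete. The short Python sketch below evaluates $L_c$ in units of the Rayleigh range $z_R$ as a function of the power ratio $P_{in}/P_{cr}$. Since $z_R$ depends on the focusing geometry and is not quoted explicitly here, it is left as a free normalization; the sketch illustrates the trend rather than reproducing the exact depths given above, and it anticipates the estimate made below where plasma shielding reduces the effective power ratio from $\sim$68 to $\sim$8.
\begin{verbatim}
import numpy as np

def lc_over_zr(p_ratio):
    # Marburger formula, Eq. (self-focusing depth), in units of z_R;
    # valid for P_in/P_cr > 1.
    return 0.367 / np.sqrt((np.sqrt(p_ratio) - 0.852)**2 - 0.0219)

# The collapse depth shrinks monotonically with beam power:
for p in [5, 10, 24, 68]:
    print(f"P_in/P_cr = {p:5.1f}  ->  L_c = {lc_over_zr(p):.4f} z_R")

# Reducing the effective power ratio from ~68 (7 J/cm^2 at the surface)
# to ~8.2 (after plasma shielding, see below) deepens the collapse point:
print(f"depth increase factor: {lc_over_zr(8.2)/lc_over_zr(68.0):.2f}")
\end{verbatim}
The factor of $\approx$3.7 returned by the last line is consistent with the shift of the collapse depth from $\sim$7.4 to $\sim$27 \textmu m discussed below, independently of the value adopted for $z_R$.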
Free electron population is produced via photo-ionization, which can trigger collisional ionization starting from a certain level of free electrons \cite{Stuart.1996}: \begin{equation} \frac{dN_e}{dt} = (\sigma_kI^k +\alpha_{\text{col}}N_eI)\frac{(N_{0}-N_e)}{N_0}. \label{free electrons} \end{equation} Here $N_e$ is the density of free electrons, $N_0$ is the atomic density of the unexcited material, $\sigma_k$ and $k$ are the coefficient and the order of multi-photon ionization respectively, and $\alpha_{\text{col}}$ is the coefficient of collisional ionization. The factor $(N_0-N_e)/N_0$ is added to account for the available ionization centers at high ionization rates \cite{Burakov.2005}. It should be underlined that, for ultrashort laser pulses, avalanche ionization can considerably contribute to the generation of free electrons in bandgap materials. Thus, Lenzner et al. \cite{Lenzner.2000} have shown that in fused silica the avalanche process is already developing at laser pulses of 120 fs duration, which leads to a strong decrease of the processing quality compared to shorter laser pulses. Furthermore, numerical simulations \cite{RethfeldSiO2} have demonstrated that, for fused silica at 300 fs laser pulses, the avalanche process contributes noticeably to material ionization already starting from $\sim 2 \times 10^{13}$ W/cm$^2$, and Eq.~(\ref{free electrons}) is applicable at intensities $\gtrsim 4 \times 10^{13}$ W/cm$^2$ (see Fig. 3 in \cite{RethfeldSiO2}). Note that such intensities are typical for our experiments, while the smaller band gap of YSZ ceramics ($E_g$ = 5.3 eV against 9 eV for fused silica) should shift free-carrier generation and the subsequent triggering of the avalanche process to even lower intensities. As soon as free electron plasma is produced in the conduction band, it counteracts the Kerr self-focusing ($\Delta n_{\text{CB}} <0$) and can even lead to the anti-waveguiding effect \cite{Chekalin.2013}. In our case, when the laser beam is loosely focused on the sample surface with generation of free electrons in the surface layer according to Eq. (\ref{free electrons}), the free electron plasma can considerably alter the beam coupling to the material via increasing reflectivity, light defocusing/scattering, and attenuation of the beam along its propagation toward the material bulk after a partial reflection from the electron plasma in the surface layer. Within a surface layer where the laser beam is not yet strongly distorted by self-focusing and defocusing, the attenuation of laser intensity can be roughly described in a one-dimensional form as \begin{equation} \frac{dI}{dz} = -\alpha_{\text{in}}I - \sigma_kI^k\frac{(N_{0}-N_e)}{N_0}k\hbar\omega -\alpha_{\text{fe}}I. \label{beam attenuation} \end{equation} Here $\alpha_{\text{in}} = 1/l_a$ is the intrinsic absorption coefficient and $\alpha_{\text{fe}}$ is the absorption coefficient of the free electrons produced by the laser light. The optical response of the dynamically ionized dielectric target (both the dynamic change of the reflection coefficient and the spatio-temporal behavior of $\alpha_{\text{fe}}$) can be calculated through the complex dielectric function $\epsilon(N_e)$ by involving the Drude theory \cite{Burakov.2005}. The dynamically evolving reflectivity of the beam from the sample surface and the attenuation of its part penetrating toward the sample bulk have to strongly affect the position of the Kerr focus, under the condition that the beam power remains in excess of $P_{cr}$ after the beam passes through the excited surface region.
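The qualitative behavior encoded in Eq.~(\ref{free electrons}) -- multi-photon seeding, avalanche amplification and saturation toward $N_0$ -- can be illustrated with a simple time integration. We stress that the ionization coefficients of YSZ ceramics are not established here; all numbers in the sketch below are placeholders chosen only to reproduce the generic dynamics (the avalanche coefficient, for instance, is a fused-silica-like value).
\begin{verbatim}
import numpy as np

# Illustrative integration of Eq. (free electrons) over a Gaussian pulse.
# All coefficients are placeholders, NOT measured values for YSZ.
k       = 5          # photon order: E_g = 5.3 eV vs ~1.2 eV photons at 1030 nm
sigma_k = 1e-34      # MPI coefficient [cm^-3 s^-1 (cm^2/W)^k], hypothetical
a_col   = 4.0        # collisional (avalanche) coefficient [cm^2/J], silica-like
N0      = 1e23       # atomic density of the unexcited material [cm^-3]
I0, tau = 4e13, 300e-15      # peak intensity [W/cm^2], pulse FWHM [s]

dt = 1e-16                   # step well below the avalanche e-folding time
t  = np.arange(-2*tau, 2*tau, dt)
I  = I0*np.exp(-4*np.log(2)*(t/tau)**2)

Ne = 0.0
for It in I:                 # explicit Euler integration of dNe/dt
    Ne += (sigma_k*It**k + a_col*Ne*It)*(N0 - Ne)/N0*dt

print(f"final N_e ~ {Ne:.2e} cm^-3")  # saturates toward N0 for these inputs
\end{verbatim}
With these (illustrative) inputs, the multi-photon term seeds a small population near the pulse peak, after which the avalanche term dominates and the saturation factor $(N_0-N_e)/N_0$ caps the growth, in line with the qualitative picture described above.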
It can be stated that the Kerr focus is dynamic under such excitation conditions, and its position depends on the fraction of the beam energy (power) that has passed through the free-electron-plasma ``shield" generated in the surface layer. Generally, the transient plasma ``mirror/attenuator" created in the surface layer of the sample should move the Kerr focus deeper into the material bulk, as schematically shown in Fig.~\ref{self-focusing}(b). We recall that, at 7 J~cm$^{-2}$, $P_{in}/P_{cr} \approx 68$, yielding $L_c \approx$ 7.4 $\mu$m as estimated by Eq. (\ref{self-focusing depth}). It is possible to roughly evaluate that, for moving the Kerr focus deeper into the sample, to $\sim27$ $\mu$m from the surface (Fig.~\ref{thickness}), the beam power must decrease by a factor of $\sim 8.3$ after partial reflection and attenuation by the free electron plasma in the surface layer (to achieve $P/P_{cr} \approx$ 8.1-8.2 after passing the plasma layer). Note that in such a case the laser fluence drops from 7 J~cm$^{-2}$ at the sample surface to a local level below 1 J~cm$^{-2}$, the latter being well below the damage threshold of a wide bandgap dielectric (the estimated direct bandgap of yttria-stabilized zirconia is around 5.2-5.8 eV \cite{Ostanin.2000}). Here, by damage we mean any irreversible change of the material observed after laser irradiation, such as visible signs of melting, material ablation, cracking, change of crystalline structure, compaction or appearance of porosity. Note that the minimal laser fluence starting from which damage is observed (the damage threshold) scales with the band gap of dielectric materials, and for materials with $E_g >$ 5 eV it exceeds 1 J cm$^{-2}$ at pulse durations of 100 fs and longer \cite{Mero2005}. Upon self-focusing, the intensity of the attenuated laser beam can again reach the value sufficient for free electron production, which in turn will induce local material heating and stress generation, in analogy with \cite{Kim} where the laser beam was purposely focused inside a 4H-SiC wafer with a high numerical aperture lens. Hence, the laser-induced free electron plasma created upon focusing the laser beam on the sample surface can reasonably explain the delamination effect and its depth found in this work. Reliable numerical simulations of the experimental conditions presented here are not considered feasible in view of the extremely large computational resources required for such kinds of problems (see, e.g., \cite{Bulgakova.2015}) and a number of unknown material parameters for the description of free-electron generation. To estimate light reflection and absorption by the generated free electron plasma, we recall that, even for materials with a larger band gap such as fused silica under similar surface-irradiation conditions, an overcritical free-electron density is produced within the laser fluence range used in the present experiments \cite{Mirza.2016}. According to simulations for fused silica (see Fig. 6 in \cite{Mirza.2016}) and taking into account the intrinsic reflectivity of 8YSZ ceramics (Table \ref{material}), at a laser fluence of $\sim$7 J~cm$^{-2}$ (peak fluence of $\sim$14 J~cm$^{-2}$) more than half of the laser energy is reflected from the surface. The rest of the laser energy, which penetrates into the sample, is attenuated due to photo-ionization and absorption by free electrons during propagation toward the target. The attenuated energy density can exceed 3 J~cm$^{-2}$ at the distance of $\sim$4 $\mu$m.
Note that, for this estimation, we assume that the average energy spent for free electron production and heating is in the range of 50-60 eV per electron, which is consistent with simulations \cite{Mirza.2016} and experiments \cite{Geoffroy.2014}. Under such conditions of laser energy absorption, the laser beam is attenuated to a fluence below the damage threshold, but it still has a power above $P_{cr}$ with respect to the self-focusing effect. Note that, additionally, the laser beam can be scattered (defocused) by the free electron plasma. In view of the lower band gap of yttria-stabilized zirconia as compared to fused silica, the generated free electron density can be even higher than considered in the above estimations, which will result in higher light absorption within the surface layer. Hence, the scenario presented in Fig.~\ref{self-focusing}(b) seems to be plausible and convincing:
- With increasing beam energy, the laser-generated electron plasma in the surface layer of the sample strongly depletes the laser beam. The laser energy absorbed in this layer is enough to induce melting and even partial ablation at the sample surface.
- The fraction of the beam which passes through the free-electron region is not sufficiently energetic to cause material damage.
- However, as the power of the beam is still above $P_{cr}$, the beam experiences self-focusing, with the Kerr focus lying well deeper in the sample as compared to what could be expected for the case of absent or weak absorption. As a result, a new local region of high laser energy absorption appears at the Kerr focus, inducing new damage (melting, cracking, internal ablation).
Regarding the ablation/delamination mechanisms, the following conclusions can be drawn based on the above considerations. At relatively low laser fluences (but above the self-focusing threshold in terms of pulse power), the free electron density generated in the surface layer of the sample, as well as the linear material absorption, are insufficient to induce surface damage. As a result, after partial reflection from the surface and absorption in the surface layer, the beam penetrates toward the bulk and collapses in the subsurface region. In the collapse region, due to the formation of a highly localized free-electron population which transfers its energy to the lattice upon recombination on the time scale of a few picoseconds, the ceramic must melt and a very high stress is generated. It was shown that, in the beam focusing region deep inside the fused silica bulk, the stress level is of the order of 70-80 MPa \cite{Bulgakova.2015}. In YSZ ceramics under similar focusing conditions, the maximum stress level is expected to be more than an order of magnitude higher. Indeed, the stress is proportional to Young's modulus (approximately 2.5-3 times higher for YSZ \cite{Adams.1997} as compared to fused silica) and to the coefficient of thermal expansion ($\sim$10$^{-5}$ K$^{-1}$ for YSZ \cite{Hayashi.2005} against 0.55$\times$10$^{-6}$ K$^{-1}$ for fused silica) \cite{Kingery.1955}. As the tensile strength of YSZ ceramics is reported to be 745 MPa \cite{Noguchi.1989}, the expected stress has to considerably exceed the material strength, leading to mechanical damage around the collapse region. As estimated above, at relatively low beam powers and, hence, in the absence of (or at low) free-electron plasma shielding, the self-focus has to be formed close to the sample surface.
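Before turning to the resulting fracture scenarios, the stress scaling invoked above can be condensed into a short numerical estimate. The sketch uses only the values quoted in the text and assumes a comparable temperature rise in both materials; it should be read as an order-of-magnitude check, not a stress calculation.
\begin{verbatim}
# Thermoelastic stress scales as sigma ~ E*alpha*dT (cf. Kingery 1955), so at
# a comparable temperature rise the YSZ/fused-silica stress ratio is roughly:
E_ratio   = 2.75                  # Young's modulus ratio (~2.5-3)
cte_ratio = 1.0e-5 / 0.55e-6      # thermal expansion coefficient ratio
ratio     = E_ratio * cte_ratio   # ~50

sigma_silica = 75e6               # ~70-80 MPa in the silica focal region [Pa]
sigma_ysz    = ratio * sigma_silica
print(f"ratio ~ {ratio:.0f}; expected YSZ stress ~ {sigma_ysz/1e9:.1f} GPa")
# ~3.8 GPa, i.e. well above the ~745 MPa tensile strength of YSZ ceramics
\end{verbatim}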
The laser-induced stress, which exceeds the material strength, should cause fracturing of the material layer between the focus and the surface, with ejection of particulates (Fig.~\ref{self-focusing}(c)). In such regimes, mechanical fracturing of the surface layer is the main mechanism of ablation upon laser processing (see Fig.~\ref{self-focusing}(d)), as was confirmed by the experiments with deposition of the ablation products (Fig.~\ref{deposite_sm1}). The smallest fluence at which such material removal starts to be observed is considered as the ablation threshold. It is noticeable that the ablation thresholds for different processing conditions (pulse repetition rates, irradiation spot overlap) differ insignificantly (Fig.~\ref{thickness}(a)). It is known that, under ultrashort laser irradiation of bandgap materials, the surface damage threshold drops dramatically with the number of pulses applied to the same surface area due to material-dependent incubation effects \cite{Rosenfeld1999}. The insignificant difference in the ablation thresholds at different overlaps upon laser scanning observed in our experiments supports the view that the ablation process is governed by beam self-focusing, which is weakly dependent on incubation effects in the surface layer of the sample. Nevertheless, it can be expected that, upon laser scanning, not every pulse leads to material fracturing; rather, ejection of particulates happens periodically as a result of stress accumulation from several laser pulses. At high beam powers, a high-density free-electron plasma is formed in the very surface layer of the sample, which results in shielding and ``anti-waveguiding" of the laser beam. The fraction of the beam which penetrates through the shielding area experiences collapse at a large distance from the surface, as discussed above. In such cases, the stress generated in the self-focus region is not enough to induce fracturing of the relatively thick material layer between the self-focus and the surface. The high-temperature/high-pressure local zone inside the bulk evolves into a pore \cite{Bulgakova.2015}. It is in such regimes that the delamination effect develops over the whole processed area, originating from the line of adjacent or closely located pores (Fig.~\ref{self-focusing}(e)). At intermediate laser powers, the delaminated layer can be preserved on the sample until a certain level of accumulated stress is reached, after which it starts to fracture upon further processing (Fig.~\ref{self-focusing}(f)). The area of the preserved delaminated layer depends on the depth of self-focusing and on the overlapping degree; the latter determines the heat accumulation in the delaminated layer and, hence, its mechanical properties. It must be admitted that, due to the multiplicity of factors influencing light propagation in materials (Eq. (\ref{n change})), the delamination effect uncovered in this work is a very complex phenomenon. Under multi-pulse irradiation of the same area with relatively high overlapping of the irradiation spots, as in the present experiments, accumulation of heat and defect states (the latter are abundant in yttria-stabilized zirconia \cite{Foster.2002}) can change the self-focusing conditions via creation of thermal and defect-related convex lenses. Indeed, both terms $\Delta n_{\text{th}}$ and $\Delta n_{\text{def}}$ in Eq. (\ref{n change}) are positive \cite{Waxler.1973,Sakakura.2011,Martin.1997} and can assist the self-focusing effect. As for the two last terms in Eq.
(\ref{n change}), $\Delta n_{\rho}$ and $\Delta n_{\text{P}}$, their roles under multi-pulse irradiation are more complicated. Heat accumulation in the delaminated layer should lead to a decrease of the material density within the layer, ensuring $\Delta n_{\rho} < 0$. On the other hand, with each pulse, material within the light absorption region can be relocated, creating zones of higher and lower density as compared to the virgin material \cite{Bulgakova.2015}. It could be speculated that a pressure-induced compacted shell is created which surrounds the pathway of the focused/self-focusing beam toward the focal region. This compacted shell, which is also subject to residual stress accumulation, should affect the propagation of the next laser pulse and, most plausibly, assists in guiding the light toward the plane of the beam collapse \cite{Chan.2003}. We note that the waveguiding effect of a transient lens, created by the mutual action of thermal and defect-induced lenses as well as by the material compaction shell, must also be inherent to direct laser writing of waveguides in optical glasses. \section{IV. CONCLUSION} In this paper we have analyzed ultrashort-pulse laser ablation of semi-transparent materials using the example of YSZ ceramics. Unlike transparent (e.g., fused silica) or strongly-absorbing (e.g., metals) materials, here the laser ablation process is strongly influenced by delamination of a relatively thick surface layer. The depth of the delamination (and hence the depth of the crater) depends on the interplay between Kerr self-focusing, due to the positive nonlinear refractive index of the material, and beam defocusing induced by free-electron plasma formation at the sample surface. As the free-electron plasma density evolves during the laser pulse, the position where beam self-focusing may happen can also evolve in time. The actual position of the beam collapse is determined by the strength of the negative free-electron lens achieved during the pulse. When the incident pulse power increases, a denser free-electron `shield' is created on the sample surface, which leads to a stronger anti-waveguiding effect and a spatial delay of the beam collapse. Our studies show that the unified ablation depth as a function of laser fluence (see Fig.~\ref{thickness}(a)) is better fitted by a linear dependence than by a logarithmic one, the latter being inherent to thermal mechanisms of ablation \cite{Chichkov1996}. Together with the depth of the observed delamination, which well exceeds the size of the laser irradiation spot, this supports the relevance of the proposed phenomenological model. This modeling representation reasonably explains the dependence of the ablation depth on the laser fluence and provides an adequate quantitative estimation of the crater depth at the threshold of delamination. At high laser fluences well exceeding the ablation threshold and at strong overlaps between the irradiation spots upon processing, a paradoxical effect can be observed: no crater is left on the surface anymore; instead, the processed area rises above the virgin sample surface, see e.g. Fig.~\ref{62500}(a). It has been found that, in such cases, the delaminated layer is thick enough to withstand dynamic mechanical stresses and remains attached to the processed area, as schematically shown in Fig.~\ref{self-focusing}(e).
Summarizing, in this study we have demonstrated laser-induced delamination of layers with thicknesses of several tens of micrometers and areas of nearly 5$\times$5\,mm$^2$ from bulk YSZ ceramics. It has been shown that the delaminated layer thickness can be controlled by the laser fluence and the overlap of the irradiation spots upon laser scanning of the samples. Consequently, the discovered effect opens up a new way for controllable laser microslicing of brittle ceramic materials, i.e. cutting two-dimensional high-aspect-ratio sheets parallel to the bulk surface. \section{ACKNOWLEDGMENT} NMB acknowledges the European Regional Development Fund and the state budget of the Czech Republic (project BIATRI: No. CZ.$02.1.01/0.0/0.0/15\_003/0000445$, project HiLASE CoE: No. CZ$.02.1.01/0.0/0.0/15\_006/0000674$, programme NPU I: project No. LO1602). The delivery of starting materials by Forschungszentrum J\"{u}lich GmbH (K. Wilkner, M. Bram) is gratefully acknowledged.
\section{Introduction} \label{sec:intro} Dark matter halos are the fundamental units of cosmic structure formation. These objects are formed through the assembly of dark matter particles, in which initially overdense regions of the universe collapse through gravitational instability. In hierarchical structure formation, low-mass dark halos form first, while more massive structures form gradually through mergers, combined with the smooth accretion of dark matter. Eventually, baryonic matter settles into the gravitational potential wells of these halos, leading to the cooling of gas into a star-forming state, and the subsequent production of stars and black holes. Given the chronology of structure formation, understanding the assembly and present-day mass distribution of dark matter halos is an important first step in developing a comprehensive theory of galaxy formation. $N$-body simulations of collisionless `cold' dark matter agree that the mass distribution within dark halos takes on a universal shape, most conveniently parametrized by the Navarro-Frenk-White (NFW) density profile \citep{Navarro1996,Navarro1997,Wang2020}. The NFW density profile is defined such that for halos of fixed mass, the density distribution depends on only one additional parameter, the so-called `concentration' of the halo -- a measure of how centrally peaked the density profile is. Concentration may also be interpreted as a measure of the formation epoch of the halo, with low-mass, older halos exhibiting larger concentrations than high-mass objects that have assembled more recently in cosmic history \citep[e.g.][]{Bullock2001,Wechsler2002,Zhao2003,Ludlow2013}. The concept of halo mass is not without ambiguity. Dark matter halos exhibit irregular shapes, and quantifying the mass of a halo then becomes an exercise in determining a suitable boundary demarcating its dark matter content. The most commonly-adopted definitions of halo boundaries originate from the spherical collapse model \citep{Gunn1972}, in which halos condense out of initial overdensities that decouple from the background expansion through their own gravity; the resulting extent of the collapsed object is then determined using the virial theorem. This condition then defines halos as virialized objects with a mean enclosed density equal to $178$ times the critical density of the universe. This result applies specifically to an Einstein-de Sitter universe (with $\Omega_m=1$); the extension to arbitrary cosmologies is more involved and requires numerical solutions \citep[e.g.][]{Eke1996,Bryan1998,Rubin2013}. Nevertheless, these virialization conditions act as convenient criteria that can be used to identify (spherical) halos from groups of particles in cosmological simulations of structure formation, where an overdensity closer to 200 times the critical (or, sometimes, the mean) density is assumed. The corresponding ``virial'' radius is denoted as $r_{200c}$ ($r_{200m}$). Visual depictions of halos in simulations show them to be anything but spherical. In particular, the equilibrium state of halos is, in some ways, a function of radial distance from the halo center. The inner cores of halos assemble earlier and are typically more relaxed. On the other hand, the exterior portions of halos (particularly those of galaxies more massive than the Milky Way) may still be accreting new material in the form of diffuse dark matter and merging halos, and do not show an obvious separation from the cosmological background.
This has led to several authors advocating for the ``splashback radius'' as a more natural definition of a halo boundary \citep[e.g.][]{Fillmore1984,Bertschinger1985,Diemer2014,Adhikari2014,More2015}, identified as a caustic in the density profile in the outskirts of halos. Physically speaking, the splashback radius may be more robustly defined as the (smoothed) average of the apocentric radii of all particles in the halo \citep{Diemer2017}. Defined as such, the splashback radius then separates material that is infalling from that which is in orbit within the gravitational potential of the halo. A number of recent programs have targeted the identification of the splashback feature in observations. The majority of these efforts have been focused in the regime of rich galaxy clusters, in which the comparatively large number of tracers enables a more faithful detection of the caustic in the (projected) number density profile and cross-correlation functions of member galaxies \citep[e.g.][]{More2016,Patej2016,Baxter2017,Chang2018,Nishizawa2018,Shin2019,Zurcher2019,Murata2020,Tomooka2020}. Though the comparison between the observationally-inferred and theoretically-predicted splashback radii can be hindered by systematics \citep[e.g.][]{Busch2017}, there is now a considerable body of evidence that the outermost realms of massive galaxies are imprinted with a caustic splashback feature. A particularly interesting feature of the splashback radius is its dependence on redshift, the accretion rate and environment of the halo \citep[e.g.][]{Diemer2017b,Mansfield2017} and perhaps even the underlying theory of gravity \citep{Adhikari2018}. The dependence on accretion rate is especially informative, as it influences the relationship between the splashback radius and the more conventionally defined virial radius derived from the spherical collapse model. Using cosmological $N$-body simulations, \cite{More2015} observe that in slowly accreting halos, the splashback feature occurs at around $\approx 1.2-1.5\, r_{200m}$, while in more rapidly accreting systems, in which the added mass causes particle orbits to ``turn around'' at smaller radii, this feature occurs closer to $\approx 0.8-1.0\, r_{200m}$. The dynamics of particle orbits, which after all defines the splashback radius, therefore retains memory of the accretion events that punctuate the history of the halo. Recent theoretical works have demonstrated that caustic features that demarcate natural halo boundaries may be measured in other quantities besides the radial density profile. For example, \cite{Fong2020} construct a new definition, the ``depletion radius'', by identifying the radial location of the minimum in the bias profile of halos (i.e. a measure of the overdensity profile of a given object). The depletion radius is somewhat larger than the typical splashback radius (on average by a factor of two or greater), and coincides with the location of the maximum infall velocity around the halo. Using high-resolution, zoom hydrodynamical simulations of Milky Way-mass analogs, \cite{Deason2020} showed that a series of caustics may also be identified in the radial velocity dispersion profile, with the innermost caustic (located around $\sim 0.6 r_{200m}$) defined by material that has undergone at least two pericentric passages, and which provides a means to define the ``edge'' of the Milky Way halo.
Unsurprisingly, the (radial) stellar velocity dispersion profile has been measured most comprehensively in the case of the Milky Way \citep[e.g.][]{Brown2010,Deason2012,Cohen2017}. The extension to galaxies beyond the Milky Way, in which case the measured component is the stellar motion along the line-of-sight, has been more limited in terms of the maximum projected distance out to which the velocity dispersion profile has been measured \citep[e.g.][]{Tempel2006,Veale2018,Mogotsi2019}. Identifying caustics in the velocity dispersion profile in the most exterior portions of external galaxies may be enabled by stacking profiles of multiple objects, and by obtaining deeper spectra from future observational facilities. The aim of this paper is to investigate the extent to which visible tracers in the halo--in particular, their stellar content--can be used to determine aspects relating to the mass and the formation history of the host dark matter halo. We make use of the IllustrisTNG cosmological, hydrodynamical simulations \citep{Pillepich2018,Nelson2018b} to measure the velocity dispersion profile of stars in objects ranging from the scale of Milky Way-mass halos to those of rich clusters of galaxies. We find that sharp breaks in the velocity dispersion profile, occurring at radii consistent with the expected splashback radius of these halos, are prominent in both the dark matter and stellar velocity dispersion profiles. Furthermore, we establish a connection between the shape of these profiles and the mass and concentration of the host halos, which opens up the possibility to infer these quantities from the velocity dispersion profile of galaxies measured in future observations. This paper is organized as follows. In Section~\ref{sec:numerical}, we describe the simulation set used in this work. Our main results are presented in Section~\ref{sec:results}, in which we establish the connection between the stellar velocity dispersion profiles and the assembly history of halos. We further demonstrate how the form of the stellar velocity dispersion can be used to determine the virial radius of the host dark matter halo. Finally, Section~\ref{sec:conclusions} provides a summary of our investigation. \section{Numerical Methods} \label{sec:numerical} First, we provide a brief description of the IllustrisTNG{} simulation suite, which constitutes the computational domain analyzed in this paper. \subsection{Simulations} \label{sect:sims} The IllustrisTNG{} project is a suite of cosmological, magneto-hydrodynamical simulations of galaxy formation \citep{Pillepich2018b,Nelson2018a,Marinacci2018,Naiman2018,Springel2018}, carried out in periodic volumes with comoving lengths of 50, 100 and 300 Mpc (TNG50, TNG100, and TNG300, respectively, with only the two latter boxes used in this paper). Each simulation follows the co-evolution of dark matter and baryons, and has been run using the \textsc{Arepo}{} code \citep{Springel2010,Weinberger2019}, in which the equations of magneto-hydrodynamics governing gas elements are solved using an unstructured, Voronoi mesh. The Voronoi tessellation is adaptive in nature: regions with high gas density are resolved with many small cells as compared to more diffuse regions. This vastly increases the dynamical range achievable in any given simulation.
The TNG model is the successor to the original Illustris galaxy formation model presented in \cite{Vogelsberger2013,Vogelsberger2014a}; a comprehensive list of all the changes introduced in the new version is provided in \cite{Pillepich2018}. TNG incorporates a range of physical processes thought to be important to regulating the evolution of galaxies, including gas cooling and star formation; the seeding, growth and feedback resulting from supermassive black holes; the launching of galactic winds, as well as the influence of galactic-scale magnetic fields. A sequence of works have shown that this model is able to successfully reproduce a variety of properties of the real galaxy population as a function of cosmic time \citep{Pillepich2018b,Nelson2018a,Marinacci2018,Naiman2018,Springel2018}. All simulation data used in this work (particle snapshots, halo and galaxy catalogs, and merger trees) have been made publicly available\footnote{\href{http://www.tng-project.org/data/}{http://www.tng-project.org/data/}} \citep{Nelson2018b}. The higher resolution TNG100 simulation consists of a periodic box of length $L_{{\rm box}} = 75\,h^{-1}$Mpc $\approx$ 100 Mpc, with 2$\times$1820$^3$ resolution elements corresponding to dark matter particles and gas cells. This corresponds to a mass resolution of $9.44\times10^5\,h^{-1}\,{\rm M}_\odot$ in baryons and $5.06\times10^6\,h^{-1}\,{\rm M}_\odot$ in dark matter. The maximum physical softening length of dark matter and star particles is set to $0.5\,h^{-1}$kpc. TNG300 simulates an even larger cosmological box of size $L_{{\rm box}} = 205\,h^{-1}$Mpc $\approx$ 300 Mpc with 2$\times$2500$^3$ resolution elements. The increased volume comes at the expense of more modest particle resolution: in particular, the mass resolution is $7.44\times10^6\,h^{-1}\,{\rm M}_\odot$ in baryonic matter and $3.98\times10^7\,h^{-1}\,{\rm M}_\odot$ in dark matter. The maximum physical softening length of dark matter and star particles is set to $1.0\,h^{-1}$kpc. TNG300 serves as our primary dataset for sampling the regime of low-mass groups and rich galaxy clusters ($\gtrsim 10^{13}\,{\rm M}_\odot$), which, in particular, is the mass range of interest in this paper. Both sets of simulations have been evolved until $z=0$, starting from initial conditions generated at $z=127$. The initial particle set is constructed by assuming cosmological parameters estimated by {\it Planck} \citep{Planck2016}: $\Omega_0 = 0.3089$ (total matter density), $\Omega_{\rm b} = 0.0486$ (baryon density), $\Omega_\Lambda = 0.6911$ (dark energy density), $H_0 = 67.74$ kms$^{-1}$Mpc$^{-1}$ (Hubble parameter) and $\sigma_8 = 0.8159$ (linear rms density fluctuation in a sphere of radius 8 $h^{-1}$ Mpc at $z=0$). \begin{figure*} \centering \includegraphics[width=0.475\textwidth]{Figures/Vdisp_3D_12p0-12p5.pdf} \includegraphics[width=0.475\textwidth]{Figures/Vdisp_3D_13p0-13p5.pdf}\\ \includegraphics[width=0.475\textwidth]{Figures/Vdisp_3D_13p5-14p0.pdf} \includegraphics[width=0.475\textwidth]{Figures/Vdisp_3D_14p0-14p5.pdf}\\ \caption{The stacked 3D velocity dispersion profiles of dark matter (black) and star particles (red) in halos identified from TNG. The thick lines represent mean profiles, whereas the thin red curves show a selection of individual profiles from a subset of halos in each mass bin (as defined in the textbox within each sub-panel). The vertical dashed and solid lines, respectively, mark the averaged values of $r_{200c}$ and $r_{200m}$ for each mass bin. 
The shaded gray band is bounded by the convergence radius, below which the dispersion profiles can no longer be expected to have converged numerically. The shape of the velocity dispersion profile for stellar particles is similar to that of the dark matter, albeit with a lower amplitude.} \label{fig:3d_disp} \end{figure*} \subsection{Halo identification} \label{sec:identify} Dark matter particles in all TNG simulations are first linked together using the `friends-of-friends' (FOF) algorithm \citep[e.g.][]{Davis1985}, providing an initial catalog of dark matter halos. This FOF algorithm connects dark matter particles separated by at most 0.2 times the mean interparticle separation to form groups; the \textsc{Subfind} algorithm \citep{Springel2001} is then used to identify gravitationally bound substructures within each group. In this work, we will focus on the properties of FOF groups as a whole, in particular, considering the set of dark matter and star particles associated with each FOF group. Throughout this paper, we will refer to the mass of a FOF group as $M_{200}$, which is the mass contained within the radius $r_{200c}$, the radius which encloses a mean density equal to 200 times the critical density of the universe at the redshift at which the halo is identified. We will also quote a second radius, $r_{200m}$, which is the radius within which the enclosed density of the halo is equal to 200 times the mean {\it background} density of the universe at that redshift. Defined in this way, $r_{200m}$ is always greater than $r_{200c}$ (for typical NFW-like halos, $r_{200m} \approx 1.6 r_{200c}$). Owing to its substantially larger volume, we use TNG300 as our primary data set for halos more massive than $\log\left[M_{200}/{\rm M}_\odot\right]\geq13.0$; for less massive halos ($\log\left[M_{200}/{\rm M}_\odot\right]=\left[12.0, 13.0\right]$), we resort to TNG100, which offers somewhat better mass and force resolution in this regime. \section{Results} \label{sec:results} In the following subsections, we present the main results of our analysis. In Section~\ref{sec:v3d}, we showcase the diversity of dark matter and stellar velocity dispersion profiles (in 3D) measured in TNG halos. Section~\ref{sec:assembly} then establishes the connection between the diversity of these profiles and the assembly history of the halo. Finally, in Section~\ref{sec:model}, we present a simple model that describes the dependence of stellar velocity dispersion profile on physical properties of the host halo, such as its mass and concentration. \subsection{Dispersion profiles in three dimensions} \label{sec:v3d} We begin our investigation by considering the three dimensional velocity dispersion profiles of dark matter and star particles measured from TNG halos. Throughout this paper, we consider all particles within the spherical region enclosed by a radius $5r_{200m}$ from the center of the halo. Figure~\ref{fig:3d_disp} shows the 3D velocity dispersion profiles in bins of halo mass (separate panels). The thick lines in each color shows the mean profile of the dark matter (black) and the stars (red). To provide an impression of the diversity in dispersion profiles, a subset of individual stellar velocity curves are represented by the thin red lines. Finally, the gray shaded region marks the ``convergence radius'': the regime below which the velocity profiles cannot be expected to have converged numerically given the numerical settings adopted in TNG. 
To determine this radius, we use the criterion described in \cite{Power2003}. Tests of the convergence of these profiles as a function of resolution are shown in Appendix~\ref{sec:convergence}. The similarity in the shape of the dark matter and stellar velocity dispersion profiles is apparent; indeed, both curves show a prominent dip towards the exterior of the halo, typically occurring at around $\sim 1.2-1.5r_{200m}$. The location of this feature is commensurate with that of the so-called ``splashback'' radius of dark matter halos \citep[e.g.][]{Fillmore1984,Bertschinger1985,Adhikari2014,Diemer2014,More2015}, which demarcates the outermost caustic in the matter distribution. Indeed, a sharp rise in the velocity dispersion of material in the outskirts of halos is a hallmark of the region that separates infalling material from its surroundings. While the stacked dispersion profiles for dark matter and stars differ in their amplitude, it is striking to note the extent to which the stellar distribution traces the overall shape of the underlying dark matter potential, particularly for halos in the mass range $\log \left[ M_{200}/{\rm M}_\odot\right] \leq 14.0$. On inspecting the dispersion profiles of individual halos, we notice several cases where the `kink' in the exterior of the profile is barely noticeable; this is typically the case for well-isolated halos, in which the velocity dispersion drops precipitously beyond $\sim r_{200m}$. Where there is a second, neighboring halo, the velocity dispersion profiles turn upwards as the gravitational potential of the neighboring object starts to dominate. The stacked, averaged profiles show this effect clearly. In subsequent sections, we divert our attention away from 3D velocity dispersion profiles and focus instead on line-of-sight velocity dispersions as may be obtained from deep measurements of galaxy spectra. \subsection{The effect of assembly history on velocity dispersion profiles} \label{sec:assembly} In this subsection, we establish the connection between the diversity of velocity dispersion profiles and the assembly history of halos in IllustrisTNG{}. In Figure~\ref{fig:los_disp_assembly}, we identify halos from TNG300 in the mass range $\log\left[ M_{200}/{\rm M}_\odot\right]=13.0-13.5$. We differentiate between halos with ``early'' and ``late-time'' formation histories by defining, respectively, the epoch at which 10\% (top row) and 90\% (bottom row) of the final-day (dark matter) mass of the halo has collapsed. For each definition, we then select the halos that are the 20\% earliest-forming (solid lines) and the 20\% latest-forming (dashed lines). This selection therefore singles out halos at fixed $z=0$ mass that are most discrepant in their early and late-time formation histories. The left-hand panels in Figure~\ref{fig:los_disp_assembly} show the corresponding line-of-sight velocity dispersion profiles for dark matter and star particles. Solid lines show the mean stacked profile, while the shaded bands encompass the scatter around the mean, showing the range in profiles spanned by the halos that are picked out when selected according to their formation history.
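For reference, the line-of-sight dispersion profiles shown in these figures are conceptually simple to construct from simulation particle data. The minimal sketch below assumes hypothetical arrays \texttt{pos} and \texttt{vel} of halo-centric particle positions and velocities, takes the $z$-axis as the line of sight, and bins the standard deviation of the velocity component in projected radius; the actual measurement pipeline may differ in its details.
\begin{verbatim}
import numpy as np

def sigma_los_profile(pos, vel, r_edges):
    """Line-of-sight velocity dispersion in bins of projected radius.

    pos, vel : (N, 3) arrays of halo-centric positions and velocities;
               the z-axis is taken as the line of sight.
    r_edges  : (n+1,) bin edges in projected radius r_p.
    Returns bin centers and sigma_los(r_p)."""
    r_p   = np.hypot(pos[:, 0], pos[:, 1])   # projected halocentric radius
    v_los = vel[:, 2]
    idx   = np.digitize(r_p, r_edges) - 1    # bin index for each particle
    sigma = np.array([v_los[idx == i].std() if np.any(idx == i) else np.nan
                      for i in range(len(r_edges) - 1)])
    return 0.5*(r_edges[1:] + r_edges[:-1]), sigma
\end{verbatim}
Stacked profiles then follow by averaging the per-halo outputs in bins of normalized radius, e.g. $r_p/r_{200m}$.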
\begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Figures/VdispProjected_los_z0p10_13p0-13p5.pdf} \\ \includegraphics[width=0.9\textwidth]{Figures/VdispProjected_los_z0p90_13p0-13p5.pdf} \\ \caption{The scatter in the line-of-sight velocity dispersion profiles of dark matter and stars after selecting halos by formation time {\it at fixed final-day mass} ($\log\left[M_{200}/{\rm M}_\odot\right]=13.0-13.5$ in this example). The profiles have been computed as a function of the projected halocentric radius, $r_p$. {\bf Top row}: selecting the 20\% earliest-forming and 20\% latest-forming halos as determined by the epoch by which 10\% of the halo's $z=0$ dark matter content was assembled (right panel); the corresponding scatter in the line-of-sight velocity dispersion profiles, $\sigma^{{\rm los}}(r_p)$ is shown in the left panel as the shaded portions of the profiles. {\bf Bottom row}: as in the top row, but now selecting halos that are the 20\% earliest and latest-forming as defined by the redshift by which 90\% of their $z=0$ halo mass was assembled. This figure demonstrates that the scatter in $\sigma^{{\rm los}}(r_p)$ in the inner parts of halos ($r \lesssim 0.1r_{200m}$) is driven primarily by scatter in the early assembly history of these halos; on the other hand, the scatter in $\sigma^{{\rm los}}(r_p)$ in the outskirts of halos is driven by variations in the late-time assembly of the same objects.} \label{fig:los_disp_assembly} \end{figure*} From these two panels, we see a clear spatial dependence of the scatter in velocity dispersions when selecting on halo formation time. In particular, we find that halos that differ most in their late-time formation history show scatter in the outskirts of their velocity dispersion profiles ($r\gtrsim 0.1r_{200m}$). On the other hand, the scatter is larger in the regime of the inner profile when selecting halos that differ most in their early formation history. \begin{figure*} \centering \includegraphics[width=0.475\textwidth]{Figures/VdispProjected_fit_los_z0p90_12p0-12p5.pdf} \includegraphics[width=0.475\textwidth]{Figures/VdispProjected_fit_los_z0p90_13p0-13p5.pdf}\\ \includegraphics[width=0.475\textwidth]{Figures/VdispProjected_fit_los_z0p90_13p5-14p0.pdf} \includegraphics[width=0.475\textwidth]{Figures/VdispProjected_fit_los_z0p90_14p0-14p5.pdf}\\ \caption{Normalized line-of-sight velocity dispersion profiles for dark matter (black) and stars (red) as a function of projected halocentric radius, $r_p$, in TNG halos. The blue dashed curves represent fits to the stellar velocity dispersion profile using Eq.~(\ref{eq:disp_eq}). The lower sub-panels show the radial variation of the slope, ${\rm d} \log \tilde{\sigma} / {\rm d} \log r$, where $\tilde{\sigma} = \sigma^{{\rm los}}(r_p) / \sigma^{{\rm los}}_{{\rm max}}$. The horizontal dotted line marks the case where the normalized stellar velocity dispersion falls to $60\%$ of its maximum value; the intersection of this line with the velocity dispersion profile is typically consistent with $r_{200m}$. Note that this is also the location where the slope, ${\rm d}\log \tilde{\sigma} / {\rm d}\log r$, reaches its minimum value.} \label{fig:los_disp_fit} \end{figure*} Figure~\ref{fig:los_disp_assembly} is an example of how the inside-out formation of dark matter halos manifests in the observable properties of visible tracers.
In the standard picture of hierarchical structure formation, the central cores of halos collapse first, while the bulk of the mass of the halo continues to accumulate in the outskirts of halos through accretion and mergers. A tight scatter in velocity dispersions reflects regions of halos that are in some semblance of virial equilibrium. As the cores of halos collapse early on, selecting on differences in the early accretion histories of halos directly reflects differences in the central velocity dispersions of dark matter and stars. When selecting on differences in the late-time assembly history of halos, we specifically select objects that show large variances in the outskirts of their dispersion profiles. As seen in Figure~\ref{fig:los_disp_assembly}, the inner regions ($r\lesssim 0.1 r_{200m}$) show very little scatter as this portion of the halo is only marginally affected by late-time accretion events. Figure~\ref{fig:los_disp_assembly} also shows that the impact of assembly history is imprinted in the dispersion profile for both dark matter and stars. The size of the scatter is also comparable between the two sets of tracers. The bottom row of this figure hints that scatter in the exterior profile may be somewhat larger for stars than for dark matter; this implies that following a late-time merger event, it takes longer for the stellar tracers to virialize than it does for the dark matter. The results suggest that luminous tracers may indeed be used to decipher aspects of the formation history of their host dark matter halos. In the following subsection, we investigate this in more detail and tie together the quantities that can be measured from the stellar velocity dispersion profiles with metrics that are associated with the host halo. \subsection{A model for the velocity dispersion profiles of dark matter halos} \label{sec:model} In the previous subsection, we have seen a distinct connection between the early/late-time formation histories of dark matter halos and the imprint this leaves on the velocity dispersion profiles of dark matter and stars at the present day. In this subsection, we establish this relationship more formally. The general shape of the normalized line-of-sight velocity dispersion profiles may, to first order, be described simply by a parabola in $\log r_p$, defined by a normalization term and the radial location of the peak of this function. While it is in principle possible to predict the dispersion profile starting from the NFW profile itself \citep[e.g.][]{Binney1982,Lokas2001}, here we define a simpler functional form and parametrize the {\it stellar} dispersion profile as: \begin{equation} \label{eq:disp_eq} \frac{\sigma^{{\rm los}}(r_p)}{\sigma^{{\rm los}}_{{\rm max}}} = 1 + \chi_\star \left( \log\left[ \frac{r_p}{r_\star}\right] \right)^2\;, \end{equation} in which $r_p$ is the projected distance from the halo center, $\chi_\star$ is a normalization factor, and $r_\star$ is the characteristic radius at which the profile is maximum ($\sigma^{{\rm los}}(r_\star) = \sigma^{{\rm los}}_{{\rm max}}$). \begin{figure*} \centering \includegraphics[width=0.48\textwidth]{Figures/mass_norm.pdf} \includegraphics[width=0.48\textwidth]{Figures/mass_rmax.pdf} \caption{The halo mass dependence of the parameters $\chi_\star$ and $r_\star$ obtained through fitting Eq.~(\ref{eq:disp_eq}) to stellar velocity dispersion profiles in TNG.
The star symbols represent the values of these parameters obtained from fitting the mean dispersion profile in any given mass bin, while the error bars show the range of $\chi_\star$ values measured from fitting the 16$^{\rm th}$-84$^{\rm th}$ percentile scatter around the mean profile. {\bf Left panel}: Comparison of the relationship between $M_{200}$ and $\chi_\star$ with the (rescaled) mass-concentration relation of DMO halos, as predicted by the model of \citet{Ludlow2016}. The shaded blue region encompasses the typical scatter in this relation of the order of 0.13 dex \citep[see, e.g.,][]{Dutton2014}. The halo mass dependence of $\chi_\star$ is very similar to that of the halo concentration, further establishing the intimate connection between the shape of the stellar velocity dispersion profiles and the assembly history of the host dark matter halo. The agreement worsens below $\log\left[ M_{200}/{\rm M}_\odot \right] \lesssim 12.5$, the mass scale below which the peak of the dispersion profile is only marginally resolved in the TNG100 simulation (Figure~\ref{fig:los_disp_fit}). {\bf Right panel}: The halo mass dependence of $r_\star$, which is the characteristic radius at which the velocity dispersion profile reaches its peak value. The relationship between halo mass and $r_{{\rm max}}$, the radius at which the 3D circular velocity profile for TNG halos peaks, is shown in orange (rescaled by a factor of 0.1 to aid comparison with $r_\star$). The same relation computed in the dark matter-only version of TNG is shown in gray. In all cases, we see a strong, positive correlation between halo mass and each of these characteristic radii. The blue line shows the best-fitting power law (defined in Eq.~\ref{eq:rstar_eq}) that describes the $M_{200}-r_\star$ relation measured in TNG.} \label{fig:mass_rmax_norm} \end{figure*} Figure~\ref{fig:los_disp_fit} shows the fits made to the normalized velocity dispersion profiles using Eq.~(\ref{eq:disp_eq}), separated into bins of host halo mass. The fit is performed only in the region to the right of the shaded gray box, which marks the radius within which particle dynamics are unreliable due to finite force resolution. The panels show only the mean normalized profiles, and we leave out the scatter for clarity. The blue dashed line is the best-fit profile obtained using Eq.~(\ref{eq:disp_eq}). The two-parameter model provides a good fit to the dispersion profiles, across the full range of halo mass. It is interesting to note that, when expressed in normalized units, the `kink' in the velocity dispersion profile -- which is related to the value of $r_{200m}$ (Section~\ref{sec:v3d}) -- appears at the radius at which $\sigma^{{\rm los}}(r_p) \approx 0.6\sigma^{{\rm los}}_{{\rm max}}$. This is certainly the case for the mean stacked profile in each mass bin, and the exact location depends on the assembly history of the halo (as indicated by the width of the shaded regions in Figure~\ref{fig:los_disp_assembly}). This suggests that it may be possible, to within a factor of a few, to {\it predict} the value of $r_{200m}$ by estimating the halocentric distance at which the line-of-sight velocity dispersion profile drops to $\sim 0.6$ of its maximum value. We also note that at this radius, the logarithmic slope of the normalized dispersion profile (denoted as $\tilde{\sigma}$) also reaches its minimum value; this is shown in the lower sub-panels in each frame in Figure~\ref{fig:los_disp_fit}.
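In practice, extracting $\chi_\star$ and $r_\star$ from a measured profile is a standard two-parameter least-squares problem. The sketch below fits Eq.~(\ref{eq:disp_eq}) to synthetic data standing in for a measured profile; the logarithm is assumed to be base 10, and the injected parameter values and noise level are purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def disp_model(r_p, chi, r_star):
    """Eq. (disp_eq): normalized LOS dispersion (log10 assumed)."""
    return 1.0 + chi * np.log10(r_p / r_star)**2

# Synthetic 'measured' profile: chi_star = -0.15, r_star = 40 kpc, 3% noise.
rng = np.random.default_rng(0)
r_p = np.logspace(0.5, 3.5, 30)                 # 3 kpc ... 3 Mpc
sig = disp_model(r_p, -0.15, 40.0) * (1 + 0.03*rng.standard_normal(r_p.size))

(chi_fit, rstar_fit), _ = curve_fit(disp_model, r_p, sig, p0=(-0.1, 30.0))
print(f"chi_star = {chi_fit:.3f}, r_star = {rstar_fit:.1f} kpc")
\end{verbatim}
In an application to simulated (or observed) profiles, the fit would of course be restricted to radii beyond the convergence radius, as described above.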
The intersection of the horizontal dotted line in each panel of Figure~\ref{fig:los_disp_fit}, which marks the condition $\sigma^{{\rm los}}(r_p) \approx 0.6 \sigma^{{\rm los}}_{{\rm max}}$, with the velocity dispersion profile shows that indeed this radial location is coincident with $r_{200m}$. For objects in the range $\log\left[M_{200}/{\rm M}_\odot\right]>13.0$, the value of $r_{200m}$ estimated from the dispersion profile is quite close to its true value, while the agreement is substantially worse in the regime of Milky Way-mass halos (top-left panel of Figure~\ref{fig:los_disp_fit}). This may be partly due to the limited numerical resolution afforded by TNG100, the simulation box from which these halos have been extracted. As the top-left panel shows, the full parabolic shape is not fully resolved in TNG100 for this mass range. In particular, the profile peaks below the convergence radius, the portion of the halo internal to which issues pertaining to finite force softening start to dominate (gray shaded region). It is worthwhile exploring the physical interpretation of the fit parameters $\chi_\star$ and $r_\star$ in more detail -- in particular, their connection to the properties of the host dark matter halo. Figure~\ref{fig:mass_rmax_norm} shows the mass dependence of these parameters as obtained through fitting Eq.~(\ref{eq:disp_eq}) to the velocity dispersion profiles of the stars in TNG halos. \begin{figure*} \centering \includegraphics[width=\textwidth]{Figures/VdispProjected_model_allmass.pdf} \caption{Normalized line-of-sight velocity dispersion profiles for dark matter and stars as a function of projected halocentric radius, $r_p$, in TNG halos. The colors represent results from different mass bins, and each pair of profiles has been offset vertically for clarity. Furthermore, the symbols representing individual profiles are made fainter below the convergence radius. The vertical solid lines mark the location of $r_{200m}$ for the corresponding mass bin. The thick curves represent predictions of the halo mass and concentration-based model defined in Eq.~(\ref{eq:disp_eq_recast}), where the concentration for any given mass bin is predicted using the \citet{Ludlow2016} model. We find that across the range of halo masses considered in this work, the model in Eq.~(\ref{eq:disp_eq_recast}) provides a good fit to the velocity dispersion profiles measured in the TNG simulations.} \label{fig:model_compare} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/r200m_pred.pdf} \caption{Ratio between the true and predicted values of $r_{200m}$, where the latter is determined by using Eq.~(\ref{eq:disp_eq_recast}) to estimate where the normalized line-of-sight velocity dispersion profile drops to $\sim 0.6$ of its peak value (see main text for details). The gray shaded band marks the 25\% error region around the true value. For halos more massive than low-mass groups, which are the best-resolved objects in our dataset, the agreement is generally quite good, often to within 25\%. The level of agreement towards lower halo mass is substantially poorer.} \label{fig:rvir_pred} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Figures/VdispProjected_fit_los_z0p50_star_13p5-14p0.pdf} \caption{The normalized line-of-sight velocity dispersion profile of halos in the mass range $\log\left[M_{200}/{\rm M}_\odot\right]=13.5-14.0$, split by {\it stellar age}.
The objects are split into two halves: those with `older' stars, represented in red, and those with `younger' stars shown in blue. Here, stellar age is defined as the epoch at which the halo accumulated 50\% of its final-day {\it stellar} mass. The measurements from the simulated halos are represented by the star symbols; the smooth curves are fits to the measured profiles obtained using Eq.~(\ref{eq:disp_eq_recast}). As noted in the legend, we find that our model successfully predicts the expected trend where older objects have higher concentrations ($c_{200}$) than younger ones, although the difference within a fixed mass bin is small.} \label{fig:vlos_stellar_fit} \end{figure} Mathematically, the normalization parameter, $\chi_\star$, determines the concavity (or width) of the profile which, as we discussed in Section~\ref{sec:assembly}, is related to the assembly history of the host halo. For dark matter halos, a simple parameter that is often used to characterize the assembly history of halos is the concentration, $c_{200}$, defined as the ratio $r_{200c}/r_s$ where, for a dark matter halo described by an NFW profile, $r_s$ is the `scale radius' at which the slope of the density profile is $-2$. The solid blue line in the left-hand panel of Figure~\ref{fig:mass_rmax_norm} shows the median halo mass-concentration relation for dark matter halos in the {\it Planck} cosmology, as predicted by the model described in \cite{Ludlow2016}. In this model, the concentration of NFW halos may be predicted using only the initial power spectrum, which determines (at least to first order) the subsequent assembly history of the halo. The predictions of the \cite{Ludlow2016} model have been rescaled to facilitate comparison of their relationship with $\chi_\star$. The blue shaded region shows a constant scatter of 0.13 dex in the halo mass-concentration relation \citep[see, e.g.,][]{Dutton2014}. It is striking to note that the gradient of the $M_{200}$-$\chi_\star$ relation obtained from the stellar velocity dispersion profile is nearly identical to that of the $M_{200}$-$c_{200}$ of DMO halos across this mass scale (albeit with a different normalization). This reaffirms our earlier interpretation of $\chi_\star$ as a parameterization of the assembly history of the host dark matter halo. This suggests that given a halo of mass $M_{200}$, the parameter $\chi_\star$, which sets the normalization of the profile in Eq.~(\ref{eq:disp_eq}), may be {\it predicted} using the standard mass-concentration relation of dark matter halos. Focusing next on the relationship between $r_\star$ and $M_{200}$ (right-hand panel of Figure~\ref{fig:mass_rmax_norm}), we find a strong correlation between these quantities. This is unsurprising: the radius at which the velocity dispersion profile peaks is tied directly to the depth of the potential well of the halo (and, therefore, its mass). Indeed, the parameter $r_\star$ bears a similar relationship with halo mass as the more familiar parameter, $r_{{\rm max}}$, which denotes the radius at which the halo's 3D circular velocity profile reaches its maximum. This is represented by the solid gray line (rescaled by a factor of 0.1), which shows the $M_{200}$-$r_{{\rm max}}$ relationship for halos in this mass range in TNG DMO; the corresponding relation for the full physics TNG simulation is shown in orange. 
The $M_{200}-r_{{\rm max}}$ relation, which is nearly identical in the full physics and DMO versions of TNG in the mass scale $\log\left[M_{200}/{\rm M}_\odot\right] \gtrsim 13.0$, steepens at lower masses in the hydrodynamical simulation. This is likely the consequence of gas cooling and subsequent star formation, which is concentrated primarily in the central regions of halos, thereby contracting the radius at which the peak circular velocity is achieved in the hydrodynamical simulation compared to TNG DMO; the effect on the largest halos is modest. Both observations are consistent with the conclusions of \cite{Lovell2018}, who investigated the impact of baryon physics on the dark matter content of TNG halos. The $M_{200}$-$r_\star$ relationship measured from the stellar velocity dispersion profile shows a similar gradient to the $M_{200}$-$r_{{\rm max,hydro}}$ in the mass scale $\log\left[M_{200}/{\rm M}_\odot\right] \lesssim 14.0$, but exhibits a somewhat steeper mass dependence in the regime of galaxy clusters. In TNG, we find that the relationship between $r_\star$ and $M_{200}$ can be described by the following functional form: \begin{equation} \label{eq:rstar_eq} r_\star = 7.1 \left( \frac{M_{200}}{10^{13}\,{\rm M}_\odot} \right)^{0.8} {\rm kpc}; \end{equation} which, when combined with the realization that the normalization parameter $\chi_\star$ may be predicted by the mass-concentration relation allows us to recast Eq.~(\ref{eq:disp_eq}) in terms of halo mass, $M_{200}$, and halo concentration, $c_{200}$, only: \begin{equation} \label{eq:disp_eq_recast} \frac{\sigma^{{\rm los}}(r_p)}{\sigma^{{\rm los}}_{{\rm max}}} = 1 + \log \left(\frac{c_{200}}{8}\right) \log^2 \left[ \frac{r_p}{7.1 \,{\rm kpc}} \left(\frac{10^{13}\,{\rm M}_\odot}{M_{200}}\right)^{0.8}\right]. \end{equation} The fact that line-of-sight velocity dispersion profile can be expressed in terms of halo mass and concentration should come as no surprise since it is indeed possible to predict the velocity dispersion starting from the NFW profile itself \citep[see, e.g.][for the functional form of the {\it radial} velocity dispersion in terms of the same quantities]{More2009}. The formula expressed in Eq.~(\ref{eq:disp_eq_recast}) provides a simple and convenient functional form that accurately captures the form of these profiles measured in TNG. In Figure~\ref{fig:model_compare}, we show comparisons between predictions of the model defined by Eq.~(\ref{eq:disp_eq_recast}) (solid lines) and the dispersion profiles actually measured in our simulations (symbols, made fainter below the convergence radius). The open circles represent the dispersion profiles of the dark matter component, while the filled stars represent the stellar velocity dispersion profile. Different bins of halo mass are represented by the different colors. Lines and symbols of a given color have been renormalized by a constant factor to offset them for clarity. The average location of $r_{200m}$ is shown using the vertical solid lines. The model defined in Eq.~(\ref{eq:disp_eq_recast}), in which the only two free parameters are the mass and concentration of the halo, does a good job of describing the shape of the velocity dispersion profiles across the full range of host halo masses. Given the relatively simple parameterization that we have adopted, the model fails to fully capture the intricacies of the dispersion profile, particularly at its extremities, but the quality of the fit is good in and around the peak of the profile. 
Using the fact that the tentative location of $r_{200m}$ may be identified as the radius at which velocity dispersion reaches $\sim0.6$ its peak value, we can extrapolate the smooth profiles defined by Eq.~(\ref{eq:disp_eq_recast}) to estimate this radius in each mass bin. Figure~\ref{fig:rvir_pred} compares the quality of our predictions for $r_{200m}$ (using Eq.~\ref{eq:disp_eq_recast}) quantitatively. Here, we show the ratio of the predicted value to the true value of $r_{200m}$ as a function of halo mass. The gray shaded band shows a 25\% error region about the true value. In general, we find that the value of $r_{200m}$ estimated by extrapolating Eq.~(\ref{eq:disp_eq}) to $\sigma^{{\rm los}}(r_p) \approx 0.6 \sigma^{{\rm los}}_{{\rm max}}$ agrees with the true, averaged value of $r_{200m}$ to within a factor of two, and often to within 25\%. The exceptions are at the extreme ends of the mass scale, where finite resolution affects our measurements at the scale of halos less massive than groups, while finite box size (i.e. statistics) in TNG300 affects our ability to measure the profile at the scale of rich clusters. The behavior of this ratio at the low mass end stresses the importance of resolving the peak of the dispersion profile when using Eq.~(\ref{eq:disp_eq_recast}) to predict $r_{200m}$. Note that the exercise of measuring the value of $r_{200m}$ by estimating where the dispersion profile falls to 60\% of its peak value is most effective when the dispersion profiles of several objects have been stacked. Individual galaxies are likely to be noisy and will exhibit significant deviation from spherical symmetry (see Figure~\ref{fig:3d_disp}); the stacking procedure helps in reducing this noise. Stacking will, of course, also be necessary to probe the exterior portion of the dispersion profile, where the surface brightness in individual galaxies is extremely low. It is an instructive exercise to compare the quality of these predictions with existing observational techniques for measuring the boundary of halos. Recent efforts include the measurement of the weak lensing mass profile \citep{Chang2018} or of (projected) galaxy density profiles around massive clusters (typically with mass $ \log \left[M_{200}/{\rm M}_\odot \right] \gtrsim 14.2$), selected either optically \citep[e.g.][]{More2016,Baxter2017,Murata2020,Bianconi2020} or via the Sunyaev-Zel'dovich effect \citep{Shin2019}. These techniques have typically measured the splashback radius to within 10-20\% of the values predicted from $N$-body simulations. Yet another way to estimate the virial radius (or, more accurately, the virial mass) of halos from observations is to use the halo abundance matching technique, which assigns the measured stellar mass to halo mass by pairing galaxies with halos with the same cumulative number densities \citep[e.g.][]{Kravtsov2004,Tasitsiomi2004,Vale2004}. After applying the abundance matching to technique to a sample of galaxies from the Sloan Digital Sky Survey, \cite{Calderon2019} find this method to yield errors in the predicted halo mass at the 0.27-0.90 dex level; the error depends on the mass scale of interest, as well as the halo property used as a proxy for abundance matching (halo mass, maximum circular velocity etc.). In comparison to these methods, the methodology for using the velocity dispersion profile for estimating the halo virial radius as outlined in this paper is competitive, particularly in the regime of clusters. 
Furthermore, it serves as an orthogonal measurement, which can be employed jointly with existing techniques to obtain more robust estimates of the halo virial radius. As a final test of our main conclusions, in Figure~\ref{fig:vlos_stellar_fit} we show the normalized line-of-sight velocity dispersion profiles of well-resolved halos in the mass range $\log\left[M_{200}/{\rm M}_\odot\right]=13.5-14.0$, split now by the ages of their {\it stellar populations}. We use a definition analogous to the one for halo formation time, such that the stellar age of a halo is defined as the epoch at which 50\% of the object's final-day {\it stellar mass} was accumulated. The 50\% oldest halos according to this definition are shown in red, while the remaining 50\% younger halos are shown in blue. It is immediately clear that selecting halos by stellar age (at fixed mass) also separates them in the space of the velocity dispersion profiles, just as it did when selecting on halo formation time (Figure~\ref{fig:los_disp_assembly}). The solid lines are the best-fit profiles obtained using the model in Eq.~(\ref{eq:disp_eq_recast}); the best-fit concentration, $c_{200}$, is listed in the legend. We find that our model predicts that older halos have higher concentrations than younger halos, as expected. Furthermore, the range of concentrations predicted is also consistent with the values expected of halos at this mass scale. The difference in concentration between the old and young halo subsets is not large ($\sim 10\%$). Indeed, the predicted difference is larger if the populations are selected using {\it halo} formation time instead, which is to be expected given that the concentration is tied more directly to the assembly of the dark matter halo than it is the assembly of the stars. Finally, we also note that the familiar kink in the outskirts of the profile are present in both the young and old stellar population subsets. The results presented in this section suggest that the velocity dispersion profiles of stars contain substantial amounts of information on the dark matter halos in which they reside. In particular, they encode both the mass and the memories of the accretion history of the halo. The function presented in Eq.~(\ref{eq:disp_eq}) contains two free parameters, $r_\star$ and $\chi_\star$ which capture this information. Furthermore, these quantities are closely associated with their analogues in NFW halos formed in DMO simulations, $r_{{\rm max}}$ and the concentration, $c_{200}$, which allows us to then rewrite Eq.~(\ref{eq:disp_eq}) into a form that depends on halo mass and concentration explicitly (Eq.~\ref{eq:disp_eq_recast}). In principle, therefore, a measurement of the stellar velocity dispersion profile allows a potential avenue to infer halo mass and concentration directly. \section{Conclusions} \label{sec:conclusions} In this paper, we have explored the dynamics of dark and luminous tracers of the potential wells of dark matter halos. In particular, we used the IllustrisTNG{} cosmological, hydrodynamical simulations to measure the (line-of-sight) velocity dispersion profiles of dark matter and stars, $\sigma^{{\rm los}}(r_p)$, and established the connection between the form of these profiles and the properties of the encompassing dark matter halo. In order to sample a wide range of halo mass, we combined data from the publicly-available 100 Mpc (TNG100) and 300 Mpc (TNG300) TNG simulation volumes. 
Our main results are summarized as follows: \begin{itemize} \item The velocity dispersion profiles for halos exhibit a universal shape (as a function of halo mass), and is nearly identical for both dark matter and stars. The exterior profile shows a characteristic `kink', on average around $\sim 1.2\,r_{200m}$, consistent with the expected location of the so-called `splashback radius' of the halo (Figure~\ref{fig:3d_disp}). \item Halos at fixed mass exhibit significant scatter around the mean dispersion profile shape. This scatter may be explained, at least in part, by differences in the assembly history of the halo compared to average assembly history of halos in that mass bin. In particular, the late-time formation history of the halo influences the shape of the exterior portion of the velocity dispersion profile ($r\gtrsim 0.1 r_{200m}$), while the region interior to this is more strongly affected by the early accretion history of the halo (Figure~\ref{fig:los_disp_assembly}). \item When expressed in normalized units, the kink in the exterior of the profile occurs approximately where the velocity dispersion profile drops to 60\% its peak value ($\sigma^{{\rm los}}(r_p) \approx 0.6 \sigma^{{\rm los}}_{{\rm max}}$). The (normalized) velocity dispersion profiles obtained from both dark matter and stars are well fit by the two parameter functional form presented in Eq.~(\ref{eq:disp_eq}). This simple model provides a good fit to the velocity dispersion profile across a wide range of halo mass (Figure~\ref{fig:los_disp_fit}). \item There are distinct connections between the fit parameters in Eq.~(\ref{eq:disp_eq}) and the properties of the host dark matter halo (Figure~\ref{fig:mass_rmax_norm}). In particular, we find that the coefficient in this equation, $\chi_\star$, bears a near identical dependence on halo mass as the concentration of the host halo. The characteristic length scale in this fitting formula, $r_\star$, is directly correlated with the mass of the halo (Eq.~\ref{eq:rstar_eq}). \item The consequence of this relationship is that it allows us to recast Eq.~(\ref{eq:disp_eq}) in terms of halo mass and concentration only, in the form described in Eq.~(\ref{eq:disp_eq_recast}). The resulting model does a reasonably good job of matching the shape of the velocity dispersion profiles measured in the TNG simulation suite, across the full range of halo masses considered in this work (Figure~\ref{fig:model_compare}). This figure also suggests that halo mass and concentration may be inferred directly from (stacked) stellar velocity dispersion profiles measured in galaxies. \item Using Eq.~(\ref{eq:disp_eq_recast}) to estimate where $\sigma^{{\rm los}}(r_p) \approx 0.6 \sigma^{{\rm los}}_{{\rm max}}$, one can `predict' the virial radius, $r_{200m}$, based on the measured velocity dispersion profile. In general, the predicted and true values of $r_{200m}$ agree to within a factor of two, although it is worse for halos with mass $\log\left[M_{200}/{\rm M}_\odot\right]<13.0$, where we do not properly resolve the peak of the dispersion profile (Figure~\ref{fig:rvir_pred}). \item The velocity dispersion profiles also reflect differences in the assembly of the stellar component of halos at fixed mass. In particular, we find clear differences in the velocity dispersion profiles of halos selected based on young/old stellar populations, in a manner that is consistent with the dependence on the halo formation time (Figure~\ref{fig:vlos_stellar_fit}). 
Encouragingly, we find that using Eq.~(\ref{eq:disp_eq_recast}) to estimate the concentrations of halos at fixed mass, split by stellar age, predicts that halos with younger stellar populations have lower concentrations on average than halos with an older stellar component, as expected. \end{itemize} Our study demonstrates the value in measuring the dynamics of tracers of the gravitational potential in the very outskirts of galaxies. Accurate measurements in this regime may inform us of both the mass and details of the assembly history of the host dark matter halo. Indeed, the measurement of the outskirts of these entities may soon be within reach, at least after stacking profiles of several objects. For example, the surface brightness of massive clusters around $r_{200m}$ ranges between $32-36$ mag arcsec$^{-2}$ at $z=0.25$ \citep[e.g.][]{Deason2020b}, which will be within the operational capabilities of future observational facilities like the Rubin Observatory Legacy Survey of Space and Time \citep[LSST,][]{Ivezic2019}, {\it Euclid} \citep{Laureijs2011}, and {\it The Nancy Grace Roman Space Telescope} \citep{Spergel2015}. As these and other facilities geared towards low surface brightness measurements begin to come online \citep[see][for a review]{Kaviraj2020}, the feasibility of running the deep and wide surveys necessary for this kind of measurement will become reality. \acknowledgments We are grateful to the referee for providing us with a constructive report, and for suggesting the test in Figure~\ref{fig:vlos_stellar_fit}, enhancing the scope of our results and the overall quality of this work. We thank the IllustrisTNG collaboration for making the data used in this paper available for public use (\href{https://www.tng-project.org/}{https://www.tng-project.org/}). We thank Benedikt Diemer for insightful discussions during the course of this project, and for providing helpful feedback on this manuscript. We are also grateful to Lars Hernquist and Ken Freeman for their input at the onset of this project. This project made use of the SAO/NASA Astrophysics Data System (ADS), the arXiv.org preprint server, as well as the {\tt matplotlib} \citep{Hunter2007}, {\tt numpy} \citep{Numpy2020}, and {\tt scipy} \citep{Scipy2020} python packages. S.B. is supported by Harvard University through the ITC Fellowship.
1,116,691,497,060
arxiv
\section{Introduction and Notations} In 1967 Robert Katz and Michael Grossman created the first system of non-Newtonian calculus, which we call the geometric calculus. In 1970 they had created an infinite family of non-Newtonian calculi, each of which differs markedly from the classical calculus of Newton and Leibniz. Among other things, each non-Newtonian calculus possesses four operators : a gradient (i.e. an average rate of change), a derivative, an average, and an integral. For each non-Newtonian calculus there is a characteristic class of functions having a constant derivative. In view of pioneering work carried out in this area by Grossman and Katz \cite{GrossmanKatz} we will call this calculus as multiplicative calculus, although the term of exponential calculus can also be used. The operations of multiplicative calculus will be called as multiplicative derivative and multiplicative integral. We refer to Grossman and Katz \cite{GrossmanKatz}, Stanley \cite{Stanley}, Bashirov et al. \cite{BashirovMisirh,BashirovKurpinar}, Grossman \cite{Grossman83} for elements of multiplicative calculus and its applications. An extension of multiplicative calculus to functions of complex variables is handled in Bashirov and R\i za \cite{BashirovRiza}, Uzer \cite{Uzer10}, Bashirov et al. \cite{BashirovKurpinar}, \c{C}akmak and Ba\c{s}ar \cite{CakmakBasar}, Tekin and Ba\c{s}ar\cite{TekinBasar}, T\"{u}rkmen and Ba\c{s}ar \cite{TurkmenBasar}. In \cite{KADAK3}, Kadak and \"{O}zl\"{u}k studied the generalized Runge-Kutta method with respect to non-Newtonian calculus. Kadak et al \cite{KadakEfe,kadak2} studied certain new types of sequence spaces over the Non-Newtonian Complex Field. Geometric calculus is an alternative to the usual calculus of Newton and Leibniz. It provides differentiation and integration tools based on multiplication instead of addition. Every property in Newtonian calculus has an analog in multiplicative calculus. Generally speaking multiplicative calculus is a methodology that allows one to have a different look at problems which can be investigated via calculus. In some cases, for example for growth related problems, the use of multiplicative calculus is advocated instead of a traditional Newtonian one. The main aim of this paper is to construct the difference sequence space $l_\infty^{G} \left({\Delta}_G\right)$ over geometric complex numbers which forms a Banach space with the norm defined on it and obtain the Geometric Newton-Gregory interpolation formulae which are more useful than Newton-Gregory interpolation formulae. We should know that all concepts in classical arithmetic have natural counterparts in $\alpha-arithmetic.$ Consider any generator $\alpha$ with range $A\subseteq \mathbb{C}.$ By $\alpha- arithmetic,$ we mean the arithmetic whose domain is $A$ and operations are defined as follows. For $x, y \in A$ and any generator $\alpha,$ \begin{align*} &\alpha -addition &x\dot{+}y &=\alpha[\alpha^{-1}(x) + \alpha^{-1}(y)]\\ &\alpha-subtraction &x\dot{-}y&=\alpha[\alpha^{-1}(x) - \alpha^{-1}(y)]\\ &\alpha-multiplication &x\dot{\times}y &=\alpha[\alpha^{-1}(x) \times \alpha^{-1}(y)]\\ &\alpha-division &\dot{x/y}&=\alpha[\alpha^{-1}(x) / \alpha^{-1}(y)]\\ &\alpha-order &x\dot{<}y &\Leftrightarrow \alpha^{-1}(x) < \alpha^{-1}(y). \end{align*} If we choose \textit{$exp$} as an $\alpha-generator$ defined by $\alpha (z)= e^z$ for $z\in \mathbb{C}$ then $\alpha^{-1}(z)=\ln z$ and $\alpha-arithmetic$ turns out to Geometric arithmetic. 
\begin{align*} &\alpha -addition &x\oplus y &=\alpha[\alpha^{-1}(x) + \alpha^{-1}(y)]& = e^{(\ln x+\ln y)}& =x.y ~geometric ~addition\\ &\alpha-subtraction &x\ominus y&=\alpha[\alpha^{-1}(x) - \alpha^{-1}(y)]&= e^{(\ln x-\ln y)} &= x\div y, y\ne 0 ~geometric ~subtraction\\ &\alpha-multiplication &x\odot y &=\alpha[\alpha^{-1}(x) \times\alpha^{-1}(y)]& = e^{(\ln x\times\ln y)} & = ~x^{\ln y}~ geometric ~multiplication\\ &\alpha-division &x\oslash y&=\alpha[\alpha^{-1}(x) / \alpha^{-1}(y)] & = e^{(\ln x\div \ln y)}& = x^{\frac{1}{\ln y}}, y\ne 1 ~ geometric ~division. \end{align*} In \cite{TurkmenBasar} defined the geometric complex numbers $\mathbb{C}(G)$ as follows: \[\mathbb{C}(G):=\{ e^{z}: z\in \mathbb{C}\} = \mathbb{C}\backslash \{0\}.\] Then $(\mathbb{C}(G), \oplus, \odot)$ is a field with geometric zero $1$ and geometric identity $e.$\\ Then for all $x, y\in \mathbb{C}(G)$ \begin{itemize} \item{ $x\oplus y=xy$} \item{ $x\ominus y=x/y$} \item{ $x\odot y=x^{\ln y}=y^{\ln x}$} \item{ $x\oslash y$ or $\frac{x}{y}G=x^{\frac{1}{\ln y}}, y\neq 1$} \item{ $x^{2_G}= x \odot x=x^{\ln x}$} \item{ $x^{p_G}=x^{\ln^{p-1}x}$} \item{ ${\sqrt{x}}^G=e^{(\ln x)^\frac{1}{2}}$} \item{ $x^{-1_G}=e^{\frac{1}{\log x}}$} \item{ $x\odot e=x$ and $x\oplus 1= x$} \item{ $e^n\odot x=x^n=x\oplus x\oplus .....(\text{upto $n$ number of $x$})$} \item{ \begin{equation*} \left|x\right|^G= \begin{cases} x, &\text{if $x>1$}\\ 1,&\text{if $x=1$}\\ \frac{1}{x},&\text{if $x<1$} \end{cases} \end{equation*}} Thus $\left|x\right|^G\geq 1.$ \item{ ${\sqrt{x^{2_G}}}^G=\left|x\right|^G$} \item{ $\left|e^y\right|^G=e^{\left|y\right|}$} \item{ $\left|x\odot y\right|^G=\left|x\right|^G \odot \left|y\right|^G$} \item{ $\left|x\oplus y\right|^G \leq\left|x\right|^G \oplus \left|y\right|^G$} \item{ $\left|x\oslash y\right|^G=\left|x\right|^G \oslash \left|y\right|^G$} \item{ $\left|x\ominus y\right|^G\geq\left|x\right|^G \ominus \left|y\right|^G$} \item{ $0_G \ominus 1_G\odot\left(x \ominus y\right)=y\ominus x\,, i.e.$ in short $\ominus \left(x \ominus y\right)= y\ominus x.$} \end{itemize} Let $l_{\infty},c$ and $c_0$ be the linear spaces of complex bounded, convergent and null sequences, respectively, normed by \[||x||_\infty=\sup_k|x_k|.\] T\"{u}rkmen and Ba\c{s}ar \cite{TurkmenBasar} have proved that \[\omega(G)=\{(x_k): x_k \in \mathbb{C}(G)\, \text{for all}\, k\in \mathbb{N}\}\] is a vector space over $\mathbb{C}(G)$ with respect to the algebraic operations $\oplus$ addition and $\odot$ multiplication \begin{align*} \oplus : \omega(G) \times \omega (G) &\rightarrow \omega (G)\\ (x, y)&\rightarrow x \oplus y =(x_k) \oplus (y_k)=(x_ky_k)\\ \odot : \mathbb{C(G)} \times \omega (G) &\rightarrow \omega (G)\\ (\alpha, y)&\rightarrow \alpha \odot y=\alpha \odot (y_k)=(\alpha^{\ln y_k}), \end{align*} where $x=(x_k), y=(y_k) \in \omega (G)$ and $\alpha \in \mathbb{C}(G).$ Then \begin{align*} l_\infty(G) &=\{x=(x_k) \in \omega (G): \sup_{k\in \mathbb{N}}|x_k|^G< \infty\}\\ c(G) &= \{x=(x_k) \in \omega (G): {_G\lim_{k\rightarrow \infty}}|x_k\ominus l|^G=1\}\\ c_0(G) &= \{x=(x_k) \in \omega (G): {_G\lim_{k\rightarrow \infty}} x_k=1\}, \text{where $_G\lim$ is the geometric limit}\\ l_p(G) &= \{x=(x_k) \in \omega (G):{_G\sum^\infty_{k=0}}\left(|x_k|^G\right)^{p_G} <\infty\}, \text{~where ${_G\sum}$ is the geometric sum}, \end{align*} are classical sequence spaces over the field $\mathbb{C}(G).$ Also it is shown that $l_{\infty}(G),$ $c(G)$ and $c_0(G)$ are Banach spaces with the norm \[||x||^{G}=\sup_{k}|x_k|^{G}, x=(x_1,x_2,x_3,...)\in 
\lambda(G), \lambda\in \{l_{\infty},c, c_0\}.\] For the convenience, in this paper we denote $l_\infty(G), c(G), c_0(G),$ respectively as $l_\infty^G, c^G, c_0^G.$ \section{New geometric sequence space} In 1981, Kizmaz \cite{Kizmaz} introduced the notion of difference sequence spaces using forward difference operator $\Delta$ and studied the classical difference sequence spaces $\ell _{\infty }(\Delta ),$ $c(\Delta ),$ $c_{0}(\Delta ).$ In this section we define the following new geometric sequence space \[l_\infty^G(\Delta_G)= \{x=(x_k) \in \omega (G): \Delta_G x\in l_\infty^G\}, \text{~where~} {\Delta}_G x=x_k \ominus x_{k+1}.\] \begin{thm}\label{eight} The space $l_\infty^{G} \left({\Delta}_G\right)$ is a normed linear space w.r.t. the norm \begin{equation*} \left\|x\right\|^G_{{\Delta}_G}=\left|x_1\right|^G\oplus\left\|{\Delta}_Gx\right\|^G_\infty. \end{equation*} \end{thm} \begin{proof} For $x=(x_k), y=(y_k) \in l_\infty^{G} \left({\Delta}_G\right),$ \begin{align*} N1.\quad \left\|x\right\|^G_{{\Delta}^G} &=\left|x_1\right|^G\oplus\left\|{\Delta}^Gx\right\|^G_\infty\\ &=\left|x_1\right|^G.\sup_k\left|x_k\ominus x_{k+1}\right|^G\\ &\geq 1, \quad \text{since $\left|x_1\right|^G\geq 1$ and $\left|x_k\ominus x_{k+1}\right|^G\geq 1.$} \end{align*} \begin{align*} N2. \quad \left\|x\right\|^G_{{\Delta}_G} =1 &\Leftrightarrow \left|x_1\right|^G\oplus\left\|{\Delta}_Gx\right\|^G_\infty=1\\ &\Leftrightarrow \left|x_1\right|^G.\sup_k\left|x_k\ominus x_{k+1}\right|^G=1 ~ \forall k\\ &\Leftrightarrow \left|x_1\right|^G=1 \text{~and $\left|x_k\ominus x_{k+1}\right|^G= 1$}\\ &\Leftrightarrow x_1=1 \text{~and $x_k\ominus x_{k+1}=1 ~ \forall k$}\\ &\Leftrightarrow x_1=1 \text{~and $x_k\slash x_{k+1}=1 ~ \forall k$}\\ &\Leftrightarrow x_1=1 \text{~and $x_k= x_{k+1} ~~ \forall k$}\\ &\Leftrightarrow x_k=1 ~\forall k\\ &\Leftrightarrow x=(1,1,1,1,.........)=0_G. \end{align*} \begin{align*} N3. \quad \left\|x\oplus y\right\|^G_{{\Delta}_G}&=\left|x_1\oplus y_1\right|^G\oplus \left\|{\Delta}_G(x_k\oplus y_k)\right\|^G_\infty\\ &=\left|x_1\oplus y_1\right|^G\oplus \left\|{\Delta}_G(x_ky_k)\right\|^G_\infty \\ &=\left|x_1\oplus y_1\right|^G\oplus \sup_k\left|x_ky_k\ominus x_{k+1}y_{k+1}\right|^G\\ &=\left|x_1\oplus y_1\right|^G\oplus \sup_k\left|\frac{x_ky_k}{ x_{k+1}y_{k+1}}\right|^G\\ &=\left|x_1\oplus y_1\right|^G\oplus \sup_k\left|\frac{x_k}{x_{k+1}}.\frac{ y_k}{y_{k+1}}\right|^G\\ &=\left|x_1\oplus y_1\right|^G\oplus \sup_k\left|\frac{x_k}{x_{k+1}}\oplus\frac{ y_k}{y_{k+1}}\right|^G\\ &\leq\left|x_1\oplus y_1\right|^G\oplus \sup_k\left\{\left|\frac{x_k}{x_{k+1}}\right|^G\oplus\left|\frac{ y_k}{y_{k+1}}\right|^G\right\}\\ &=\left|x_1\oplus y_1\right|^G\oplus \sup_k\left\{\left|x_k\ominus x_{k+1}\right|^G\oplus\left| y_k\ominus{y_{k+1}}\right|^G\right\}\\ &=\left|x_1\oplus y_1\right|^G\oplus \sup_k\left\{\left|{{\Delta}_G}x\right|^G\oplus\left|{{\Delta}_G}y\right|^G\right\}\\ &\leq\left|x_1\right|^G\oplus \left|y_1\right|^G\oplus \sup_k\left\{\left|{{\Delta}_G}x\right|^G\right\}\oplus \sup_k\left\{\left|{{\Delta}_G}y\right|^G\right\}\\ &=\left[\left|x_1\right|^G\oplus \sup_k\left\{\left|{{\Delta}_G}x\right|^G\right\}\right]\oplus \left[\left|y_1\right|^G\oplus \sup_k\left\{\left|{{\Delta}_G}y\right|^G\right\}\right]\\ &= \left\|x\right\|^G_{{\Delta}_G} \oplus \left\|y\right\|^G_{{\Delta}_G}. 
\end{align*} \begin{align*} N4.\quad \left\|\alpha\odot x\right\|^G_{{\Delta}^G} &=\left|\alpha\odot x_1\right|^G\oplus\left\|{\Delta}_G(\alpha\odot x)\right\|^G_\infty, \quad \alpha\in \mathbb{C}(G)\\ &=\left|\alpha\right|\odot \left|x_1\right|^G\oplus\left\|\alpha\odot x_k\ominus \alpha\odot x_{k+1}\right\|^G_\infty\\ &=\left|\alpha\right|\odot \left|x_1\right|^G\oplus\left\|\alpha\odot (x_k\ominus x_{k+1})\right\|^G_\infty\\ &=\left|\alpha\right|\odot \left|x_1\right|^G\oplus\left|\alpha\right|\odot\left\|x_k\ominus x_{k+1}\right\|^G_\infty\\ &=\left|\alpha\right|\odot \left[\left|x_1\right|^G\oplus\left\|{\Delta}_G x\right\|^G_\infty\right]\\ &=\left|\alpha\right|\odot\left\|x\right\|^G_{{\Delta}_G}. \end{align*} Thus $\left\|.\right\|^G_{{\Delta}_G}$ is a norm on $\mathbb{C}(G).$ \end{proof} \begin{thm} The space $l_\infty^{G} \left({\Delta}_G\right)$ is a Banach space w.r.t. the norm $\left\|.\right\|^G_{{\Delta}_G}.$ \end{thm} \begin{proof} Let $(x_n)$ be a Cauchy sequence in $l_\infty^{G} \left({\Delta}_G\right),$ where $x_n= \left(x_k^{(n)}\right)=\left(x_1^{(n)}, x_2^{(n)}, x_3^{(n)},........\right)$ $\forall n \in \mathbb{N},x_k^{(n)}$ is the $k^{th}$ coordinate of $x_n.$ Then \begin{align*} \left\|x_n\ominus x_m\right\|^G_{{\Delta}_G}&=\left|x_1^{(n)}\ominus x_1^{(m)}\right|^G \oplus \left\|{\Delta}_G x_n\ominus {\Delta}_Gx_m\right\|^G_\infty \rightarrow 1 \text{~as $m, n\rightarrow\infty$}\\ &=\left|x_1^{(n)}\ominus x_1^{(m)}\right|^G \oplus \left\|(x_k^{(n)}\ominus x_{k+1}^{(n)})\ominus (x_k^{(m)}\ominus x_{k+1}^{(m)})\right\|^G_\infty\rightarrow \, 1\\ &=\left|x_1^{(n)}\ominus x_1^{(m)}\right|^G \oplus \left\|(x_k^{(n)}\ominus x_k^{(m)})\ominus (x_{k+1}^{(n)}\ominus x_{k+1}^{(m)})\right\|^G_\infty\rightarrow \, 1\\ &=\left|x_1^{(n)}\ominus x_1^{(m)}\right|^G \oplus \sup_k\left|(x_k^{(n)}\ominus x_k^{(m)})\ominus (x_{k+1}^{(n)}\ominus x_{k+1}^{(m)})\right|^G\rightarrow 1\text{~as $m, n\rightarrow \infty$}. \end{align*} This implies that $\left|x_k^{(n)}\ominus x_k^{(m)}\right|^G\rightarrow 1\mbox{~as~} n, m\rightarrow\infty~~ \forall~ k\in\mathbb{N},$ \text{~since $\left|x_k^{(n)}\ominus x_k^{(m)}\right|^G\geq 1.$}\\ Therefore for fixed $k,$ $k^{\text{th}}$ co-ordinates of all sequences form a Cauchy sequence in $\mathbb{C}(G)$\\ i.e. $x^{(n)}_k=(x^{(1)}_k, x^{(2)}_k, x^{(3)}_k,x^{(4)}_k,.........)$ is a Cauchy sequence. Then by the completeness of $\mathbb{C}(G), (x^{(n)}_k)$ converges to $x_k$ (say) as follows: \[\begin{matrix} x_1 &=(&x^{(1)}_1,&x^{(1)}_2,&x^{(1)}_3,&\cdots,&x^{(1)}_k,&\cdots)\\ x_2 &=(&x^{(2)}_1,&x^{(2)}_2,&x^{(2)}_3,&\cdots,&x^{(2)}_k,&\cdots)\\ x_3 &=(&x^{(3)}_1,&x^{(3)}_2,&x^{(3)}_3,&\cdots,&x^{(3)}_k,&\cdots)\\ \vdots& &\vdots &\vdots &\vdots & &\vdots & \\ x_m &=(&x^{(m)}_1,&x^{(m)}_2,&x^{(m)}_3,&\cdots,&x^{(m)}_k,&\cdots)\\ \vdots& &\vdots &\vdots &\vdots & &\vdots & \\ x_n &=(&x^{(n)}_1,&x^{(n)}_2,&x^{(n)}_3,&\cdots,&x^{(n)}_k,&\cdots)\\ \vdots& &\vdots &\vdots &\vdots & &\vdots & \\ \downarrow& &\downarrow&\downarrow&\downarrow& &\downarrow& \\ x &=(&x_1,&x_2,&x_3,&\cdots,&x_k,&\cdots) \end{matrix}\] ~i.e. \[{_G\lim_{n \to \infty}}x^{(n)}_k=x_k~ \forall k\in \mathbb{N}.\] Further for each $\varepsilon> 1, \exists N=N(\varepsilon)$ s.t. 
$\forall \, n, m\geq N$ we have\\ \[|x^{(n)}_1 \ominus x^{(m)}_1|^G<\varepsilon,|x^{(n)}_{k+1} \ominus x^{(m)}_{k+1}\ominus (x^{(n)}_k \ominus x^{(m)}_k)|^G<\varepsilon \] and \[{_G\lim_{m \to \infty}}|x^{(n)}_1 \ominus x^{(m)}_1|^G =|x^{(n)}_1 \ominus x_1|^G< \varepsilon.\] This implies \[_G\lim_{m \to \infty}|(x^{(n)}_{k+1}\ominus x^{(m)}_{k+1})\ominus (x^{(n)}_k\ominus x^{(m)}_k)|^G= |(x^{(n)}_{k+1} \ominus x_{k+1})\ominus (x^{(n)}_k\ominus x_k)|^G <\varepsilon~ \forall~ n\geq N.\] Since $\varepsilon$ is independent of $k,$ \begin{align*} &\sup_k|(x^{(n)}_{k+1} \ominus x_{k+1})\ominus (x^{(n)}_k\ominus x_k)|^G<\varepsilon.\\ \Rightarrow &\sup_k|(x^{(n)}_{k+1} \ominus x^{(n)}_k)\ominus (x_{k+1}\ominus x_k)|^G= \left\|\Delta_G x_n\ominus \Delta_G x\right\|^G_\infty <\varepsilon. \end{align*} Consequently we have $\left\|x_n\ominus x\right\|^G_{\Delta_G}=|x^{(n)}_1 \ominus x_1|^G \oplus \left\|\Delta_G x_n\ominus \Delta_G x\right\|^G_\infty < {\varepsilon}^2~ \forall~ n\geq N.$\\ Hence we obtain $x_n\rightarrow x$ as $n\rightarrow \infty.$ \\ Now we must show that $x\in l^G_\infty(\Delta_G).$ We have \begin{align*} |x_k\ominus x_{k+1}|^G&=|x_k\ominus x^N_k\oplus x^N_k\ominus x^N_{k+1}\oplus x^N_{k+1}\ominus x_{k+1}|^G\\ &\leq |x^N_k\ominus x^N_{k+1}|^G\oplus ||x^N\ominus x||^G_{\Delta_G}= O(e). \end{align*} This implies $x=(x_k)\in l^G_\infty(\Delta_G).$ \end{proof} Furthermore since $l^G_\infty(\Delta_G)$ is a Banach space with continuous coordinates (that is $\left\|x_n\ominus x\right\|^\infty_{\Delta_G}\rightarrow 1$ implies $|x^{(n)}_k \ominus x_k|^G\rightarrow 1$ for each $k\in \mathbb{N},$ as $n\rightarrow \infty )$ it is a BK-space. \begin{rem} The spaces \begin{enumerate} \item[(a)] $c^{G}(\Delta_{G})=\{(x_k)\in w(G): \Delta_{G}x_k\in c^{G}\}$ \item[(b)] $c_{0}^{G}(\Delta_{G})=\{(x_k)\in w(G): \Delta_{G}x_k\in c_{0}^{G}\}$ \end{enumerate} are Banach spaces with respect to the norm $||.||^{G}_{\Delta_G}.$ Also these spaces are BK-space. \end{rem} Now we define $s: l^G_\infty(\Delta_G)\rightarrow l^G_\infty(\Delta_G), x\rightarrow sx=y=(1, x_2, x_3,....).$ It is clear that $s$ is a bounded linear operator on $l^G_\infty(\Delta_G)$ and $||s||^G_\infty =e.$ Also \[s\left[l^G_\infty(\Delta_G)\right] = sl^G_\infty(\Delta_G)=\{x=(x_k): x\in l^G_\infty(\Delta_G), x_1=1 \}\subset l^G_\infty(\Delta_G)\] is a subspace of $l^G_\infty(\Delta_G)$ and as $|x_1|^G=1$ for $x_1=1$ we have \[||x||^G_{\Delta_G}= ||\Delta_G x||^G_\infty \quad \text{in}\, sl^G_\infty(\Delta_G).\] On the other hand we can show that \begin{equation}\label{eqna} \Delta_G :sl^G_\infty(\Delta_G)\rightarrow l^G_\infty \end{equation} \[x=(x_k)\rightarrow y=(y_k)=(x_k\ominus x_{k+1})\] is a linear homomorphism. So $sl^G_\infty(\Delta_G)$ and $l^G_\infty$ are equivalent as topological space. $\Delta_G$ and $\Delta_G^{-1}$ are norm preserving and $||\Delta_G||^G_\infty=||\Delta_G^{-1}||^G_\infty =e.$ Let $\left[sl^G_\infty(\Delta_G)\right]^*$ and $\left[l^G_\infty\right]^*$ denote the continuous duals of $sl^G_\infty(\Delta_G)$ and $l^G_\infty,$ respectively.\\ We can prove that \begin{equation*} T:\left[sl^G_\infty(\Delta_G)\right]^*\rightarrow \left[l^G_\infty\right]^*,\, f_{\Delta_G}\rightarrow f= f_{\Delta_G}o\Delta_G^{-1} \end{equation*} is a linear isometry. 
Thus $\left[sl^G_\infty(\Delta_G)\right]^*$ is equivalent to $\left[l^G_\infty\right]^*.$ In the same way we can show that $sc^{G}(\Delta_G)$ and $c^{G},$ $sc_0^{G}(\Delta_G)$ and $c_0^{G}$ are equivalent as topological spaces and \[\left[sc^{G}\Delta_G)\right]^*=\left[sc_0^{G}(\Delta_G)\right]^*=l_1^G \,(l_1^G, \,\text{the space of geometric absolutely convergent series}).\] \section{Dual spaces of $l^G_\infty (\Delta_G)$} \begin{lemma}\label{1} The following conditions (a) and (b) are equivalent: \begin{align*} &(a) \sup_k|x_k\ominus x_{k+1}|^G<\infty ~~ i.e. ~~ \sup_k|\Delta_G x_k|^G<\infty;\\ &(b)(i) \sup_k e^{k^{-1}}\odot|x_k|^G<\infty \text{~and}\\ & \quad (ii)\sup_k|x_k\ominus e^{{k(k+1)}^{-1}}\odot x_{k+1}|^G<\infty. \end{align*} \end{lemma} \begin{proof} Let (a) be true i.e. $\sup_k|x_k\ominus x_{k+1}|^G<\infty .$ \begin{align*} \text{Now~} |x_1\ominus x_{k+1}|^G&=\left|{_G\sum^k_{v=1}}{(x_v \ominus x_{v+1})}\right|^G\\ &=\left|{_G\sum^k_{v=1}}{\Delta_Gx_v}\right|^G\\ &\leq {_G\sum^k_{v=1}}\left|\Delta_Gx_v\right|^G=O(e^k)\\ \text{and~} |x_k|^G &=|x_1\ominus x_1\oplus x_{k+1}\oplus x_k\ominus x_{k+1}|^G\\ &\leq |x_1|^G\oplus |x_1\ominus x_{k+1}|^G\oplus |x_k\ominus x_{k+1}|^G=O(e^k). \end{align*} This implies that $\sup_k e^{k^{-1}}\odot|x_k|^G<\infty.$ This completes the proof of $b(i).$\\ Again \begin{align*} \sup_k\left|x_k\ominus e^{{k(k+1)}^{-1}}\odot x_{k+1}\right|^G &=\left|\left\{e^{{(k+1)}}\odot e^{{(k+1)}^{-1}}\right\}\odot x_k\ominus e^{{k(k+1)}^{-1}}\odot x_{k+1} \right|^G \\ &=\left|\left\{(e^k \oplus e)\odot e^{{(k+1)}^{-1}}\right\}\odot x_k\ominus e^{{k(k+1)}^{-1}}\odot x_{k+1} \right|^G \\ &=\left|\left\{e^{k(k+1)^{-1}}\odot x_k\oplus e^{(k +1)^{-1}}\odot x_k \right\}\ominus e^{k(k+1)^{-1}}\odot x_{k+1}\right|^G\\ &=\left|\left\{e^{k(k+1)^{-1}}\odot(x_k\ominus x_{k+1})\right\}\oplus \left\{e^{(k+1)^{-1}}\odot x_k\right\}\right|^G\\ &\leq e^{k(k+1)^{-1}}\odot \left|x_k\ominus x_{k+1}\right|^G\oplus e^{(k+1)^{-1}}\odot \left|x_k\right|^G\\ &=O(e). \end{align*} Therefore $\sup_k|x_k\ominus e^{{k(k+1)}^{-1}}\odot x_{k+1}|^G<\infty.$ This completes the proof of $b(ii).$ Conversely let $(b)$ be true. Then \begin{align*} \left|x_k\ominus e^{k(k+1)^{-1}}\odot x_{k+1}\right|^G&=\left|e^{(k+1)(k+1)^{-1}}\odot x_k\ominus e^{k(k+1)^{-1}}\odot x_{k+1}\right|^G\\ &\geq e^{k(k+1)^{-1}}\odot|x_k\ominus x_{k+1}|^G\ominus e^{(k+1)^{-1}}\odot |x_k|^G \end{align*} i.e. $e^{k(k+1)^{-1}}\odot|x_k\ominus x_{k+1}|^G\leq e^{(k+1)^{-1}}\odot |x_k|^G\oplus \left|x_k\ominus e^{k(k+1)^{-1}}\odot x_{k+1}\right|^G.$\\ Thus $\sup_k|x_k\ominus x_{k+1}|^G<\infty$ as $b(i)$ and $b(ii)$ hold. \end{proof} \textbf{Geometric form of Abel's partial summation formula:} Abel's partial summation formula states that if $(a_k)$ and $(b_k)$ are sequences, then \[\sum_{k=1}^n a_kb_k=\sum_{k=1}^nS_k(b_k-b_{k+1})+ S_nb_{n+1},\] where $S_k=\sum_{i=1}^ka_i.$ Then \begin{align*} \sum_{k=1}^\infty a_kb_k &=\sum_{k=1}^\infty S_k(b_k-b_{k+1})+ \lim_{n\to \infty}S_nb_{n+1}\\ \sum_{k=1}^\infty a_kb_k &=\sum_{k=1}^\infty S_k(b_k-b_{k+1}), \text{~if $(b_k)$ ~ monotonically decreases to zero.} \end{align*} Similarly as $\odot$ is distributive over $\oplus$ we have \[{_G\sum_{k=1}^\infty} a_k\odot b_k ={_G\sum_{k=1}^\infty} S_k\odot(b_k\ominus b_{k+1}),\text{~where~} \,S_k={_G\sum_{i=1}^k}a_i.\] In particular, if $(b_k)=(e^{-k}),$ then $(b_k)$ monotonically decreases to zero. 
Then \begin{align*} _G\sum^\infty_{k=1}a_k\odot e^{-k} &= _G\sum^\infty_{k=1}S_k\odot \left(e^{-k}\ominus e^{-(k+1)}\right)\\ &= _G\sum^\infty_{k=1}S_k\odot e= _G\sum^\infty_{k=1}S_k. \end{align*} Let $(p_n)$ be a sequence of geometric positive numbers monotonically increasing to infinity. Then $(\frac {e}{p_n}G)$ is a sequence monotonically decreasing to zero(i.e. to $1$). \begin{lemma}\label{2} \[\text{If} ~ \sup_n\left|{_G\sum^n_{v=1}}c_v\right|^G\leq \infty \text {~then~} \sup_n\left(p_n\odot\left|{_G\sum^\infty_{k=1}}\frac{c_{n+k-1}}{p_{n+k}}G\right|^G\right)<\infty.\] \end{lemma} \begin{proof} Using this Abel's partial summation formula to $(c_v)$ and $\left(\frac{e}{p_n}G\right)$ we get \begin{equation}\label{Eqn2} {_G\sum_{k=1}^\infty} \frac{c_{n+k-1}}{p_{n+k}}G= {_G\sum_{k=1}^\infty}\left({_G\sum_{v=1}^k} c_{n+v-1}\right)\odot \left(\frac{e}{p_{n+k}}G\ominus \frac{e}{p_{n+k+1}}G\right) \end{equation} \[\text{and}\quad p_n \odot\left|{_G\sum_{k=1}^\infty}\frac{c_{n+k-1}}{p_{n+k}}G\right|^G= O(e).\] \end{proof} \begin{lemma}\label{3} If the series $\sum_{k=1}^\infty c_k$ is convergent then \[ \lim_n \left( p_n \odot {_G\sum_{k=1}^\infty}\frac{c_{n+k-1}}{p_{n+k}}G\right)=1.\] \end{lemma} \begin{proof} Since $$ \left|{_G\sum_{v=1}^k} c_{n+v-1}\right|^G = \left| {_G\sum_{v=n}^{n+k-1}} c_v\right|^G=O(e)$$ for every $k\in \mathbb{N}.$ Using (\ref{Eqn2}) we get \[p_n\odot \left|{_G\sum_{k=1}^\infty} \frac{c_{n+k-1}}{p_{n+k}}G\right|^G=O(e).\] \end{proof} \begin{corollary}\label{Cor1} Let $(p_n)$ be monotonically increasing. If $$\sup_n\left|{_G\sum_{v=1}^n} p_v\odot a_v\right|^G<\infty \text{~then~} \sup_n\left|p_n\odot {_G\sum_{k=n+1}^\infty} a_k\right|^G<\infty.$$ \end{corollary} \begin{proof} We put $p_{k+1}\odot a_{k+1}$ instead of $c_k$ in Lemma \ref{2} we get \begin{align*} p_n\odot {_G\sum_{k=1}^\infty} \frac{c_{n+k-1}}{p_{n+k}}G &=p_n\odot {_G\sum_{k=1}^\infty} \frac{p_{n+k}\odot a_{n+k}}{p_{n+k}}G\\ &=p_n\odot {_G\sum_{k=1}^\infty} a_{n+k}\\ &=p_n\odot {_G\sum_{k=n+1}^\infty} a_k =O(e). \end{align*} \end{proof} \begin{corollary}\label{Cor2} $$\text{If}~ {_G\sum_{k=1}^\infty} p_k\odot a_k \text{~is convergent~ then~} \lim_n p_n\odot {_G\sum_{k=n+1}^\infty} a_k=1.$$ \end{corollary} \begin{proof} We put $p_{k+1}\odot a_{k+1}$ instead of $c_k$ in Lemma \ref{3}. \end{proof} \begin{corollary}\label{Cor3} $${_G\sum_{k=1}^\infty} e^k\odot a_k \text{~is convergent iff~}{_G\sum_{k=1}^\infty} R_k\text{~is convergent with~} e^n\odot R_n = O(e),\text{~where~}$$ $$R_n = {_G\sum_{k=n+1}^\infty} a_k.$$ \end{corollary} \begin{proof} Let $p_n=e^n.$ Then it is monotonically increasing to infinity. Then \begin{align*} _G\sum_{k=1}^n e^k\odot a_{k+1} &= e\odot a_2 \oplus e^2 \odot a_3 \oplus e^3 \odot a_4 \oplus.....\oplus e^n\odot a_{n+1}\\ &= (a_2\oplus a_3 \oplus ....\oplus a_{n+1})\oplus (a_3\oplus a_4\oplus ...\oplus a_{n+1})\\ &\qquad \oplus ..........\oplus (a_n\oplus a_{n+1})\oplus (a_{n+1})\\ &= (R_1 \ominus R_{n+1})\oplus (R_2 \ominus R_{n+1})\oplus......\oplus (R_{n-1} \ominus R_{n+1})\oplus (R_n \ominus R_{n+1}) \\ &= _G\sum_{k=1}^n R_k \ominus \{e^n\odot R_{n+1}\}. 
\end{align*} Therefore as $e^n\odot R_n=O(e)$, so $e^n\odot R_{n+1}=O(e).$ This implies \[{_G\sum_{k=1}^n} e^k\odot a_{k+1}\text{~is convergent if}\, _G\sum_{k=1}^nR_k\text{~is convergent and vice versa.}\] \end{proof} \section{$\alpha-,\beta-,$ $\gamma-$ duals} \begin{defn} \cite{Garling67, KotheToplitz69, KotheToplitz34, Maddox80} If $X$ is a sequence space, we define \begin{enumerate} \item[(i)] $X^\alpha=\{a=(a_k) : \sum_{k=1}^\infty |a_k x_k|<\infty, \, \text{for each} \,x\in X\};$ \item[(ii)] $X^\beta=\{a=(a_k) : \sum_{k=1}^\infty a_k x_k \, \text{is convergent, for each} \,x\in X\};$ \item[(iii)] $ X^\gamma=\{a=(a_k) : \sup_n|\sum_{k=1}^n a_k x_k|<\infty, \, \text{for each} \,x\in X\}.$ \end{enumerate} \end{defn} $X^\alpha, X^\beta,$ and $X^\gamma$ are called $\alpha-$ (or K\"{o}the-Toeplitz), $\beta-$(or generalised K\"{o}the-Toeplitz), and $\gamma-$dual spaces of $X$. We can show that $X^\alpha \subset X^\beta \subset X^\gamma.$ If $X\subset Y,$ then $Y^\dag\subset X^\dag, $ for $\dag=\alpha, \beta $ or $\gamma.$ \begin{thm} \[(i)~\text{If~} D_1= \left\{a=(a_k): {_G\sum_{k=1}^\infty} e^k\odot |a_k|^G<\infty\right\} \text{~then~} \left(sl_\infty^G(\Delta_G)\right)^\alpha=D_1.\] \[(ii)~\text{If~} D_2= \left\{a=(a_k): {_G\sum_{k=1}^\infty} e^k\odot a_k \text{~is convergent with~} {_G\sum_{k=1}^\infty}|R_k|^G<\infty\right\}.\] Then $\left(sl_\infty^G(\Delta_G)\right)^\beta=D_2.$ \[(iii)~\text{If~} D_3= \left\{a=(a_k): \sup_n|{_G\sum_{k=1}^n} e^k\odot a_k|^G<\infty, {_G\sum_{k=1}^\infty}|R_k|^G<\infty\right\}.\] Then $\left(sl_\infty^G(\Delta_G)\right)^\gamma=D_3.$ \end{thm} \begin{proof}$(i)$ Let $a\in D_1.$ Then for each $x\in sl_\infty^G(\Delta_G)$ we have \[{_G\sum_{k=1}^\infty} |a_k\odot x_k|^G=_G\sum_{k=1}^\infty \left(e^k\odot |a_k|^G\right)\odot \left(e^{k^{-1}}\odot |x_k|^G\right)<\infty \quad \text{by using Lemma \ref{1}}.\] This implies that $a\in \left(sl_\infty^G(\Delta_G)\right)^\alpha.$ Therefore \begin{equation}\label{8} D_1\subseteq \left(sl_\infty^G(\Delta_G)\right)^\alpha. \end{equation} Again let $a\in \left(sl_\infty^G(\Delta_G)\right)^\alpha.$ Then ${_G\sum_{k=1}^\infty}|a_k\odot x_k|^G<\infty$ (by definition of $\alpha$-dual) for each $x\in sl_\infty^G(\Delta_G).$ So we take \begin{equation*} x_k= \begin{cases} 1, &\text{if $k=1;$}\\ e^k,&\text{if $k\geq 2,$} \end{cases} \end{equation*} then $x=(1, e^2, e^3,.....)\in sl_\infty^G(\Delta_G).$ Therefore \begin{align*} {_G\sum_{k=1}^\infty} e^k\odot|a_k|^G &=|a_1|^G \oplus {_G\sum_{k=2}^\infty} e^k\odot|a_k|^G\\ &= |a_1|^G \oplus {_G\sum_{k=1}^\infty} |a_k\odot x_k|^G<\infty~ \text{as}~ a_1\odot x_1=1. \end{align*} Therefore $a\in D_1.$ This implies that \begin{equation}\label{9} \left(sl_\infty^G(\Delta_G)\right)^\alpha \subseteq D_1. \end{equation} Therefore from (\ref{8}) and (\ref{9}) we get \[\left(sl_\infty^G(\Delta_G)\right)^\alpha = D_1.\] $(ii)$ Let $a\in D_2.$ If $x\in sl_\infty^G(\Delta_G)$ then there exists one and only one $y=(y_k)\in l_\infty^G$ such that (see \ref{eqna}) \begin{align*} x_k &= \ominus {_G\sum_{v=1}^k} y_{v-1}, \,y_0=1\\ \text{Therefore}\quad x_1 &= \ominus {_G\sum_{v=1}^1} y_{v-1} =\ominus y_o=1\\ x_2 &= \ominus {_G\sum_{v=1}^2} y_{v-1} =\ominus y_1\\ x_3 &= \ominus {_G\sum_{v=1}^3} y_{v-1} =\ominus y_1\ominus y_2\\ x_4 &= \ominus {_G\sum_{v=1}^4} y_{v-1} =\ominus y_1\ominus y_2\ominus y_3\\ ...&.................................................\\ ...&................................................. 
\quad .\\ \text{Then~} {_G\sum_{k=1}^n} a_k\odot x_k &=a_1\odot x_1\oplus a_2\odot x_2\oplus a_3\odot x_3\oplus ......\oplus a_n\odot x_n\\ &=a_1 \odot 1\ominus a_2\odot y_1\ominus a_3\odot(y_1\oplus y_2)\ominus a_4\odot (y_1\oplus y_2\oplus y3)\ominus\\ &\qquad .....\ominus a_n\odot(y_1\oplus y_2\oplus .....\oplus y_{n-1})\\ &= \ominus(a_2\oplus a_3\oplus....\oplus a_n)\odot y_1\\ &\qquad \qquad \ominus (a_3\oplus a_4\oplus.... \oplus a_n)\odot y_2\ominus ....... \ominus a_n\odot y_{n-1}\\ &=(\ominus R_1 \odot y_1\oplus R_n\odot y_1)\oplus (\ominus R_2 \odot y_2\oplus R_n\odot y_2)\oplus ....\\ & \qquad \qquad.........\oplus (\ominus R_{n-1} \odot y_{n-1}\oplus R_n\odot y_{n-1})\\ &= \ominus _G\sum_{k=1}^{n-1}R_k\odot y_k \oplus R_n\odot _G\sum_{k=1}^{n-1}y_k. \end{align*} \begin{equation}\label{Eqn3} {_G\sum_{k=1}^n} a_k\odot x_k =\ominus {_G\sum_{k=1}^{n-1}}R_k\odot y_k \oplus R_n\odot{ _G\sum_{k=1}^{n-1}}y_k. \end{equation} \[\text{Since~} {_G\sum_{k=1}^\infty} R_k\odot y_k \text{~is absolutely convergent and~} R_n\odot {_G\sum_{k=1}^{n-1}}y_k \rightarrow 1 \text{~as~} n\rightarrow \infty (\text{~Corollary \ref{Cor3}}),\] \[\text{the series~} {_G\sum_{k=1}^n} a_k\odot x_k \,\text{is convergent for each}\, x\in sl_\infty^G(\Delta_G). \text{~This yields~} a\in \left( sl_\infty^G(\Delta_G)\right)^\beta. \] Therefore $D_2 \subseteq \left( sl_\infty^G(\Delta_G)\right)^\beta.$\\ \[\text{Again let~} a\in \left( sl_\infty^G(\Delta_G)\right)^\beta \text{~then~} {_G\sum_{k=1}^\infty} a_k\odot x_k \text{~is convergent for each~} x\in sl_\infty^G(\Delta_G). \text{~We take}\] \begin{equation*} x_k= \begin{cases} 1, &\text{if $k=1$;}\\ e^k,&\text{if $k\geq 2.$} \end{cases} \end{equation*} \[\text{Thus~} {_G\sum_{k=1}^\infty} e^k\odot x_k \, \text{is convergent. This implies~} e^n\odot R_n=O(e) (\text{~Corollary \ref{Cor3}}).\] \[\text{Using (\ref{Eqn3}) we get~~} {_G\sum_{k=1}^\infty} a_k\odot x_k= \ominus {_G\sum_{k=1}^\infty} R_k\odot y_k \text{~converges for all~} y\in l_\infty^G. \text{~So we have}\] \[{_G\sum_{k=1}^\infty}|R_k|^G <\infty\, \text{~and~} a\in D_2. \] Therefore \[\left( sl_\infty^G(\Delta_G)\right)^\beta=D_2.\] $(iii)$ The proof of this part is same as above. \end{proof} \section{Some applications of Geometric Difference} In this section we find the Geometric Newton-Gregory interpolation formulae and solve some numerical problems using these new formulae. \begin{description} \item[Geometric Factorial]Let us define geometric factorial notation $!_G$ as \[n!_G=e^n\odot e^{n-1}\odot e^{n-2}\odot \cdots \odot e^2\odot e =e^{n!}.\] For example, \begin{align*} 0!_G &=e^{0!}=e^0=1\\ 1!_G &=e^{1!}=e=2.71828\\ 2!_G &=e^{2!}=e^2=7.38906\\ 3!_G &=e^{3!}=e^6=4.03429 \times 10^2\\ 4!_G &=e^{4!}=e^{24}=2.64891 \times 10^{10}\\ 5!_G &=e^{5!}=e^{120}=1.30418 \times 10^{52}\quad \text{etc.} \end{align*} \item[Generalized Geometric Forward Difference Operator]Let \begin{align*} \Delta_G f(a) &= f(a\oplus h) \ominus f(a).\\ \Delta^2_G f(a) &= \Delta_G f(a\oplus h) \ominus \Delta_G f(a)\\ &= \{f(a\oplus e^2\odot h) \ominus f(a\oplus h)\}\ominus \{f(a\oplus h) \ominus f(a)\}\\ &=f(a\oplus e^2\odot h) \ominus e^2 \odot f(a\oplus h)\oplus f(a).\\ \Delta^3_G f(a) &= \Delta^2_G f(a\oplus h) \ominus \Delta^2_G f(a)\\ &=\{f(a\oplus e^3\odot h) \ominus e^2 \odot f(a\oplus e^2\odot h)\oplus f(a \oplus h)\}\\ &\qquad \ominus \{f(a\oplus e^2\odot h) \ominus e^2 \odot f(a\oplus h)\oplus f(a)\}\\ &=f(a\oplus e^3\odot h) \ominus e^3 \odot f(a\oplus e^2\odot h)\oplus e^3\odot f(a \oplus h) \ominus f(a). 
\end{align*} Thus, $n^{\text{th}}$ geometric forward difference is \[\Delta^n_G f(a)= _G\sum^n_{k=0} (\ominus e )^{{k}_G}\odot e^{\binom{n}{k}}\odot f(a\oplus e^{n-k}\odot h), \text{with}\, (\ominus e)^{0_G}=e.\] \item[Generalized Geometric Backward Difference Operator] Let \begin{align*} \nabla_G f(a) &=f(a) \ominus f(a\ominus h).\\ \nabla^2_G f(a) &= \nabla_G f(a) \ominus \nabla_G f(a\ominus h)\\ &= \{f(a)\ominus f(a \ominus h)\} \ominus \{f(a\ominus h) \ominus f(a\ominus e^2\odot h)\}\\ &= f(a)\ominus e^2 \odot f(a \ominus h)\oplus f(a \ominus e^2 \odot h).\\ \nabla^3_G f(a) &= \nabla^2_G f(a) \ominus \nabla^2_G f(a-h)\\ &=\{f(a)\ominus e^2 \odot f(a \ominus h)\oplus f(a \ominus e^2 \odot h)\}\\ &\qquad \ominus \{f(a\ominus h)\ominus e^2 \odot f(a \ominus e^2 \odot h)\oplus f(a \ominus e^3 \odot h)\}\\ &=f(a) \ominus e^3 \odot f(a \ominus h)\oplus e^3\odot f(a \ominus e^2 \odot h) \ominus f(a\ominus e^3 \odot h). \end{align*} Thus, $n^{\text{th}}$ geometric backward difference is \[\nabla^n_G f(a)= _G\sum^n_{k=0} (\ominus e )^{{k}_G}\odot e^{\binom{n}{k}}\odot f(a\ominus e^k\odot h).\] \item[Factorial Function]\textit{The product of n consecutive factors each at a constant\\ geometric difference, h, the first factor being x is called a factorial function of degree n and is denoted by $x^{(n_G)}.$}Thus \[x^{(n_G)}=x\odot (x \ominus e\odot h)\odot(x \ominus e^2\odot h)\odot(x \ominus e^3\odot h)\odot\cdots \odot (x \ominus e^{n-1}\odot h).\] In particular, for $h=e,$ \[x^{(n_G)}=x\odot (x \ominus e)\odot (x \ominus e^2)\odot (x \ominus e^3)\odot\cdots \odot (x \ominus e^{n-1}).\] \end{description} \textbf{Geometric Newton-Gregory Forward Interpolation Formula:} Let $y=f(x)$ be a function which takes the values $f(a),f(a\oplus h), f(a\oplus e^2\odot h), f(a\oplus e^3\odot h),......,f(a\oplus e^n\odot h)$ for the $n+1$ geometrically equidistant values (which form a Geometric Progression in ordinary sense) $a, a\oplus h, a\oplus e^2\odot h, a\oplus e^3\odot h,......, a\oplus e^n\odot h$ of the independent variable $x$ and let $P_n(x)$ be a geometric polynomial in $x$ of degree $n$ defined as: \begin{gather} \begin{aligned}\label{eqn10} P_n(x)=& A_0\oplus A_1 \odot (x\ominus a)\oplus A_2\odot (x\ominus a)\odot(x\ominus a\ominus h)\\ &\oplus A_3\odot(x\ominus a)\odot(x\ominus a \ominus h)\odot(x\ominus a\ominus e^2\odot h)\oplus\cdots\\ &\oplus A_n \odot (x\ominus a)\odot(x\ominus a\ominus h)\odot\cdots \odot(x\ominus a\ominus e^{n-1}\odot h). \end{aligned} \end{gather} We choose the coefficients $A_0, A_1, A_2,....,A_n$ such that\\ $P_n(a)=f(a),P_n(a\oplus h)=f(a\oplus h), P_n(a \oplus e^2\odot h)=f(a \oplus e^2\odot h),.... 
,P_n(a \oplus e^n\odot h)=f(a \oplus e^n\odot h).$ Putting $x= a, a\oplus h, a\oplus e^2\odot h, a\oplus e^3\odot h,......, a\oplus e^n\odot h$ in (\ref{eqn10}) and then also putting the values of $P_n(a), P_n(a\oplus h),......., P_n(a\oplus e^n\odot h),$ we get \[f(a)=A_0\implies A_0=f(a).\] \[f(a\oplus h)=A_0\oplus A_1\odot h \implies A_1=\frac{f(a\oplus h) \ominus f(a)}{h}G=\frac{\Delta_G f(a)}{h}G.\] \begin{align*} f(a\oplus e^2\odot h)&=A_0\oplus e^2\odot h\odot A_1 \oplus e^2\odot h\odot h\odot A_2\\ \implies A_2 &=\frac{f(a\oplus e^2\odot h) \ominus e^2\odot [f(a\oplus h)\ominus f(a)]\ominus f(a)}{e^2\odot h^{2_G}}G\\ &= \frac{f(a\oplus e^2\odot h) \ominus e^2\odot f(a\oplus h)\oplus f(a)}{2!_G\odot h^{2_G}}G\\ &=\frac{\Delta^2_G f(a)}{2!_G\odot h^{2_G}}G.\\ \text{Similarly}\quad A_3 &=\frac{\Delta^3_G f(a)}{3!_G\odot h^{3_G}}G\\ \cdots &\quad \cdots \quad \cdots \quad \cdots\\ A_n &=\frac{\Delta^n_G f(a)}{n!_G\odot h^{n_G}}G. \end{align*} Putting the values of $A_0, A_1, A_2,....,A_n$ found above in (\ref{eqn10}), we get \begin{align*} P_n(x)=& f(a)\oplus \frac{\Delta_G f(a)}{h}G \odot (x\ominus a)\oplus \frac{\Delta^2_G f(a)}{2!_G\odot h^{2_G}}G\odot (x\ominus a)\odot(x\ominus a\ominus h)\\ &\oplus \frac{\Delta^3_G f(a)}{3!_G\odot h^{3_G}}G\odot(x\ominus a)\odot(x\ominus a \ominus h)\odot(x\ominus a\ominus e^2\odot h)\oplus\cdots\\ &\oplus \frac{\Delta^n_G f(a)}{n!_G\odot h^{n_G}}G \odot (x\ominus a)\odot(x\ominus a\ominus h)\odot\cdots \odot(x\ominus a\ominus e^{n-1}\odot h). \end{align*} This is the Geometric Newton-Gregory forward interpolation formula. Putting ${\frac{x\ominus a}{h}}G= u$ or $x=a \oplus h\odot u,$ formula takes the form \begin{gather} \begin{aligned}\label{eqn11} P_n(x)=& f(a)\oplus u\odot \Delta_G f(a) \oplus \frac{u\odot(u\ominus e)}{2!_G}G\odot \Delta^2_G f(a)\\ &\oplus \frac{u\odot(u\ominus e)\odot (u \ominus e^2)}{3!_G}G \odot \Delta^3_G f(a)\oplus\cdots\\ &\oplus \frac{u\odot (u \ominus e)\odot(u \ominus e^2)\odot \cdots \odot (u \ominus e^{n-1})}{n!_G}G \odot \Delta^n_G f(a). \end{aligned} \end{gather} The result (\ref{eqn11}) can be written as \begin{align*} P_n(x)=P_n(a\oplus h\odot u)=&f(a)\oplus u^{(1_G)}\odot \Delta_G f(a) \oplus \frac{u^{(2_G)}}{2!_G}G\odot \Delta^2_G f(a)\oplus \frac{u^{(3_G)}}{3!_G}G \odot \Delta^3_G f(a)\oplus \cdots \\ & \cdots \oplus \frac{u^{(n_G)}}{n!_G}G \odot \Delta^n_G f(a). 
\end{align*} where $u^{(n_G)}=u\odot (u \ominus e)\odot (u \ominus e^2)\odot\cdots \odot (u \ominus e^{n-1}).$ \begin{example} Given,$f(x)=f(e^t)=\sin(e^t).$ From the following table, find $\sin(e^{1.3})$ using geometric forward interpolation formula.\\[2ex] \begin{center \begin{tabular}{|c| c| c| c|c|} \hline $x$ & $e$ &$e^{1.2}$ & $e^{1.4}$& $e^{1.6}$\\ [0.5ex] \hline $f(x)$&$0.0474$& $0.0579$&$0.0707$&$0.0863$\\[1ex] \hline \end{tabular} \end{center} \noindent \textbf{Solution.} The geometric difference table for given data is as follows:\\[1.5ex] \begin{center \begin{tabular}{|c| c| c| c| c|} \hline $x$ & $f(x)$ & $\Delta_G f(x)$ & $\Delta^2_G f(x)$& $\Delta^3_G f(x)$\\ [1.5ex] \hline $e$ & 0.0474 & & & \\ & &1.2215 & & \\ $e^{1.2}$& 0.0579 & & 0.9997 & \\ & &1.2211 & & 0.9999 \\ $e^{1.4}$& 0.0707 & & 0.9996 & \\ & &1.3306 & & \\ $e^{1.6}$& 0.0863 & & & \\ \hline \end{tabular} \end{center} We have to calculate \begin{align*} f(e^{1.3})&=f(a\oplus u\odot h),\, \text{say}.\\ \therefore \quad a\oplus u\odot h&= e^{1.3}\\ \Rightarrow e\oplus u \odot e^{0.2}&= e^{1.3}, \quad(\text{here} ~ h=e^{1.2}\ominus e=e^{0.2})\\ u&= \frac{e^{1.3} \ominus e}{e^{0.2}}G\\ &=\left(e^{0.3}\right)^\frac{1}{0.2}\\ & =e^{1.5} \end{align*} By Geometric Newton-Gregory forward interpolation formula we get \begin{align*} f(a\oplus u\odot h)&=f(a)\oplus u\odot \Delta_G f(a) \oplus \frac{u\odot (u \ominus e)}{e^2}G\odot \Delta^2_G f(a)\\ &\quad\oplus \frac{u\odot (u \ominus e)\odot (u \ominus e^2)}{e^6}G \odot \Delta^3_G f(a)\\ f(e^{1.3}) &= f(e)\oplus \{e^{1.5}\odot \Delta_G f(e)\} \oplus \{\frac{e^{1.5}\odot (e^{1.5}\ominus e)}{e^2}G\odot \Delta^2_G f(e)\}\\ & \quad \oplus \{\frac{e^{1.5}\odot (e^{1.5}\ominus e)\odot (e^{1.5}\ominus e^2)}{e^6}G \odot \Delta^3_G f(e)\}\\ &= 0.0474\oplus \{e^{1.5}\odot 1.2215\}\oplus \{\frac{e^{1.5}\odot e^{0.5}}{e^2}G\odot 0.9997\}\\ & \quad \oplus \{\frac{e^{1.5}\odot e^{0.5}\odot e^{-0.5}}{e^6}G \odot 0.9999\}\\ &=0.0474\oplus (1.2215)^{1.5} \oplus (0.9997)^{0.325} \oplus (0.9999)^{\frac{1}{0.0625}}\\ &=0.0474\oplus 1.3500 \oplus 0.9999 \oplus 0.9984\\ &=0.0474 \times 1.3500 \times 0.9999 \times 0.9984\\ &=0.0639 \end{align*} Thus $\sin(e^{1.3})=0.0639.$ \end{example} \textbf{Note:} It is to be noted that $e^x\odot e^y=e^{xy},e^x\oplus e^y=e^{x+y}, x\oslash e^y=x^{\frac{1}{y}}.$ \vspace{0.3cm}\\ \textbf{Geometric Newton-Gregory Backward Interpolation Formula:} Let $y=f(x)$ be a function which takes the values $f(a\oplus e^n\odot h),f(a\oplus e^{n-1}\odot h), f(a\oplus e^{n-2}\odot h), f(a\oplus e^{n-3}\odot h),......,f(a)$ for the $n+1$ geometrically equidistant values $a\oplus e^n\odot h, a\oplus e^{n-1}\odot h, a\oplus e^{n-2}\odot h, a\oplus e^{n-3}\odot h,......, a$ of the independent variable $x$ and let $P_n(x)$ be a geometric polynomial in $x$ of degree $n$ defined as: \begin{gather} \begin{aligned}\label{eqn12} P_n(x)=& A_0\oplus A_1 \odot (x\ominus a\ominus e^n\odot h)\oplus A_2\odot (x\ominus a \ominus e^n\odot h)\odot(x\ominus a \ominus e^{n-1}\odot h)\\ &\oplus A_3\odot(x\ominus a \ominus e^n\odot h)\odot(x\ominus a \ominus e^{n-1} \odot h)\odot(x\ominus a\ominus e^{n-2}\odot h)\oplus\cdots\\ &\oplus A_n \odot (x\ominus a \ominus e^n\odot h)\odot(x\ominus a \ominus e^{n-1} \odot h)\odot\cdots \odot(x\ominus a\ominus h). 
\end{aligned} \end{gather} where $A_0, A_1, A_2, \ldots, A_n$ are constants which are to be determined so as to make \[P_n(a\oplus e^n\odot h)=f(a\oplus e^n\odot h), P_n(a\oplus e^{n-1}\odot h)=f(a\oplus e^{n-1}\odot h), \ldots, P_n(a)=f(a).\] Putting $x=a\oplus e^n\odot h, a\oplus e^{n-1}\odot h, \ldots$ in (\ref{eqn12}) and also putting\\ $P_n(a\oplus e^n\odot h)=f(a\oplus e^n\odot h), \ldots,$ we get \begin{align*} A_0&=f(a\oplus e^n\odot h)\\ A_1&=\frac{\nabla_G f(a\oplus e^n\odot h)}{h}G\\ A_2&=\frac{\nabla^2_G f(a\oplus e^n\odot h)}{2!_G\odot h^{2_G}}G\\ A_3&=\frac{\nabla^3_G f(a\oplus e^n\odot h)}{3!_G\odot h^{3_G}}G\\ &\;\;\vdots\\ A_n&=\frac{\nabla^n_G f(a\oplus e^n\odot h)}{n!_G\odot h^{n_G}}G \end{align*} Substituting the values of $A_0, A_1, A_2, \ldots$ in (\ref{eqn12}), we get \begin{gather} \begin{aligned}\label{eqn13} P_n(x)=& f(a\oplus e^n\odot h)\oplus \frac{\nabla_G f(a\oplus e^n\odot h)}{h}G \odot (x\ominus a\ominus e^n\odot h)\\ &\oplus \frac{\nabla^2_G f(a\oplus e^n\odot h)}{2!_G\odot h^{2_G}}G\odot (x\ominus a \ominus e^n\odot h)\odot(x\ominus a \ominus e^{n-1}\odot h)\\ &\oplus \frac{\nabla^3_G f(a\oplus e^n\odot h)}{3!_G\odot h^{3_G}}G\odot(x\ominus a \ominus e^n\odot h)\odot(x\ominus a \ominus e^{n-1} \odot h)\odot(x\ominus a\ominus e^{n-2}\odot h)\oplus\cdots\\ &\oplus \frac{\nabla^n_G f(a\oplus e^n\odot h)}{n!_G\odot h^{n_G}}G \odot (x\ominus a \ominus e^n\odot h)\odot(x\ominus a \ominus e^{n-1} \odot h)\odot\cdots \odot(x\ominus a\ominus h). \end{aligned} \end{gather} This is the Geometric Newton-Gregory backward interpolation formula. Putting $u=\frac{x\ominus (a\oplus e^n\odot h)}{h}G$ or $x=a\oplus e^n\odot h\oplus u\odot h,$ we get \begin{gather} \begin{aligned}\label{eqn14} P_n(x)&=P_n(a\oplus e^n\odot h\oplus u\odot h)= f(a\oplus e^n\odot h)\oplus u \odot \nabla_G f(a\oplus e^n\odot h)\\ &\oplus \frac{u\odot(u \oplus e)}{2!_G}G\odot \nabla^2_G f(a\oplus e^n\odot h)\\ &\oplus \frac{u\odot(u \oplus e)\odot(u \oplus e^2)}{3!_G}G\odot \nabla^3_G f(a\oplus e^n\odot h)\oplus\cdots\\ &\oplus \frac{u\odot(u \oplus e)\odot(u \oplus e^2)\odot \cdots \odot (u \oplus e^{n-1})}{n!_G}G \odot \nabla^n_G f(a\oplus e^n\odot h). \end{aligned} \end{gather} \begin{example} Given $f(x)=\ln (x).$ From the following table, find $\ln(22)$ using the geometric backward interpolation formula.\\[3ex] \begin{center} \begin{tabular}{|c| c| c| c|c|} \hline $x$ & 3 & 6 & 12 & 24\\ [0.5ex] \hline $f(x)$& 1.0986 & 1.7918 & 2.4849 & 3.1781\\[1ex] \hline \end{tabular} \end{center} \textbf{Solution.} The geometric difference table for the given data is as follows:\\[1ex] \begin{center} \begin{tabular}{|c| c| c| c| c|} \hline $x$& $f(x)$ & $\nabla_G f(x)$ & $\nabla^2_G f(x)$& $\nabla^3_G f(x)$\\ [1.5ex] \hline 3 & 1.0986 & & & \\ & &1.6310 & & \\ 6 & 1.7918 & & 0.8503 & \\ & &1.3868 & & 1.0847 \\ 12 & 2.4849 & & 0.9223 & \\ & &1.2790 & & \\ 24 & 3.1781 & & & \\ \hline \end{tabular} \end{center} We have to compute \begin{align*} f(22)&=f(a\oplus e^n\odot h\oplus u\odot h),\, \text{say}.\\ \therefore \quad a\oplus e^n\odot h\oplus u\odot h &= 22\\ \Rightarrow 24\oplus u\odot h&= 22, \quad(\text{here} ~ h=6\ominus 3= 2)\\ u&= \frac{22 \ominus 24}{2}G\\ &=\left(0.9167\right)^\frac{1}{\ln 2}\\ & =0.8820.
\end{align*} By the Geometric Newton-Gregory backward interpolation formula we get \begin{align*} f(22)&=f(24)\oplus u \odot \nabla_G f(24) \oplus \frac{u\odot(u \oplus e)}{2!_G}G\odot \nabla^2_Gf(24)\\ & \quad \oplus \frac{u\odot(u \oplus e)\odot(u \oplus e^2)}{3!_G}G\odot \nabla^3_Gf(24)\\ &= 3.1781\oplus \{0.8820 \odot 1.2790\} \oplus \{\frac{0.8820\odot(0.8820 \oplus e)}{e^2}G\odot 0.9223\} \\ &\quad \oplus \{\frac{0.8820\odot(0.8820 \oplus e)\odot(0.8820 \oplus e^2)}{e^6}G\odot 1.0847\}\\ &= 3.1781\oplus 0.9696 \oplus \{0.9466\odot 0.9223\}\oplus \{0.9663 \odot 1.0847\}\\ &=3.1781 \oplus 0.9696 \oplus 1.0045 \oplus 0.9972\\ &=3.1781 \times 0.9696 \times 1.0045 \times 0.9972\\ &= 3.0867 \end{align*} Therefore $\ln (22)= 3.0867.$ \end{example} \textbf{Note:} Since a small change in $x$ results in a large change in $e^x$, the values should, for better accuracy, be taken to as many decimal places as possible. \textbf{Advantages of Geometric Interpolation Formulae over Ordinary Interpolation Formulae:} All ordinary interpolation formulae rest on the fundamental assumption that the data can be represented by a polynomial function with a fair degree of accuracy. Geometric interpolation formulae carry no such restriction: they are based on geometric polynomials, which are not polynomials in the ordinary sense, and they can therefore be used to generate transcendental functions, mainly to compute exponential and logarithmic functions. Moreover, the geometric forward and backward interpolation formulae require values of the argument that are geometrically equidistant, rather than equidistant as in the classical interpolation formulae. \section{Conclusion} In this paper, we have defined the geometric difference sequence space and obtained the Geometric Newton-Gregory interpolation formulae. Our main aim is to bring geometric calculus to the attention of researchers in the branch of numerical analysis and to demonstrate its usefulness. We think that geometric calculus may be especially useful as a mathematical tool for economics, management and finance.
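The two worked examples above can also be verified mechanically. The following short sketch (an illustration added for convenience, relying only on the identities $e^x\oplus e^y=e^{x+y}$, $e^x\odot e^y=e^{xy}$ and $x\oslash e^y=x^{\frac{1}{y}}$ recorded in the Notes) implements the geometric forward interpolation formula and reproduces $\sin(e^{1.3})\approx 0.0640$:
\begin{verbatim}
import math

# Geometric arithmetic on positive reals (cf. the Notes above):
# a (+) b = a*b,  a (-) b = a/b,
# a (.) b = exp(ln a * ln b),  a (/) b = exp(ln a / ln b).
def gdiv(a, b):
    return math.exp(math.log(a) / math.log(b))

def geometric_forward(xs, ys, x):
    """Geometric Newton-Gregory forward interpolation; the nodes xs
    must be geometrically equidistant, i.e. xs[k+1] = xs[k] * h."""
    h = xs[1] / xs[0]                  # h = xs[1] (-) xs[0]
    lu = math.log(gdiv(x / xs[0], h))  # log of u = (x (-) a) (/) h
    row, result, coeff, k = list(ys), ys[0], 1.0, 0
    while len(row) > 1:
        # next column of the geometric forward difference table
        row = [row[i + 1] / row[i] for i in range(len(row) - 1)]
        coeff *= (lu - k) / (k + 1)    # log of u^((k+1)_G) (/) (k+1)!_G
        k += 1
        result *= row[0] ** coeff      # (+) the k-th term of the formula
    return result

xs = [math.e, math.e**1.2, math.e**1.4, math.e**1.6]
ys = [0.0474, 0.0579, 0.0707, 0.0863]
print(geometric_forward(xs, ys, math.e**1.3))   # ~0.0640
\end{verbatim}
The backward formula can be checked in the same way, running the difference table from the last node instead of the first.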
\end{document}
\section{Introduction} The nearby M\,81 group has been the subject of several surveys in the 21--cm line of atomic hydrogen (HI). These surveys can be divided into two broad categories: targeted and blind. \cite{appleton81}, using the 76--m Lovell telescope, and later \cite[Yun \etal\ (1994;]{yun94} \cite[see also Yun \etal\ 2000)]{yun00} mapped the large--scale HI distribution in the group, tracing the complex tidal tails and bridges resulting from the 3--body interaction between M\,81, M\,82, and NGC\,3077. Other targeted surveys, extending beyond the area covered by the triplet, have been those by \cite{huchtmeier98} and \cite{huchtmeier00} to determine the HI content of optically selected objects such as dwarf galaxies. Motivations for blind surveys have varied from i) searches for High Velocity Clouds (HVC) and Compact HVC analogues, and ii) deep surveys to push down the HI luminosity function, to iii) searches for ``Dark Galaxies'', i.e., dark matter haloes which have not as yet turned (part of) their gas content into stars. The first blind survey which included the M\,81 group was that of \cite{lo79}. Recently, \cite{boyce01} have repeated this at improved resolution and sensitivity as part of HIJASS, the HI Jodrell All--Sky Survey. \cite{boyce01} detected four known dIrr galaxies close to M\,81. They also detected an HI cloud apparently devoid of stars, HIJASS~J1021+6842. If nothing else, this shows that blind HI surveys can still surprise and reveal objects which have been overlooked in one of the best studied and most surveyed nearby groups. \cite{walter05} present VLA follow--up observations of HIJASS~J1021+6842; \cite{kara07} report the possible detection of an optical counterpart in the form of faint H$\alpha$ emission which, if confirmed, would imply that although it would be one of the most extreme low surface brightness systems, HIJASS~J1021+6842 wouldn't be a ``Dark Galaxy''. In what follows we will report on a project which started out with the intention to study the HI content of previously catalogued dSph galaxies in the M\,81 group. Deep observations were proposed to detect HI {\em in} these systems and {\em outside} the optical bodies in order to derive how much gas is associated with the galaxies themselves and how much is found in their immediate neighbourhood. We were interested in the latter to set constraints on the mechanisms that might be at work to remove gas from low--mass dwarfs. In the course of the project, however, we detected HI well beyond several of our targets, prompting us to change from targeted observations to blind survey mode. \begin{table} \begin{center} \caption{Summary of targets for the VLA D--array observations} \label{tab-obs} \begin{tabular}{lcllcc} \hline name & type & RA (2000.0) & DEC (2000.0)& m$_{\mathrm B}$ & detection\\ & & h~~~m~~~ s &\ \ $^{\circ}$~~~$^\prime$~~~$^{\prime\prime}$ & mag \\ \hline KDG 61 & sph & 09 57 03.1 & +68 35 31 & 15.2 & y \\ FM 1 & sph & 09 45 10.0 & +68 45 54 & 17.5 & n \\ BK 5 N & sph & 10 04 41.1 & +68 15 22 & 17.4 & y \\ KDG 64 & sph & 10 07 01.9 & +67 49 39 & 15.5 & y \\ KK 77 & sph & 09 50 10.5 & +67 30 24 & 16.3 & n \\ HIJASS & ? & 10 21 00.0 & +68 42 00 & ?
& y \\ DDO 71 & sph & 10 05 06.4 & +66 33 32 & 15.9 & n \\ DDO 78 & sph & 10 26 27.4 & +67 39 16 & 15.8 & n \\ BK 6 N & sph & 10 34 29.8 & +66 00 30 & 16.9 & n \\ KKH 57 & sph & 10 00 15.9 & +63 11 06 & 17.9 & y \\ \hline \end{tabular} \end{center} \end{table} \section{Observations and Results} The observations reported here were made with the NRAO\footnote{The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.} Very Large Array (VLA) over an extended period. Originally, 9 dSph galaxies plus HIJASS\,J1021+6842 were observed in VLA D--array in February 2003 (see Table~\ref{tab-obs}). In addition to mapping the source detected by \cite{boyce01}, HI was detected in the fields of BK5N, KKH\,57, KDG\,61, and KDG\,64. Follow--up C--array observations were obtained on these targets in April 2004. The resulting HI surface brightness maps of the combined C plus D--array observations are shown in Fig.\,\ref{fig-obs}. The maps were made using natural weighting for best sensitivity. They have an angular resolution of $35^{\prime\prime}$ which, at the assumed distance of the M\,81 group of 3.63\,Mpc \cite[(Freedman \etal\ 2001)]{freedman01}, corresponds to 0.6\,kpc. The rms noise is typically 0.6\,mJy\,beam$^{-1}$ or 0.3\,K at a velocity resolution of 5.2\,km\,s$^{-1}$. These single channel noise levels can be used to calculate a column density detection threshold. Assuming a signal to be genuine if it is detected at the $3\sigma$ level across 3 channels, we find a minimum detectable column density of $2.5 \times 10^{19}$\,cm$^{-2}$. Using the same criterion we arrive at a minimum detectable HI mass for an HI cloud filling the beam, at the distance of the M\,81 group, of $9 \times 10^4$\,M$_\odot$ (these thresholds can be reproduced with the short calculation given at the end of this paper). \begin{figure} \includegraphics{brinks-fig1.ps} \caption{HI surface brightness maps of the four fields where HI emission was detected, BK5N, KKH\,57, KDG\,61, and KDG\,64. KKH\,57 is the only field where we detected HI associated with the target dSph; in the other three fields, free--floating HI is found whereas the target galaxies remain undetected down to a level of $9 \times 10^4$\,M$_\odot$ (based on a $3\sigma$ detection threshold over 3 channels).}\label{fig-obs} \end{figure} The results of our observations can be summarised as follows: {\bf KKH\,57} We find a clear detection coinciding with the optical counterpart. The HI signal is only four channels wide at a systemic velocity of 203\,km\,s$^{-1}$; spatially the HI is unresolved. The low M$_\mathrm{HI}$/L suggests this is a transition galaxy, i.e., an object which combines properties of a dSph, such as a low luminosity and prominent old stellar population, with those of a dIrr, e.g., the presence of HI \cite[(see Mateo 1998, for a more detailed treatment]{mateo98} \cite[and Skillman \etal\ 2003, for examples)]{skillman03}. No other clouds are detected in this field. {\bf KDG\,61} \cite{boyce01} claim that this is a clear case of a transition galaxy. Their beam, however, is $12^\prime$ across and they are limited by confusion due to HI coming from the triplet. KDG\,61 lies close to M\,81 and, even with our much higher resolution data, it is not trivial to separate emission from KDG\,61 or other field objects from M\,81 emission. We seem to find several HI clouds which are neither related to the SE tidal arm nor to KDG\,61 itself, at velocities ranging from $-45$ to $-15$\,km\,s$^{-1}$.
{\bf BK5N} Several HI clouds are detected with velocities from $-130$ to $-75$\,km\,s$^{-1}$, well offset from the optical counterpart (for which no optical radial velocity has been published). {\bf KDG\,64 = UGC\,5442} The optical galaxy lies at $-18 \pm 14$\,km\,s$^{-1}$ \cite[(Simien \& Prugniel 2002)]{simien02}. We find HI emission offset in position and velocity, with HI detected at velocities ranging from $-120$ to $-70$\,km\,s$^{-1}$. This is illustrated in Fig.\ \ref{kdg64} which shows the location of the HI clouds with respect to the optical galaxy. \begin{figure} \begin{center} \includegraphics[width=11cm]{brinks-fig2.ps} \end{center} \caption{Optical image based on the {\em Digitized Sky Survey (DSS)} covering the field imaged in HI with the VLA centred on KDG\,64. The contours indicate the locations of half a dozen free--floating HI clouds. Note that no HI is associated with KDG\,64.}\label{kdg64} \end{figure} In summary, except for KKH\,57, none of the dSph targets were detected, confirming that most of them are devoid of HI. Quite surprisingly, however, HI was detected in several of the fields centred on a number of our targets but not obviously associated with them, either in position or in velocity. This, of course, raises the question of whether these clouds are somehow associated with the dSph galaxies, or whether they are semi--randomly distributed within the M\,81 group. In order to investigate this, follow--up ``blind'' HI observations of a further six fields were obtained with the VLA in D--configuration (observations taken in June, July, and August 2004). HI was found in only one of these fields, bordering BK5N and KDG\,64 to the East. In total, about a dozen barely resolved HI clouds were detected. They seem to predominantly occupy a region to the South--East of the triplet, extending as far away as $\sim 100$\,kpc. Their radial velocities are in the range $-130$ to $-70$\,km\,s$^{-1}$. Velocity dispersions are $\sim 8$\,km\,s$^{-1}$. HI masses are of order $10^5$\,M$_\odot$. Fig.\,\ref{kdg64} shows a DSS image of the field mapped around KDG\,64 with some of the HI clouds indicated with contours. No optical counterparts are visible on the DSS. The M\,81 group has been surveyed several times in the optical down to levels considerably fainter than the DSS. In fact, it is one of the best studied nearby groups \cite[(e.g., Froebrich \& Meusinger, 2000)]{froebrich00}. We can therefore be confident that these objects have very little starlight associated with them. \section{Discussion}\label{discuss} So what are they and what is their origin? There are several possibilities. They could be: \begin{enumerate} \item primordial material which is infalling towards the M\,81 triplet; \item High Velocity Cloud (HVC) analogues belonging to M\,81; \item material originally part of BK5N and KDG\,64 which has been expelled; \item or tidal debris scattered into the SE quadrant. \end{enumerate} Taking each of these possibilities in turn, the first explanation, infall of primordial material, is rather {\em ad hoc}. Without a determination of the metallicity of the gas in these HI clouds, it will be difficult to rule out this hypothesis. However, if primordial material were still to be around at the column and space densities found here, it should have been detected in other nearby groups as well. No similar objects have been seen in deep HI observations reaching comparable HI detection limits (\cite[Pisano \etal\ 2004, 2007]{pisano04, pisano07}).
If they represent analogues to HVCs, one would expect their velocity centroid to coincide with that of their host galaxy, M\,81. Also, their distribution would be expected to be more isotropic than the one found here. No clouds were found in any of the other targeted observations nor in the blind pointings which together covered areas in and around the triplet. The third option, that they represent material that has been expelled from BK5N and KDG\,64, is also hard to defend. Not only do the clouds seem spatially uncorrelated with both dwarfs, but in the one case where we have velocity information for the dwarf, the velocities of the clouds are offset substantially. This leaves us with the last option, the clouds being tidal debris. Inspection of the HI observations by \cite{appleton81} shows that the HI clouds fall along the extension of their ``feature VII'', but beyond the area covered by either their survey or the VLA mosaic by \cite{yun94}. Feature VII is prominent in the velocity range from $-130$ to $-100$\,km\,s$^{-1}$, in broad agreement with the clouds found near BK5N and KDG\,64. The VLA map by Yun \etal\ resolves feature VII, showing what appears to be a spur branching off from the tidal bridge connecting NGC\,3077 with M\,81, which points roughly in the direction of the dwarf spheroidals. This is illustrated in Fig.\ \ref{m81} which shows Yun's HI surface brightness map as contours overlaid on an optical image. The numerical simulation published by \cite{yun97} shows material from M\,81 being dragged out by the passage of NGC\,3077 and spread out towards the South--East. It therefore seems likely that the HI clouds found there are, in fact, tidal debris. They could be the neutral density peaks of a more dilute sheet of gas which is mostly ionised by the extragalactic radiation field, as a result of the column density being in general close to the value below which HI will become fully ionised. Peak column densities fall well below the canonical $\sim 10^{21}$\,cm$^{-2}$ threshold for star formation, explaining the lack of an optical counterpart. \begin{figure} \includegraphics[width=14cm]{brinks-fig3.ps} \caption{Optical image taken from the DSS of a $2^\circ \times 2^\circ$ field to the South--East of M\,81. Overlaid are contours of the HI distribution as measured by \cite{yun94}, showing the tidal tail connecting M\,81 with NGC\,3077, and a ``spur'' pointing to the South--East. In addition we plot as contours the HI clouds detected in the VLA observations reported here that are seen in the direction of BK5N and KDG\,64, plus the HI clouds encountered in VLA D--array observations in one of the ``blind'' pointings located due East of the two dwarf galaxies. The approximate locations of BK5N and KDG\,64 are indicated with stars.}\label{m81} \end{figure} \section{Conclusions}\label{sec:concl} In a search for HI in and around the lowest--mass dwarf spheroidal (dSph) galaxies in the M\,81 group, we have discovered an unexpected population of HI clouds with masses of order $10^5$\,M$_\odot$. Our observations of HIJASS J1021+6842, which were taken during the same observing runs, were published separately \cite[(Walter \etal\ 2005)]{walter05}. So far, about a dozen clouds have been detected within a region to the South--East of the triplet (in the vicinity of the dSphs KDG\,64 and BK5N); there were no detections toward regions around other dSph galaxies in the same group and in five more ``blind'' pointings.
From a technical perspective, our observations go much deeper than any previous blind surveys done with single dish telescopes in the M\,81 group (\cite[Lo \& Sargent 1979]{lo79}; \cite[Boyce \etal\ 2001]{boyce01}). The barely resolved clouds detected with the $35^{\prime\prime}$ VLA beam fall below the detection limit of single dish telescopes as a result of beam dilution. Our VLA data are also an order of magnitude more sensitive than similar surveys done in the Centaurus and Sculptor groups with the ATCA (\cite[de Blok \etal\ 2002]{blok02}\cite[; see also the discussion of the null result with Arecibo by Zwaan \& Briggs 2000)]{zwaan00}). The HI clouds detected here don't seem to have any optical counterparts. Circumstantial evidence argues in favour of these clouds being debris from the tidal interaction of the galaxies making up the M\,81 triplet, notably the passage of NGC\,3077 sweeping in a prograde fashion around the South of M\,81. In order to better understand these enigmatic clouds, further observations (extending the area covered thus far around BK5N and KDG\,64) are being analysed. \begin{acknowledgments} We thank Min Yun for providing us with an electronic version of his 1994 data which were used in Fig.\,\ref{m81}. \end{acknowledgments}
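For reference, the detection thresholds quoted in Section 2 follow from the standard 21--cm relations. The short sketch below is an illustration added for convenience (the numerical coefficients are the usual approximate ones) and reproduces the numbers used in the text:
\begin{verbatim}
# Detection thresholds of the natural-weighted VLA maps (Section 2).
rms_mJy = 0.6      # single-channel rms noise per beam
beam_as = 35.0     # beam FWHM in arcsec (taken to be circular)
dv_kms  = 5.2      # channel width in km/s
D_Mpc   = 3.63     # adopted distance of the M81 group
nsig, nchan = 3, 3 # criterion: 3 sigma across 3 channels

# Brightness temperature of the rms noise at 1.4204 GHz:
# T_B [K] ~ 1222 * S [mJy/beam] / (nu_GHz^2 * bmaj * bmin [arcsec^2])
T_rms = 1222.0 * rms_mJy / (1.4204**2 * beam_as**2)        # ~0.3 K

# Column density: N_HI [cm^-2] = 1.823e18 * Sum(T_B dv) [K km/s]
N_min = 1.823e18 * nsig * T_rms * nchan * dv_kms           # ~2.5e19

# Beam-filling HI mass: M_HI [Msun] = 2.36e5 D^2 * Int(S dv) [Jy km/s]
M_min = 2.36e5 * D_Mpc**2 * nsig * rms_mJy * 1e-3 * nchan * dv_kms  # ~9e4

print("T_rms = %.2f K" % T_rms)
print("N_HI(min) = %.1e cm^-2" % N_min)
print("M_HI(min) = %.1e Msun" % M_min)
\end{verbatim}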
\section{Introduction} \label{intro} \vspace{-0.3cm} Galactic bulges are emerging as inherently complex features in spiral galaxies. Numerous studies have shown them to have several spatial structures overlaying each other. The Milky Way is no different -- over the last decades our view of the stellar content, gas and stellar dynamics in the inner few kpc of our own galaxy has developed significantly. Whether the spatial structures are uniquely related to dynamical and chemical features is a very actively studied question. New surveys such as the VISTA Variables in The Via Lactea (VVV) public survey (described in \cite{refsaito}) are looking deeper and deeper into this intriguing Galactic component. Several outstanding questions remain unresolved, including the true shape of the metallicity distribution function (MDF) and how the MDF is connected to different spatial and kinematical structures. A recent discussion of these issues can be found in \cite{carine}. Other questions concern the star formation history of the bulge and the presence or absence of age spreads as, for example, traced by asymptotic giant stars \cite{vanloon}. The history of a stellar population is imprinted in its stars. The elemental abundances in the atmospheres of stars often remain unperturbed over time and act as time-capsules showing the mixture of elements present in the gas from which the stars formed. This is in particular true for dwarf stars, whose spectra, even for metal-rich stars, are fairly straightforward to analyse, making them the best tracers of galactic chemical evolution \cite{edvardsson}. However, dwarf stars in the Galactic bulge are too faint to be observed under normal circumstances ($V$= 19 -- 20, as seen, for example, in colour-magnitude diagrams obtained with the HST, \cite{feltzing}). The chemical history of the bulge has therefore mainly been studied using intrinsically bright giant stars. Results based on giant spectra are not trivial to interpret as evolutionary processes erase some of the abundance information and the cool atmospheres of giants, rich in molecules, are difficult to analyse; see the discussion in \cite{fulbright}. IR spectroscopy of bulge giants has recently become feasible but is still limited by the very restricted wavelength coverage on existing spectrographs. A recent example is given by the CRIRES spectra analysed in \cite{ryde}. However, the underlying assumption that giants accurately represent all stars has not yet been rigorously tested, as discussed in \cite{taylor}. Therefore, a metallicity distribution function based on red giant stars may not reflect the original distribution. Physical processes within the red giant stars can also lead to the erasure of some of the original abundance signatures. This is in particular the case for C, N, and Li. These abundances are eventually altered in all red giants, and in some stars O, Na, Mg, and Al are also altered, as discussed in \cite{kraft}. This means that a true study of the star formation history in the Galactic bulge requires the study of dwarf stars. Furthermore, the precision achieved in dwarfs is better than in giants, allowing us to look for any substructure that may be present in the bulge population. Finally, for dwarf stars close to the turn-off point or on the sub-giant branch it is possible to derive individual ages. These give us a unique insight into the age structure of the Galactic bulge. Micro-lensing offers the unique opportunity to observe dwarf stars in the bulge.
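For a point lens, the standard magnification as a function of the impact parameter $u$ (in units of the Einstein radius) is $A(u)=(u^2+2)/\bigl(u\sqrt{u^2+4}\bigr)$, so the brightenings of several magnitudes mentioned below correspond to impact parameters of order a hundredth of an Einstein radius. A minimal numerical sketch (an illustration using this textbook formula, not a reproduction of the published analysis):
\begin{verbatim}
import math

def magnification(u):
    # standard point-source, point-lens magnification
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def delta_mag(u):
    return 2.5 * math.log10(magnification(u))

# bisect for the impact parameter giving a 5-magnitude brightening
lo, hi = 1e-6, 1.0
while hi - lo > 1e-9:
    mid = 0.5 * (lo + hi)
    if delta_mag(mid) > 5.0:
        lo = mid   # still brighter than 5 mag: move outwards
    else:
        hi = mid
print("u for a 5 mag amplification: %.4f Einstein radii" % lo)  # ~0.01
\end{verbatim}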
When the star is lensed by a foreground object, its brightness can increase by more than 5 magnitudes, making it possible to obtain a spectrum of high resolution and S/N so that a standard abundance analysis can be done \cite{bensby2011}. The observed micro-lensing events discussed in this contribution and their analysis are fully documented in \cite{bensby2010} and \cite{bensby2011}. Issues concerning limb-darkening are further developed in \cite{johnson}. \section{Micro-lensed dwarf stars in the Galactic Bulge - discussion} \label{sec:1} \vspace{-0.3cm} \begin{figure} \resizebox{0.7\columnwidth}{!}{% \includegraphics{feltzingfig1.eps}} \caption{Colour-magnitude diagrams for the micro-lensed dwarf stars. The colours and magnitudes are determined using micro-lensing techniques ($\circ$) and spectroscopy ($\bullet$). {\bf a} Shows the stars with [Fe/H]$<0$ and {\bf b} the stars with [Fe/H]$>$0. Each colour-magnitude diagram also shows representative $Y^2$ isochrones for 1, 5, 10, and 15\,Gyr, from \cite{demarque}. For each star we connect the result based on micro-lensing techniques with that based on spectroscopy using a dotted line. Four stars have no values from micro-lensing techniques (see \cite{bensby2011}). } \label{feltzingfig:1} \end{figure} \subsection{Abundance trends} \vspace{-0.3cm} We find that the elemental abundance trends in the Galactic bulge, as traced by the micro-lensed dwarf stars, are very similar, if not identical, to those found in the solar neighbourhood for dwarf stars with kinematics typical of the thick disk \cite{bensby2011}. Recent studies of red giant stars have also shown great similarities between the local thick disk giants and the giants in the Galactic bulge \cite{alvesbrito}. Thus it appears that the earlier results, where red giant stars in the Galactic bulge showed large $\alpha$-enhancements also at solar and even super-solar metallicities, must be ascribed to the difficulty in analysing optical spectra of metal-rich red giants (see also the discussion in \cite{bensby2010}). \vspace{-0.3cm} \subsection{MDF and IMF} \label{subsec:1} \vspace{-0.3cm} In our two papers \cite{bensby2010} and \cite{bensby2011} we compare the MDF based on the small number of micro-lensed dwarf stars available with the then best MDF based on spectroscopy of red giant stars, as analysed in \cite{zoccali2008}. We found a significant difference, with the dwarf stars showing a bi-modal MDF. A recent re-analysis of the spectra of these red giant stars has changed the situation somewhat by making the MDF based on giant stars somewhat more bi-modal. This re-analysis is presented in \cite{hill}. A KS-test between that result and the MDF based on micro-lensed dwarf stars (including only those presented by us in \cite{bensby2011}) is inconclusive. The MDFs could be drawn from the same population or not. The most recent update of our MDF based on micro-lensed dwarf stars, in total 37 stars (end of September 2011), still shows a bi-modal MDF, with an ever increasing fraction of metal-rich stars. An interesting implication of the true shape of the MDF concerns the Initial Mass Function (IMF). The origin of the slope of the IMF is much debated (for a short, recent introduction to the debate see \cite{oey}). Many processes lead to roughly the same slope and there is not much evidence that metallicity influences the shape or slope of the IMF; a review can be found in \cite{bastian}. Recent work on the IMF in the Galactic bulge has resulted in interesting conclusions.
The peak of the MDF depends on the slope of the IMF. The chemical evolution model of the Galactic bulge by \cite{ballero} is based on the photometric MDF of giant stars by \cite{zoccali2003} and on the spectroscopic MDF of giant stars by \cite{fulbright}. In \cite{ballero} an IMF much flatter than in the solar neighbourhood is found, i.e., an IMF skewed towards high mass stars. A more recent model, still using the original spectroscopic MDF results from \cite{zoccali2008}, also requires a flat IMF to reproduce the peak of the MDF as derived from the red giant stars (\cite{cescutti}). However, the MDF based on micro-lensed dwarf stars persistently shows two well-defined peaks, one that can be associated with a metal-poor old bulge, and another with super-solar metallicities that can be associated with a younger population (compare also the discussion of the connection between the stellar kinematics and their metallicities presented in \cite{carine}). Hence, the IMF no longer has to be flat to explain a single-peaked, solar-metallicity MDF that was made in 0.5\,Gyr, as in these recent models (\cite{cescutti}). To reproduce the bi-modal nature of the bulge MDF, a normal IMF can probably be used for the metal-poor bulge, while contributions from type Ia SN might explain the younger metal-rich peak. However, more detailed models are needed before firm conclusions can be drawn. \vspace{-0.3cm} \subsection{Ages} \label{subsec:2} \vspace{-0.3cm} \begin{figure} \begin{center} \resizebox{0.55\columnwidth}{!}{% \includegraphics{feltzingfig2.eps} } \end{center} \caption{The ages and metallicities of the 26 dwarf stars as derived in \cite{bensby2011}.} \label{feltzingfig:2} \end{figure} Dwarfs near the turn-off are unique as we can get stellar ages for them. Figures\,\ref{feltzingfig:1} and \ref{feltzingfig:2} show a summary of the results of our determination of stellar parameters and ages for the stars presented in \cite{bensby2011}. In Fig.\,\ref{feltzingfig:1} we split the stars according to metallicity. We show two different values of $M_{\rm I}$ and $(V-I)_0$ for each star. One is based on the effective temperature and surface gravity derived from the stellar spectra alone (spectroscopic values). The micro-lensing technique relates the magnitude of the star to that of the red clump stars in the same field. The advantages and drawbacks of each technique are discussed in our recent paper, \cite{bensby2011}. The thing to take away from the left-hand panels in Fig.\,\ref{feltzingfig:1} is that regardless of which technique is used, stars with sub-solar metallicities essentially trace an old turn-off, while stars with super-solar metallicities show a wider range of ages. This is still true when the latest events are included (Bensby et al. 2012 in prep.). A surprising result from the micro-lensed dwarf stars is the presence of a large age spread among the most metal-rich stars. This result might appear unexpected given the large amount of evidence based on deep CMDs that show a red and faint turn-off, most often interpreted as the result of a uniquely old and metal-rich stellar population (as shown in numerous studies, including \cite{holtzman}, \cite{ortolani}, \cite{feltzing}, \cite{zoccali2003}, \cite{clarkson}). However, there is evidence from AGB stars of an intermediate age population in the Galactic bulge. This has been seen in at least three independent studies (\cite{vanloon}, \cite{cole}, \cite{uttenthaler}).
Based on ISOGAL and DENIS data of the inner 10$^\circ$ of the Galactic bulge, \cite{vanloon} find a few hundred asymptotic giant branch stars, which is consistent with their inferences from the near infrared CMDs, and \cite{uttenthaler} find evidence for Tc in a sub-sample of their C-stars, indicative of third dredge-up and a minimum stellar mass of 1.5\,M$_{\odot}$, which implies an upper age limit of 3\,Gyr. Our data for micro-lensed dwarfs appear to confirm the existence of such an intermediate age population in the inner kpcs. \vspace{-0.3cm} \section{Summary and outlook} \vspace{-0.3cm} So far we have presented elemental abundances and ages for 26 micro-lensed dwarf stars (\cite{bensby2010} and \cite{bensby2011}). They show that dwarf stars in the Galactic bulge share the elemental abundance trends with the thick disk in the solar neighbourhood and have an MDF that is bi-modal. This is also true for the most recent observations (Bensby et al. 2012 in prep.). Surprisingly, we find, among the stars with super-solar metallicities, a wide range of ages. This remains to be better explained, but we note that AGB stars and variable stars (Miras) present in the Galactic bulge also point to a sub-population with an intermediate age. We note that this is, on the surface, contradictory to the red and faint turn-offs seen in all CMDs of the Bulge. However, a smaller intermediate-age, metal-rich stellar population can most likely still be accommodated. Detailed modelling of this, and larger samples of both dwarf and giant stars covering wider areas of the bulge, are needed to fully understand the connection between the various spatial structures in the bulge and the MDF. \vspace{-0.5cm}
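As a rough consistency check on the age limit quoted above, the crude main-sequence lifetime scaling $t_{\rm MS}\approx 10\,(M/{\rm M}_\odot)^{-5/2}$\,Gyr (an approximate textbook relation, not one used in the papers themselves) gives \[ t_{\rm MS}(1.5\,{\rm M}_\odot)\approx 10\times 1.5^{-5/2}\,{\rm Gyr}\approx 3.6\,{\rm Gyr}, \] in line with the upper age limit of $\sim$3\,Gyr inferred from the Tc detections.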
\section*{Introduction} \label{s - intro} In this paper we prove simplicity (up to center) of some (incomplete) Kac-Moody groups over algebraic closures of finite fields. At first glance, this might be a surprising result because the examples which are usually given to introduce incomplete Kac-Moody groups (as defined by J.~Tits \cite{TitsKM}) are of affine type, and the latter groups have a matrix interpretation. For instance, a Kac-Moody group of type $\widetilde{{\rm A}}_n$ over some field $F$ is isogenous to ${\rm SL}_n(F[t,t^{-1}])$. In fact, any $F$-split simple algebraic group ${\bf G}$ gives rise to a Kac-Moody group functor $R \mapsto {\bf G}(R[t,t^{-1}])$ on $F$-algebras. The values over fields of such a functor are (Kac-Moody) groups admitting a lot of (congruence) quotients, since the ring $R[t,t^{-1}]$ has arbitrarily small ideals. \smallskip The question is thus: given a certain class of ground fields, which types of Kac-Moody groups shall we exclude to hope for simplicity? The situation over finite ground fields is almost completely understood \cite{CaRe}. The outcome suggests that among the irreducible generalized Cartan matrices, the only types that should be excluded are the affine ones. To be more precise, this general picture over finite fields is completely confirmed except when the generalized Cartan matrix is $2 \times 2$, in which case the problem is only half solved \cite{CaReRk2}. The connection with our case, where ground fields are of the form $\overline{{\bf F}_q}$, is that simplicity over finite ground fields easily implies simplicity over the algebraic closure (\ref{ss - proof}, Remark \ref{rk - simple}). We deal here with the only case where simplicity over finite ground fields is still an open question. \begin{thm*} Let $A = \left( \begin{array}{cc} \hfill 2 & -n \\ -m & \hfill 2 \end{array} \right)$ be a generalized Cartan matrix of indefinite type, i.e. $mn>4$. Let $\mathscr{G}_A$ be the corresponding simply connected incomplete Kac-Moody group functor and let $F$ be an algebraic closure of a finite field. Assume that $m,n \geqslant 2$. Then, the group $\mathscr{G}_A(F)/Z\bigl( \mathscr{G}_A(F) \bigr)$ is simple. \end{thm*} In particular, this theorem settles the last case needed to prove the following statement (see Remark \ref{rk - simple final}, subsection \ref{ss - heuristic}): {\it irreducible, simply connected, non-affine Kac-Moody groups over algebraic closures of finite fields are simple modulo their centers}. The picture over finite fields is slightly less complete. \smallskip The reason why affine types must be excluded for simplicity over finite ground fields has a geometric explanation, which naturally leads us to introduce the main tool in the investigation of these groups, namely buildings (another concept introduced by J.~Tits and presented in \ref{ss - twin buildings}). Roughly speaking, a building is a nice, symmetric, simplicial complex designed to admit group actions. By definition, a building is covered by subcomplexes (called apartments) which are all isomorphic and whose geometry is fully encoded by a Coxeter group which is called the Weyl group of the building. An infinite Weyl group is a Euclidean reflection group if and only if it has polynomial growth for its natural generating set. For generalized Cartan matrices of size $\geqslant 3$, simplicity occurs (at least over finite fields) precisely when the Weyl group of the buildings is not Euclidean, because then the associated root system has some nice weak hyperbolicity properties.
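The growth dichotomy just mentioned is easy to observe experimentally. The following sketch (an illustration, assuming the standard geometric representation of a Coxeter group; it is not taken from the references) counts the elements of each word length for a Euclidean and a non-Euclidean example:
\begin{verbatim}
import math
import numpy as np

def reflection_generators(m):
    """Generators of the Coxeter group with Coxeter matrix m in its
    geometric representation (m[s][t] = order of st; 0 means infinity)."""
    n = len(m)
    B = np.array([[-math.cos(math.pi / m[s][t]) if m[s][t] else -1.0
                   for t in range(n)] for s in range(n)])
    gens = []
    for s in range(n):
        M = np.eye(n)
        M[s, :] -= 2.0 * B[s, :]   # sigma_s(v) = v - 2 B(e_s, v) e_s
        gens.append(M)
    return gens

def sphere_sizes(m, radius):
    """Number of elements of each word length (the representation is
    faithful, so matrices distinguish group elements)."""
    gens = reflection_generators(m)
    n = len(m)
    seen = {tuple(np.eye(n).round(6).ravel())}
    layer, sizes = [np.eye(n)], []
    for _ in range(radius):
        new = []
        for w in layer:
            for g in gens:
                v = g @ w
                key = tuple(v.round(6).ravel())
                if key not in seen:
                    seen.add(key)
                    new.append(v)
        sizes.append(len(new))
        layer = new
    return sizes

# Euclidean reflection group (affine Weyl group of the triangle tiling,
# all m_st = 3): the sphere sizes grow linearly.
print(sphere_sizes([[1, 3, 3], [3, 1, 3], [3, 3, 1]], 8))
# Non-Euclidean example (all m_st infinite): exponential growth.
print(sphere_sizes([[1, 0, 0], [0, 1, 0], [0, 0, 1]], 8))
\end{verbatim}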
The proof of our result is also related to some kind of hyperbolicity, since our assumption $mn>4$ corresponds to hyperbolic root systems of rank 2. This proof requires in addition the use of some weak version of simplicity, called the normal subgroup property, which is reminiscent of a famous result of G.~Margulis about lattices in higher rank Lie groups. \smallskip The structure of the paper is the following. In Section 1, we introduce the basic objects used in the paper, namely twin buildings and Kac-Moody groups. In Section 2, we recall the situation over finite fields because we need to state the normal subgroup property in this case. In Section 3, we prove our main theorem and mention the remaining related problem for finitely generated Kac-Moody groups. \smallskip Let us finally introduce some notation. Concerning groups, $Z(G)$ means the center of a group $G$. Concerning rings, ${\bf Z}$ (resp. ${\bf Q}$, ${\bf R}$) means the set of integers (resp. rational, real numbers). In this article, $p$ is a prime number and $q$ a power of some $p$; finally, ${\bf Q}_p$ (resp. ${\bf F}_p$, ${\bf F}_q$) means the field of $p$-adic numbers (resp. a prime field of characteristic $p$, a finite field of order $q$). \bigskip \section{Twin building and Kac-Moody theory} \label{s - TB & KM} All the theories in this section are due to J.~Tits, see for instance \cite{TitsVancouver} and \cite{TitsTwin} for (twin) buildings and \cite{TitsKM} for Kac-Moody groups. \subsection{Twin building theory} \label{ss - twin buildings} Let us first recall the definition of a building. If $W = \langle s \in S \mid (st)^{M_{st}}=1\rangle$ is a Coxeter group defined by the Coxeter matrix $[M_{st}]_{s,t \in S}$, there is a simplicial complex, called the {\it Coxeter complex}~$\Sigma$ of $(W,S)$, on the maximal simplices of which $W$ acts simply transitively \cite{BBK}. In this context, simplices are rather called {\it facets}. Coxeter complexes (seen as simplicial or metric spaces) are generalized tilings on which the initial Coxeter group acts as a generalized reflection group generated by natural involutions (reflections in faces of a given {\it chamber}, i.e. a maximal facet). \smallskip Up to removing the facets with infinite stabilizers, there exists a geometric realization for $\Sigma$, usually different from the one introduced in Bourbaki, carrying a complete metric such that the resulting metric space is non-positively curved and contractible. Technically the notion is that of a complete {\it {\upshape CAT($0$)}\xspace-space}. Since we will use this terminology without going into technical details, we simply refer to \cite{BH}. \begin{defn} A {\it building of type $\Sigma$}~is a simplicial complex covered by sub-complexes all isomorphic to the Coxeter complex $\Sigma$, called {\it apartments} and required to satisfy the following axioms. \begin{enumerate} \item[(i)]~Any two simplices are always contained in an apartment. \item[(ii)]~Given any two apartments $A$, $A'$ there is an isomorphism $A \simeq A'$ fixing $A \cap A'$. \end{enumerate} The group $W$ is called the {\it Weyl group}~of the building. \end{defn} The above axioms can be motivated by metric considerations.
Indeed, they can be used to glue together the above (Davis-Moussong) metrics on each apartment in order to define a complete {\upshape CAT($0$)}\xspace metric on the building: axiom (i) says that computing the distance between two points can always be done in a suitable apartment, and axiom (ii), up to additional work in order to define suitable 1-Lipschitz retractions, shows that the distance computed this way doesn't depend on the apartment. \begin{example} Let $D_\infty$ be the infinite dihedral group, i.e. the group generated by two reflections in consecutive integers on the real line. Then a building of type $D_\infty$ is a tree (without pending leaf). Note that such a tree may have no automorphism at all since trees in which any two vertices have distinct valencies are not excluded by the axioms (the isomorphism in (ii) need not be defined globally). \end{example} \begin{example} The Coxeter complex of type $\widetilde A_2$ is the one given by the tiling of ${\bf R}^2$ by regular triangles. Buildings whose apartments have this shape are called triangle buildings; they appear as Bruhat-Tits buildings for Lie groups like ${\rm SL}_3$ over local fields. More generally, one consequence of Cartan and Bruhat-Tits theories is the possibility of associating to any $S$-arithmetic group a complete {\upshape CAT($0$)}\xspace-space on which it acts nicely. These spaces are obtained as products of symmetric spaces and of {\it Euclidean buildings}, i.e. buildings in which apartments are Euclidean tilings. \end{example} New interesting questions occur when the buildings of the geometric actions under consideration are no longer Euclidean. Many examples of buildings with hyperbolic tilings as apartments are available thanks to Kac-Moody theory (\ref{ss - KM}). Such exotic buildings provide more opportunities to study non-linear discrete groups via geometric actions. \smallskip Let us now turn very quickly to twinnings \cite{TitsTwin}. Initially, the idea is to extend some rigidity properties (useful in the classification of spherical buildings) to non-spherical buildings, provided they are twinned with another isomorphic building. The idea of adding a twin to a non-spherical building enables one to define an opposition relation between facets in the two different buildings. Conventionally, each of the two twinned buildings is given a sign $\pm$. This opposition relation between facets of opposite signs is a substitute for the existence of a longest element in the Weyl group of the buildings. \begin{example} By Bruhat-Tits theory, the groups ${\rm SL}_n\bigl( \mathbf{F}_q (\!( t^{\pm 1} )\!) \bigr)$, where $\mathbf{F}_q (\!( t^{\pm 1} )\!)$ are locally compact non-Archimedean fields of formal Laurent series, act on isomorphic Euclidean buildings, say $X_\pm$. There is a natural twinning between $X_-$ and $X_+$ such that the discrete group ${\rm SL}_n(\mathbf{F}_q[t,t^{-1}])$ (embedded diagonally in the product of the two previous groups) acts on $X_- \times X_+$ and preserves opposition of chambers of opposite signs. \end{example} Given a homogeneous tree, there are uncountably many ways to associate to it a twin tree, but most twinnings have no automorphism at all \cite{RonanTitsIJM}. Still, the additional Moufang condition on twin buildings guarantees the existence of enough automorphisms for these buildings. We will not go into details, but we simply mention that Kac-Moody theory provides lots of examples of twin buildings satisfying the Moufang condition.
Even more exotic (i.e., non-Kac-Moody) Moufang twin buildings with enough automorphisms are also available by means of more down-to-earth constructions, see \cite{RemRon} and also \cite{AbRe}. \subsection{Kac-Moody groups} \label{ss - KM} Kac-Moody groups are constructed from the same kind of data as for Chevalley groups, namely a ground field and some Lie-theoretic data classifying semisimple Lie algebras \cite{RemAst}. \smallskip More precisely, a {\it generalized Cartan matrix}~is an integral matrix $A = [A_{s,t}]_{s,t \in S}$ indexed by a set $S$ (which is here assumed to be finite), such that $A_{s,s}=2$ for any $s \in S$ and $A_{s,t} \leqslant 0$ for any $s \neq t$ in $S$; it is further required that $A_{s,t} = 0$ if and only if $A_{t,s} =0$. From this Lie-theoretic matrix, a certain group functor over rings can be constructed by generators and relations \cite{TitsKM}. It is a heavy machinery of algebraic and combinatorial nature, which gives a Chevalley group scheme if the matrix $[A_{s,t}]_{s,t \in S}$ is a Cartan matrix (i.e., if it can be written as the product of a diagonal matrix with a positive definite symmetric matrix). In fact, as in this classical case, the matrix $A$ only determines Kac-Moody group functors up to center, and in what follows we always use the simply connected groups (this choice plays no significant role for our purposes -- it makes simplicity results easier to state). A {\it Kac-Moody group}~is the value of a Kac-Moody functor on a field, called the {\it ground field}~in what follows. We are interested in the geometric outcome of this construction: {\it a Kac-Moody group acts on the product of two Moufang twin buildings and the kernel of the action is its center}. It is a well-known fact that a group enjoying the structure of a Tits system (also called BN-pair) naturally acts (strongly transitively) on a building. In the case of a Kac-Moody group of non-classical (i.e., non-Chevalley) type, there are two conjugacy classes of subgroups which lead to two distinct buildings. Moreover the Weyl group, i.e. the shape of the apartments of the twinned buildings $X_\pm$, is explicitly known since its Coxeter matrix $[M_{s,t}]_{s,t \in S}$ is determined by the rule $M_{s,t} = 2$ (resp. $3,4,6$ or $\infty$) according to whether $A_{s,t}A_{t,s}$ is equal to $0$ (resp. $1,2,3$ or is $\geqslant 4$). Finally, the associated buildings $X_\pm$ are locally finite if and only if the ground field is finite, which we assume until the end of the next section. This implies that, for the {\upshape CAT($0$)}\xspace-metric, the isometry groups ${\rm Isom}(X_\pm)$ are locally compact for the compact open topology, and as such admit Haar measures. \begin{example} Over a given field ${\bf F}$, for a suitable choice of generalized Cartan matrices (namely for those of affine type), the corresponding Kac-Moody groups are of the form ${\bf G}({\bf F}[t,t^{-1}])$ where ${\bf G}$ is a semisimple algebraic group over ${\bf F}$. Then the actions of ${\bf G}({\bf F}[t,t^{-1}])$ on the associated twin buildings are those given by Bruhat-Tits theory by seeing ${\bf G}({\bf F}[t,t^{-1}])$ as a subgroup of the two completions ${\bf G}\bigl({\bf F}(\!(t)\!)\bigr)$ and ${\bf G}\bigl({\bf F}(\!(t^{-1})\!)\bigr)$. \end{example} \begin{example} Using the rule $[A_{s,t}]_{s,t \in S} \to [M_{s,t}]_{s,t \in S}$, we easily see that many buildings whose apartments are real hyperbolic tilings are made available by Kac-Moody theory.
An interesting point in this construction is the fact that a Kac-Moody group acts on each of the two twinned buildings in a highly transitive way (in particular it acts on each factor with a chamber as fundamental domain). \end{example} \bigskip \section{Finitely generated Kac-Moody groups} \label{s - fg KM} In this section, we recall the general situation of twin building lattices, as far as the question of simplicity is concerned. For this we need to recall some general notions from geometric group theory. \subsection{A glimpse of geometric group theory} \label{ss - GGT} Roughly speaking, arithmetic groups are matrix groups with coefficients in rings of integers of global fields and in natural generalizations; examples of such groups are ${\rm SL}_n({\bf Z})$ or ${\rm SL}_n({\bf Z}[1/p])$. An arithmetic group appears as a subgroup in a product of (real and totally disconnected) Lie groups (e.g., ${\rm SL}_n({\bf Z}) < {\rm SL}_n({\bf R})$, and ${\rm SL}_n({\bf Z}[1/p]) < {\rm SL}_n({\bf R}) \times {\rm SL}_n({\bf Q}_p)$ for the diagonal inclusion). Furthermore a non-compact simple Lie group naturally acts on a complete {\upshape CAT($0$)}\xspace-space \cite{SMF18}. It is a symmetric space if the simple Lie group is defined over the real numbers. When the ground field of the simple Lie group is a non-Archimedean local field, the metric space is a Euclidean building (the construction of the latter space is not trivial at all; it follows from the so-called Bruhat-Tits theory \cite{Rousseau}). Putting these two facts together (and forgetting the step involving the ambient topological groups), we obtain an interesting situation (called here a {\it geometric action}) where a discrete group $\Gamma$ acts on a metric space $(X,d)$ so that: \smallskip \begin{enumerate} \item[(GA1)] the metric $d$ on $X$ is complete and {\upshape CAT($0$)}\xspace; \item[(GA2)] the group $\Gamma$ acts by isometries and properly discontinuously on $X$; \item[(GA3)] the $\Gamma$-action has a nice fundamental domain. \end{enumerate} \noindent By ``nice'' fundamental domain, we can mean compact, but compactness is usually too strong. More technically, it means that the full isometry group ${\rm Isom}(X,d)$ carries a Haar measure and that the corresponding invariant measure on the homogeneous space ${\rm Isom}(X,d)/\Gamma$ has finite volume. We say then that $\Gamma$ is a {\it lattice}~for $(X,d)$. \begin{example} \label{ex - Poincare} The symmetric space associated to ${\rm SL}_2({\bf R})$ is Poincar\'e's upper half-plane $\mathbb{H}^2_{\bf R}$ and the group ${\rm SL}_2({\bf Z})$ acts on it with the well-known fundamental domain $\{ z \in {\bf C} \,\, : \,\, \mid\! z \!\mid \,\geqslant 1$ and $\mid\! {\rm Re}(z) \!\mid \,\leqslant {1 \over 2}\}$. \end{example} \begin{example} The Bruhat-Tits building associated to the rank 1 non-Archimedean simple Lie group ${\rm SL}_2({\bf Q}_p)$ is the homogeneous tree $T_{p+1}$ of valency $p+1$. The natural action of the lattice ${\rm SL}_2({\bf Z}[{1 \over p}])$ is the diagonal action on the mixed product $\mathbb{H}^2_{\bf R} \times T_{p+1}$ of a differentiable manifold and a simplicial complex. \end{example} \begin{example} To obtain a geometric action of a lattice on a product of two trees, one can use slightly less familiar matrix groups.
Namely, start with a quaternion algebra over ${\bf Q}$, say $H$, such that $H({\bf R})$ is a skew-field (in arithmetic terms, $H$ is ramified at $\infty$); pick two prime numbers $p$ and $l$ such that $H({\bf Q}_p)$ and $H({\bf Q}_l)$ are matrix algebras. Then the elements in $H\bigl({\bf Z}[{1\over pl}]\bigr)$ form a discrete group having a geometric action on $T_{p+1} \times T_{l+1}$, and the fundamental domain is compact. \end{example} A typical question in geometric group theory consists in asking what can be said about a discrete group once it is known to admit a geometric action on a particularly nice {\upshape CAT($0$)}\xspace-space (e.g. a non-spherical building or a cube complex -- products of trees belong to both classes). Relevant questions are for instance related to freeness, linearity, residual finiteness, simplicity, etc. The historical statement, in connection with Example \ref{ex - Poincare}, is the proof that ${\rm SL}_2({\bf Z})$ contains a finite index subgroup isomorphic to the free group $F_2$ (this is F.~Klein's ping-pong argument). \subsection{Non-affine higher-rank finitely generated Kac-Moody groups} \label{ss - NSP} We can now go back to the objects defined in the previous section. Let $\Lambda$ be a Kac-Moody group over a finite field $\mathbf{F}_q$ of order $q$. Then the diagonal $\Lambda$-action on $X_-\times X_+$ is geometric in the sense of the axioms (GA) in \ref{ss - GGT}. Using the group combinatorics of twin Tits systems, we can see that a fundamental domain is given for instance by the product of a negative chamber by a suitable positive apartment. The starting point of the analogy between Kac-Moody groups over finite fields and $S$-arithmetic groups is the following result \cite{RemCRAS}: {\it at least when $q > \#S$, the group $\Lambda$, which is finitely generated by construction, is a lattice in ${\rm Isom}(X_-) \times {\rm Isom}(X_+)$}. In fact, the covolume of $\Lambda$ is given by $\sum_{w \in W} q^{-\ell(w)}$ for a suitable normalization of Haar measures; in particular, for twin trees (where $\# S=2$) the covolume is always finite since $W$ has linear growth in that case. Now the main structure result on normal subgroups of lattices in Lie groups is due to G.~Margulis \cite[Lecture 4]{Benoist}: {\it let $\Gamma$ be an irreducible lattice in a higher-rank semisimple Lie group. Then for any $\Delta \triangleleft \Gamma$, either the subgroup $\Delta$ is finite and central, or $\Delta$ has finite index in $\Gamma$.} A group all of whose normal subgroups satisfy the previous dichotomy is said to have the {\it normal subgroup property}, (NSP) for short. This is a typical result to try to generalize for lattices in products of buildings obtained from Moufang twin buildings. This was indeed checked in \cite{RemInt}: {\it let $\Lambda$ be an irreducible Kac-Moody group over a finite field. Then $\Lambda$ has {\rm (NSP)}~whenever it is a lattice of the product of its two twinned buildings}~(i.e., whenever the finite ground field is big enough with respect to the growth of the Weyl group -- see the above covolume formula). The proof follows Margulis' general strategy consisting in showing that for an infinite $\Delta \triangleleft \Lambda$, the discrete group $\Lambda/\Delta$ is both amenable and Kazhdan (implying compactness, hence finiteness by discreteness). The next step after (NSP) is simplicity.
Here is a simplified statement of what is proved in \cite{CaRe}: {\it let $\Lambda$ be a (simply connected) Kac-Moody group over the finite field $\mathbf{F}_q$. Assume that the generalized Cartan matrix defining $\Lambda$ is non-affine and indecomposable, say of size $n$. Then $\Lambda/Z(\Lambda)$ is simple whenever $q>n>2$.} For this simplicity theorem, by (NSP) the key point is also to rule out the possibility of finite quotients; this is where the new conditions on the generalized Cartan matrix appear (non-affineness and $n>2$). Indeed the argument to exclude finite quotients for $\Lambda$ uses the geometry of the root system of the Weyl group, more precisely the fact that whenever an infinite Coxeter group is irreducible, non-affine and of rank $>2$, then its root system has many hyperbolic triples:~seeing roots as half-spaces bordered by fixed-point sets of reflections in the Coxeter complex $\Sigma$, this means the existence of triples of pairwise disjoint roots in $\Sigma$ (which is clearly excluded for Euclidean reflection groups). \begin{remark} It is interesting to have simple groups occurring as lattices in products of buildings in which some freedom for the shape of the apartments is available. Indeed, this leads to the following statement in geometric group theory \cite{CaReQI}: {\it there exist infinitely many quasi-isometry classes of finitely presented simple groups}. \end{remark} \bigskip \section{Simplicity for non locally finite twin trees} \label{s - simplicity} We prove simplicity for hyperbolic rank 2 Kac-Moody groups over algebraic closures of finite fields (cf. Theorem in Introduction). This can be easily established when the corresponding Kac-Moody group is simple over a finite subfield (see Remark \ref{rk - simple}), so we concentrate on the case where the latter simplicity is still unknown. This is when the commutation relations between root groups indexed by prenilpotent pairs are trivial. \subsection{Simplicity without using simplicity} \label{ss - proof} Let us recall the Tits functors $\mathscr{G}_A$ and $\mathscr{T}_A$, associated with a generalized Cartan matrix $A$, which produce the corresponding Kac-Moody groups \cite{TitsKM}. \smallskip Let $A = \left( \begin{array}{cc} \hfill 2 & -n \\ -m & \hfill 2 \end{array} \right)$ be a generalized Cartan matrix of indefinite type (i.e. $mn>4$). We assume that $m,n \geqslant 2$, which implies that the commutation relations between root groups indexed by prenilpotent pairs are trivial \cite{Morita}. Then, we obtain the corresponding Kac-Moody group $\mathscr{G}_A(F)$ over $F = \overline{{\bf F}_q}$ and the so-called standard maximal split torus $\mathscr{T}_A(F) \simeq {\rm Hom}_{\mathbb{Z}}(\mathbb{Z}^2,F^\times)$. The group $\mathscr{G}_A(F)$ is generated by the root subgroups $U_\delta$ for all real roots $\delta$ in this case. \smallskip For each real root $\delta$, there is a natural isomorphism from the additive group $(F,+)$ onto $U_\delta$, which we denote by $r \mapsto u_\delta(r)$. Tits' presentation \cite{TitsKM} implies that the group $S_\delta = \langle U_\delta, U_{-\delta} \rangle$ is isomorphic to ${\rm SL}_2(F)$ via an isomorphism which sends $u_\delta(r)$ (resp. $u_{-\delta}(r)$) to $\left( \begin{array}{cc} \hfill 1 & r \\ 0 & \hfill 1 \end{array} \right)$ (resp. $\left( \begin{array}{cc} \hfill 1 & 0 \\ r & \hfill 1 \end{array} \right)$).
Then, $\mathscr{T}_A(F)$ is generated by $h_\delta(\mu)$ for all real roots $\delta$ and for all $\mu \in F^\times$, where $h_\delta(\mu)$ is an element of $S_\delta$ corresponding to $\left( \begin{array}{cc} \hfill \mu & 0 \\ 0 & \hfill \mu^{-1} \end{array} \right)$. Let $\alpha$ and $\beta$ be the simple roots defined by this presentation. For any nonzero $j \in \mathbf{Z}$ we set $\gamma_j = \tau^j . \alpha$ where $\tau = w_\alpha(1)w_\beta(1)$ and $w_\delta(1) = u_\delta(1)u_{-\delta}(-1)u_\delta(1)$; there exist integers $a_j$ and $b_j$ with $a_j b_j > 0$ such that \medskip \centerline{$\gamma_j = a_j \alpha + b_j \beta$.} \medskip Note that in the geometric realization of the Weyl group $D_\infty$, the Coxeter complex (hence any apartment) is the real line. The reflections in the Weyl group are those with respect to the integers, and the element $\tau$ acts as a translation along this line. \smallskip An element $t \in \mathscr{T}_A(F)$ given by this presentation has the form $t = h_\alpha(\mu) h_\beta(\nu)$, where $\mu, \nu \in F^\times$ are two multiplicative parameters. Then, we have: \medskip \centerline{$\alpha(t) = \mu^2 \nu^{\alpha(\beta^\vee)} = \mu^2 \nu^{-m}$} \medskip \noindent and \medskip \centerline{$\gamma_j(t) = \mu^{\gamma_j(\alpha^\vee)} \nu^{\gamma_j(\beta^\vee)} = \mu^{2a_j+\beta(\alpha^\vee)b_j}\nu^{\alpha(\beta^\vee)a_j + 2b_j} = \mu^{2a_j-nb_j}\nu^{-ma_j+2b_j}$,} \medskip \noindent where $\gamma^\vee$ denotes the coroot of a real root $\gamma$. \bigskip {\it Proof of the theorem.}~ Let $K \triangleleft \mathscr{G}_A(F)$ be a non-central normal subgroup. In order to prove our simplicity theorem (see Introduction), we must show that we have in fact $K = \mathscr{G}_A(F)$. \smallskip Since each root subgroup is conjugate to $U_\alpha$ or $U_\beta$, and since $S_\alpha \simeq S_\beta \simeq {\rm SL}_2(F)$, it is enough to show that $U_\alpha \cap K \neq \{ 1 \}$ and $U_\beta \cap K \neq \{ 1 \}$ (the group ${\rm SL}_2(F)$ does not contain any proper normal subgroup intersecting a root group non-trivially). \smallskip Since $F = \bigcup_{i \geqslant 1} {\bf F}_{q^i}$, we have $Z\bigl( \mathscr{G}_A(F) \bigr) = \bigcup_{i \geqslant 1} Z\bigl( \mathscr{G}_A({\bf F}_{q^i}) \bigr)$, and therefore there exists $\ell \geqslant 1$ such that ${\bf F}_{q^\ell} \subset F$ and $K \cap \mathscr{G}_A({\bf F}_{q^\ell})$ is non-central. By the normal subgroup property \cite{RemInt}, and assuming that $\ell$ is large enough, the normal subgroup $K \cap \mathscr{G}_A({\bf F}_{q^\ell})$ has finite index, say $k$, in $\mathscr{G}_A({\bf F}_{q^\ell})$. This implies, in particular, that $[\langle \tau \rangle : K \cap \langle \tau \rangle]$ divides $k$, which follows from \smallskip \centerline{$\displaystyle{ \begin{array}{lll} k & = & [ \mathscr{G}_A({\bf F}_{q^\ell}) : K \cap \mathscr{G}_A({\bf F}_{q^\ell}) ]\\ & = & [ \mathscr{G}_A({\bf F}_{q^\ell}) : \langle \tau \rangle (K \cap \mathscr{G}_A({\bf F}_{q^\ell}) ) ] \times [ \langle \tau \rangle (K \cap \mathscr{G}_A({\bf F}_{q^\ell}) ) : K \cap \mathscr{G}_A({\bf F}_{q^\ell}) ] \end{array} }$} \smallskip \noindent and \smallskip \centerline{$ [ \langle \tau \rangle (K \cap \mathscr{G}_A({\bf F}_{q^\ell}) ) : K \cap \mathscr{G}_A({\bf F}_{q^\ell}) ] = [\langle \tau \rangle : K \cap \langle \tau \rangle] $,} \smallskip \noindent so that $\tau^k \in K$. As a consequence, we have $[\tau^k, U_\alpha] \subset K$. Let us start with $u \in U_\alpha-\{ 1 \}$, i.e. $u = u_\alpha(c)$ for some $c \in F^\times$.
It follows from the defining relations of an incomplete Kac-Moody group that we have $\tau^j U_\alpha \tau^{-j} = U_{\tau^j.\alpha} = U_{\gamma_j}$, so that: \medskip \centerline{$[\tau^j,u] = (\tau^j u_\alpha(c) \tau^{-j}) u_\alpha(-c) = u_{\tau^j.\alpha}(r) u_\alpha(s)$} \medskip \noindent for some suitable $r, s \in F^\times$. Hence we see that for suitable powers $j$ (e.g. $j$ divisible by $k$) we can find elements in $\bigl( (U_\alpha - \{ 1 \}) \cdot (U_{\gamma_j} - \{ 1 \}) \bigr) \cap K$. Therefore, we consider an element $v \in K$ of the form $v = u_\alpha(r)u_{\gamma_j}(s)$ with $r,s \in F^\times$. It remains to use the action of the torus $\mathscr{T}_A(F)$ to separate the two factors $U_\alpha$ and $U_{\gamma_j}$. Again we compute for $v$ as above and $t = h_\alpha(\mu) h_\beta(\nu)$: \medskip \centerline{$[t,v] = (t u_\alpha(r)u_{\gamma_j}(s) t^{-1}) \bigl(u_\alpha(r)u_{\gamma_j}(s)\bigr)^{-1} = u_{\alpha}\bigl( \alpha(t)r \bigr) u_{\gamma_j}\bigl( \gamma_j(t) s \bigr) u_{\gamma_j}(-s) u_\alpha(-r)$.} \medskip \noindent In view of the previous computation, and since $U_\alpha$ and $U_{\gamma_j}$ commute (this is where we use $m,n \geqslant 2$), this provides: \medskip \centerline{$[t,v] = u_{\alpha}\bigl( ( \mu^2 \nu^{-m}-1) r \bigr) u_{\gamma_j} \bigl( (\mu^{2a_j-nb_j} \nu^{-ma_j+2b_j}-1) s \bigr)$.} \medskip \noindent Now we can specialize our choice of multiplicative parameters $\mu$ and $\nu$. For $\kappa \in F^\times$ we set $\mu = \kappa^m$ and $\nu = \kappa^2$; then for $t = h_\alpha(\kappa^m) h_\beta(\kappa^2)$ we have $\alpha(t) = \kappa^{2m} \kappa^{-2m} = 1$, so that the $U_\alpha$-factor disappears and we obtain: \medskip \centerline{$[t,v] = u_{\gamma_j} \bigl( (\kappa^{m(2a_j-nb_j)} \kappa^{2(-ma_j+2b_j)}-1) s \bigr) = u_{\gamma_j} \bigl( (\kappa^{(4-mn)b_j}-1) s \bigr)$.} \medskip \noindent It remains to choose $\kappa \in F$ so that $\kappa^{(4-mn)b_j} \neq 1$ to conclude that $K \cap U_{\gamma_j} \neq \{ 1 \}$ and, by normality of $K$, that $K \cap U_\alpha \neq \{ 1 \}$. Similarly we can obtain $K \cap U_\beta \neq \{ 1 \}$. Therefore, again using the action of $\mathscr{T}_A(F)$, we obtain $U_\alpha, U_\beta \subset K$, which finally shows that $K = \mathscr{G}_A(F)$. \hfill$\square$ \medskip \begin{remark} \label{rk - simple} Let us explain here why the same simplicity result over $F$ is easier when simplicity over finite fields is known. Indeed let $\mathscr{G}_A$ be a simply connected Kac-Moody group for which simplicity is known over (sufficiently large) finite fields and let $K \triangleleft \mathscr{G}_A(F)$ be non-central. Then, arguing as in the beginning of the above proof, we know that there exists $\ell \geqslant 1$ such that $K \cap \Bigl( \mathscr{G}_A({\bf F}_{q^\ell}) - Z\bigl( \mathscr{G}_A({\bf F}_{q^\ell}) \bigr) \Bigr) \neq \varnothing$. Up to enlarging $\ell$, simplicity of $\mathscr{G}_A({\bf F}_{q^\ell})/ Z\bigl( \mathscr{G}_A({\bf F}_{q^\ell}) \bigr)$ implies that $K$ contains all of $\mathscr{G}_A({\bf F}_{q^\ell})$ modulo its center, hence intersects non-trivially all the root groups, which finally implies that $K = \mathscr{G}_A(F)$. \end{remark} \bigskip \subsection{Simplicity using simplicity} \label{ss - heuristic} For the sake of completeness, we conclude by explaining how simplicity for hyperbolic rank 2 Kac-Moody groups with non-trivial commutation relations for prenilpotent pairs can be proved. \smallskip Recall that if $\Gamma$ is an infinite finitely generated group satisfying (NSP), then $\Gamma/Z(\Gamma)$ is {\it just infinite}, in the sense that all its proper quotients are finite; this is, so to speak, half of simplicity (\ref{ss - NSP}).
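As an aside, the exponent collapse $\gamma_j(t) = \kappa^{(4-mn)b_j}$ on which the proof above relies can be checked symbolically. In the sketch below (ours), the two matrices encode the action of $\tau = s_\alpha s_\beta$ on the root lattice in the basis $(\alpha, \beta)$, computed from the Cartan matrix entries $\beta(\alpha^\vee) = -n$ and $\alpha(\beta^\vee) = -m$.

\begin{verbatim}
import sympy as sp

m, n = sp.symbols('m n', positive=True)

# s_alpha: alpha -> -alpha, beta -> beta + n*alpha
# s_beta:  alpha -> alpha + m*beta, beta -> -beta
S_alpha = sp.Matrix([[-1, n], [0, 1]])
S_beta  = sp.Matrix([[1, 0], [m, -1]])
T = S_alpha * S_beta                  # matrix of tau on the root lattice

assert 2 * m + 2 * (-m) == 0          # alpha(t) = kappa^(2m - 2m) = 1

for j in range(1, 6):
    a_j, b_j = (T ** j) * sp.Matrix([1, 0])  # gamma_j = a_j alpha + b_j beta
    # exponent of kappa in gamma_j(t) for mu = kappa^m, nu = kappa^2
    expo = m * (2 * a_j - n * b_j) + 2 * (-m * a_j + 2 * b_j)
    assert sp.expand(expo - (4 - m * n) * b_j) == 0

print(tuple(T * sp.Matrix([1, 0])))   # gamma_1 = (m*n - 1, m)
\end{verbatim}

In particular $\gamma_1 = (mn-1)\alpha + m\beta$, and the positivity $a_j b_j > 0$ is visible on the first iterates.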
Recall also that for an infinite finitely generated group, the following implications are well-known: linearity $\Rightarrow$ residual finiteness $\Rightarrow$ non-simplicity (a group $\Gamma$ is said to be {\it residually finite}~if we have $\bigcap_{[\Gamma:\Delta]<\infty} \Delta = \{1\}$). Here is a rough strategy to construct simple groups. Let $\Gamma$ be an infinite group acting geometrically on a {\upshape CAT($0$)}\xspace-space. Assume in addition that $\Gamma$ is both just infinite and {\it not}~residually finite. Then the normal subgroup $\Gamma^\circ = \bigcap_{[\Gamma:\Delta]<\infty} \Delta$ is non-trivial, so it is a finite index subgroup since $\Gamma$ is just infinite. In fact, more can be said: $\Gamma^\circ$ is a finite direct product of simple groups (all isomorphic to one another) \cite{Wilson}. It then remains to exploit the geometric situation (e.g. a suitable irreducibility of the geometric action) in order to conclude that $\Gamma^\circ$ contains only one factor. By (NSP), we know that a Kac-Moody lattice is just infinite (modulo center). Therefore it is enough to show that a non-affine Kac-Moody lattice is not residually finite, for instance because it contains a suitable subgroup which is not residually finite. The latter subgroup can be given by some wreath product: {\it if $F$ is a finite non-abelian group, then $F \wr {\bf Z} = F^{({\bf Z})} \rtimes {\bf Z}$ is not residually finite} \cite{Meskin}. Using this, the following simplicity theorem can be proved \cite{CaReRk2}. \begin{thm} \label{thm - general} Let $A = \left( \begin{array}{cc} \hfill 2 & -n \\ -1 & \hfill 2 \end{array} \right)$ be a generalized Cartan matrix of indefinite type (i.e. $n>4$), and let $F$ be an algebraic closure of a finite field ${\bf F}_q$. Then, the corresponding simply connected Kac-Moody groups $\mathscr{G}_A({\bf F}_q)$ and $\mathscr{G}_A(F)$ are simple groups modulo their centers. \end{thm} \noindent {\it Reference}. This is \cite[Theorem 2]{CaReRk2}. \hfill$\square$ \medskip \noindent Summarizing all (including known) facts, and taking into account Remark \ref{rk - simple}, we obtain the following statement. \begin{remark} \label{rk - simple final} Let $A$ be a generalized Cartan matrix of non-affine type, and let $\mathscr{G}_A$ be a Tits functor of type $A$. Let $G$ be the elementary subgroup of $\mathscr{G}_A(F)$ over the algebraic closure $F$ of a finite field ${\bf F}_p$ (that is, $G = [ \mathscr{G}_A(F) , \mathscr{G}_A(F) ]$); the group $G$ is generated by all root subgroups. Then, $G$ is a simple group modulo its center whenever $A$ is indecomposable. \end{remark} At last, it is natural to formulate the following question. \begin{question} Let $A = \left( \begin{array}{cc} \hfill 2 & -n \\ -m & \hfill 2 \end{array} \right)$ be a generalized Cartan matrix of indefinite type, i.e. $mn>4$. Let $\mathscr{G}_A$ be the corresponding simply connected incomplete Kac-Moody group and let ${\bf F}_q$ be a finite field. Assume that $m,n \geqslant 2$. Is the finitely generated group $\mathscr{G}_A({\bf F}_q)/Z\bigl( \mathscr{G}_A({\bf F}_q) \bigr)$ simple? \end{question} Simplicity in this case would shortcut the proof of the present paper, but we think that providing a simplicity proof over $\overline{{\bf F}_q}$, using only the weakening of simplicity given by (NSP) over ${\bf F}_q$, has its own interest. Note that the above question also applies to more exotic lattices of locally finite Moufang twin trees, as defined in \cite{AbRe}.
Some of these groups can be constructed with a trivial torus, which might be an obstruction to simplicity.

\begin{bibdiv}
\begin{biblist}

\bib{AbRe}{article}{ author={Abramenko, Peter}, author={R{\'e}my, Bertrand}, title={Commensurators of some non-uniform tree lattices and Moufang twin trees}, conference={ title={Essays in geometric group theory}, }, book={ series={Ramanujan Math. Soc. Lect. Notes Ser.}, volume={9}, place={Mysore}, }, date={2009}, pages={79--104}, review={\MR{2605356 (2011f:20111)}}, }

\bib{Benoist}{article}{ author={Benoist, Yves}, title={Five lectures on lattices in semisimple Lie groups}, conference={ title={in \cite{SMF18} of these references}, }, date={2009}, pages={117--176}, review={\MR{2655311 (2011h:22012)}}, }

\bib{SMF18}{collection}{ title={G\'eom\'etries \`a courbure n\'egative ou nulle, groupes discrets et rigidit\'es}, series={S\'eminaires et Congr\`es}, volume={18}, editor={Bessi{\`e}res, Laurent}, editor={Parreau, Anne}, editor={R{\'e}my, Bertrand}, publisher={Soci\'et\'e Math\'ematique de France}, place={Paris}, date={2009}, pages={xxvi+466}, isbn={978-2-85629-240-2}, review={\MR{2664216 (2011b:53004)}}, }

\bib{BBK}{book}{ author={Bourbaki, Nicolas}, title={{L}ie {IV-VI}}, series={Actualit\'es Scientifiques et Industrielles, No. 1337}, publisher={Hermann}, place={Paris}, date={1968}, pages={288 pp. (loose errata)}, review={\MR{MR0240238}}, }

\bib{BH}{book}{ author={Bridson, Martin R.}, author={Haefliger, Andr{\'e}}, title={Metric spaces of non-positive curvature}, series={Grundlehren der Mathematischen Wissenschaften}, volume={319}, publisher={Springer-Verlag}, place={Berlin}, date={1999}, pages={xxii+643}, isbn={3-540-64324-9}, review={\MR{MR1744486}}, }

\bib{CaRe}{article}{ author={Caprace, Pierre-Emmanuel}, author={R{\'e}my, Bertrand}, title={Simplicity and superrigidity of twin building lattices}, journal={Invent. Math.}, volume={176}, date={2009}, number={1}, pages={169--221}, issn={0020-9910}, review={\MR{2485882 (2010d:20056)}}, doi={10.1007/s00222-008-0162-6}, }

\bib{CaReQI}{article}{ author={Caprace, Pierre-Emmanuel}, author={R{\'e}my, Bertrand}, title={Non-distortion of twin building lattices}, journal={Geom. Dedicata}, volume={147}, date={2010}, pages={397--408}, issn={0046-5755}, review={\MR{2660586 (2011e:20038)}}, doi={10.1007/s10711-010-9469-8}, }

\bib{CaReRk2}{unpublished}{ author={Caprace, Pierre-Emmanuel}, author={R{\'e}my, Bertrand}, title={Simplicity of twin tree lattices with non-trivial commutation relations}, note={Preprint of the Institut Camille Jordan 377}, date={2012}, }

\bib{Meskin}{article}{ author={Meskin, Stephen}, title={Nonresidually finite one-relator groups}, journal={Trans. Amer. Math. Soc.}, volume={164}, date={1972}, pages={105--114}, issn={0002-9947}, review={\MR{0285589 (44 \#2807)}}, }

\bib{Morita}{article}{ author={Morita, Jun}, title={Commutator relations in Kac-Moody groups}, journal={Proc. Japan Acad. Ser. A Math. Sci.}, volume={63}, date={1987}, number={1}, pages={21--22}, issn={0386-2194}, review={\MR{892949 (88g:17013)}}, }

\bib{RemCRAS}{article}{ author={R{\'e}my, Bertrand}, title={Construction de r\'eseaux en th\'eorie de Kac-Moody}, journal={C. R. Acad. Sci. Paris S\'er. I Math.}, volume={329}, date={1999}, number={6}, pages={475--478}, issn={0764-4442}, review={\MR{1715140 (2001d:20028)}}, doi={10.1016/S0764-4442(00)80044-0}, }

\bib{RemAst}{article}{ author={R{\'e}my, Bertrand}, title={Groupes de Kac-Moody d\'eploy\'es et presque d\'eploy\'es}, language={French, with English and French summaries}, journal={Ast\'erisque}, number={277}, date={2002}, pages={viii+348}, issn={0303-1179}, review={\MR{1909671 (2003d:20036)}}, }

\bib{RemInt}{article}{ author={R{\'e}my, Bertrand}, title={Integrability of induction cocycles for Kac-Moody groups}, journal={Math. Ann.}, volume={333}, date={2005}, number={1}, pages={29--43}, issn={0025-5831}, review={\MR{2169827 (2006k:22018)}}, doi={10.1007/s00208-005-0663-1}, }

\bib{RemRon}{article}{ author={R{\'e}my, Bertrand}, author={Ronan, Mark A.}, title={Topological groups of Kac-Moody type, right-angled twinnings and their lattices}, journal={Comment. Math. Helv.}, volume={81}, date={2006}, number={1}, pages={191--219}, issn={0010-2571}, review={\MR{2208804 (2007b:20063)}}, doi={10.4171/CMH/49}, }

\bib{RonanTitsInvMath}{article}{ author={Ronan, Mark A.}, author={Tits, Jacques}, title={Twin trees. I}, journal={Invent. Math.}, volume={116}, date={1994}, number={1-3}, pages={463--479}, issn={0020-9910}, review={\MR{1253201 (94k:20058)}}, doi={10.1007/BF01231569}, }

\bib{RonanTitsIJM}{article}{ author={Ronan, Mark A.}, author={Tits, Jacques}, title={Twin trees. II. Local structure and a universal construction}, journal={Israel J. Math.}, volume={109}, date={1999}, pages={349--377}, issn={0021-2172}, review={\MR{1679605 (2000f:05030)}}, doi={10.1007/BF02775043}, }

\bib{Rousseau}{article}{ author={Rousseau, Guy}, title={Euclidean buildings}, conference={ title={in \cite{SMF18} of these references}, }, date={2009}, pages={77--116}, review={\MR{2655310 (2011m:20072)}}, }

\bib{TitsVancouver}{article}{ author={Tits, Jacques}, title={On buildings and their applications}, conference={ title={Proceedings of the International Congress of Mathematicians (Vancouver, B. C., 1974), Vol. 1}, }, book={ publisher={Canad. Math. Congress, Montreal, Que.}, }, date={1975}, pages={209--220}, review={\MR{0439945 (55 \#12826)}}, }

\bib{TitsKM}{article}{ author={Tits, Jacques}, title={Uniqueness and presentation of Kac-Moody groups over fields}, journal={J. Algebra}, volume={105}, date={1987}, number={2}, pages={542--573}, issn={0021-8693}, review={\MR{873684 (89b:17020)}}, doi={10.1016/0021-8693(87)90214-6}, }

\bib{TitsTwin}{article}{ author={Tits, Jacques}, title={Twin buildings and groups of Kac-Moody type}, conference={ title={Groups, combinatorics \& geometry}, address={Durham}, date={1990}, }, book={ series={London Math. Soc. Lecture Note Ser.}, volume={165}, publisher={Cambridge Univ. Press}, place={Cambridge}, }, date={1992}, pages={249--286}, review={\MR{1200265 (94d:20030)}}, }

\bib{Wilson}{article}{ author={Wilson, John S.}, title={Groups with every proper quotient finite}, journal={Proc. Cambridge Philos. Soc.}, volume={69}, date={1971}, pages={373--391}, review={\MR{0274575 (43 \#338)}}, }

\end{biblist}
\end{bibdiv}

\end{document}
\section{Introduction} \label{Intro} Transition metal oxides are of great current interest because of the wide variety of ordered phases that they exhibit and their strong sensitivity to external perturbations. \cite{imada} Among them, manganese oxides with formula $R_{1-x}A_{x}MnO_{3}$ ($R$ stands for a rare earth such as $La$, $A$ represents a divalent alkaline-earth element such as $Sr$ or $Ca$, and $x$ the hole doping), known as manganites, have been studied intensively both for their very rich phase diagram and for the phenomenon of ``colossal'' magnetoresistance. \cite{dagotto} This effect is often exhibited in the doping regime $0.2<x<0.5$, where the ground state of the systems is ferromagnetic. The ferromagnetic phase is usually explained by invoking the double-exchange mechanism, in which hopping of an outer-shell electron from a $Mn^{3+}$ to a $Mn^{4+}$ site is favored by a parallel alignment of the core spins. \cite{zener} In addition to the double-exchange term that promotes hopping of the carriers, a strong interaction between electrons and lattice distortions plays a non-negligible role in these compounds, giving rise to the formation of polaron quasi-particles. \cite{millis} Very recently, high-quality atomic-scale ``digital'' heterostructures consisting of combinations of transition metal oxide materials have been realized. Indeed, heterostructures represent the first steps toward using correlated oxide systems in realistic devices. Moreover, at the interface, the electronic properties can be drastically changed in comparison with those of the bulk. Recent examples include the formation of a thin metallic layer at the interface between band and Mott insulators as, for example, between $SrTiO_{3}$ ($STO$) and $LaTiO_{3}$ oxides \cite{ohtomo} or between the band insulators \cite{ohtomo1} $LaAlO_{3}$ and $STO$. Very interesting examples of heterostructures are given by the superlattices $(LaMnO_{3})_{m}/(SrMnO_{3})_{n}$ with $n/(m+n)$ average hole doping. \cite{koida} Here $LaMnO_{3}$ ($LMO$) (one electron per $Mn$ $e_{g}$ state) and $SrMnO_{3}$ ($SMO$) (no electrons per $Mn$ $e_{g}$ state) are the two end-member compounds of the alloy $La_{1-x}Sr_{x}MnO_{3}$ and are both antiferromagnetic insulators. In these systems, not only the chemical composition but also the thickness of the constituent blocks, specified by $m$ and $n$, is important in influencing the properties of the superlattices. Focus has been on the case $m=2n$, corresponding to the average optimal hole doping $x=1/3$. \cite{eckstein,adamo1} The superlattices exhibit a metal-insulator transition as a function of temperature for $n \leq 2$ and behave as insulators for $n \geq 3$. The superlattices undergo a rich variety of transitions among metal, Mott variable range hopping insulator, interaction-induced Efros-Shklovskii insulator, and polaronic insulator. \cite{adamo2} Interfaces play a fundamental role in tuning the metal-insulator transitions since they control the effective doping of the different layers. Even when the system is globally insulating ($n \geq 3$), some nonlinear optical measurements suggest that, for a single interface, ferromagnetism due to the double-exchange mechanism can be induced between the two antiferromagnetic blocks. \cite{ogawa} Moreover, it has been found that the interface density of states exhibits a pronounced peak at the Fermi level whose intensity correlates with the conductivity and magnetization.
\cite{eckstein1} These measurements point toward the possibility of a two-dimensional half-metallic gas for the double-layer \cite{ogawa1} whose properties have been studied by using ab-initio density functional approaches. \cite{nanda} However, up to now, this interesting two-dimensional gas has not been experimentally assessed in a direct way by using lateral contacts on the region between the $LMO$ and $SMO$ blocks. In analogy with thin films, strain is another important quantity for tuning the properties of manganite heterostructures. For example, far from interfaces, inside $LMO$, electron localization and local strain favor antiferromagnetism and $e_g$ ($3z^2-r^2$) orbital occupation. \cite{aruta} The magnetic phase in $LMO$ is compatible with the $C$ type. \cite{dagotto} Moreover, by changing the substrate, the ferromagnetism in the superlattice can be stabilized. \cite{yamada} From the theoretical point of view, in addition to {\it ab initio} calculations, tight-binding models have been used to study manganite superlattices. Effects of magnetic and electron-lattice interactions on the electronic properties have been investigated going beyond adiabatic mean-field approximations. \cite{dagotto1,millis1} However, the double layer with large blocks of $LMO$ and $SMO$ has not been much studied. Moreover, the effects of strain have been analyzed only within mean-field approaches. \cite{nanda1} In this paper we have studied phase diagrams, spectral and optical properties for a very large bilayer $(LMO)_{2n}/(SMO)_{n}$ (up to a size of $48$ planes, relevant for a comparison with fabricated heterostructures) starting from a tight-binding model. We have developed a correlated inhomogeneous mean-field approach taking into account the effects of electron-lattice anti-adiabatic fluctuations. Strain is simulated by modulating hopping and spin-spin interaction terms. We have found that a metallic ferromagnetic interface forms for a large range of electron-lattice couplings and strain strengths. For this regime of parameters, the interactions are able to change the size of the interface region. We find the magnetic solutions that are stable at low temperature in the entire superlattice. The general structure of our solutions is characterized by three phases running along the growth $z$-direction: an antiferromagnetic phase with localized/delocalized (depending on the model parameters) charge carriers inside the $LMO$ block, a ferromagnetic state at the interface with itinerant carriers, and a localized polaronic $G$-type antiferromagnetic phase inside the $SMO$ block. The type of antiferromagnetic order inside $LMO$ depends on the strain induced by the substrate. We have discussed the spectral and optical properties corresponding to different parameter regimes. Due to the formation of the metallic interface, the density of states is finite at the chemical potential. With increasing electron-phonon interaction, it gets reduced at the chemical potential, but it never vanishes, even in the intermediate to strong electron-phonon coupling regime. Finally, we have studied both the in-plane and out-of-plane optical conductivities, pointing out that they are characterized by marked differences: the former shows a metallic behavior, the latter a transfer of spectral weight to high frequency due to the effects of the electrostatic potential well trapping electrons in the $LMO$ block.
The in-plane response at low frequency is mainly due to the region between the two insulating blocks, so that it can be used as a tool to assess the formation of the metallic ferromagnetic interface. The paper is organized as follows: in Sec.~II the model and the variational approach are introduced, in Sec.~III the results regarding the phase diagrams are discussed, in Sec.~IV the spectral properties and in Sec.~V the optical conductivities are analyzed, and the final section contains the conclusions. \section{The variational approach} \subsection{Model Hamiltonian} \label{m-va} For manganite superlattices, the Hamiltonian of the bulk, $H_{0}$, has to be supplemented by Coulomb terms representing the potential arising from the pattern of the $La$ and $Sr$ ions, \cite{millis2} thus \begin{equation} H=H_{0}+H_{Coul}. \label{ham} \end{equation} In order to set up an appropriate model for the double layer, it is important to take into account the effects of the strain. The epitaxial strain produces a tetragonal distortion of the $MnO_6$ octahedron, splitting the $e_g$ states into $x^2-y^2$ and $3z^2-r^2$ states. \cite{nanda1} If the strain is tensile, $x^2-y^2$ is lower in energy, while, if the strain is compressive, $3z^2-r^2$ is favored. In the case of $n=8$ and three interfaces, \cite{aruta} the superlattices grown on $STO$ are found to be coherently strained: all of them are forced to the in-plane lattice parameter of the substrate and to an average out-of-plane parameter $c \simeq 3.87$~\AA. \cite{aruta} As a consequence, one can infer that $LMO$ blocks are subjected to compressive strain $(-2.2 \%)$ and $SMO$ blocks to tensile strain $(+2.6 \%)$. In the case of the $LMO$ block, the resulting higher occupancy of $3z^2-r^2$ enhances the out-of-plane ferromagnetic interaction owing to the larger electron hopping out-of-plane. In the case of the $SMO$ block, the reverse occurs. A suitable model for the bilayer has to describe the dynamics of the $e_g$ electrons, which in the $LMO$ block and in the $SMO$ block preferentially occupy the more anisotropic $3z^{2}-r^{2}$ orbitals and the more isotropic $x^{2}-y^{2}$ orbitals, respectively. For this reason, in this paper we adopt an effective single-orbital approximation for the bulk manganite. The model for the bulk takes into account the double-exchange mechanism, the coupling to the lattice distortions and the super-exchange interaction between neighboring localized $t_{2g}$ electrons on $Mn$ ions. The coupling to longitudinal optical phonons arises from the Jahn-Teller effect that splits the $e_g$ double degeneracy. Then, the Hamiltonian $H_{0}$ reads: \begin{eqnarray} H_{0}=&& - \sum_{\vec{R}_i, \vec{\delta}} t_{|\vec{\delta}|} \left(\frac{S^{\vec{R}_i,\vec{R}_i+\vec{\delta}}_0+1/2}{2 S+1}\right) c^{\dagger}_{\vec{R}_i}c_{\vec{R}_i+\vec{\delta}} \nonumber \\ && +\omega_0 \sum_{\vec{R}_i}a^{\dagger}_{\vec{R}_i}a_{\vec{R}_i} +g \omega_0 \sum_{\vec{R}_i} c^{\dagger}_{\vec{R}_i}c_{\vec{R}_i} \left( a_{\vec{R}_i}+a^{\dagger}_{\vec{R}_i} \right) \nonumber \\ && +\frac{1}{2} \sum_{\vec{R}_i,\vec{\delta}} \epsilon_{|\vec{\delta}|} \vec{S}_{\vec{R}_i} \cdot \vec{S}_{\vec{R}_i+\vec{\delta}} - \mu \sum_{\vec{R}_i} c^{\dagger}_{\vec{R}_i} c_{\vec{R}_i} .
\label{1r} \end{eqnarray} Here $t_{|\vec{\delta}|}$ is the transfer integral of electrons occupying $e_g$ orbitals between nearest neighbor ($nn$) sites, $S^{\vec{R}_i,\vec{R}_i+\vec{\delta}}_0$ is the total spin of the subsystem consisting of two localized spins on $nn$ sites and the conduction electron, $\vec{S}_{\vec{R}_i}$ is the spin of the $t_{2g}$ core states $\left( S= 3/2 \right)$, and $c^{\dagger}_{\vec{R}_i} \left( c_{\vec{R}_i} \right)$ creates (destroys) an electron with spin parallel to the ionic spin at the $i$-th site in the $e_g$ orbital. The coordination vector $\vec{\delta}$ connects $nn$ sites. The first term of the Hamiltonian describes the double-exchange mechanism in the limit where the intra-atomic exchange integral $J$ is far larger than the transfer integral $t_{|\vec{\delta}|}$. Furthermore, in Eq.~(\ref{1r}), $\omega_0$ denotes the frequency of the local optical phonon mode, $a^{\dagger}_{\vec{R}_i} \left( a_{\vec{R}_i} \right)$ is the creation (annihilation) phonon operator at the site $i$, and the dimensionless parameter $g$ indicates the strength of the electron-phonon interaction. Finally, in Eq.~(\ref{1r}), $\epsilon_{|\vec{\delta}|}$ represents the antiferromagnetic super-exchange coupling between two $nn$ $t_{2g}$ spins and $\mu$ is the chemical potential. The hopping of electrons is supposed to take place between the equivalent $nn$ sites of a simple cubic lattice (with finite size along the $z$ axis corresponding to the growth direction of the heterostructure) separated by the lattice parameter $a$. The units are such that the Planck constant $\hbar=1$, the Boltzmann constant $k_B$=1 and the lattice parameter $a$=1. Regarding the terms due to the interfaces, one considers that $La^{3+}$ and $Sr^{2+}$ ions act as $+1$ point charges of magnitude $e$ and as neutral points, respectively. In the heterostructure, the distribution of those cations induces an interaction term for the $e_g$ electrons of $Mn$, giving rise to the Hamiltonian \begin{eqnarray} H_{Coul}= && \sum_{\vec{R_{i}} \neq \vec{R_{j}}}\frac{1}{2 \epsilon_d} \frac{e^{2} n_{\vec{R}_{i}} n_{\vec{R}_{j}}}{|\vec{R_{i}}-\vec{R_{j}}|} +\sum_{\vec{R}_{i}^{La} \neq \vec{R}_{j}^{La}}\frac{1}{2\epsilon_d} \frac{e^{2}}{|\vec{R}_{i}^{La}-\vec{R}_{j}^{La}|} \nonumber \\ && -\sum_{\vec{R}_{i},\vec{R}_{j}^{La}}\frac{1}{\epsilon_d} \frac{e^{2}n_{\vec{R}_{i}}}{|\vec{R}_{i}-\vec{R}_{j}^{La}|}, \end{eqnarray} with $n_{\vec{R}_{i}}=c^{\dag}_{\vec{R}_{i}} c_{\vec{R}_{i}}$ the electron occupation number at the $Mn$ site $i$; $\vec{R}_{i}$ and $\vec{R}_{i}^{La}$ are the positions of $Mn$ and $La^{3+}$ in the $i$th unit cell, respectively, and $\epsilon_d$ is the dielectric constant of the material. In our calculation the long-range Coulomb potential has been modulated by a factor $\eta$ inducing a fictitious finite screening length (see Appendix A). This factor was added only for computational reasons, since it allows one to calculate the summations of the Coulomb terms over the lattice indices. We have modeled the heterostructures as slabs whose in-plane size is infinite. In order to describe the magnitude of the Coulomb interaction, we define the dimensionless parameter $\alpha=e^{2}/(a\epsilon_d t_{|\vec{\delta}|})$, which controls the charge-density distribution. The order of magnitude of $\alpha$ can be estimated from the hopping parameter $t_{|\vec{\delta}|} \sim 0.65$~eV, the lattice constant $a=4$~\AA, and a typical value of the dielectric constant $\epsilon \sim 10$ to be around $0.2$.
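To make the role of the modulated Coulomb kernel more concrete, here is a minimal sketch (ours; the precise modulation and the self-consistent Hartree scheme are given in Appendix A, which we do not reproduce). It assumes a Yukawa-type factor $e^{-\eta r}/r$, places the $La$ planes, for simplicity, on the $Mn$ planes of the $LMO$ block, and evaluates the electrostatic potential profile along $z$ for a trial charge distribution in which part of the $LMO$ charge leaks toward the $SMO$ side.

\begin{verbatim}
import numpy as np

# Potential profile phi(i_z) from a screened Coulomb kernel (units t = a = 1).
# Illustration with assumed ingredients, not the paper's Appendix A scheme.
alpha, eta = 0.2, 0.3                  # Coulomb strength and screening factor
N_La, N_Sr = 16, 8                     # (LMO)_16 / (SMO)_8 blocks
Nz, L = N_La + N_Sr, 15                # Mn planes; in-plane patch half-width

dx, dy = np.meshgrid(np.arange(-L, L + 1), np.arange(-L, L + 1))

def kernel(dz):
    """Screened 1/r summed over an in-plane patch at plane offset dz."""
    r = np.sqrt(dx ** 2 + dy ** 2 + dz ** 2)
    r[r == 0] = np.inf                 # drop the on-site self-interaction
    return np.sum(np.exp(-eta * r) / r)

z = np.arange(Nz)
# trial electron profile: charge leaks from LMO into SMO over a couple of planes
n = 1.0 / (1.0 + np.exp(z - (N_La - 0.5)))
n *= N_La / n.sum()                    # total charge fixed to N_La electrons

# electrons repel each other and are attracted by the +e La planes
phi = np.array([alpha * (sum(n[jz] * kernel(iz - jz) for jz in z)
                         - sum(kernel(iz - jz) for jz in range(N_La)))
                for iz in z])
print(np.round(phi - phi.min(), 3))    # lower on the LMO side: the well
\end{verbatim}

Even this crude version reproduces the qualitative feature used below: the electrons see a potential well on the $LMO$ side of the junction.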
Strain plays an important role also by renormalizing the heterostructure parameters. Strain effects can be simulated by introducing an anisotropy into the model between the in-plane hopping amplitude $t_{\delta_{||}}=t$ (with $\delta_{||}$ indicating nearest neighbors in the $x-y$ planes) and the out-of-plane hopping amplitude $t_{|\delta_z|}=t_{z}$ (with $\delta_z$ indicating nearest neighbors along the $z$ axis). \cite{dagotto2} Moreover, the strain induced by the substrate can directly affect the patterns of core spins. \cite{fang} Therefore, in our model, we have also considered the anisotropy between the in-plane super-exchange energy $\epsilon_{|\delta_{||}|}=\epsilon$ and the out-of-plane one $\epsilon_{|\delta_z|}=\epsilon_z$. We have found that the stability of magnetic phases in $LMO$ blocks is influenced by the presence of compressive strain, while in $SMO$ the sensitivity to strain is weak. Therefore, throughout the paper, we take as reference the model parameters of the $SMO$ layers and we will consider anisotropy only in the $LMO$ blocks, with values of the ratio $t_{z}/t$ larger than unity and of the ratio $\epsilon_z/\epsilon$ smaller than unity. Finally, in order to investigate the effects of the electron-lattice coupling, we will use the dimensionless quantity $\lambda$ defined as \begin{equation} \lambda=\frac{g^{2} \omega_{0}}{6t}. \end{equation} Throughout the paper we will assume $\omega_{0}/t=0.5$. \subsection{Test Hamiltonian} In this work, we will consider solutions of the Hamiltonian that break the translational invariance in the out-of-plane $z$-direction. The thickness of the slab is a parameter of the system that will be indicated by $N_z$. We will build up a variational procedure including these features of the heterostructures. A simplified variational approach similar to that developed in this work has already been proposed by some of the authors for manganite bulks \cite{perroni} and films. \cite{perroni1,iorio} In order to treat variationally the electron-phonon interaction, the Hamiltonian (\ref{ham}) has been subjected to an inhomogeneous Lang-Firsov canonical transformation. \cite{lang} It is defined by parameters depending on the plane indices along the $z$-direction: \begin{equation} U=\exp\left[-g\sum_{i_{||},i_{z}}(f_{i_{z}}c^{\dag}_{i_{||},i_{z}}c_{i_{||},i_{z}}+ \Delta_{i_{z}})(a_{i_{||},i_{z}}-a^{\dag}_{i_{||},i_{z}})\right], \label{lang} \end{equation} where $i_{||}$ indicates the in-plane lattice sites $(i_x,i_y)$, while $i_{z}$ the sites along the direction $z$. The quantity $f_{i_{z}}$ represents the strength of the coupling between an electron and the phonon displacement on the same site belonging to the $i_{z}$-plane, hence it measures the degree of the polaronic effect. On the other hand, the parameter $\Delta_{i_{z}}$ denotes a displacement field describing static distortions that are not influenced by the instantaneous position of the electrons. In order to obtain an upper bound for the free energy, the Bogoliubov inequality has been adopted: \begin{eqnarray} F \leq F_{test}+\langle \tilde{H}-H_{test} \rangle_{t}, \label{fe} \end{eqnarray} where $F_{test}$ and $H_{test}$ are the free energy and the Hamiltonian corresponding to the test model, which is assumed with an ansatz. $\tilde{H}$ stands for the transformed Hamiltonian $\tilde{H}=UHU^{\dag}$. The symbol $\langle \rangle_{t}$ indicates a thermodynamic average performed by using the test Hamiltonian.
The only part of $H_{test}$ which contributes to $\langle \tilde{H}-H_{test} \rangle_{t}$ is given by the spin degrees of freedom and depends on the magnetic order of the $t_{2g}$ core spins. For the spins, this procedure is equivalent to the standard mean-field approach. The test Hamiltonian, $H_{test}$, is such that electron, phonon and spin degrees of freedom do not interact with each other: \begin{equation} H_{test}=H^{sp}_{test}+H^{ph}_{test}+H^{el}_{test}. \label{test} \end{equation} The phonon part of $H_{test}$ simply reads \begin{eqnarray} H^{ph}_{test}=\omega_{0}\sum_{i_{||}, i_{z}}a^{\dag}_{i_{||},i_{z}}a_{i_{||},i_{z}}, \end{eqnarray} and the spin term is given by \begin{equation} H^{sp}_{test}=-g_{S}\mu_{B}\sum_{i_{||}}\sum_{i_{z}} h^{z}_{i_{||},i_{z}} S^{z}_{i_{||}, i_{z}}, \label{spin} \end{equation} where $g_{S}$ is the dimensionless electron-spin factor ($g_{S}\simeq 2$), $\mu_{B}$ is the Bohr magneton, and $h^{z}_{i_{||},i_{z}}$ is the effective variational magnetic field. In this work, we consider the following magnetic orders modulated plane by plane: \begin{eqnarray} && F, \qquad h^{z}_{i_{||},i_{z}}=|h^{z}_{i_{z}}|; \nonumber \\ && A, \qquad h^{z}_{i_{||},i_{z}}=(-1)^{i_z} |h^{z}_{i_{z}}|; \nonumber \\ && C, \qquad h^{z}_{i_{||},i_{z}}=(-1)^{i_x+i_y} |h^{z}_{i_{z}}|; \nonumber \\ && G, \qquad h^{z}_{i_{||},i_{z}}=(-1)^{i_x+i_y+i_z} |h^{z}_{i_{z}}|. \end{eqnarray} For all these magnetic orders, the thermal averages of the double-exchange operator, corresponding to neighboring sites in the same plane $i_{z}$, $\gamma_{i_{z}; i_{||},i_{||}+\delta_{||}}$, and in different planes, $\eta_{i_{z}, i_{z}+\delta_{z}; i_{||} }$, preserve only the dependence on the $z$ plane index: \begin{eqnarray} \gamma_{i_{z}; i_{||},i_{||}+\delta_{||}}=\langle \frac{S^{i_{||},i_z;i_{||}+\delta_{||},i_z}_0+1/2}{2 S+1}\rangle_t=\gamma_{i_{z}} \nonumber \\ \eta_{i_{z}, i_{z}+\delta_{z}; i_{||} }=\langle \frac{S^{i_{||},i_z;i_{||},i_z+\delta_{z}}_0+1/2}{2 S+1}\rangle_t=\eta_{i_{z}, i_{z}+\delta_{z}} . \end{eqnarray} In order to get the mean-field electronic Hamiltonian, we make the Hartree approximation for the Coulomb interaction. The electronic contribution $H^{el}_{test}$ to the test Hamiltonian becomes \begin{eqnarray} H^{el}_{test}&&=- t \sum_{i_{||}}\sum_{i_{z}=1}^{N_{z}}\sum_{\delta_{||}} \gamma_{i_{z}} e^{-V_{i_{z}}} c^{\dag}_{i_{||},i_{z}} c_{i_{||}+\delta_{||},i_{z}} \nonumber \\ && -t_{z} \sum_{i_{||}}\sum_{i_{z}=1}^{N_{z}}\sum_{\delta_{z}} \eta_{i_{z}, i_{z}+\delta_{z}} e^{-W_{i_{z},i_{z}+\delta_{z}}} c^{\dag}_{i_{||},i_{z}} c_{i_{||},i_{z}+\delta_{z}} \nonumber \\ && +\sum_{i_{||}}\sum_{i_{z}=1}^{N_{z}} \left[ \phi_{eff}(i_{z})-\mu \right] c^{\dag}_{i_{||},i_z} c_{i_{||},i_z} \nonumber \\ && +N_{x}N_{y}(T_{1}+T_{2})+N_{x}N_{y}g^{2}\omega_{0}\sum_{i_z}\Delta_{i_z}. \label{eltest} \end{eqnarray} In Eq.~(\ref{eltest}), the quantity $\phi_{eff}(i_{z})$ indicates the effective potential seen by the electrons. It consists of the Hartree self-consistent potential $\phi(i_{z})$ (see Appendix A) and of a potential due to the electron-phonon coupling: \begin{equation} \phi_{eff}(i_{z})=\phi(i_{z})+ g^{2} \omega_{0} C_{i_z}, \end{equation} with \begin{equation} C_{i_{z}}=f^{2}_{i_{z}}-2f_{i_{z}}+2\Delta_{i_{z}}(f_{i_{z}}-1).
\end{equation} The factors $e^{-V_{i_{z}}}$ and $e^{-W_{i_{z},i_{z}+\delta_{z}}}$ represent the phonon thermal averages of the Lang-Firsov operators: \begin{eqnarray} e^{-V_{i_{z}}} = \langle X_{i_{||},i_z} X^{\dag}_{i_{||}+\delta_{||},i_z}\rangle_t \nonumber \\ e^{-W_{i_{z},{i_{z}}+\delta_{z}}} = \langle X_{i_{||},i_z} X^{\dag}_{i_{||},i_z+\delta_z} \rangle_t, \end{eqnarray} where the operator $X_{\vec{R}_i}$ reads \begin{equation} X_{\vec{R}_i}= e^{g f_{i_{z}} (a_{\vec{R}_i}-a^{\dag}_{\vec{R}_i})}. \end{equation} Finally, the quantities $T_{1}$ and $T_{2}$ derive from the Hartree approximation (see Appendix A), and $N_x$ and $N_y$ denote the size of the system along the two in-plane directions, respectively. In order to calculate the variational free energy, we need to know the eigenvalues and eigenvectors of $H^{el}_{test}$, which depend on the magnetic order of the core spins through the double-exchange terms. \subsection{Magnetic order and diagonalization of the electronic mean-field Hamiltonian} In order to develop the calculation, we need to fix the magnetic order of the core spins. The pattern of magnetic order is determined by the minimization of the total free energy. By exploiting the translational invariance along the directions perpendicular to the growth axis of the heterostructure, the diagonalization of $H^{el}_{test}$ reduces to an effective one-dimensional problem for each pair of continuous wave vectors $(k_{x},k_{y})=\vec{k}_{||}$. For some magnetic patterns, the electronic problem is characterized at the interface by a staggered structure. Therefore, we study the electron system considering a reduced first Brillouin zone of in-plane wave vectors. To this aim, we represent $H^{el}_{test}$ with the $2N_{z}$ states \begin{equation} |k_{x},k_{y},i_z\rangle, \qquad |k_{x}+\pi,k_{y}+\pi,i_z\rangle, \end{equation} with the wave vectors such that $-\pi/2 < k_{x} < \pi/2$, $-\pi/2< k_{y}<\pi/2$, and $i_z$ going from $1$ to $N_z$. The eigenvalues of the electronic test Hamiltonian are indicated by $E(k_x,k_y,n)$, with the eigenvalue index $n$ going from $1$ to $2 N_z$. The eigenvector related to $n$ is specified in the following way: $b_{i_{z}}(\vec{k}_{||},n)$ for the first $N_{z}$ components, $p_{i_{z}}(\vec{k}_{||},n)$ for the remaining $N_{z}$ components. The variational procedure is self-consistently performed by imposing that the total density of the system $\rho$ is given by $N_{La}/N_{z}$, with $N_{La}$ the number of layers of the $LMO$ block, and that the local plane density $\chi(i_z)$ is equal to $\langle n_{\vec{R}_i} \rangle$. Therefore, one has to solve the following $N_z+1$ equations: \begin{equation} \rho=\frac{1}{N_{x}N_{y}N_{z}}\sum_{\vec{k}_{||}}\sum_{n}n_{F} \left[ E(\vec{k}_{||},n) \right] \end{equation} and \begin{eqnarray} \chi(i_z)&=&\frac{1}{N_{x}N_{y}}\sum_{\vec{k}_{||}}\sum_{n}n_{F} \left[ E(\vec{k}_{||},n) \right] \nonumber \\ && \Bigg[|b_{i_{z}}(\vec{k}_{||},n)|^{2}+|p_{i_{z}}(\vec{k}_{||},n)|^{2}+ \nonumber \\ && [b^{*}_{i_{z}}(\vec{k}_{||},n)p_{i_{z}}(\vec{k}_{||},n)+p^{*}_{i_{z}}(\vec{k}_{||},n)b_{i_{z}}(\vec{k}_{||},n)]\Bigg], \end{eqnarray} where $n_F(z)$ is the Fermi distribution function. These equations allow one to obtain the chemical potential $\mu$ and the local charge density $\chi(i_{z})$. As a result of the variational analysis, one is able to get the charge density profile corresponding to the magnetic solutions which minimize the free energy.
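The self-consistent procedure just outlined can be illustrated by a minimal numerical sketch (ours, with simplifications stated explicitly: ferromagnetic order on every plane, so that $\gamma_{i_z} = \eta_{i_z,i_z+\delta_z} = 1$ and the staggered two-component structure is not needed; no phonon dressing, $f_{i_z} = 0$; and a fixed toy potential standing in for the self-consistent Hartree term $\phi(i_z)$). It diagonalizes the plane-resolved Hamiltonian on a grid of in-plane wave vectors, fixes $\mu$ by bisection on the total density, and returns the layer densities $\chi(i_z)$.

\begin{verbatim}
import numpy as np

t, tz, T = 1.0, 1.0, 0.05              # units of t; k_B = 1
N_La, N_Sr = 16, 8
Nz = N_La + N_Sr
rho = N_La / Nz                        # one electron per LMO layer on average
Nk = 32                                # in-plane k grid per direction
ks = -np.pi + 2.0 * np.pi * (np.arange(Nk) + 0.5) / Nk
phi = np.where(np.arange(Nz) < N_La, -1.0, 1.0)   # toy well on the LMO side

def fermi(E, mu):
    return 1.0 / (np.exp(np.clip((E - mu) / T, -60.0, 60.0)) + 1.0)

# H(k): in-plane dispersion + phi on the diagonal, -t_z on the off-diagonals
Hz = -tz * (np.eye(Nz, k=1) + np.eye(Nz, k=-1))
E = np.empty((Nk, Nk, Nz))
U = np.empty((Nk, Nk, Nz, Nz))
for i, kx in enumerate(ks):
    for j, ky in enumerate(ks):
        eps = -2.0 * t * (np.cos(kx) + np.cos(ky))
        E[i, j], U[i, j] = np.linalg.eigh(Hz + np.diag(eps + phi))

# chemical potential from bisection on the total density
lo, hi = E.min(), E.max()
for _ in range(60):
    mu = 0.5 * (lo + hi)
    if fermi(E, mu).sum() / (Nk * Nk * Nz) < rho:
        lo = mu
    else:
        hi = mu

# layer-resolved density chi(i_z) = sum_{k,n} n_F(E) |U_{i_z}(k,n)|^2
chi = np.einsum('ijn,ijzn->z', fermi(E, mu), U ** 2) / (Nk * Nk)
print(np.round(chi, 3))                # charge leaking into the interface
\end{verbatim}

In the full scheme, of course, $\phi(i_z)$, the double-exchange factors $\gamma_{i_z}$ and $\eta_{i_z,i_z+\delta_z}$, and the variational parameters $f_{i_z}$ and $\Delta_{i_z}$ are updated and the loop is iterated to convergence.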
\section{Static properties and phase diagrams} We have found the magnetic solutions and the corresponding density profiles that are stable for different sizes of the $LMO$ and $SMO$ blocks. The inhomogeneous variational approach allows one to determine the values of the electron-phonon parameters $f_{i_z}$ and $\Delta_{i_z}$, and the magnetic order of the $t_{2g}$ spins through the effective magnetic fields $h_{i_z}$. We will study the systems in the intermediate to strong electron-phonon regime characteristic of manganite materials, focusing on two values of the coupling: $\lambda=0.5$ and $\lambda=0.8$. The maximum value of the in-plane antiferromagnetic super-exchange is $\epsilon=0.01 t$. The value of the Coulomb term $\alpha$ is fixed to $\alpha=0.2$. We will analyze the heterostructures in the low-temperature regime: $T=0.05 t$. The general structure of our solutions is characterized by three phases running along the $z$-direction. Actually, according to the parameters of the model, we find $G$ or $C$ antiferromagnetic phases corresponding to localized or delocalized charge carriers inside the $LMO$ block, respectively. The localization is ascribed to the electron-phonon coupling, which gives rise to the formation of small polarons. For the values of $\lambda$ considered in this paper, a ferromagnetic phase always stabilizes around the interface. The size of the ferromagnetic region at the interface is determined by the minimization of the free energy and depends on the values of the system parameters. Only for larger values of $\lambda$ and $\epsilon$ is interface ferromagnetism forbidden. Inside the $SMO$ block, a localized polaronic $G$-type antiferromagnetic phase is always stable. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth, angle=0]{figc1.eps} \end{center} \caption{Comparison among density profiles corresponding to different sizes at $\lambda=0.5$ and $\epsilon=0.01t$. The index $0$ indicates the interface $Mn$-plane between the last $La$-plane in the $LMO$ block and the first $Sr$-plane in the $SMO$ block.} \label{f1} \end{figure} At first, we have analyzed the scaling of the static properties as a function of the size of the system along the $z$ growth direction. Therefore, a comparison of the density profiles has been made for the $(LMO)_{8}/(SMO)_{4}$, $(LMO)_{16}/(SMO)_{8}$ and $(LMO)_{32}/(SMO)_{16}$ systems. In Fig. \ref{f1}, we show the density profiles in a situation where strain-induced anisotropy has not been introduced. It is worth noticing that we indicate with the index $0$ the interface $Mn$-plane between the last $La$-plane in the $LMO$ block and the first $Sr$-plane in the $SMO$ block. For a sufficiently large number of planes, the charge profile along $z$ shows a well-defined shape. Indeed, the local density is nearly unity in the $LMO$ block, nearly zero in the $SMO$ block, and it decreases from $1$ to $0$ in the interface region. The decrease of the charge density for the first planes of $LMO$ is due to the effect of open boundary conditions along the $z$ direction. In the intermediate electron-phonon coupling regime that we consider in Fig. \ref{f1}, the region where the charge drops involves $4-5$ planes between the two blocks. We notice that the local charge densities for the $(LMO)_{16}/(SMO)_{8}$ and $(LMO)_{32}/(SMO)_{16}$ systems are very similar around the interface. Furthermore, the numerical results show close values of the variational free energy for the above-mentioned systems.
Given the similarity of the properties of these two systems, in the following we will develop the analysis of the role of the interface by studying the system $(LMO)_{16}/(SMO)_{8}$. For the same set of electron-phonon and magnetic couplings, the variational parameters and the Hartree self-consistent potential along the $z$-axis are shown in Fig. 2. The effective magnetic fields are plotted for the most stable magnetic solution: antiferromagnetic $G$ order well inside $LMO$ (planes $1-15$) and $SMO$ (planes $19-24$), and ferromagnetic planes at the interface (planes $16-18$). The peak in the plot of the magnetic fields signals that ferromagnetism is quite robust at the interface. The variational electron-phonon parameters $f_{i_z}$ are small on the $LMO$ side and at the interface, but close to unity in the $SMO$ block. This means that, for these values of the couplings, carriers are delocalized in $LMO$ up to the interface region, but small polarons are present in the $SMO$ block. The quantities $\Delta_{i_z}$, entering the variational treatment of the electron-phonon coupling, are determined by $f_{i_z}$ and the local density $\langle n_{i_z} \rangle$ through the equation $\Delta_{i_z}=\langle n_{i_z} \rangle (1-f_{i_z})$. The Hartree self-consistent potential $\phi$ indicates that charges are trapped into a potential well corresponding to the $LMO$ block. Moreover, it is important to stress the energy scales involved in the well: the barrier between the $LMO$ and $SMO$ blocks is of the order of the electron bandwidth. Furthermore, at the interface, the energy difference between neighboring planes is of the order of the hopping energy $t$. \begin{figure} \includegraphics[width=0.5\textwidth,height=0.30\textheight,angle=0]{figc2.eps} \caption{Self-consistent Hartree potential $\phi(i_{z})$ (upper panel, in units of $t$), variational parameters $f_{i_z}$ (mid panel) and effective magnetic fields $|h^{z}_{i_{z}}|$ (lower panel) along the $z$-axis for $\lambda=0.5$ and $\epsilon=0.01t$.} \label{f2} \end{figure} As mentioned above, for these systems, strain plays an important role. In order to study its effect quantitatively, we have investigated the phase diagram under the variation of the hopping anisotropy $t_{z}/t$ for two different values of $\epsilon_{z}$ ($\epsilon_z = \epsilon =0.01 t$, $\epsilon_z = 0$). Indeed, we simulate the compressive strain in the $LMO$ block by increasing the ratio $t_{z}/t$ and decreasing $\epsilon_z / \epsilon$. On the other hand, the tensile strain in the $SMO$ block favors the more isotropic $x^2-y^2$ orbital and does not yield sizable effects. Therefore, for the $SMO$ block, in the following, we choose $t_{z}=t$ and $\epsilon_z = \epsilon$. As far as the electron-phonon interaction is concerned, we assume an intermediate coupling, $\lambda=0.8$. As shown in the upper panel of Fig. 3, with increasing ratio $t_{z}/t$ up to $1.7$ for $\epsilon_z = \epsilon$, the magnetic order in $LMO$ does not change, since it remains $G$ antiferromagnetic. However, the character of the charge carriers is changed. Actually, for $\lambda=0.8$, in the absence of anisotropy, small polarons are present in the $LMO$ block. Moreover, at $t_{z}/t \simeq 1.5$, in $LMO$, a change from small localized polarons to large delocalized polarons occurs. For all values of the ratio $t_{z}/t$, the interface region is characterized by ferromagnetic order with large polaron carriers and $SMO$ by $G$ antiferromagnetic order with small polaron carriers.
\begin{figure} \includegraphics[width=0.5\textwidth,height=0.30\textheight,angle=0]{figc3.eps} \caption{Phase diagram in the hopping anisotropy-energy plane for the $(LMO)_{16}/(SMO)_{8}$ system, corresponding to $\lambda=0.8$ for $\epsilon_{z}=0.01t$ (upper panel) and $\epsilon_{z}=0$ (lower panel).} \label{f3} \end{figure} It has been shown that it is also important to consider the anisotropy in the super-exchange parameters ($\epsilon_{z} \neq \epsilon$) as a consequence of strain. \cite{fang} In order to simulate the effect of compressive strain in $LMO$, a reduction of $\epsilon_{z}$ will be considered. We discuss the limiting case $\epsilon_{z}=0$. For this regime of parameters, the effect on the magnetic phases is the strongest. As shown in the lower panel of Fig. 3, for $1.28 \le t_{z}/t \le 1.5$, in the $LMO$ block, a $C$-type antiferromagnetic phase is the most favorable. The transition from small to large polarons again takes place at $t_{z}/t \simeq 1.5$. Therefore, we have shown that there is a range of parameters where the $LMO$ block has $C$-type antiferromagnetic order with small localized polarons. Due to the effect of strain, the magnetic solution in $LMO$ turns out to be compatible with experimental results in superlattices. \cite{aruta} The interface is still ferromagnetic with metallic large polaron features. In the figure, $A$/$B$/$C$ refers to the magnetic orders and the character of the charge carriers inside $LMO$ (A), at the interface (B), and inside $SMO$ (C). In order to analyze the effects of the electron-phonon interaction, a comparison between two different electron-phonon couplings is reported in Fig. 4. We have investigated the solutions which minimize the variational free energy at fixed values of the anisotropy factors $t_{z}/t=1.3$ and $\epsilon_z=0$ for $\lambda=0.5$ and $\lambda=0.8$. The magnetic solution in the $LMO$ block is $C$ antiferromagnetic up to the 15th plane. For both values of $\lambda$, polarons are small. In the $SMO$ block, starting from the 19th plane, the solution is $G$-type antiferromagnetic together with localized polarons. Three planes around the interface are ferromagnetically ordered. For $\lambda=0.5$, all the three planes at the interface are characterized by delocalized polarons, while, for $\lambda=0.8$, only the plane linking the ends of the $LMO$ and $SMO$ blocks hosts delocalized charge carriers. As shown in Fig. 4, the quantity $\lambda$ has important consequences on physical properties such as the local particle density. Actually, for $\lambda=0.8$ the transition from occupied to empty planes is sharper at the interface. Only one plane at the interface shows an intermediate density close to $0.5$. For $\lambda=0.5$ the charge profile is smoother and the three ferromagnetic planes with large polarons have densities different from zero and one. For the analysis of the spectral and optical quantities, we will consider the parameters used for the discussion of the results in this last figure. \begin{figure} \includegraphics[width=0.5\textwidth, angle=0]{figc4.eps} \caption{Comparison between the local particle densities corresponding to $\lambda=0.5$ and $\lambda=0.8$.} \label{f4} \end{figure} \section{Spectral properties} In this section we will calculate the spectral properties of the heterostructure for the same parameters used in Fig. 4.
Performing the canonical transformation (\ref{lang}) and exploiting the cyclic properties of the trace, the electron Matsubara Green's function becomes \begin{eqnarray} \mathcal{G}(\vec{R}_{i},\vec{R}_{j},\tau)=-\langle T_{\tau}c_{\vec{R}_i}(\tau)X_{\vec{R}_i}(\tau)c^{\dag}_{\vec{R}_j}(0)X^{\dag}_{\vec{R}_j}(0)\rangle. \end{eqnarray} By using the test Hamiltonian (\ref{test}), the correlation function can be disentangled into electronic and phononic terms. \cite{perroni,perroni1} Going to Matsubara frequencies and making the analytic continuation $i\omega_{n} \rightarrow \omega+i\delta$, one obtains the retarded Green's function and the diagonal spectral function $A^{i_{x}i_{y}}_{i_{z}}(\omega)$ corresponding to $\vec{R}_{i}=\vec{R}_{j}$ \begin{eqnarray} && A^{i_{x},i_{y}}_{i_{z}}(\omega)= \nonumber \\ && e^{-S_T^{i_{z}}}\sum_{l=- \infty}^{\infty} I_{l} (S^{i_{z}})e^{\frac{\beta l\omega_{0}}{2}}[1-n_{F}(\omega-l\omega_{0})] g^{i_{x},i_{y}}_{i_{z}}(\omega-l\omega_{0}) \nonumber \\ && +e^{-S_T^{i_{z}}}\sum_{l=-\infty}^{\infty} I_{l} (S^{i_{z}})e^{\frac{\beta l\omega_{0}}{2}} n_{F}(\omega+l\omega_{0}) g^{i_{x},i_{y}}_{i_{z}}(\omega+l \omega_{0}), \label{A} \end{eqnarray} where $S^{i_{z}}_{T}=g^{2}f^{2}_{i_{z}}(2N_{0}+1)$ and $S^{i_{z}}=2g^{2}f^2_{i_{z}}[N_0(N_{0}+1)]^{\frac{1}{2}}$, with $N_0=1/(e^{\beta \omega_0}-1)$ the thermal phonon occupation, $I_l(z)$ are modified Bessel functions, and $g^{i_{x},i_{y}}_{i_{z}}(\omega)$ is \begin{eqnarray} && g^{i_{x},i_{y}}_{i_{z}}(\omega)=\frac{2 \pi}{N_{x}N_{y}}\sum_{\vec{k}_{||}}\sum_{n=1}^{2N_{z}}\delta[\omega-E(\vec{k}_{||},n)] \nonumber \\ && \times \Bigg[|b_{i_{z}}(\vec{k}_{||},n)|^{2}+|p_{i_{z}}(\vec{k}_{||},n)|^{2}+ \nonumber \\ && (-1)^{i_{x}+i_{y}}[b^{*}_{i_{z}}(\vec{k}_{||},n)p_{i_{z}}(\vec{k}_{||},n) +p^{*}_{i_{z}}(\vec{k}_{||},n)b_{i_{z}}(\vec{k}_{||},n)]\Bigg]. \end{eqnarray} The density of states $D(\omega)$ is defined as \begin{equation} D(\omega)=\frac{1}{N_{x}N_{y}N_z} \frac{1}{2 \pi} \sum_{i_x,i_y,i_z} A^{i_{x},i_{y}}_{i_{z}}(\omega). \end{equation} \begin{figure} \includegraphics[width=0.5\textwidth, height=0.25\textheight, angle=0]{figc5.eps} \caption{Comparison between the densities of states (in units of $1/t$) as a function of the energy (in units of $t$) corresponding to $\lambda=0.5$ and $\lambda=0.8$.} \label{f5} \end{figure} In Fig. 5 we report the density of states of the system $(LMO)_{16}/(SMO)_{8}$. It has been calculated measuring the energy from the chemical potential $\mu$. This comparison has been made at fixed low temperature ($k_{B}T=0.05t$), therefore we can consider the chemical potential very close to the Fermi energy of the system. At $\lambda=0.5$, the spectral function exhibits a residual spectral weight at $\mu$. The main contribution to the density of states at the chemical potential $\mu$ comes from the three ferromagnetic large polaron planes at the interface. Indeed, the contributions due to the $LMO$ and $SMO$ blocks are negligible. For stronger electron-phonon coupling, at $\lambda=0.8$, we observe an important depression of the spectral function at $\mu$. Hence the formation of a clear pseudogap takes place. This result is still compatible with the solution of our variational calculation since, for this value $\lambda=0.8$, there is only one plane with delocalized charge carriers, which corresponds to the plane indicated as the interface ($i_{z}=17$), while the two further ferromagnetic planes around the interface are characterized by small polarons.
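The polaronic structure of Eq.~(\ref{A}) can be made concrete by a short numerical check (ours, not from the paper) on the sideband weights $w_l = e^{-S_T} I_l(S) e^{\beta l \omega_0/2}$: they must sum to one (the prefactor $e^{-S_T^{i_z}}$ is precisely what this sum rule requires), and their first moment $\sum_l l\, w_l\, \omega_0$ equals the polaronic shift $g^2 f^2 \omega_0$. We use $g^2 = 6\lambda t/\omega_0$, $\omega_0 = 0.5t$, $T = 0.05t$ and $f = 1$ (small polaron).

\begin{verbatim}
import numpy as np
from scipy.special import iv           # modified Bessel functions I_l(z)

t, w0, T, f = 1.0, 0.5, 0.05, 1.0      # units of t; k_B = 1; f_iz = 1
beta = 1.0 / T

for lam in (0.5, 0.8):
    g2 = 6.0 * lam * t / w0            # g^2 from lambda = g^2 w0 / (6 t)
    N0 = 1.0 / np.expm1(beta * w0)     # thermal phonon occupation
    S_T = g2 * f ** 2 * (2.0 * N0 + 1.0)
    S = 2.0 * g2 * f ** 2 * np.sqrt(N0 * (N0 + 1.0))
    ls = np.arange(-5, 60)             # sideband index l
    w = np.exp(-S_T) * iv(ls, S) * np.exp(0.5 * beta * ls * w0)
    print("lambda = %.1f: sum = %.6f, shift = %.3f t (g2*f2*w0 = %.1f t)"
          % (lam, w.sum(), np.sum(ls * w) * w0, g2 * f ** 2 * w0))
\end{verbatim}

At this low temperature the weights reduce to the Poisson distribution $e^{-g^2 f^2}(g^2 f^2)^l/l!$, so that for $\lambda = 0.8$ almost all the single-particle spectral weight sits in phonon sidebands several $t$ away from the coherent feature, consistently with the pseudogap discussed above.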
The depression of the density of states at the Fermi energy is also due to the polaronic localization well inside the $LMO$ and $SMO$ blocks. In any case we find that, even for $\lambda=0.8$, the density of states never vanishes at the interface, in agreement with experimental results. \cite{eckstein1} In this section we have found strong indications that a metallic ferromagnetic layer can form at the interface between the $LMO$ and $SMO$ blocks. This situation should be relevant for superlattices with $n \geq 3$, where resistivity measurements made with contacts on top of $LMO$ show a globally insulating behavior. In our analysis we have completely neglected any effect due to disorder, even if, from both experiments \cite{eckstein,adamo1} and theories \cite{dagotto1}, it has been suggested that localization induced by disorder could be the cause of the metal-insulator transition observed for $n \geq 3$. We point out that the sizable source of disorder due to the random doping with $Sr^{2+}$ is strongly reduced since, in superlattices, $La^{3+}$ and $Sr^{2+}$ ions are spatially separated by interfaces. Therefore, the amount of disorder present in the heterostructure is much smaller than in the alloy. However, considering the behavior of the $LMO$ ($SMO$) block as that of a bulk with a small amount of holes (particles), one expects that even a weak disorder induces localization. On the other hand, a weak disorder is not able to prevent the formation of the ferromagnetic metallic interface favored by the double-exchange mechanism and the charge transfer between the bulk-like blocks: the states at the Fermi level due to the interface formation have enough density \cite{eckstein1} so that they cannot be easily localized by weak disorder. In this section, we have shown that this can be the case in the intermediate electron-phonon coupling regime appropriate for $LMO/SMO$ heterostructures. In the next section we will analyze the effects of electron-phonon coupling and strain on the optical conductivity in the same regime of parameters considered in this section. \section{Optical properties} To determine the linear response to an external field of frequency $\omega$, we derive the conductivity tensor $\sigma_{\alpha,\beta}$ by means of the Kubo formula. In order to calculate the absorption, we need only the real part of the conductivity \begin{eqnarray} Re \sigma_{\alpha,\alpha}(\omega)=-\frac{Im \Pi^{ret}_{\alpha,\alpha}}{\omega}, \label{realsig} \end{eqnarray} where $\Pi^{ret}_{\alpha,\beta}$ is the retarded current-current correlation function. Following a well-defined scheme \cite{perroni,perroni1} and neglecting vertex corrections, one can get a compact expression for the real part of the conductivity $\sigma_{\alpha,\alpha}$. It is possible to get the conductivity both in the planes perpendicular to the growth axis, $\sigma_{xx}$, and along the growth direction, $\sigma_{zz}$. In order to calculate the current-current correlation function, one can use the spectral function $A_{\vec{k}_{||};i_{z},j_{z}}$ derived in the previous section, exploiting the translational invariance along the in-plane directions.
It is possible to show that the components of the real part of the conductivity become \begin{eqnarray} Re[\sigma_{xx}](\omega)=\frac{e^{2}t^{2}}{N_{x}N_{y}}\sum_{k_{x},k_{y}}4\sin^2(k_{x}) \frac{1}{N_{z}}\sum_{i_{z},j_{z}}\gamma_{i_{z}}\gamma_{j_{z}} \nonumber \\ \times \frac{1}{\omega}\int^{\infty}_{-\infty}\frac{d\omega_{1}}{4\pi}[n_{F}(\omega_{1}-\omega)-n_{F}(\omega_{1})] \nonumber \\ \times A_{k_{x},k_{y};i_{z},j_{z}}(\omega_{1}-\omega)A_{k_{x},k_{y};i_{z},j_{z}}(\omega_{1}), \end{eqnarray} and \begin{eqnarray} && Re [\sigma_{zz}](\omega)=\frac{e^2t^2}{N_xN_y}\sum_{k_x,k_y}\frac{1}{N_z}\sum_{i_z,j_z} \sum_{\delta_{1z},\delta_{2z}} \delta_{1z} \delta_{2z} \nonumber \\ &&\times \eta_{i_z,i_z+\delta_{1z}} \eta_{j_z,j_z+\delta_{2z}} \frac{1}{\omega}\int^{\infty}_{-\infty}\frac{d\omega_1}{4\pi}[n_F(\omega_1-\omega)-n_F(\omega_1)] \nonumber \\ &&\times A_{k_x,k_y;i_z+\delta_{1z},j_z+\delta_{2z}}(\omega_1-\omega)A_{k_x,k_y;i_z,j_z}(\omega_1). \end{eqnarray} \begin{figure} \includegraphics[width=0.55\textwidth, angle=0]{figc6.eps} \caption{The conductivity (in units of $e^2/(mt)$, with $m=1/(2t)$) in the plane perpendicular to the growth direction of the $(LMO)_{16}/(SMO)_{8}$ bilayer as a function of the energy (in units of $t$) for different values of $\lambda$.} \label{f6} \end{figure} In Fig. 6, we report the in-plane conductivity as a function of the frequency at $\lambda=0.5$ and $\lambda=0.8$. We have checked that the in-plane response mainly comes from the interface planes. Both conductivities are characterized by a Drude-like response at low frequency. Therefore, the in-plane conductivity provides a clear signature of the formation of the metallic ferromagnetic interface. However, due to the effect of the interactions, we have found that the low-frequency in-plane response is at least one order of magnitude smaller than that of free electrons in the heterostructures. Moreover, additional structures are present in the absorption with increasing energy. For $\lambda=0.5$, a new band with a peak energy of the order of the hopping $t=2 \omega_0$ is clearly visible in the spectra. This structure can be ascribed to the presence of large polarons at the three interface planes. \cite{perroni} Actually, this band comes from the incoherent multiphonon absorption of large polarons at the interface. This is also confirmed by the fact that this band is quite broad, so that it can be interpreted in terms of multiple excitations. For $\lambda=0.8$, the band is even broader and shifted to higher energies. In this case, at the interface, large and small polarons are present with a ferromagnetic spin order. Therefore, there is a mixing of excitations whose net effect is the transfer of spectral weight to higher frequencies. The out-of-plane optical conductivities show significant differences in comparison with the in-plane responses. In Fig. 7, we report the out-of-plane conductivity as a function of the frequency at $\lambda=0.5$ and $\lambda=0.8$. First, we observe the absence of the Drude term. Moreover, the band at energy about $2 \omega_0$ is narrower than that in the in-plane response. Therefore, the origin of this band has to be different. Actually, the out-of-plane optical conductivities are sensitive to the interface region. A charge carrier at the interface has to overcome an energy barrier in order to hop to a neighbouring empty site. As shown in Fig. 2, the typical energy difference between close planes at the interface is of the order of the hopping $t$.
Therefore, when an electron hops along $z$, it has to pay at least an energy of the order of $t$. In the out-of-plane spectra, the peaks at low energy can be ascribed to this process. Of course, by paying a larger energy, the electron can hop to next-nearest neighbors. This explains the width of this band due to inter-plane hopping. Additional structures are present at higher energies in the out-of-plane conductivities. For $\lambda=0.5$ the band at high energy is broad with small spectral weight. For $\lambda=0.8$, there is an actual transfer of spectral weight to higher energies. A clear band is peaked around $10 t$. This energy scale can be interpreted as given by $2 g^2 \omega_0=9.6 t$ for $\lambda=0.8$. Therefore, in the out-of-plane response, the contribution at high energy can be interpreted as due to small polarons. \cite{perroni,mahan} \begin{figure} \includegraphics[width=0.5\textwidth, angle=0]{figc7.eps} \caption{The conductivity (in units of $e^2/(mt)$, with $m=1/(2t)$) along the growth direction of the $(LMO)_{16}/(SMO)_{8}$ bilayer as a function of the energy (in units of $t$) for $\lambda=0.5$ and $\lambda=0.8$.} \label{f7} \end{figure} Unfortunately, experimental data about the optical properties of $LMO/SMO$ bilayers are still not available. Therefore, a comparison with experiments is not yet possible. Predictions about the different behaviors of $\sigma_{xx}$ and $\sigma_{zz}$ can be easily checked by using in-plane and out-of-plane polarizations of the electric fields in the experimental probes. More importantly, the formation of a two-dimensional gas at the interface is expected to be confirmed by experiments using lateral contacts directly on the region between the $LMO$ and $SMO$ blocks. The d.c. conductivity of the sheet could directly measure the density of carriers of the interface metal and confirm the Drude-like low-frequency behavior of the in-plane response. Finally, one expects that a weak disorder present in the system and not included in our analysis can increase the scattering rate of the carriers, reducing the value of the in-plane conductivity for $\omega \rightarrow 0$. \section{Conclusions} In this paper we have discussed phase diagrams, spectral and optical properties for a very large bilayer $(LMO)_{2n}/(SMO)_{n}$ (up to $48$ sites along the growth direction). A correlated inhomogeneous mean-field approach has been developed in order to analyze the effects of electron-lattice anti-adiabatic fluctuations and strain. We have shown that a metallic ferromagnetic interface is a quite robust feature of these systems for a large range of electron-lattice couplings and strain strengths. Furthermore, we have found that the size of the interface region depends on the strength of the electron-phonon interactions. At low temperature, the general structure of our solutions is characterized by three phases running along the growth $z$-direction: an antiferromagnetic phase with localized/delocalized charge carriers inside the $LMO$ block, a ferromagnetic state with itinerant carriers at the interface, and a localized polaronic $G$-type antiferromagnetic phase inside the $SMO$ block. The type of antiferromagnetic order inside $LMO$ depends on the strain induced by the substrate. Spectral and optical properties have been discussed for different parameter regimes. Due to the formation of the metallic interface, even in the intermediate to strong electron-phonon coupling regime, the density of states never vanishes at the chemical potential.
Finally, in-plane and out-of-plane optical conductivities are sharply different: the former shows a metallic behavior, the latter a transfer of spectral weight to high frequency due to the electrostatic potential well trapping electrons in the $LMO$ block. The in-plane response provides a signature of the formation of the metallic ferromagnetic interface. In this paper we have focused on static and dynamic properties at very low temperature. The approach used in the paper is, however, valid at any temperature. Therefore, it could be very interesting to analyze not only single interfaces, but also superlattices with different unit cells at finite temperature. Work in this direction is in progress.
\section{Introduction} \label{intro} Distributed \emph{Gaussian process} (GP) models \cite{LowUAI13,Marc15,Yarin14,NghiaICML16,LowAAAI15} are conventionally designed with a server-client paradigm where a server distributes the computational load among parallel machines (i.e., client nodes) to achieve scalability to massive, streaming datasets. This paradigm can potentially allow the richness and expressive power of GP models \cite{Rasmussen06} (Section~\ref{fgp}) to be exploited by multiple mobile sensing agents for distributed inference of the complex latent behavior and correlation structure underlying their local data. Such a prospect has inspired the recent development of distributed GP fusion algorithms \cite{LowUAI12,LowRSS13,Arik15,Rakshit17}: Essentially, the ``client'' agents encapsulate their own local data into memory-efficient local summary statistics based on a \emph{common} set of \emph{fixed/known} GP hyperparameters and \emph{inducing inputs}, then communicate them to some ``server'' agent(s) to be fused into globally consistent summary statistics. These will in turn be sent back to the ``clients'' for predictive inference. These distributed GP fusion algorithms inherit the advantage of being adjustably lightweight by restricting the number of inducing inputs (hence the size of the local and global summary statistics) to fit the agents' limited computational and communication capabilities at the expense of predictive accuracy. However, such algorithms fall short of achieving the truly decentralized GP fusion necessary for scaling up to a massive number of agents grounded in the real world (e.g., traffic sensing, modeling, and prediction by autonomous vehicles cruising in urban road networks \cite{Arik15,NghiaICML14,min11,wang05,Bayen10a}, distributed inference on a network of IoT and mobile devices \cite{Kang16,Sarkar14}) due to several critical issues. These include: (a) the single point(s) of failure at the server agent(s), whose computational and communication capabilities must be superior and robust; (b) different mobile sensing agents are likely to gather local data of varying behaviors and correlation structure from possibly separate localities of the input space (e.g., spatiotemporal) and could therefore incur considerable information loss due to summarization based on a common set of fixed/known GP hyperparameters and inducing inputs, especially when the inducing inputs are few and far from the data (in the correlation sense); and (c) like their non-fusion counterparts, distributed GP fusion algorithms implicitly assume a one-time processing of a fixed set of data and would hence repeat the entire fusion process involving all local data gathered by the agents whenever a new batch of streaming data arrives, which is potentially very expensive. Further problems could occur in the event of a transmission loss between the clients and server, which can happen when the locations of clients are changing over time (e.g., autonomous vehicles cruising an urban road network to collect traffic data \cite{Arik15}). This loss might prevent the prediction model from being generated \cite{LowUAI13} or, as shown in Section~\ref{exp}, cause its performance to degrade badly due to irrecoverable information loss.
To overcome these limitations, this paper presents a \emph{\underline{Co}llective \underline{O}nline \underline{L}earning via \underline{GP}} (COOL-GP) framework that enables a massive number of agents to perform decentralized online GP fusion based on their own possibly different sets of \emph{learned} GP hyperparameters and inducing inputs. A key technical challenge here lies in how the summary statistics currently being maintained by an agent can be fused efficiently in constant time and space with the summary statistics of a new batch of data or another agent based on a possibly different set of GP hyperparameters and inducing inputs. To realize this, we exploit the notion of a latent encoding vocabulary \cite{Candela05,Snelson07a,Titsias09,Miguel10,Hensman13,NghiaICML15,NghiaAAAI17} as a shared medium to exchange and fuse summary statistics of different batches of data or agents based on different sets of GP hyperparameters and inducing inputs (Section~\ref{sgps}). This consequently enables us to design and develop a novel sampling scheme for efficient approximate online GP inference, a novel pairwise operator for fusing the summary statistics of different agents, and a novel decentralized message passing algorithm that can exploit sparse connectivity among agents for improving efficiency and enhancing the robustness of our framework to transmission loss (Section~\ref{fusion}). We provide a rigorous analysis of the approximation loss arising from the online update and fusion in Section~\ref{analysis}. Finally, we empirically evaluate the performance of COOL-GP on an extensive benchmark comprising both synthetic and real-world datasets with thousands of agents (Section~\ref{exp}). \vspace{-4mm} \section{Background and Notation} \vspace{-2mm} \label{fgp} The GP \cite{Rasmussen06} is a state-of-the-art model for predictive analytics due to its capacity to represent complex behaviors of data in highly sophisticated domains. Specifically, let $\mathbb{X} \subseteq \mathbb{R}^d$ denote an input domain and $\mathrm{f}: \mathbb{X} \rightarrow \mathbb{R}$ denote a random function mapping each $d$-dimensional input feature vector $\mathbf{x} \in \mathbb{X}$ to a stochastic scalar measurement $\mathrm{f}(\mathbf{x}) \in \mathbb{R}$ and its noisy observation $\mathrm{y} \triangleq \mathrm{f}(\mathbf{x}) + \epsilon$ where $\epsilon \sim \mathcal{N}(0, \sigma^2)$. To characterize the stochastic behavior of $\mathrm{f}(\mathbf{x})$, a GP model assumes that for every finite subset of inputs $\mathbf{X}_\mathcal{D} \triangleq \{\mathbf{x}_1, \ldots, \mathbf{x}_n\} \subseteq \mathbb{X}$, the corresponding column vector $\mathbf{f}_\mathcal{D} \triangleq [\mathrm{f}(\mathbf{x}_1) \ldots \mathrm{f}(\mathbf{x}_n)]^\top$ of stochastic scalar measurements is distributed \emph{a priori} by a multivariate Gaussian distribution with mean $\mathbf{m}_\mathcal{D} \triangleq [\mathrm{m}(\mathbf{x}_1) \ldots \mathrm{m}(\mathbf{x}_n)]^\top$ and covariance $\mathbf{K}_\mathcal{DD} \triangleq [\mathrm{k}(\mathbf{x}_i,\mathbf{x}_j)]_{ij}$ induced from a pair of user-specified mean and covariance functions, $\mathrm{m}: \mathbb{X} \rightarrow \mathbb{R}$ and $\mathrm{k}: \mathbb{X} \times \mathbb{X} \rightarrow \mathbb{R}$, respectively. For notational simplicity, we assume a zero mean function $\mathrm{m}(\mathbf{x}) = 0$.
Then, let $\mathbf{y}_\mathcal{D} \triangleq [\mathrm{y}_1 \ldots \mathrm{y}_n]^\top$ denote the corresponding vector of noisy observations $\{\mathrm{y}_i\}_{i=1}^n$ where $\mathrm{y}_i \triangleq \mathrm{f}(\mathbf{x}_i) + \epsilon$ with $\epsilon \sim \mathcal{N}(0, \sigma^2)$; the posterior distribution over $\mathrm{f}(\mathbf{x}_\ast)$ for any test input $\mathbf{x}_\ast$ is Gaussian with mean $\mu(\mathbf{x}_\ast) = \mathbf{k}_\ast^\top(\mathbf{K}_\mathcal{DD} + \sigma^2 \mathbf{I})^{-1}\mathbf{y}_\mathcal{D}$ and variance $\sigma^2(\mathbf{x}_\ast) = \mathrm{k}(\mathbf{x}_\ast,\mathbf{x}_\ast) - \mathbf{k}_\ast^\top(\mathbf{K}_\mathcal{DD} + \sigma^2 \mathbf{I})^{-1}\mathbf{k}_\ast$ where $\mathbf{k}_\ast \triangleq [\mathrm{k}(\mathbf{x}_\ast,\mathbf{x}_1) \ldots \mathrm{k}(\mathbf{x}_\ast, \mathbf{x}_n)]^\top$. A complete predictive map over the (possibly infinite) input domain $\mathbb{X}$ can then be succinctly represented with $\{(\mathbf{K}_\mathcal{DD} + \sigma^2 \mathbf{I})^{-1}\mathbf{y}_\mathcal{D}, (\mathbf{K}_\mathcal{DD} + \sigma^2 \mathbf{I})^{-1}\}$. This representation is not efficient because its size (computation) grows quadratically (cubically) in the size of the data. More importantly, since the GP representation is specific to a particular data variation scale (i.e., the kernel parameters or hyper-parameters), it cannot be used as a common ground to facilitate communication between agents operating in related domains with different variation scales. To mitigate these issues, we instead represent each agent's local model using a common unit-scale GP and a transformation operator that warps the unit-scale GP into a domain-specific GP parameterized with a different scale reflecting the variation in the local data. Intuitively, this allows each agent to translate the statistical properties of its specific domain to those of a common domain and facilitates efficient communication between agents (Section~\ref{fusion}) while maintaining its own set of hyper-parameters. Let $\mathrm{u}(\mathbf{z}) \sim \mathcal{GP} (0, \mathrm{k}_{\mathrm{uu}}(\mathbf{z},\mathbf{z}'))$ with $\mathrm{k}_{\mathrm{uu}}(\mathbf{z},\mathbf{z'}) = \mathrm{exp}\left(-0.5(\mathbf{z} - \mathbf{z'})^\top(\mathbf{z} - \mathbf{z'})\right)$. We can then characterize the distribution of a domain-specific function $\mathrm{f}(\mathbf{x})$ in terms of $\mathrm{u}(\mathbf{z})$ and its prior distribution $\mathcal{GP} (0, \mathrm{k}_{\mathrm{uu}}(\mathbf{z},\mathbf{z}'))$ over the unit-scale domain, which will be referred to as the standardized domain hereafter for convenience. In particular, let $\mathbf{W}$ be a projection matrix that maps domain-specific inputs $\mathbf{x} \in \mathbb{X}$ onto the standardized domain of $\mathbf{z}$; the latent function $\mathrm{f}$ can then be characterized in terms of $\mathrm{u}$ as $\mathrm{f}(\mathbf{x}) = \sigma_s \mathrm{u}(\mathbf{Wx})$. This implies $\mathrm{f}(\mathbf{x}) \sim \mathcal{GP}(0, \mathrm{k}_\mathrm{ff}(\mathbf{x,x'}))$ where \cite{Titsias13} \begin{eqnarray} \hspace{-1mm}\mathrm{k}_\mathrm{ff}(\mathbf{x,x'}) \triangleq \sigma_s^2\mathrm{exp}\left(-0.5(\mathbf{x - x'})^\top\mathbf{W}^\top\mathbf{W}(\mathbf{x - x'})\right) . \label{eq:3.3} \end{eqnarray} Furthermore, it can be shown that the cross-domain covariance between $\mathrm{f}(\mathbf{x})$ and $\mathrm{u}(\mathbf{z})$ is also analytically tractable: $\mathrm{k}_{\mathrm{fu}}(\mathbf{x,z}) = \sigma_s \mathrm{exp} \left(-0.5(\mathbf{Wx - z})^\top(\mathbf{Wx - z})\right)$.
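For concreteness, the two covariance functions above can be implemented in a few lines of Python; the following is a minimal illustrative sketch of Eq.~\eqref{eq:3.3} and the cross-domain covariance $\mathrm{k}_{\mathrm{fu}}$, with helper names of our own choosing.
\begin{verbatim}
import numpy as np

def k_ff(X1, X2, W, sigma_s):
    """Domain-specific kernel of Eq. (3.3):
    k_ff(x, x') = sigma_s^2 exp(-0.5 ||W(x - x')||^2)."""
    Z1, Z2 = X1 @ W.T, X2 @ W.T
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return sigma_s ** 2 * np.exp(-0.5 * d2)

def k_fu(X, Z, W, sigma_s):
    """Cross-domain covariance:
    k_fu(x, z) = sigma_s exp(-0.5 ||W x - z||^2)."""
    WX = X @ W.T
    d2 = ((WX[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return sigma_s * np.exp(-0.5 * d2)
\end{verbatim}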
This analytic tractability enables an inference of the statistical properties of $\mathrm{u}(\mathbf{z})$ using observations of the domain-specific function $\mathrm{f}(\mathbf{x})$ via learning an appropriate projection matrix $\mathbf{W}$ (as detailed in the remainder of this section), which forms the basis for an efficient agent representation (Section~\ref{sgps}) amenable to cross-domain communication via the common function $\mathrm{u}(\mathbf{z})$ (Section~\ref{fusion}). The cost-efficient GP representation of a learning agent can be achieved by exploiting the vector $\mathbf{u} = [\mathrm{u}(\mathbf{z}_1) \ldots \mathrm{u}(\mathbf{z}_m)]^\top$ of latent inducing outputs, or encoding vocabulary, for a small set of $m$ standardized inputs $\mathbf{Z} = \{\mathbf{z}_1, \ldots, \mathbf{z}_m\}$ to construct sufficient statistics for $\mathbf{y}_\mathcal{D}$. That is, for every test input $\mathbf{x}_\ast$ and $\mathbf{f}_\ast = \mathrm{f}(\mathbf{x}_\ast)$, we can characterize the predictive distribution $\mathrm{p}(\mathbf{f}_\ast | \mathbf{y}_\mathcal{D})$ in terms of the posterior $\mathrm{p}(\mathbf{u},\mathbf{W} |\mathbf{y}_\mathcal{D})$ which, in turn, induces a cost-efficient surrogate representation $\mathrm{q}(\mathbf{u}, \mathbf{W})$. This can be achieved by minimizing the KL-divergence between $\mathrm{q}(\mathbf{f}_\mathcal{D},\mathbf{u},\mathbf{W}) \triangleq \mathrm{q}(\mathbf{u},\mathbf{W}) \mathrm{p}(\mathbf{f}_\mathcal{D}|\mathbf{u,W})$ and $\mathrm{p}(\mathbf{f}_\mathcal{D}, \mathbf{u}, \mathbf{W} | \mathbf{y}_\mathcal{D})$, which is equivalent to maximizing $\mathrm{L}(\mathrm{q}) \triangleq \mathbb{E}_{\mathrm{q}} \left[ \mathrm{log}\ \mathrm{p}(\mathbf{y}_\mathcal{D}|\mathbf{f}_\mathcal{D})\right] - \mathrm{D_{KL}}(\mathrm{q}(\mathbf{u},\mathbf{W}) \| \mathrm{p}(\mathbf{u},\mathbf{W}))$. By parameterizing the prior $\mathrm{p}(\mathbf{u,W}) = \mathrm{p}(\mathbf{u})\mathrm{p}(\mathbf{W})$ where $\mathrm{p}(\mathbf{u}) \triangleq \mathcal{N}(\mathbf{u} | 0, \mathbf{K}_\mathcal{UU})$ with $\mathbf{K}_\mathcal{UU} \triangleq [\mathrm{k}_{\mathrm{uu}}(\mathbf{z}_i, \mathbf{z}_j)]_{i,j}$ and $\mathrm{p}(\mathbf{W})$ is a product of standard normals, it follows that the optimal marginal distribution is $\mathrm{q}(\mathbf{W}) = \prod_{i=1}^d\prod_{j=1}^d\mathcal{N}(\mathrm{w}_{ij} | \mu_{ij}, \sigma^2_{ij})$. The agent's unique defining hyperparameters $\theta = \{\mu_{ij},\sigma_{ij}\}_{i,j}$ can then be optimized via gradient ascent on $\mathrm{L}(\mathrm{q})$, hence accounting for the data variation scale at its specific location. Then, given $\mathrm{q}(\mathbf{W})$, $\mathrm{q}(\mathbf{u})$ is also a Gaussian whose mean $\mathbf{m}$ and covariance $\mathbf{S}$ can be analytically derived as\vspace{1mm} \begin{eqnarray} \hspace{-10mm}\mathbf{S} \ =\ \sigma^2_n\mathbf{K}_\mathcal{UU}(\sigma^2_n\mathbf{K}_\mathcal{UU} + \mathbf{C}_\mathcal{UU})^{-1}\mathbf{K}_\mathcal{UU} \ \ \ \ \text{;}\ \ \ \ \mathbf{m} \ =\ \mathbf{K}_\mathcal{UU}(\sigma^2_n\mathbf{K}_\mathcal{UU} + \mathbf{C}_\mathcal{UU})^{-1} \mathbf{C}_\mathcal{UD}\mathbf{y}_\mathcal{D}\vspace{1mm} \label{eq:2.4} \end{eqnarray} where $\mathbf{K}_\mathcal{DU} \triangleq \left[\mathrm{k}_\mathrm{fu}(\mathbf{x}_i,\mathbf{z}_j)\right]_{i,j}, \mathbf{K}_\mathcal{UD} \triangleq \mathbf{K}_\mathcal{DU}^\top, \mathbf{C}_\mathcal{UU} \triangleq \mathbb{E}_{q(\mathbf{W})}\left[\mathbf{K}_\mathcal{UD}\mathbf{K}_\mathcal{DU} \right]$, and $\mathbf{C}_\mathcal{UD} \triangleq \mathbb{E}_{q(\mathbf{W})}\left[\mathbf{K}_\mathcal{UD}\right]$.
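Given the expectation terms $\mathbf{C}_\mathcal{UU}$ and $\mathbf{C}_\mathcal{UD}$, evaluating Eq.~\eqref{eq:2.4} reduces to two linear solves; the following minimal Python sketch illustrates this (the function name and the use of linear solves rather than explicit inversion are our own illustrative choices).
\begin{verbatim}
import numpy as np

def q_u_params(K_uu, C_uu, C_ud, y, sigma_n2):
    """Mean m and covariance S of q(u) per Eq. (2.4);
    sigma_n2 denotes the noise variance sigma_n^2."""
    B = sigma_n2 * K_uu + C_uu
    S = sigma_n2 * K_uu @ np.linalg.solve(B, K_uu)
    m = K_uu @ np.linalg.solve(B, C_ud @ y)
    return m, S
\end{verbatim}
Solving against $(\sigma^2_n\mathbf{K}_\mathcal{UU} + \mathbf{C}_\mathcal{UU})$ instead of forming its inverse explicitly is a standard choice for numerical stability.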
Eq.~\eqref{eq:2.4} yields an efficient representation $\{\mathbf{S},\mathbf{m},\theta \}$ of the posterior distribution $\mathrm{p}(\mathbf{u}, \mathbf{W} | \mathbf{y}_\mathcal{D}) \simeq \mathrm{q}(\mathbf{u})\mathrm{q}(\mathbf{W})$ which incurs linear computation and representation costs in the size of the data. This enables the development of a communicable agent representation that can be updated efficiently when new data arrives and is amenable to cross-domain model fusion (Sections~\ref{sgps} and~\ref{pairwise}). {\bf Remark 1.} The standardized inputs $\mathbf{Z}$ can be selected and optimized offline via simulation: different sets of synthetic data can be generated from the standardized domain and we select the $\mathbf{Z}$ that yields the best averaged RMSE on those synthetic datasets (to ensure that $\mathbf{Z}$ best represents the domain). \vspace{-3mm} \section{Agent Representation} \vspace{-1mm} \label{sgps} Recomputation of the approximate posterior $\mathrm{q}(\mathbf{u})$ as new data arrives is often prohibitively expensive. This section presents a reparameterization of Eq.~\eqref{eq:2.4} achieved by exploiting the natural representation of $\mathrm{q}(\mathbf{u})$ that enables an efficient update of the reformulated parameters as new data arrives. We then show that the hyperparameters $\theta$ can also be learned online (Section~\ref{qw}) as an important extension of the prior decentralized ML literature, which assumes knowledge of hyperparameters \cite{Rakshit17,Arik15}. \subsection{Online Update for Inducing Output Posterior} \label{qu} Let $\mathbf{R} = [\mathbf{R}_1; \mathbf{R}_2] \triangleq [\mathbf{S}^{-1}; \ \mathbf{S}^{-1}\mathbf{m}]$ denote the natural parameters of $\mathrm{q}(\mathbf{u})$. Eq.~\eqref{eq:2.4} can then be reparameterized in terms of $\mathbf{R}$ to reveal an additive decomposition across different blocks of data. That is, let $\{\mathcal{D}_1, \mathcal{D}_2, \ldots, \mathcal{D}_p\}$ denote a sequence of streaming data blocks where $\mathcal{D}_i \triangleq \{\mathbf{X}_{\mathcal{D}_i},\mathbf{y}_{\mathcal{D}_i}\}$ such that $\{\mathbf{f}_{\mathcal{D}_i}\}_i$ are conditionally independent given $\mathbf{W}$ and $\mathbf{u}$. It can then be shown that (Appendix~\ref{app:a}) $\mathbf{R}_1 = \mathbf{K}^{-1}_\mathcal{UU} + \sum_{i=1}^p \mathbf{E}^{(i)}_1$ and $\mathbf{R}_2 = \sum_{i=1}^p \mathbf{E}^{(i)}_2$ where \begin{eqnarray} \mathbf{E}^{(i)}_1 = \frac{1}{\sigma^{2}_n}\mathbf{K}^{-1}_\mathcal{UU}\mathbf{C}^i_\mathcal{UU}\mathbf{K}^{-1}_\mathcal{UU} \ \ \text{;}\ \ \mathbf{E}^{(i)}_2 = \frac{1}{\sigma^2_n}\mathbf{K}^{-1}_\mathcal{UU}\mathbf{C}_{\mathcal{UD}_i}\mathbf{y}_{\mathcal{D}_i} \label{eq:3.5} \end{eqnarray} with $\mathbf{C}^i_\mathcal{UU} \triangleq \mathbb{E}_{q(\mathbf{W})}[\mathbf{K}_{\mathcal{UD}_i}\mathbf{K}_{\mathcal{D}_i\mathcal{U}}]$ and $\mathbf{C}_{\mathcal{UD}_i} \triangleq \mathbb{E}_{q(\mathbf{W})}[\mathbf{K}_{\mathcal{UD}_i}]$, where $\mathbf{K}_{\mathcal{D}_i\mathcal{U}}$ and $\mathbf{K}_{\mathcal{U}\mathcal{D}_i}$ are defined similarly to $\mathbf{K}_{\mathcal{D}\mathcal{U}}$ and $\mathbf{K}_{\mathcal{U}\mathcal{D}}$, respectively (by replacing $\mathcal{D}$ with $\mathcal{D}_i$). Supposing $\mathrm{q}(\mathbf{W})$ is fixed, Eq.~\eqref{eq:3.5} reveals an efficient online update for $\mathrm{q}(\mathbf{u})$ where each update only scales with the size of an incoming data block.
Specifically, let $\mathbf{R}^{(i)} = [\mathbf{R}^{(i)}_1; \mathbf{R}^{(i)}_2]$ denote the representation of $\mathrm{q}(\mathbf{u})$ following the arrival of $\{\mathcal{D}_1,\ldots, \mathcal{D}_i\}$ and $\mathbf{E}^{(i+1)} \triangleq [\mathbf{E}^{(i+1)}_1;\mathbf{E}^{(i+1)}_2]$ denote the summary of $\mathcal{D}_{i+1}$; then \begin{eqnarray} \mathbf{R}^{(i+1)} &=& \mathbf{R}^{(i)} \ +\ \mathbf{E}^{(i+1)} \ .\vspace{-1mm} \label{eq:3.6} \end{eqnarray} This is efficient since the computation of Eq.~\eqref{eq:3.6} only depends on the cost of computing $\mathbf{E}^{(i+1)}$, which in turn only scales linearly with the size of the incoming data block $\mathcal{D}_{i+1}$. If $\mathrm{q}(\mathbf{W})$ is also being updated as data arrives, we would, however, have to recompute $\mathbf{C}_\mathcal{UU}^i$ and $\mathbf{C}_{\mathcal{UD}_i}$ with respect to the updated $\mathrm{q}(\mathbf{W})$. Eq.~\eqref{eq:3.6} therefore incurs a linear recomputation cost in the size of the accumulating dataset and is no longer efficient when data arrives at high frequency. To sidestep this recomputation inefficiency, we instead approximate $\mathbf{C}^i_\mathcal{UU} \simeq \widehat{\mathbf{C}}^i_\mathcal{UU}$ and $\mathbf{C}_{\mathcal{UD}_i} \simeq \widehat{\mathbf{C}}_{\mathcal{UD}_i}$ using a finite set $\mathbf{P} = \{\mathbf{W}_1,\ldots, \mathbf{W}_k\}$ sampled i.i.d. from the prior $\mathrm{p}(\mathbf{W})$ where \begin{eqnarray} \widehat{\mathbf{C}}^i_\mathcal{UU} = \frac{1}{k} \sum_{t=1}^k \frac{\mathrm{q}(\mathbf{W}_t)}{\mathrm{p}(\mathbf{W}_t)}\mathbf{K}^{(t)}_{\mathcal{UD}_i}\mathbf{K}^{(t)}_{\mathcal{D}_i\mathcal{U}}\ \ \text{;}\ \ \widehat{\mathbf{C}}_{\mathcal{UD}_i} = \frac{1}{k} \sum_{t=1}^k \frac{\mathrm{q}(\mathbf{W}_t)}{\mathrm{p}(\mathbf{W}_t)}\mathbf{K}^{(t)}_{\mathcal{UD}_i} \label{eq:3.7} \end{eqnarray} and $\mathbf{K}^{(t)}_{\mathcal{UD}_i}$ and $\mathbf{K}^{(t)}_{\mathcal{D}_i\mathcal{U}}$ denote the covariance matrices evaluated with parameter sample $\mathbf{W}_t$. Since $\mathbf{P}$ can be generated \emph{a priori}, the terms $\{\mathbf{K}^{(t)}_{\mathcal{UD}_i}\mathbf{K}^{(t)}_{\mathcal{D}_i\mathcal{U}},\mathbf{K}^{(t)}_{\mathcal{UD}_i}\}_t$ can be precomputed and cached once $\mathcal{D}_i$ arrives for all future uses. This helps to reduce the recomputation cost of $\mathbf{C}_\mathcal{UU}^i$ and $\mathbf{C}_{\mathcal{UD}_i}$ from $\mathcal{O}(|\mathcal{D}_i|)$ to $\mathcal{O}(k)$ (treating $m$ as a constant). Using Eq.~\eqref{eq:3.7}, we can approximate $\mathbf{E}^{(i)}$ as: \begin{eqnarray} {\mathbf{E}}^{(i)}_1 \simeq \widehat{\mathbf{E}}^{(i)}_1 \ =\ \frac{1}{\sigma_n^2}\mathbf{K}^{-1}_\mathcal{UU}\widehat{\mathbf{C}}^i_\mathcal{UU}\mathbf{K}^{-1}_\mathcal{UU} \ \ \text{;}\ \ {\mathbf{E}}^{(i)}_2 \simeq \widehat{\mathbf{E}}^{(i)}_2 \ =\ \frac{1}{\sigma_n^2}\mathbf{K}^{-1}_\mathcal{UU}\widehat{\mathbf{C}}_{\mathcal{UD}_i}\mathbf{y}_{\mathcal{D}_i} \ . \label{eq:3.5b} \end{eqnarray} The streaming update in Eq.~\eqref{eq:3.6} can then be approximated by $\widehat{\mathbf{R}}^{(i+1)} = \widehat{\mathbf{R}}^{(i)} + \widehat{\mathbf{E}}^{(i+1)}$. Supposing all $p$ blocks of data have arrived, this operation incurs only $\mathcal{O}(kp)$ computation cost, which is independent of the number of data points. Furthermore, an appropriate choice of $k$ will guarantee an arbitrarily small approximation loss (Section~\ref{analysis}, Lemma~\ref{lem1}).
This is possible via our choices of $\widehat{\mathbf{C}}^i_\mathcal{UU}$ and $\widehat{\mathbf{C}}_{\mathcal{UD}_i}$ in Eq.~\eqref{eq:3.7}, which are always unbiased estimates of ${\mathbf{C}}^i_\mathcal{UU}$ and ${\mathbf{C}}_{\mathcal{UD}_i}$.\vspace{-2mm} \subsection{Online Update for Hyperparameters} \label{qw} Following the above update of $\mathrm{q}(\mathbf{u})$, we need to update $\mathrm{q}(\mathbf{W})$ to incorporate the statistical information of the new block of data. Naively, this can be achieved via gradient ascent $\theta \leftarrow \theta + \partial\mathrm{L}(\mathrm{q})/\partial\theta$. This is, however, inefficient as the gradient $\partial\mathrm{L}(\mathrm{q})/\partial\theta$ needs to be re-computed with respect to the entire accumulated dataset as well as the updated $\mathrm{q}(\mathbf{u})$. To sidestep this computational issue, we first notice an additive decomposition (across different blocks of data) of the variational lower bound. That is, supposing the data stream consists of $N$ data blocks $\{\mathcal{D}_1,\mathcal{D}_2, \ldots,\mathcal{D}_N\}$ of which the agent has received $\mathrm{t}$ data blocks in uniformly random order with $\mathcal{D}_\ast$ being the last block, it follows that (Appendix~\ref{app:b}) $\mathrm{L}({\mathrm{q}}) = \sum_{i=1}^N \mathrm{L}_{\mathcal{D}_i}(\mathrm{q}) - \mathrm{D_{KL}}(\mathrm{q}(\mathbf{u,W})\|\mathrm{p}(\mathbf{u,W}))$ where $\mathrm{L}_{\mathcal{D}_i}(\mathrm{q}) \triangleq \mathbb{E}_{\mathrm{q}(\mathbf{u,W})}[\mathbb{E}_{\mathrm{p}(\mathbf{f}_{\mathcal{D}_i}|\mathbf{u,W})}[\mathrm{log \ p}(\mathbf{y}_{\mathcal{D}_i}|\mathbf{f}_{\mathcal{D}_i})]]$ and $\mathcal{D}_\ast$ can be treated as a random block sampled uniformly from the stream of data $\{\mathcal{D}_1,\mathcal{D}_2, \ldots,\mathcal{D}_N\}$. Using $\mathcal{D}_\ast$, we can construct an unbiased stochastic gradient $\partial\widehat{\mathrm{L}}(\mathrm{q})/\partial\theta$ of $\mathrm{L}(\mathrm{q})$ which satisfies $\mathbb{E}_{\mathcal{D}_\ast}[\partial\widehat{\mathrm{L}}(\mathrm{q})/\partial\theta] = \partial{\mathrm{L}}(\mathrm{q})/\partial\theta$ (Appendix~\ref{app:c}) and is more computationally efficient than the exact gradient $\partial{\mathrm{L}}(\mathrm{q})/\partial\theta$. The computation of $\partial\widehat{\mathrm{L}}(\mathrm{q})/\partial\theta$ only involves $\mathcal{D}_\ast$ and, as such, its complexity depends on $|\mathcal{D}_\ast|$ rather than on the entire accumulated dataset, as would be the case with the exact gradient. The resulting stochastic gradient ascent is guaranteed to converge to a local optimum given an appropriate schedule of learning rates \cite{Monro1951}. Even though the stochastic gradient above only makes use of the latest block of data $\mathcal{D}_\ast$, the information from previously received data has been extracted and succinctly summarized by the updated $\mathrm{q}(\mathbf{u})$. {\bf Remark 2.} There also exist other recently developed online GP paradigms such as \cite{Opper02,Bui17} but their representations are not suitable to facilitate communication between agents operating in related domains with different variation scales. In contrast, our developed GP representation characterizes the transformation of the GP prior/posterior from an arbitrary domain to that of a common unit-scale domain and vice versa, thus allowing efficient agent communication across different domains.
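To consolidate the mechanics of this section, the following minimal Python sketch (our own illustration; variable names are ours, and the covariance samples are assumed to have been precomputed and cached as described above) combines the block summaries of Eqs.~\eqref{eq:3.7} and~\eqref{eq:3.5b} with the streaming update of Eq.~\eqref{eq:3.6}.
\begin{verbatim}
import numpy as np

def block_summary(K_uu_inv, Kud_samples, q_over_p, y_i, sigma_n2):
    """Approximate summary E^(i) of one data block (Eqs. 3.7 and 3.5b).
    Kud_samples[t] is K_{UD_i} evaluated at projection sample W_t and
    q_over_p[t] is the importance weight q(W_t) / p(W_t)."""
    k = len(Kud_samples)
    C_uu = sum(w * Kud @ Kud.T for w, Kud in zip(q_over_p, Kud_samples)) / k
    C_ud = sum(w * Kud for w, Kud in zip(q_over_p, Kud_samples)) / k
    E1 = K_uu_inv @ C_uu @ K_uu_inv / sigma_n2
    E2 = K_uu_inv @ (C_ud @ y_i) / sigma_n2
    return E1, E2

def online_update(R1, R2, E1, E2):
    """Natural-parameter update R^(i+1) = R^(i) + E^(i+1) (Eq. 3.6)."""
    return R1 + E1, R2 + E2
\end{verbatim}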
\section{Model Fusion} \vspace{-2mm} \label{fusion} This section presents a novel fusion operator which allows two agents to exchange and fuse their local predictive models efficiently (Section~\ref{pairwise}). The resulting operator is generalized to a large-scale model fusion paradigm (Section~\ref{distributed}).\vspace{-2mm} \subsection{Pairwise Agent Fusion} \vspace{-2mm} \label{pairwise} Suppose two agents learning from two data streams $\mathcal{D}_a \triangleq \{\mathcal{D}_1^a,\ldots \mathcal{D}_{n_a}^{a}\}$ and $\mathcal{D}_b \triangleq \{\mathcal{D}_1^b,\ldots \mathcal{D}_{n_b}^{b}\}$ are respectively characterized by local approximate posteriors $\mathrm{q_a}(\mathbf{u},\mathbf{W}_a) \simeq \mathrm{p}(\mathbf{u},\mathbf{W}_a|\mathbf{y}_{\mathcal{D}_a})$ and $\mathrm{q_b}(\mathbf{u},\mathbf{W}_b) \simeq \mathrm{p}(\mathbf{u},\mathbf{W}_b|\mathbf{y}_{\mathcal{D}_b})$. Since $\mathbf{W}_a$ and $\mathbf{W}_b$ will be marginalized out for prediction, we are interested in approximating the marginal posterior $\mathrm{p}(\mathbf{u}|\mathbf{y}_{\mathcal{D}_a}, \mathbf{y}_{\mathcal{D}_b})$ directly. To achieve this, note that $\mathrm{p}(\mathbf{u}|\mathbf{y}_{\mathcal{D}_a},\mathbf{y}_{\mathcal{D}_b}) \propto \mathrm{p}(\mathbf{u}|\mathbf{y}_{\mathcal{D}_a})\mathrm{p}(\mathbf{u}|\mathbf{y}_{\mathcal{D}_b})/\mathrm{p}(\mathbf{u}) \simeq \mathrm{q}_a(\mathbf{u})\mathrm{q}_b(\mathbf{u}) / \mathrm{p}(\mathbf{u})$ where the first step is shown in Appendix~\ref{app:d}. This implies that approximating $\mathrm{p}(\mathbf{u}|\mathbf{y}_{\mathcal{D}_a}, \mathbf{y}_{\mathcal{D}_b})$ can be achieved via constructing the fusion statistics $\mathrm{q}_{ab}(\mathbf{u}) \propto {\mathrm{q}_a(\mathbf{u})\mathrm{q}_b(\mathbf{u})}/{\mathrm{p}(\mathbf{u})}$. Specifically, let $\mathrm{q}_a(\mathbf{u}) = \mathcal{N}(\mathbf{u}|\mathbf{m}_a, \mathbf{S}_a)$ and $\mathrm{q}_b(\mathbf{u}) = \mathcal{N}(\mathbf{u}|\mathbf{m}_b, \mathbf{S}_b)$ where the parameters $\mathbf{m}_a,\mathbf{m}_b,\mathbf{S}_a$, and $\mathbf{S}_b$ are computed using Eq.~\eqref{eq:2.4}. Then $\mathrm{q}_{ab}(\mathbf{u}) = \mathcal{N}(\mathbf{u}|\mathbf{m}_{ab}, \mathbf{S}_{ab})$ where (Appendix~\ref{app:e}): \begin{eqnarray} \mathbf{S}_{ab} \ =\ \left(\mathbf{S}^{-1}_a + \mathbf{S}^{-1}_b - \mathbf{K}_{\mathcal{UU}}^{-1}\right)^{-1} \ \text{;}\ \ \mathbf{m}_{ab} \ =\ \mathbf{S}_{ab} \left(\mathbf{S}^{-1}_a \mathbf{m}_a + \mathbf{S}^{-1}_b \mathbf{m}_b \right) \ . \label{eq:4.10} \end{eqnarray} Let $\mathbf{R}_{ab}$, $\mathbf{R}_a$, $\mathbf{R}_b$, and $\mathbf{R}_0$ respectively be the natural representations of $\mathrm{q}_{ab}(\mathbf{u})$, $\mathrm{q}_a(\mathbf{u})$, $\mathrm{q}_b(\mathbf{u})$, and $\mathrm{p}(\mathbf{u})$ (see Section~\ref{qu}). Eq.~\eqref{eq:4.10} can be rewritten concisely as $\mathbf{R}_{ab} = \mathbf{R}_{a} + \mathbf{R}_{b} - \mathbf{R}_{0}$. In practice, however, since maintaining $\mathbf{R}_a$ and $\mathbf{R}_b$ is not efficient for online update, we instead use their approximated versions $\widehat{\mathbf{R}}_a$ and $\widehat{\mathbf{R}}_b$ (see Section~\ref{qu}) to approximate $\mathbf{R}_{ab}$ by $\widehat{\mathbf{R}}_{ab} = \widehat{\mathbf{R}}_{a} + \widehat{\mathbf{R}}_{b} - \mathbf{R}_{0}$. This fusion operator's total cost depends only on the size of $\mathbf{u}$ and is constant w.r.t. the data size. {\bf Remark 3.} Although $\mathrm{q}(\mathbf{W}_a)$ and $\mathrm{q}(\mathbf{W}_b)$ are not fused explicitly, they will still be updated later using $\mathrm{q}(\mathbf{u})$ when new data arrives (see Remark $2$).
This implicitly helps agents utilizing the fused model to improve their projection matrices $\mathbf{W}_a$ and $\mathbf{W}_b$ for better cross-domain mapping (Section~\ref{fgp}). \vspace{-5mm} \subsection{Decentralized Multi-Agent Fusion} \vspace{-2mm} \label{distributed} This section extends the above pairwise fusion protocol to facilitate model fusion beyond two agents. Specifically, consider a distributed network of $s$ independent agents with local models $\mathrm{q}_i(\mathbf{u}) \simeq \mathrm{p}(\mathbf{u} | \mathbf{y}_{\mathcal{D}_i})$ for $1 \leq i \leq s$. Let $\mathbf{R}_1,\mathbf{R}_2,\ldots,\mathbf{R}_s$ denote their exact representations; it can be shown that (Appendix~\ref{app:f}) the representation $\mathbf{R}_g$ of their fused model $\mathrm{q}(\mathbf{u}) \simeq \mathrm{p}(\mathbf{u} | \mathbf{y}_{\mathcal{D}_1}, \ldots, \mathbf{y}_{\mathcal{D}_s})$ is $\mathbf{R}_g = \sum_{i = 1}^{s} \mathbf{R}_i - (s - 1)\mathbf{R}_0$ where $\mathbf{R}_0$ denotes the natural representation of the prior $\mathrm{p}(\mathbf{u})$. Naively, $\mathbf{R}_g$ can be approximated by $\widehat{\mathbf{R}}_g = \sum_{i = 1}^{s} \widehat{\mathbf{R}}_i - (s - 1)\mathbf{R}_0$ using $\widehat{\mathbf{R}}_1,\ldots,\widehat{\mathbf{R}}_s$ for efficient online update (Section~\ref{qu}). This, however, requires either direct communication between every two agents or a central server through which agents coordinate their communications. The former implies a fully connected network, which is not desirable in situations that require large spatial coverage such as environmental sensing \cite{NghiaICML14} or terrain exploration \cite{LowAAMAS12,LowAAMAS13}, while the latter will create a computational bottleneck and risk exposing a single choke point for failure. To avoid these issues, this section develops a decentralized model fusion algorithm that allows agents to exchange local representations as messages among one another within their broadcasting ranges. In particular, let $\mathbf{M}^{t +1}_{ij}$ denote the message that agent $i$ sends to agent $j$ (within broadcasting range) at time step $t + 1$, which summarizes and integrates $i$'s local representation with the shared representations it received from other agents in the previous $t$ steps of communication. This must not include the representation of agent $j$ to avoid aggregating duplicates of knowledge. Thus, $\mathbf{M}^{t+1}_{ij}$ should essentially aggregate the representations of all agents (excluding $j$) whose messages can reach $i$ within $t$ steps of direct transmission. As such, $\mathbf{M}^{t+1}_{ij}$ can be recursively computed by aggregating only the received messages from those in $i$'s local neighborhood in the previous time step $t$: $\mathbf{M}^{t + 1}_{ij} = \widehat{\mathbf{R}}_i + \sum_k (\mathbf{M}^t_{ki} - \mathbf{R}_0)$ where $k \in \mathbb{N}(i)\setminus\{j\}$ and $\mathbb{N}(i)$ denotes the neighborhood of $i$. The subtraction of $\mathbf{R}_0$ from $\mathbf{M}^t_{ki}$ prevents aggregating multiple copies of the prior model's representation $\mathbf{R}_0$, which, by definition, has already been aggregated into $\widehat{\mathbf{R}}_i$. At time $t = 0$, the message only contains $i$'s local representation (i.e., $\mathbf{M}_{ij}^0 = \widehat{\mathbf{R}}_i$) since only $i$ can reach itself in $0$ steps of transmission.
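Both the pairwise operator and this recursion act componentwise on the natural parameters; a minimal Python sketch is given below (our own illustration, with dictionary-based bookkeeping of our own choosing), where each representation is a pair $(\mathbf{R}_1, \mathbf{R}_2)$ of arrays.
\begin{verbatim}
def fuse_pair(Ra, Rb, R0):
    """Pairwise fusion R_ab = R_a + R_b - R_0 on natural parameters."""
    return tuple(a + b - o for a, b, o in zip(Ra, Rb, R0))

def new_messages(R_local, msgs_in, R0, neighbors):
    """One round of M^{t+1}_{ij} = R_i + sum_k (M^t_{ki} - R_0) over
    k in N(i) \ {j}. msgs_in maps a sender k to M^t_{ki}; at t = 0 the
    agent simply sends its own representation R_local to every neighbor."""
    out = {}
    for j in neighbors:
        msg = R_local
        for k, M in msgs_in.items():
            if k != j:
                msg = tuple(m + (Mk - o) for m, Mk, o in zip(msg, M, R0))
        out[j] = msg
    return out
\end{verbatim}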
Upon convergence at $t = t_{\max}$\footnote{For a tree-topology network, the above message passing algorithm will converge to the exact optimum after $t_{\max}$ time steps where $t_{\max}$ is the tree's diameter. The agents can employ a decentralized minimum spanning tree algorithm to eliminate redundant connections with high latencies so as to guarantee that their connection topology is a tree.}, each agent $i$ can aggregate the received messages to assemble the same global representation, $\widehat{\mathbf{R}}_g = \widehat{\mathbf{R}}_i + \sum_k (\mathbf{M}^{t_{\max}}_{ki} - \mathbf{R}_0)$ where $k \in \mathbb{N}(i)$ and again, the repeated subtraction of $\mathbf{R}_0$ from $\mathbf{M}^{t_{\max}}_{ki}$ is to prevent aggregating multiple copies of $\mathbf{R}_0$ into $\widehat{\mathbf{R}}_g$. \vspace{-3mm} \section{Theoretical Analysis} \vspace{-2mm} \label{analysis} This section shows that the approximate global representation can be made arbitrarily close to the exact representation $\mathbf{R}_g$ with high confidence (Theorem~\ref{theo1}). In particular, we are interested in bounding the difference between $\mathbf{R}_g$ and its approximation $\widehat{\mathbf{R}}_g$ w.r.t. the number $k$ of projection matrix samples, the number $s$ of agents, and the size $m$ of the encoding vocabulary. Let $\mathbf{R}_i$ be the exact representation for agent $i$ and $\widehat{\mathbf{R}}_i$ be its approximation generated by our framework (Section~\ref{qu}); the difference between $\mathbf{R}_i$ and $\widehat{\mathbf{R}}_i$ is bounded as follows: \begin{lemma}[Representation Loss] Given $\epsilon > 0$ and $\delta \in (0,1)$, it can be guaranteed that with probability at least $1 - \delta$, $\|\mathbf{R}_i - \widehat{\mathbf{R}}_i\| \leq \epsilon$ by choosing $k = \displaystyle\mathcal{O}((m^2/\epsilon^2)\mathrm{log}(m/\delta))$. \label{lem1} \end{lemma} {\bf Proof.} A detailed proof is provided in Appendix~\ref{app:g}. Exploiting the result of Lemma~\ref{lem1}, we can bound the difference between $\mathbf{R}_g$ and $\widehat{\mathbf{R}}_g$ with high probability in terms of $m$, $s$, and $k$, as detailed in Theorem~\ref{theo1} below. \begin{theorem}[Fusion Loss] \label{theo1} Given $\epsilon > 0$ and $\delta \in (0,1)$, it can be guaranteed that with probability at least $1 - \delta$, $\|\mathbf{R}_g - \widehat{\mathbf{R}}_g\| \leq \epsilon$ by choosing $k = \displaystyle\mathcal{O}((m^2s^2/\epsilon^2)\mathrm{log}(ms/\delta))$. \end{theorem} {\bf Proof.} A detailed proof is provided in Appendix~\ref{app:h}. {\bf Remark $4$.} The above results imply that both the representation and fusion losses can be made arbitrarily small with high probability by choosing a sufficiently large number of cross-domain projection matrix samples (Section~\ref{qu}) to approximately represent each agent's predictive model. In addition, Theorem~\ref{theo1} also tells us that the no. of samples $k$ needs to grow quadratically in the size of the encoding vocabulary and the no. of agents to guarantee the above.
This means the agent's complexity needs to increase to guarantee fusion quality when we have more agents.\vspace{-1mm} \vspace{-3mm} \section{Experiments} \label{exp} This section demonstrates our decentralized \underline{Co}llective \underline{O}nline \underline{L}earning via \underline{GP} (COOL-GP) framework's efficiency, resiliency to information disparity, and fault-tolerance to information loss on several synthetic and real-world domains: (a) The SYNTHETIC domain features two streaming datasets generated by $\mathrm{f}_1(\mathbf{x}) \triangleq \mathrm{u}(\mathbf{W}_1\mathbf{x})$ and $\mathrm{f}_2(\mathbf{x}) \triangleq \mathrm{u}(\mathbf{W}_2\mathbf{x})$ where the common random function $\mathrm{u}(\mathbf{z})$ is sampled from a standardized GP (Section~\ref{fgp}) with different projection matrices $\mathbf{W}_1$ and $\mathbf{W}_2$. Each dataset comprises $200$ batches of $6$-dimensional training data which amount to $8000$ data points. A separate dataset of $4000$ data points (generated from both $\mathrm{f}_1$ and $\mathrm{f}_2$) is used for testing. (b) The AIRLINE domain \cite{Hensman13,NghiaICML15} features an air transportation delay phenomenon that generates a stream of data comprising $30000$ batches of observations ($600000$ data points in total). Each batch consists of $20$ observations. Each observation is an $8$-dimensional feature vector containing the information log of a commercial flight and a corresponding output recording its delay time (min). The system comprises $1000$ agents. Each agent is tested on a separate set of $10000$ data points. (c) The AIMPEAK domain \cite{NghiaICML16} features a traffic phenomenon which took place over an urban road network comprising $775$ road segments. $10000$ batches of data are then generated from the traffic phenomenon and streamed in random order to a group of $100$ collective learning agents. Each observation is a $5$-dimensional input vector. Its output corresponds to the traffic speed (km/h). The predictive performance of each agent is then evaluated using a separate test set of $2000$ data points. In all experiments, each data batch arrives sequentially in a random order and is dispatched to a random learning agent. This simulates learning scenarios with streaming data where agents collect one batch of data at a time. We report the averaged predictive performance before and after fusion of the agents vs. the number of arrived batches of data to demonstrate the efficiency of our collective learning paradigm in such distributed data streaming settings as a proof-of-concept. \begin{figure}[t] \begin{tabular}{ccc} \hspace{-2mm}\includegraphics[width=4.2cm]{./synthetic_RMSE_vs_Data_u50_w5} & \includegraphics[width=4.2cm]{./synthetic_RMSE_vs_Data_u100_w5} & \includegraphics[width=4.2cm]{./synthetic_RMSE_vs_Data_u150_w5}\\ \hspace{1mm}(a) $|\mathbf{Z}| = 50$ $\&$ $|\mathbf{P}| = 5$ & (b) $|\mathbf{Z}| = 100$ $\&$ $|\mathbf{P}| = 5$ & (c) $|\mathbf{Z}| = 150$ $\&$ $|\mathbf{P}| = 5$\vspace{2mm}\\ \hspace{-2mm}\includegraphics[width=4.2cm]{./synthetic_RMSE_vs_Data_u50_w10} & \includegraphics[width=4.2cm]{./synthetic_RMSE_vs_Data_u100_w10} & \includegraphics[width=4.2cm]{./synthetic_RMSE_vs_Data_u150_w10}\\ \hspace{1mm}(d) $|\mathbf{Z}| = 50$ $\&$ $|\mathbf{P}| = 10$ & (e) $|\mathbf{Z}| = 100$ $\&$ $|\mathbf{P}| = 10$ & (f) $|\mathbf{Z}| = 150$ $\&$ $|\mathbf{P}| = 10$ \end{tabular} \caption{Graphs of averaged pre- and post-fusion performance vs. no.
of data batches dispatched to $2$ agents with varying sizes of the encoding vocabulary $|\mathbf{Z}|$ and of the projection matrix sample set $|\mathbf{P}|$.} \label{fig:synthetic} \end{figure} \begin{figure}[t] \begin{tabular}{ccc} \hspace{-2mm}\includegraphics[width=4.2cm]{./aimpeak_RMSE_vs_Data_u50_w5} &\includegraphics[width=4.2cm]{./aimpeak_RMSE_vs_Data_u100_w5} &\includegraphics[width=4.2cm]{./aimpeak_RMSE_vs_Data_u200_w5}\\ \hspace{1mm}(a) $|\mathbf{Z}| = 50$ $\&$ $|\mathbf{P}| = 5$ & (b) $|\mathbf{Z}| = 100$ $\&$ $|\mathbf{P}| = 5$ & (c) $|\mathbf{Z}| = 200$ $\&$ $|\mathbf{P}| = 5$\vspace{2mm}\\ \hspace{-2mm}\includegraphics[width=4.2cm]{./aimpeak_RMSE_vs_Data_u50_w20} & \includegraphics[width=4.2cm]{./aimpeak_RMSE_vs_Data_u100_w20} & \includegraphics[width=4.2cm]{./aimpeak_RMSE_vs_Data_u200_w20} \\ \hspace{1mm}(d) $|\mathbf{Z}| = 50$ $\&$ $|\mathbf{P}| = 20$ & (e) $|\mathbf{Z}| = 100$ $\&$ $|\mathbf{P}| = 20$ & (f) $|\mathbf{Z}| = 200$ $\&$ $|\mathbf{P}| = 20$ \end{tabular} \caption{Graphs of averaged pre- and post-fusion performance vs. no. of data batches of $100$ agents collecting data from the same traffic phenomenon with varying $|\mathbf{Z}|$ and $|\mathbf{P}|$.} \label{fig:aimpeak} \end{figure} Fig.~\ref{fig:synthetic} reports the results of our COOL-GP framework in a cross-domain learning scenario where two agents integrate their predictive models of two correlated, synthetic phenomena to improve their averaged performance on test instances from both domains. Fig.~\ref{fig:aimpeak} further reports the performance of COOL-GP in a real-world traffic monitoring application deployed on a large, decentralized network consisting of $100$ learning agents. Both of these cases demonstrate the effect of COOL-GP fusion on the averaged predictive accuracy w.r.t. a varying amount of dispatched data batches for different choices of the encoding vocabulary size $|\mathbf{Z}|$ and the sampling size $|\mathbf{P}|$ used to approximate the agent's representation (Section~\ref{qu}). Across all configurations, a consistent pattern can be observed: (a) post-fusion predictions exhibit a significant performance gain as compared to pre-fusion predictions; and (b) the performance gap gradually closes up with more data collected, which suggests a diminishing marginal gain of model fusion. \begin{figure}[t] \begin{tabular}{ccc} \hspace{-3mm}\includegraphics[width=4.5cm]{./6cluster} &\hspace{-3mm} \includegraphics[width=4.5cm]{./aimpeak_resiliency} & \hspace{-3mm} \includegraphics[width=4.5cm]{./aimpeak_fault_tolerance} \\ (a) & (b) & (c) \vspace{-2mm} \end{tabular} \caption{Graphs of (a) individual performance profiles (pre- vs. post-fusion RMSE) of a $1000$-agent system collectively learning using our COOL-GP framework in the AIRLINE domain \cite{Hensman13,NghiaICML15}; (b) pre- and post-fusion individual performance of two agents with different learning capabilities; and (c) post-fusion performance of COOL-GP in comparison to those of state-of-the-art distributed GPs (e.g., $d$DTC \cite{Yarin14} and $d$PITC \cite{NghiaICML16}) vs. rate of transmission loss in the AIMPEAK domain.}\vspace{-4mm} \label{fig:profile} \end{figure} Fig.~\ref{fig:profile}a visualizes a comprehensive collection of individual performance profiles of $1000$ agents in the AIRLINE domain (each profile is represented by a pair of pre- and post-fusion RMSEs).
The result shows that with more data collected, clusters of performance profiles (i.e., each cluster is visualized by a colored point cloud) gradually migrate towards regions with superior pre- and post-fusion accuracy. The migration distance, however, reduces rapidly in the latter stages of data collection, which is consistent with the previous observation on the diminishing return of model fusion. Interestingly, it can also be observed that within each cluster, the performance profiles exhibit high variance in pre-fusion and low variance in post-fusion performance, which suggests that agents are able to achieve post-fusion consensus within a small range of variation (i.e., fusion stability). \vspace{-1mm} We also investigate an interesting case study of model fusion between agents allocated with different amounts of data in the AIMPEAK traffic domain. Specifically, Fig.~\ref{fig:profile}b reports the performance of two agents A$1$ (fixed amount of data) and A$2$ (continuous supply of data). Without fusion, A$1$ fails to update its model and improve its performance, as expected, whereas A$2$ still exhibits a gain in performance as it receives more data. With fusion, however, the performance of A$1$ is brought close to that of A$2$ and far exceeds its original accuracy. More interestingly, it can be observed that the performance of A$2$ also marginally improves upon fusion with a conservative A$1$ that never collects new data to update its model. This demonstrates that COOL-GP greatly benefits agents with lesser learning capabilities and, at the same time, mildly improves the performance of those with better capabilities (i.e., resiliency to information disparity). Finally, in the traffic domain (i.e., AIMPEAK), we present another interesting case study that features a distributed learning scenario among 100 agents where each transmission of local representations (or local statistics in the cases of cloud-oriented distributed GPs such as $d$DTC \cite{Yarin14} and $d$PITC \cite{NghiaICML16}) might not reach its destination with a certain probability. The averaged post-fusion performance is plotted against the rate of transmission loss to demonstrate the high fault-tolerance of our COOL-GP. Fig.~\ref{fig:profile}c shows that, as transmission losses occur more frequently, the averaged performance of COOL-GP agents degrades more gracefully than those of the state-of-the-art\footnote{We do not compare with $d$PIC \cite{NghiaICML16} as it requires storing local data and is not suitable for online learning.} distributed learning frameworks $d$DTC and $d$PITC, which communicate directly with a central server that coordinates them. This is expected since both $d$DTC and $d$PITC require every agent to successfully transmit its local model directly to a single master server. Failing to achieve this immediately leads to irrecoverable information loss. In contrast, COOL-GP allows each local agent to propagate its model to multiple agents within its neighborhood (see Section~\ref{distributed}), thus lowering the risk of losing information. \vspace{-4mm} \section{Conclusion} \vspace{-2mm} \label{conclude} Traditional distributed algorithms for ML implemented with a server-client architecture are often undesirable due to the centralized risk of operational failure and the various capacity bottlenecks imposed by the server.
In this paper, we advocate a paradigm shift towards distributed ML with a peer-to-peer, decentralized communication architecture, which exploits the collective computation capacities of local devices and preserves analytic quality through on-demand integration of local models. Specifically, we propose a collective decentralized Gaussian process (GP) framework that is to be simultaneously deployed on a network of learning agents, each of which is designed to be capable of independently building a local model from self-collected data and steadily improving its analytic quality by exchanging its model with other devices in the network. Finally, we showcase our empirical results via an assortment of practical scenarios, featuring both synthetic and real-world domains, which highlight the efficiency, resiliency, and fault-tolerance of our framework. {\bf \noindent Acknowledgements.} This research is funded in part by ONR under BRC award $\#$N000141712072. \bibliographystyle{natbib}
\section[Introduction]{Introduction} Knots can have one of five symmetry types: (+) amphichiral, (-) amphichiral, reversible, no symmetry, and full symmetry. The symmetry type of each prime knot is known through high crossing number \cite{Cerf}, \cite{Kod1}, \cite{VIGRE}. For composite knots, new symmetries may arise or be destroyed by combining knots. For example, the trefoil is a reversible (but not amphichiral) knot, so it comes in left- and right-handed versions. However, the square knot (the connected sum of a left- and a right-handed trefoil) has full symmetry while the granny knot (the connected sum of two trefoils of the same handedness) is again only reversible. Wilbur Whitten gave a theorem in 1969 \cite{Whit} giving necessary and sufficient conditions for a given symmetry to apply to a composite knot in terms of the symmetry groups of the prime factors of the knot, but the statement of Whitten's theorem encodes much of its content in a very complicated system of indices, making it hard to apply. Here we give an enhanced prime decomposition theorem which allows us to calculate the full set of composites arising from a set of prime factors and their symmetry groups as orbits and stabilizers of a natural group action. Prime knot tabulation has been given much attention \cite{Kirk1}, \cite{First}, \cite{Rolfsen}, but there is a striking lack of composite knot tables in the literature. This is perhaps somewhat surprising given the many applications in which composite knots arise naturally, such as the knotted strands of DNA discussed by Arsuaga, et al. \cite{Ars1}. From a topological perspective, connected sum decompositions of links in $3$-manifolds could be considered inconvenient due to the ambiguity in embedding disjoint $2$-spheres into $S^3$. This ambiguity led $3$-manifold topologists to consider torus decompositions and, in particular, the JSJ-decomposition of link complements in $S^3$ (see, for example, the manuscript of Bonahon and Siebenmann \cite{BS1}, the excellent paper of Ryan Budney \cite{Bud1}, or the original papers of Jaco and Shalen \cite{JSJ1} and Johannson \cite{JSJ2}). While this toroidal perspective is convenient for studying many properties of knots and links, the prime decomposition is obfuscated by this viewpoint. For knots, the prime summands can be detected from the JSJ-decomposition of the complement, and it is this fact that we exploit presently to construct an algebraic structure within which to state an enhanced version of the prime decomposition theorem for knots. We then apply this theorem to the problem of tabulating composite knots. It is an open question (and a current research project of the author) to give a relationship between the JSJ-decomposition of a link complement and the link's prime factorization. The main obstruction to this generalization is the increased complexity of the JSJ-decomposition of link complements in $S^3$. Section \ref{sect:JSJGraph} will be dedicated to giving a brief survey of the relationship between the JSJ-decomposition of a knot complement and the prime factorization of the knot. Then, in section \ref{sect:PDT} we define the combinatorial object that we will use to describe composite knots. We will be working with knot \emph{diagrams} described by a notation called PD-codes. In section \ref{sect:sym}, we discuss the \emph{intrinsic symmetries} of knots in terms of PD-codes. We use the term intrinsic symmetry to refer to the invertibility and/or chirality of a knot.
In section \ref{sect:decomp} we will state and prove an enhanced prime decomposition theorem for knots, the main theorem of the paper. The classical prime decomposition theorem\footnote{The original paper of Schubert \cite{Shub2} has not been translated, but there is a nice survey of the result by Sullivan \cite{Sul1}.} states that a given composite knot has a unique set of prime factors. This enhanced theorem addresses the question of which composite knots are obtained from a given list of prime knots if one is allowed to reverse the orientation and/or mirror each prime factor. In particular, it gives the isotopy classes of knots with a given list of prime factors (up to mirroring the crossings and reversing orientation) as the collection of orbits of a certain group action. Section \ref{sect:CompKnotSyms} is dedicated to the intrinsic symmetry group of a composite knot in terms of the symmetry groups of the prime factors. We close with a discussion of future work, the most important of which is the generalization of the techniques described here to the case of composite links. The appendix gives a table of composite knots that can be constructed by connected sums of prime knots with $9$ or fewer crossings as well as some details of the computations involved. This table is complete under the conjecture that crossing number is additive under connected sum. \section[Prime Decomposition Graphs]{Prime Decomposition Graphs}\label{sect:JSJGraph} The results in this section either appear in or are corollaries of results in a paper of Ryan Budney~\cite{Bud1}. In order to translate the computation of composite knot symmetries from topology to combinatorics we will define a tree associated to a composite knot. The well-definedness of this construction will depend on a specialized construction related to the JSJ-decomposition of the knot complement. The interested reader is directed to Budney~\cite{Bud1} for full details. Given a composite knot $K$ we can decompose its complement $C_K$ by embedding tori $\cup T_i$ as shown in Figure~\ref{fig:companions}. These tori are the swallow-follow tori corresponding to each prime factor of $K$. It is important to note that the swallow-follow tori are not necessarily the only tori appearing in the full JSJ-decomposition. For example, if one of the prime factors were the Whitehead double of some knot, then we would see additional tori in the JSJ-decomposition which are not swallow-follow tori but instead represent doubling as a satellite operation (one that is not a connected sum). However, the swallow-follow tori for each prime factor are precisely the tori in the JSJ-decomposition which are adjacent to the knot complement. In fact, the companion link to the $3$-manifold co-bounded by a neighborhood of $K$ and these swallow-follow tori is the link given in the following definition. The companions corresponding to the $3$-manifolds bounded by each swallow-follow torus are simply the prime factors of $K$, as these $3$-manifolds are already knot complements. \begin{definition} We denote by $H^p$ the $(p+1)$-component \textbf{keychain link} (shown in Figure \ref{fig:keychain}) and denote the $i$th component of $H^p$ by $H_i^p$. The components are oriented so that $\operatorname{Lk}(*,H^p_i)=+1$ for $i=1,\ldots,p$, where $\operatorname{Lk}$ is the standard linking number. \end{definition} \begin{figure} \begin{center} \includegraphics[height=3cm]{keychain.pdf} \caption{\label{fig:keychain}Here we see a $(p+1)$-component keychain link.
Notice that the orientations have been chosen so that all crossings are positive.} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[height=4cm]{composite_companions.pdf} \caption{\label{fig:companions}This figure, similar to the one appearing in Budney~\cite{Bud1}, shows the JSJ-decomposition of a composite knot. The tori shown are the swallow-follow tori and, in this case, these are the only tori in the JSJ-decomposition. } \end{center} \end{figure} If we restrict our attention to only the swallow-follow tori we can define a simplified version of the JSJ and splicing graphs described by Budney. Such graph constructions seem to have been first given by Siebenmann \cite{Sieb1} as well as Eisenbud and Neumann \cite{EN1}. \begin{definition}\label{def:PDT} A \textbf{prime decomposition tree} is a depth one rooted tree $\mathbb{PG}(K)$ with root labeled by the link $H^p$ whose children are ordered vertices labeled by oriented prime knots (see Figure~\ref{fig:comp_tree}). We denote the knot corresponding to vertex $v$ by $\mathbb{PG}(v)$. We say that two prime decomposition trees $\mathbb{PG}$ and $\mathbb{PG}'$ are equivalent, denoted $\mathbb{PG} \sim \mathbb{PG}'$, if there exists an isomorphism of rooted trees $g:\mathbb{PG} \rightarrow \mathbb{PG}'$ such that the knots $\mathbb{PG}(v)$ and $\mathbb{PG}'(g(v))$ are isotopic. \end{definition} The following result is a restatement of Proposition $4.6$ in Budney \cite{Bud1}. \begin{proposition}\label{prop:Bud1} Two knots $K_1$ and $K_2$ are isotopic if and only if $\mathbb{PG}(K_1) \sim \mathbb{PG}(K_2)$. \end{proposition} \begin{figure} \begin{center} \includegraphics[height=3cm]{prime_tree.pdf} \caption{\label{fig:comp_tree}An example of a prime decomposition tree.} \end{center} \end{figure} A knot can be recovered from a prime decomposition tree via the splicing operation, but the details of the construction are not needed to proceed. These details can be found in Budney \cite{Bud1}, and we denote the knot obtained via splicing according to $\mathbb{PG}$ by $S(\mathbb{PG})$. \begin{theorem}\label{thm:PGEquiv} Let $\mathbb{PG}$ and $\mathbb{PG}'$ be prime decomposition trees. Then the knots $S(\mathbb{PG})$ and $S(\mathbb{PG}')$ are isotopic if and only if $\mathbb{PG}$ and $\mathbb{PG}'$ are equivalent as prime decomposition trees. \end{theorem} \begin{proof} Suppose that $S(\mathbb{PG}) \sim S(\mathbb{PG}')$. Then by Proposition~\ref{prop:Bud1} we have that $\mathbb{PG}(S(\mathbb{PG})) \sim \mathbb{PG}(S(\mathbb{PG}'))$. So, in particular, the knots have the same prime factors. Thus, by the uniqueness of the prime factorization of knots, there is a permutation of the leaves of $\mathbb{PG}(S(\mathbb{PG}))$ so that we obtain $\mathbb{PG}'$. On the other hand, if $\mathbb{PG} \sim \mathbb{PG}'$, then we know immediately that $S(\mathbb{PG})$ and $S(\mathbb{PG}')$ have the same prime factors and are thus isotopic by the uniqueness of the prime factorization of knots. \end{proof} It is interesting to note that although we use the classical prime decomposition theorem in this proof (for simplicity), one could avoid it and use only the uniqueness of the JSJ-decomposition to obtain a proof of the classical prime decomposition theorem. \section[Prime Diagram Trees]{Prime Diagram Trees}\label{sect:PDT} In order to have a unique description of each prime knot type, we will from here on be working with particular diagrams instead of equivalence classes of space curves.
In particular, we will use Planar Diagram codes (PD-codes) to describe our knot diagrams. PD-codes seem to have been first defined by Dror Bar-Natan for use in the KnotTheory Mathematica Package. The details of the construction have been worked out by the author and can be found in \cite{Ma2}, but the basic definition is included here for convenience. \begin{definition}\label{def:PD} Given a knot diagram on an oriented surface $S$, we generate the set of quadruples of the \textbf{PD-code} representing this diagram by the following procedure. For each crossing we include the quadruple of arc labels involved, beginning with the incoming under-edge and proceeding around the crossing in the positively oriented direction of $S$ (see Figure~\ref{fig:PD}). We give a positive sign to incoming edges and a negative sign to outgoing edges. \begin{figure} \begin{center} \begin{overpic}[height=4cm]{PD.pdf} \put(40,35){$1$} \put(28,15){$2$} \put(-3,30){$3$} \put(28,33){$4$} \put(35,-1){$5$} \put(16,21){$6$} \put(55,27){$\{ [+4,-2,-5,+1],$} \put(57.5,21){$[+2,-6,-3,+5],$} \put(57.5,15){$[+6,-4,-1,+3]\}$} \end{overpic} \end{center} \caption{\label{fig:PD} A diagram for $3_1$ and its PD-code. The labels are only single integers here as there is only one component. Note that we may omit directional arrows as the orientation can be inferred from the ordering of the edge labels.} \end{figure} \end{definition} We now define the combinatorial object that is analogous to the prime decomposition tree except that we label our vertices with PD-codes as opposed to knots. \begin{definition}\label{def:PDiagT} A \textbf{prime diagram tree} is a depth one rooted tree whose vertices are ordered and labeled by PD-codes from the prime knot table. We denote the PD-code at vertex $v$ by $\mathbb{PD}(v)$ and the corresponding knot by $k(v)$. The vertices respect a chosen ordering on base types (the \emph{base type} of a knot being its equivalence class up to mirroring and reversing orientation) so that if $i \leq j$, then $k(v_i) \leq k(v_j)$. We say that two prime diagram trees $T_1$ and $T_2$ are equivalent (denoted $T_1 \sim T_2$) if there is an isomorphism of trees $\phi: T_1 \rightarrow T_2$ such that $k(\phi(v))\sim k(v)$ for every vertex $v \in T_1$. \end{definition} Defining a connected sum of PD-codes essentially amounts to choosing a convention for dealing with the indices. We choose the conventions laid out in the following definition. \begin{definition}\label{def:PDSum} Consider an ordered list of knot PD-codes $(D_1, D_2)$ where the first has $n_1$ arcs and the second has $n_2$ arcs. We define the \textbf{connected sum of the PD-codes} to be the PD-code obtained by the following procedure (cf. Figure~\ref{fig:sum}). \begin{enumerate} \item add $n_2$ to each \emph{positive} label of $D_1$ and subtract $n_2$ from each negative label of $D_1$ except the label $-1$ \item change the $-1$ to a $-(n_2+1)$ in the quadruple of $D_2$ which also contains $+n_2$ \item concatenate $D_1$ and $D_2$ \end{enumerate} \end{definition} The conventions chosen in Definition \ref{def:PDSum} give the PD-code of the connected sum that maintains the edge labels of $D_2$, changes the edge labels of $D_1$ to be consecutive with those in $D_2$, and performs the connected sum along the edges labeled $1$ (as shown in Figure \ref{fig:sum}).
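To make the convention concrete, the following is a minimal computational sketch of Definition~\ref{def:PDSum} (written in Python; the encoding of PD-codes as lists of signed $4$-tuples and the function name \texttt{pd\_connected\_sum} are our own illustration, not part of the KnotTheory package):
\begin{verbatim}
# A sketch of the connected sum of PD-codes (Definition def:PDSum).
# A PD-code is a list of 4-tuples of signed integers: positive
# entries mark incoming arcs, negative entries mark outgoing arcs.

def pd_connected_sum(d1, d2):
    n2 = 2 * len(d2)  # a knot diagram with c crossings has 2c arcs
    # Step 1: add n2 to the positive labels of D1 and subtract n2
    # from the negative labels, leaving the label -1 untouched.
    step1 = [tuple(e + n2 if e > 0 else (e if e == -1 else e - n2)
                   for e in quad) for quad in d1]
    # Step 2: in the quadruple of D2 containing +n2, change -1
    # to -(n2 + 1).
    step2 = [tuple(-(n2 + 1) if (e == -1 and n2 in quad) else e
                   for e in quad) for quad in d2]
    # Step 3: concatenate.
    return step1 + step2

# The two right-handed trefoil codes of the example below:
t1 = [(+2, -6, -3, +5), (+6, -4, -1, +3), (+4, -2, -5, +1)]
t2 = [(+4, -2, -5, +1), (+2, -6, -3, +5), (+6, -4, -1, +3)]
print(pd_connected_sum(t1, t2))
\end{verbatim}
Running this sketch reproduces the PD-code computed by hand in the example that follows.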
\begin{figure} \begin{center} \begin{overpic}[height=6cm]{sum.pdf} \put(-18,91){$D_1$} \put(109,91){$D_2$} \put(-25,38){$D_1 \# D_2$} \put(42,60){$1$} \put(16,72){$2$} \put(37,90){$3$} \put(30,60){$4$} \put(-5,75){$5$} \put(25,85){$6$} \put(56,75){$1$} \put(86,83){$2$} \put(101,60){$3$} \put(75,71){$4$} \put(96,90){$5$} \put(90,60){$6$} \put(53,28){$1$} \put(86,32){$2$} \put(100,9){$3$} \put(75,21){$4$} \put(96,40){$5$} \put(88,9){$6$} \put(52,6){$7$} \put(17,20){$8$} \put(38,37){$9$} \put(30,7){$10$} \put(-6,27){$11$} \put(27,31){$12$} \end{overpic} \end{center} \caption{\label{fig:sum} The connected sum of the diagrams of two right-handed trefoils.} \end{figure} \begin{example} Figure \ref{fig:sum} shows the connected sum of two right-handed trefoils. The PD-codes for these two knots are as follows. $$\{[+2,-6,-3,+5],[+6,-4,-1,+3],[+4,-2,-5,+1]\} $$ $$\{[+4,-2,-5,+1],[+2,-6,-3,+5],[+6,-4,-1,+3]\}$$ Note that these are the \emph{same} PD-codes, as the diagrams in Figure \ref{fig:sum} are related by a rotation of the page (or, more appropriately, a diffeomorphism of $S^2$ which is isotopic to the identity). We now apply the procedure outlined in Definition \ref{def:PDSum} in order to take the connected sum of these knots, letting the first be $D_1$ and the second be $D_2$. Note that $n_1 = n_2 = 6$. \begin{enumerate} \item We first add $n_2 = 6$ to each positive label of $D_1$ and subtract $n_2$ from each negative label, except the label $-1$. The result is as follows. $$\{[+8,-12,-9,+11],[+12,-10,-1,+9],[+10,-8,-11,+7]\}$$ \item Next, we change the $-1$ to $-(n_2+1) = -7$ in $D_2$ to obtain the following. $$\{[+4,-2,-5,+1],[+2,-6,-3,+5],[+6,-4,-7,+3]\}$$ \item Concatenating the PD-codes from $(1)$ and $(2)$ we arrive at the PD-code below. $$\{[+8,-12,-9,+11],[+12,-10,-1,+9],[+10,-8,-11,+7],$$ $$[+4,-2,-5,+1],[+2,-6,-3,+5],[+6,-4,-7,+3]\}$$ \end{enumerate} It is easily checked that this is the PD-code for the composite knot in Figure \ref{fig:sum}; in particular, each edge label now appears exactly once with each sign. \end{example} Since a knot type can be uniquely recovered from a PD-code, the equivalence relation defined in Definition \ref{def:PDiagT} is equivalent to knot isotopy and we can therefore consider prime diagram trees instead of prime decomposition trees. We can thus restate Theorem \ref{thm:PGEquiv} as follows. \begin{proposition}\label{prop:PDiagEquiv} If $T_1$ and $T_2$ are prime diagram trees, then $k(T_1) \sim k(T_2)$ if and only if $T_1 \sim T_2$. \end{proposition} \begin{figure} \begin{center} \begin{overpic}[height=5cm]{tree_com_diag.pdf} \put(-3,22){$\phi$} \put(100,22){$\bar{\phi}$} \put(49,54){$k$} \put(49,7){$k$} \put(-1,33){$D_1$} \put(8,31){$D_2$} \put(33,33){$D_n$} \put(-3,-3){$D_1'$} \put(6,-5){$D_2'$} \put(31,-3){$D_n'$} \put(56,-3){$k(D_1)'$} \put(71,-5){$k(D_2)'$} \put(95,-3){$k(D_n)'$} \put(59,33){$k(D_1)$} \put(73,31){$k(D_2)$} \put(95,33){$k(D_n)$} \end{overpic} \end{center} \caption{\label{fig:TreeComDiag} Trees of diagrams mapping to trees of knots.} \end{figure} \section{Symmetries of Prime Knots}\label{sect:sym} The \emph{intrinsic symmetry group} of a link was first defined by Whitten \cite{Whit} and discussed in detail by the author in Cantarella, et al. \cite{VIGRE}. These symmetries are the generalization of invertibility and chirality and are described by the group given in the following definition. We give the specialized definition corresponding to a knot. \begin{definition} The group of possible intrinsic symmetries of a knot is given by the group $\Gamma = \mathbb{Z}_2 \times \mathbb{Z}_2$.
We will describe $\mathbb{Z}_2$ as the multiplicative group $\{1,-1\}$ and we will write an element $\gamma \in \Gamma$ as $(\epsilon_0, \epsilon_1)$. If $\epsilon_0 = -1$, then the element corresponds to mirroring the knot. Similarly, if $\epsilon_1=-1$, then the element corresponds to changing the knot's orientation. \end{definition} The action of the intrinsic symmetry group on PD-codes (for any link) is described in detail by the author in \cite{Ma2}. Lemma 27 of \cite{Ma2} ensures the existence of a preferred list of PD-codes, one for each knot type, which we will refer to from now on as \textbf{the prime knot table}. The PD-codes from the preferred list have the property that only the trivial element of $\Gamma$ acts trivially on the PD-code. We will use the prime knot table to define a combinatorial object analogous to the prime decomposition tree that will play a key role in the computation of symmetries and the tabulation of composite knots. We now introduce our conventions for indexing the prime factors of a knot and define the action of $\Gamma$ on a prime diagram tree. \begin{definition}\label{def:T} Let $\mathbb{T}$ be the collection of prime diagram trees whose vertices are labeled by diagrams from the prime knot table. \end{definition} \begin{definition} Given a prime decomposition tree $\mathbb{PG}(K)$ and $\gamma \in \Gamma$, we define $\gamma(\mathbb{PG}(K))$ to be the tree whose ordered leaves are the knots $\gamma(K_i)$, where the $K_i$ are the labels of $\mathbb{PG}(K)$. \end{definition} \begin{definition}\label{def:FactorList} A \textbf{base prime factor list} $P=\{(D_i,n_i)\}_{i=1}^l$ is a set of PD-codes from the prime knot table along with multiplicities. We say that $P$ is the base prime factor list for a knot $k$ if the base types of the prime factors of $k$ appear in $P$ with the correct multiplicities. \end{definition} \begin{definition}\label{def:TP} The collection of all prime diagram trees whose leaves are labeled by knots whose base types are exactly the base prime factor list $P$ will be denoted $\mathbb{T}(P)$. \end{definition} \begin{definition}\label{def:X} Let $X(P)$ be the set $\times_{i=1}^l \Gamma^{n_i}$. \end{definition} The set $X(P)$ describes the possible choices of orientation and handedness available when taking the connected sum of the $n = \sum_{i=1}^l n_i$ prime knots. \begin{lemma}\label{lemma:XisTP} $X(P)$ is in bijection with $\mathbb{T}(P)$. \end{lemma} \begin{proof} The correspondence is given by associating $x=((x_{1,1},\ldots,x_{1,n_1}),\ldots,(x_{l,1},\ldots,x_{l,n_l}))$ with the prime diagram tree whose children are $x_{1,1}D_1,\ldots,x_{1,n_1}D_1,\ldots,x_{l,1}D_l,\ldots,x_{l,n_l}D_l$ (cf. Figure~\ref{fig:Tree}). We will denote this tree by $\mathbb{T}(x)$. \begin{figure} \begin{center} \begin{overpic}{tree.pdf} \put(-14,-5){$x_{1,1}D_1$} \put(16,-8){$x_{1,n_1}D_1$} \put(66,-8){$x_{l,1}D_l$} \put(95,-5){$x_{l,n_l}D_l$} \end{overpic} \end{center} \caption{\label{fig:Tree} The prime diagram tree associated to $P=\{(D_i,n_i)\}_{i=1}^l$ and $x=((x_{1,1},\ldots,x_{1,n_1}),\ldots,(x_{l,1},\ldots,x_{l,n_l}))$.} \end{figure} \end{proof} \begin{definition}\label{def:GammaP} We define the group $\Gamma(P)$ by $$\Gamma(P):= \oplus_{i=1}^l \left[ \left( \oplus_{n_i} \Gamma \right) \rtimes S_{n_i} \right]$$ and the group $\Sigma(P)$ by $$\Sigma(P):= \oplus_{i=1}^l \left[ \left( \oplus_{n_i} \Sigma(k_i) \right) \rtimes S_{n_i} \right]$$ where $k_i$ denotes the knot corresponding to the PD-code $D_i$ and $\Sigma(k_i) \leq \Gamma$ is its intrinsic symmetry group. \end{definition} We can let the group $\Gamma(P)$ act on $X(P)$ to change between different choices of orientations and handedness of each prime factor in the connected sum.
This action is defined as follows. \begin{definition}\label{def:GammaOnX} There is a natural action of $\Gamma(P)$ on $X(P)$ given by the following. $$((\gamma_{1,1},\ldots,\gamma_{1,n_1},p_1),\ldots,(\gamma_{l,1},\ldots,\gamma_{l,n_l},p_l))*((x_{1,1},\ldots,x_{1,n_1}),\ldots,(x_{l,1},\ldots,x_{l,n_l}))$$ $$=((\gamma_{1,1}x_{1,p_1(1)},\ldots,\gamma_{1,n_1}x_{1,p_1(n_1)}),\ldots,(\gamma_{l,1}x_{l,p_l(1)},\ldots,\gamma_{l,n_l}x_{l,p_l(n_l)}))$$ \end{definition} Therefore, we have an induced action of $\Gamma(P)$ on $\mathbb{T}(P)$. \section[An Enhanced Prime Decomposition Theorem]{An Enhanced Prime Decomposition Theorem}\label{sect:decomp} We will now lead up to the main theorem of the paper. Theorem \ref{theorem:SigPOnX} is an enhanced version of the classical prime decomposition theorem for knots in the sense that it gives an explicit way of constructing all composite knots whose \emph{base} prime factor list is given. \begin{proposition}\label{prop:SigmaOrbs} Let $x$ and $x'$ be elements of $X(P)$ for some prime factor list $P$. Then $\mathbb{T}(x)$ and $\mathbb{T}(x')$ are equivalent as prime diagram trees if and only if there exists $\sigma \in \Sigma(P)$ such that $\sigma(x)=x'$ under the action of $\Sigma(P)$ on $X(P)$. \end{proposition} \begin{proof} First suppose that $\mathbb{T}(x)$ and $\mathbb{T}(x')$ are equivalent as prime diagram trees. Then, there is an isomorphism of trees $\phi: \mathbb{T}(x) \rightarrow \mathbb{T}(x')$ with corresponding isotopies between $k(v)$ and $k(\phi(v))$. Let $x=((x_{1,1},\ldots,x_{1,n_1}),\ldots,(x_{l,1},\ldots,x_{l,n_l}))$ and $x'=((x_{1,1}',\ldots,x_{1,n_1}'),\ldots,(x_{l,1}',\ldots,x_{l,n_l}'))$, and note that $\phi$ can only permute elements within each tuple since only knots of the same base type may be interchanged. Moreover, if $\phi(v) = v'$, then $k(v)$ and $k(v')$ must be isotopic knots and are therefore related by an element in $\Sigma(k(v)) = \Sigma(k(v'))$ (recall that knots of the same base type have the same intrinsic symmetry group). Let $p_i$ be the permutations induced by $\phi$ on each collection of leaves with a common base type and consider the element $\sigma \in \Sigma(P)$ that is defined as follows. $$\sigma = ((x_{1,1}' * x_{1,p_1(1)}^{-1},\ldots,x_{1,n_1}' * x_{1,p_1(n_1)}^{-1},p_1),\ldots,(x_{l,1}' * x_{l,p_l(1)}^{-1},\ldots,x_{l,n_l}' * x_{l,p_l(n_l)}^{-1},p_l))$$ We claim that $\sigma(x) = x'$ and this can be verified by the following computation. \\ \begin{tabular}{lcl} $\sigma(x)$ & $=$ & $((x_{1,1}' * x_{1,p_1(1)}^{-1},\ldots,x_{1,n_1}' * x_{1,p_1(n_1)}^{-1},p_1),\ldots,(x_{l,1}' * x_{l,p_l(1)}^{-1},\ldots,x_{l,n_l}' * x_{l,p_l(n_l)}^{-1},p_l)) $ \\ \\ & & $ *~~ ((x_{1,1},\ldots,x_{1,n_1}),\ldots,(x_{l,1},\ldots,x_{l,n_l})) $\\ \\ & $ = $ & $ ((x_{1,1}' * x_{1,p_1(1)}^{-1} * x_{1,p_1(1)},\ldots,x_{1,n_1}' * x_{1,p_1(n_1)}^{-1} * x_{1,p_1(n_1)}),\ldots,$\\ \\ & & $(x_{l,1}' * x_{l,p_l(1)}^{-1} * x_{l,p_l(1)},\ldots,x_{l,n_l}' * x_{l,p_l(n_l)}^{-1} * x_{l,p_l(n_l)}))$ \\ \\ & $ = $ & $((x_{1,1}',\ldots,x_{1,n_1}'),\ldots,(x_{l,1}',\ldots,x_{l,n_l}')) $ \\ \\ & $=$ & $x'$. \\ \end{tabular} It remains to show that $\sigma$ is in fact an element of $\Sigma(P)$. Since $p_i$ can only permute leaves that correspond to isotopic knots, the knots $x_{j,p_i(k)}D_j$ and $x_{j,k}'D_j$ are isotopic, so $x_{j,k}' * x_{j,p_i(k)}^{-1}$ is an element of the intrinsic symmetry group of the base type corresponding to those leaves. Thus $\sigma \in \Sigma(P)$.
Now conversely suppose that there is some $\sigma \in \Sigma(P)$ such that $\sigma(x)=x'$. If $$\sigma = ((\gamma_{1,1},\ldots,\gamma_{1,n_1},p_1),\ldots,(\gamma_{l,1},\ldots,\gamma_{l,n_l},p_l))$$ then permuting the leaves of $\mathbb{T}(x)$ by the permutation $p_i$ only permutes leaves of the same base type, and since each $\gamma_{i,j} \in \Sigma(k_i)$ we have that permuted vertices are equivalent PD-codes (or, we could say, isotopic knots). Thus, $\sigma$ induces an isomorphism of decomposition trees and so $\mathbb{T}(x)\sim \mathbb{T}(x')$. \begin{figure} \begin{center} \begin{overpic}{tree_permute.pdf} \put(48,36){$\phi$} \put(2,-5){$\gamma_{s}D$} \put(31,-5){$\gamma_{t}D$} \put(61,-5){$\gamma_{s}'D$} \put(91,-5){$\gamma_{t}'D$} \end{overpic} \end{center} \caption{\label{fig:TreePermute} A map of prime diagram trees.} \end{figure} \end{proof} \begin{lemma}\label{lemma:kSurjects} The map $k$ of Definition \ref{def:PDiagT} descends to a surjective map from the $\Sigma(P)$ orbits of $\mathbb{T}(P)$ to knot types whose base prime factor list is $P$. \end{lemma} \begin{proof} We first show that $k$ descends to a map on the $\Sigma(P)$ orbits of $\mathbb{T}(P)$. Suppose $T$ and $T'$ are in the same orbit; then there exists a $\sigma \in \Sigma(P)$ such that $\sigma T = T'$. Thus, by Proposition~\ref{prop:SigmaOrbs} we have that $T \sim T'$. So by Proposition~\ref{prop:PDiagEquiv}, $k(T) \sim k(T')$ and we see that $k$ descends to a map on $\Sigma(P)$ orbits. Since every knot has a prime factorization and the action of $\Gamma$ on each base type is transitive, we see that $k$ is surjective. \end{proof} \begin{theorem}\label{theorem:SigPOnX}(Enhanced Prime Decomposition Theorem) The orbits of the $\Sigma(P)$ action on $X(P)$ are in bijection with the isotopy classes of knots whose base prime factor list is $P$. \end{theorem} \begin{proof} First note that the $\Sigma(P)$ orbits of $X(P)$ are the same as the $\Sigma(P)$ orbits of $\mathbb{T}(P)$ by construction. Proposition~\ref{prop:SigmaOrbs} shows that the collection of $\Sigma(P)$ orbits on $\mathbb{T}(P)$ is the partition associated to the equivalence on prime decomposition trees. Thus, the result follows from Proposition~\ref{prop:PDiagEquiv}. \end{proof} \begin{definition} We can now define $\operatorname{Orbit}(K)$ to be the orbit in $X(P)$ which corresponds to the knot $K$ under the map $k$. \end{definition} \begin{example}\label{example:31_31Orbs} Consider the base prime factor list $P=\{(3_1,2)\}$, where $3_1$ denotes the PD-code of the standard diagram of the trefoil of Figure~\ref{fig:PD}. This corresponds to taking the connected sum of $2$ trefoils. The trefoil is an invertible, chiral knot, so $\Sigma(3_1)=\{(1,1),(1,-1)\}$. Therefore, $$\Sigma(\{(3_1,2)\})=(\Sigma(3_1) \oplus \Sigma(3_1)) \rtimes S_2 = (\{(1,1),(1,-1)\} \oplus \{(1,1),(1,-1)\}) \rtimes S_2,$$ and $$X(\{(3_1,2)\})=\Gamma \times \Gamma.$$ The orbits of the action of $\Sigma(\{(3_1,2)\})$ on $X(\{(3_1,2)\})$ are shown in Table~\ref{figure:TrefoilClasses}. There are $3$ isotopy classes of composite knots that can be constructed from $2$ trefoils. They are the right- and left-handed granny knots and the square knot.
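The orbit computation in Example~\ref{example:31_31Orbs} is small enough to carry out by brute force. The following sketch (in Python; the encoding of $\Gamma$ as pairs of signs and all function names are our own illustration) enumerates the orbits of $\Sigma(\{(3_1,2)\})=(\Sigma(3_1) \oplus \Sigma(3_1)) \rtimes S_2$ acting on $X(\{(3_1,2)\})=\Gamma \times \Gamma$:
\begin{verbatim}
from itertools import product

GAMMA = [(1, 1), (1, -1), (-1, 1), (-1, -1)]  # (mirror, reverse) signs
SIGMA_31 = [(1, 1), (1, -1)]                  # trefoil: invertible, chiral

def mult(a, b):  # componentwise product in Gamma = Z2 x Z2
    return (a[0] * b[0], a[1] * b[1])

def act(g1, g2, swap, x):  # one element of Sigma(P) acting on X(P)
    y = (x[1], x[0]) if swap else x          # permute the two factors
    return (mult(g1, y[0]), mult(g2, y[1]))  # then twist each factor

def orbits():
    left = set(product(GAMMA, GAMMA))  # all 16 elements of X(P)
    out = []
    while left:
        frontier, orb = [left.pop()], set()
        while frontier:
            x = frontier.pop()
            if x in orb:
                continue
            orb.add(x)
            for g1, g2, swap in product(SIGMA_31, SIGMA_31, (0, 1)):
                frontier.append(act(g1, g2, swap, x))
        left -= orb
        out.append(sorted(orb))
    return out

for orb in orbits():  # prints three orbits of sizes 4, 8, 4
    print(len(orb), orb)
\end{verbatim}
The two orbits of size $4$ are the granny knots and the orbit of size $8$ is the square knot, in agreement with Table~\ref{figure:TrefoilClasses}.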
\begin{figure} \begin{center} \includegraphics[height=3cm]{squareknot_tree.pdf} \caption{\label{figure:SquareTree} A prime decomposition tree and its associated tree labeled by $\Gamma$.} \end{center} \end{figure} \begin{table} \begin{center} \begin{tabular}{cc} \toprule Composite & Orbit\\ \midrule Right-handed Granny Knot & $\begin{array}{cc} ((1,1),(1,1))&((1,-1),(1,1))\\((1,1),(1,-1))&((1,-1),(1,-1)) \end{array}$\\ \midrule Square Knot & $\begin{array}{cc} ((1,1),(-1,1))&((1,-1),(-1,1))\\((1,1),(-1,-1))&((1,-1),(-1,-1)) \\ ((-1,1),(1,1))&((-1,-1),(1,1))\\ ((-1,1),(1,-1)) & ((-1,-1),(1,-1)) \end{array}$\\ \midrule Left-handed Granny Knot & $\begin{array}{cc} ((-1,1),(-1,1))&((-1,-1),(-1,1))\\((-1,1),(-1,-1))&((-1,-1),(-1,-1)) \end{array}$\\ \bottomrule \end{tabular} \end{center} \caption{\label{figure:TrefoilClasses} The orbits of the action of $\Sigma(\{(3_1,2)\})$ on $X(\{(3_1,2)\})$ corresponding to the isotopy classes of $3_1 \# 3_1$ of Example~\ref{example:31_31Orbs}.} \end{table} \end{example} \section[Symmetries of Composite Knots]{Symmetries of Composite Knots}\label{sect:CompKnotSyms} We now turn to the task of computing the intrinsic symmetries of a composite knot from those of its prime factors. Table~\ref{figure:SymNums} gives the occurrences of each symmetry type among the $544$ composite knots through $12$ crossings. The results in this section justify the computation, which is explained in section~\ref{sect:CompTab}. \begin{definition}\label{def:Delta} We define $\Delta$ to be the following subgroup of $\Gamma(P)$. $$\Delta:=\{((\gamma,\ldots,\gamma,p_1),\ldots,(\gamma,\ldots,\gamma,p_l)) \mid \gamma \in \Gamma, p_i \in S_{n_i}\}$$ \end{definition} The following lemma is immediate. \begin{lemma}\label{lemma:DeltatoGamma} The map $\pi:\Delta \rightarrow \Gamma$ given by $$((\gamma,\ldots,\gamma,p_1),\ldots,(\gamma,\ldots,\gamma,p_l)) \mapsto \gamma$$ is a surjection. \end{lemma} By restricting the action of $\Gamma(P)$ to the subgroup $\Delta$ and factoring through the map of Lemma~\ref{lemma:DeltatoGamma}, we have a well-defined action of $\Gamma$ on prime diagram trees which corresponds to the action of $\Gamma$ on knots. The following theorem gives the symmetry group of a composite knot. \begin{theorem}\label{theorem:Syms} $\Sigma(K) = \pi(\Delta \cap \operatorname{Stab}{(\operatorname{Orbit}(K))})$ \end{theorem} \begin{proof} First note that if $\sigma \in \Sigma(K)$ then the element $\bar{\sigma}:=((\sigma,\ldots,\sigma,id),\ldots,(\sigma,\ldots,\sigma,id)) \in \Delta$ must stabilize $\operatorname{Orbit}(K)$ by Theorem~\ref{theorem:SigPOnX}, as $K \sim \sigma(K)$. Since $\pi(\bar{\sigma})=\sigma$ we have that $\Sigma(K) \subset \pi(\Delta \cap \operatorname{Stab}{(\operatorname{Orbit}(K))})$. Now suppose that $\gamma \in \pi(\Delta \cap \operatorname{Stab}{(\operatorname{Orbit}(K))})$. We must show that $K \sim \gamma(K)$. First note that elements of the form $(((1,1),\ldots,(1,1),q_1),\ldots,((1,1),\ldots,(1,1),q_l))$ always act trivially for any choice of the permutations $q_i$ since the connected sum of knots is commutative. Thus we may assume that $((\gamma,\ldots,\gamma,id),\ldots,(\gamma,\ldots,\gamma,id)) \in \operatorname{Stab}{(\operatorname{Orbit}(K))}$. But, $\pi(((\gamma,\ldots,\gamma,id),\ldots,(\gamma,\ldots,\gamma,id)))=\gamma$. Thus, $\gamma \in \Sigma(K)$. \end{proof} This provides an alternate proof of Theorem 2 of Whitten \cite{Whit}, whose original proof did not use the JSJ-decomposition.
Whitten's version of the theorem gave conditions for an element of $\Gamma$ to be a symmetry of a composite knot, encoding the action of $\Gamma(P)$ on $X(P)$ in a complicated system of indices. By exposing the underlying algebra, our version is simpler to state and more amenable to computer calculation of symmetry groups. There are several immediate corollaries to Theorem~\ref{theorem:Syms} which predict symmetries of a composite knot from the symmetries of the prime factors. We first discuss a generalization of the square knot from Example~\ref{example:31_31Orbs}. \begin{definition} $K$ is a \textbf{generalized square knot} if there exist $\gamma_1, \gamma_2 \in \Gamma$ so that the prime factors for $K$ are $$\{K_1,\gamma_1 K_1,\ldots,K_1,\gamma_1 K_1\}$$ where $\Sigma(K_1)=\langle\gamma_2\rangle$ and $\Gamma=\langle\gamma_1,\gamma_2\rangle$. \end{definition} \begin{corollary}\label{cor:squareknot} If $K$ is a generalized square knot, then $\Sigma(K)=\Gamma$. \end{corollary} Corollary~\ref{cor:squareknot} could be thought of as an application of the following, which gives a sufficient condition for a two-factor connected sum to admit a symmetry. \begin{corollary} If $K=K_1 \# \gamma K_1$, then $\gamma \in \Sigma(K)$. \end{corollary} It is also immediate that a connected sum of knots with full symmetry will also have full symmetry. \begin{corollary} If $K$ is a composite knot with prime factors $\{K_1,\ldots,K_n\}$ such that $\Sigma(K_i)=\Gamma$ for all $i$, then $\Sigma(K)=\Gamma$. \end{corollary} We can also produce a knot with full symmetry by taking the connected sum of all four flavors of a single base type. \begin{corollary} If $K=K_1 \# \gamma_1 K_1 \# \gamma_2 K_1 \# \gamma_3 K_1$ with $\gamma_1, \gamma_2, \gamma_3$ all distinct, then $\Sigma(K)=\Gamma$. \end{corollary} If $K$ has no symmetry, then iterated connected sums of $K$ will also have no symmetry. \begin{corollary} If $K$ has no symmetry, then $K \# \cdots \# K$ has no symmetry. \end{corollary} \begin{table} \begin{center} \begin{tabular}{cc} \toprule Symmetry Type & Number of Occurrences\\ \midrule No Symmetry & $20$\\ (+) Amphichiral Symmetry & $0$\\ Invertible Symmetry & $506$\\ (-) Amphichiral Symmetry & $2$\\ Full Symmetry & $16$\\ \bottomrule \end{tabular} \end{center} \caption{\label{figure:SymNums} The number of each symmetry type among the $544$ composite knots of up to $12$ crossings (up to the conjectural additivity of crossing number under connected sum).} \end{table} \begin{table} \begin{center} \begin{tabular}{cccccc} \toprule Crossing Number & No Symmetry & (+) Amph & Invertible & (-) Amph & Full Symmetry\\ \midrule 6 &0&0 & 2& 0 & 1\\ 7 & 0&0 & 2& 0& 0\\ 8 & 0&0 & 8& 0& 1\\ 9 & 0&0 & 18&0 &0 \\ 10 &0 &0 & 42&0 &4 \\ 11 &4 & 0& 120&0 &0 \\ 12 &16 & 0& 314&2 &10 \\ \bottomrule \end{tabular} \end{center} \caption{\label{figure:SymsByCross} The number of each symmetry type among the $544$ composite knots of up to $12$ crossings (up to the conjectural additivity of crossing number under connected sum), broken down by crossing number.} \end{table} \section[Future Directions]{Future Directions} The obvious next step in this theory is to generalize to the case of links. Not only would a composite link table be a useful addition to the literature, but the underlying topology and combinatorics are very interesting. The first goal for developing such a theory is to understand how the prime decomposition of a link is ascertained from the JSJ-decomposition of the link complement.
In the case of knots this is straightforward: one simply looks at the JSJ tori which are adjacent to the knot complement. The prime factors are the companion knots of the $3$-manifolds bounded by these tori on the side opposite the knot complement. For links the situation seems to be much more complicated because there could be tori in the JSJ-decomposition adjacent to the link complement which do not correspond to prime summands. In addition, it is not clear what the analogous object to the prime decomposition tree (cf. Definition~\ref{def:PDT}) should be. These objects must contain enough combinatorial information to encode which component of each prime link is involved in the connected sum. It is also unclear what group naturally acts on these objects in an analogous way to the action of $\Sigma(P)$ on $X(P)$. However, once these objects are understood, the analogous enhanced prime decomposition theorem would be immediate using the same techniques used for knots, due to the strong uniqueness properties of the JSJ-decomposition. \bibliographystyle{alpha}
\section{Introduction} In recent years, unmanned aerial vehicles (UAVs) or drones have received increasing attention from both the research and industry community. Of particular interest are applications enabled by combining UAVs with the Internet of Things (IoT)~\cite{motlagh2016low,choi2015building}, such as environmental monitoring~\cite{gao2018high}, structural health monitoring~\cite{kang2018autonomous}, precision agriculture~\cite{tsouros2019data}, search and rescue operations~\cite{silvagni2017multipurpose}, and so on. An important requirement of such applications is the ability to accurately localize the position of ground devices (GDs\xspace), making the collected data more meaningful. Since it is costly to equip each GD\xspace in the network with a GPS module, a fixed set of {\em anchor} devices, whose positions are known a-priori, is generally used~\cite{priyantha2003anchor}. Moreover, given that the anchors use wireless transmissions to localize other GDs\xspace and that their range is often limited, the number of required anchors could dramatically increase with the size of the network, thus increasing the cost of the localization procedure. This problem can be solved by replacing fixed anchor devices with a single {\em mobile anchor} (MA\xspace) equipped with a GPS unit and periodically broadcasting its position to help nearby GDs\xspace localize. Although there exists some work in the literature on localization based on ground MAs\xspace, such as rovers~\cite{han2016survey}, relatively little has been proposed using flying MAs\xspace, like UAVs or drones~\cite{malhotra2019comprehensive}, which are the focus of this paper. Compared to ground MAs\xspace, flying MAs\xspace are able to reach remote locations, move at a faster speed, and cover a wider area than terrestrial rovers~\cite{bekmezci2013flying}. Due to these advantages, in this paper we concentrate on localization algorithms involving flying MAs\xspace. When using UAVs as MAs\xspace, the distance can be estimated wirelessly \revision{by measuring the time of flight (ToF) between the MA\xspace and GDs\xspace. For distance measurements, in this paper we adopt the ultra wide band (UWB) technology~\cite{mueller2015fusing}.} Localization algorithms can be broadly categorized as {\em range-free} and {\em range-based} approaches~\cite{han2016survey}. In the range-free algorithms, the position is estimated without any type of measurement, but only by discovering whether the GD\xspace and MA\xspace are in range. Among these, the {\em radius-based} approaches assume the knowledge of the transmission radius~\cite{ssu2005localization}, while the {\em radius-free} ones do not~\cite{xiao2008distributed}. Such algorithms are often based on the assumption that the antenna radiation pattern is isotropic, which is unrealistic in general. In fact, our recent works~\cite{bettisorbelli2019ground, bettisorbelli2020rangefree} have shown that the localization \revision{accuracy} depends on the quality (pattern and radius) of the antenna, and on how much it differs from the assumed isotropic pattern. On the other hand, in the range-based localization algorithms, the position of the GD\xspace is estimated by taking several measurements between it and the MA\xspace. These algorithms are known to be more \revision{accurate} than range-free algorithms, but at the cost of additional specialized hardware.
For example, the estimation of distance exploits techniques like the received signal strength indicator (RSSI), the time of arrival (ToA), or the time difference of arrival (TDoA)~\cite{laaraiedh2011comparison}. Now, in any range-based localization procedure, {\em measurement errors} are unavoidable and can seriously impact the localization \revision{accuracy}. This is particularly relevant when the MA\xspace is a drone because the measurement errors can occur while calculating the distance between the MA\xspace and the GD\xspace. The magnitude of such errors depends on the adopted technology and on the quality of the \revision{air-to-ground (A2G)} link between the MA\xspace and the GD\xspace. For example, the distance measurement error in a fully line of sight (LoS) link using the Wi-Fi technology is about $7$--$10 \unit{m}$; using Bluetooth it can be up to $15 \unit{m}$, while it is only $10 \unit{cm}$ using the UWB technology~\cite{www-Deca-dwm1000}. Additional errors may be caused by non-optimal weather conditions or the drone's GPS \revision{{\em accuracy}}~\cite{www-gps}. For instance, the 3DR Solo drone used in our experiments has a GPS \revision{accuracy} of $1 \unit{m}$~\cite{www-3DR-GPS}. In general, these errors propagate when combined and projected to the ground to localize the GDs\xspace. The error propagation depends on the specific localization technique, such as trilateration~\cite{thomas2005revisiting}, intersection of points, centroid~\cite{blumenthal2007weighted, bulusu2000gps}, and so on. \paragraph{Our Contributions} In this paper, we first provide bounds on various errors (e.g., instrumental error, rolling error, altitude error) impacting the estimated ground distance between the MA\xspace and GD\xspace. Then we focus on the commonly used {\em trilateration} based localization, and derive bounds on the propagation of ground distance errors on the estimated position of the GD\xspace. Finally, we perform extensive {\em in-field} experiments to quantify the localization \revision{accuracy} of several existing state-of-the-art localization algorithms. Specifically, we consider the \textsc{Drb-C}\xspace~\cite{bettisorbelli2019ground} range-based algorithm; the \textsc{Drf}\xspace~\cite{bettisorbelli2019rangefree}, \textsc{IoC}\xspace~\cite{xiao2008distributed}, and \textsc{IoA}\xspace~\cite{lee2009localization} range-free algorithms extended to distance measurements; and two trilateration-based algorithms, namely \textsc{Scan}\xspace~\cite{koutsonikolas2007path} and \textsc{Omni}\xspace~\cite{bettisorbelli2018range}. Our testbed uses two UWB DecaWave\xspace kits, namely the EVK1000 kit~\cite{www-Deca-dwm1000} and the MDEK1001 kit~\cite{www-Deca-dwm1001}, and a 3DR Solo drone as the MA\xspace. To the best of our knowledge, ours is the {\em first work} that provides an extensive in-field evaluation of the localization \revision{accuracy} of the most relevant algorithms in the literature in real experimental settings using drones. Our novel contributions are summarized as follows. \revision{ \begin{itemize} \item We derive bounds on various measurement errors (instrumental, rolling, and altitude) to quantify their impact on the estimated ground distance between the UAV (MA\xspace) and the GD\xspace. \item We validate our theoretical analysis of the ground error with a simple set of static experiments using two UWB antennas. We observe the impact of measurement errors on the trilateration technique.
\item Through experiments, we comprehensively compare three range-based and three range-free state-of-the-art localization algorithms using a UAV as MA\xspace, extending the range-free algorithms with distance measurements to significantly improve their localization accuracy. We also implement these algorithms employing a 3DR Solo drone and ten GDs\xspace, the first such realistic testbed built. \end{itemize} } \vspace{5pt} The rest of the paper is organized as follows. Section~\ref{sec:related} reviews the existing literature on localization approaches relevant to our context. Section~\ref{sec:mes-er} derives expressions to approximate the measurement and ground errors, \revision{and discusses how our results are interpreted in the light of A2G communications}. Section~\ref{sec:loc-error} investigates the localization error affecting the estimated position of the GD\xspace when the trilateration procedure is applied, and describes two trilateration-based localization algorithms that are compared in Section~\ref{sec:ev}. Section~\ref{sec:algorithms} introduces four more localization algorithms not based on trilateration which are also compared using the testbed. Three of them are transformed from range-free to range-based algorithms. Section~\ref{sec:ev} presents a rich set of real in-field experiments aiming to evaluate the localization error of the different localization algorithms. Finally, Section~\ref{sec:concl} offers conclusions with directions of future research. \vspace{-0.1in} \section{Related Works}\label{sec:related} \revision{This section reviews the relevant literature on localization of GDs\xspace using MAs\xspace and also efforts on testbed implementations considering UAVs as MAs\xspace.} \vspace{-0.1in} \subsection{\revision{MA\xspace-based Localization Algorithms}} There exist many algorithms for ground MAs\xspace that can be classified as range-free and range-based. \revision{In the range-free localization algorithms, such as \textsc{IoC}\xspace~\cite{xiao2008distributed} (intersection of circles) and \textsc{IoA}\xspace~\cite{lee2009localization} (intersection of annuli), a rover broadcasts its current position at regular time intervals while following a path. From the heard and not-heard rover's positions (informally, the HnH technique), the GD\xspace builds a limited area where it may reside and places itself at the ``center''. (More details about these algorithms are in Section~\ref{sec:algorithms}.)} Usually range-free algorithms have relatively low localization \revision{accuracy}, leading to the development of range-based algorithms such as \textsc{Scan}\xspace and \textsc{Double-Scan}\xspace~\cite{koutsonikolas2007path}. In \textsc{Scan}\xspace and \textsc{Double-Scan}\xspace, the MA\xspace follows a path formed by vertical straight lines interconnected by horizontal lines. However, such algorithms result in a large number of collinear anchor points. Collinearity can be reduced by increasing the changes of direction in the path, as in \textsc{Hilbert}\xspace~\cite{koutsonikolas2007path} and \textsc{Lmat}\xspace~\cite{jiang2011lmat}. The path generated by \textsc{Lmat}\xspace logically tessellates the deployment area by equilateral triangles so that each GD\xspace falls inside a triangle. The vertices of the triangle where the GD\xspace resides are used to trilaterate the GD\xspace's position, thus completely solving the collinearity issue. The above algorithms are designed for ground MAs\xspace.
\revision{A few localization algorithms have been proposed for flying MAs\xspace. Since the drone flies at a certain altitude, there are new constraints on the anchor points. In~\cite{perazzo2017drone}, one can find simulation comparisons of the above algorithms extended to flying MAs\xspace, with particular attention to the path length.} \revision{The \textsc{Omni}\xspace~\cite{bettisorbelli2018range} algorithm is the first localization algorithm for drones that selects the anchor points in such a way that a certain accuracy is guaranteed.} A simple and lightweight range-based algorithm called \textsc{Drb-C}\xspace~\cite{bettisorbelli2019ground} localizes the GDs\xspace by determining the correct intersection point of two circles, exploiting a third reference point for disambiguation. Finally, the \textsc{Drf}\xspace~\cite{bettisorbelli2019rangefree} algorithm, the first range-free algorithm for UAVs as MAs\xspace, exploits lightweight geometrical rules for position estimation. \revision{(See Section~\ref{sec:algorithms} for more details on \textsc{Omni}\xspace and \textsc{Drf}\xspace.)} \revision{Very recently, in~\cite{ebrahimi2020autonomous}, a novel framework based on reinforcement learning (RL) has been proposed to enable a UAV to autonomously find a suitable trajectory. This improves the localization accuracy of multiple GDs\xspace while minimizing the flight time, path length, and UAV's energy consumption. As usual for RL techniques, an initial phase is required for training the UAV before it can operate in an online real scenario. However, this work does not mention how long this training phase lasts.} \vspace{-0.1in} \subsection{\revision{UAV based Testbeds for Localization Experiments}} Recently, we performed preliminary experiments evaluating the range-free, radius-free \textsc{Drf}\xspace~\cite{bettisorbelli2019rangefree} algorithm using a UAV as MA\xspace and relatively inexpensive antennas~\cite{bettisorbelli2019ground}. Our conclusion was that the performance of \textsc{Drf}\xspace heavily depends on the shape of the antenna radiation pattern; precisely, the more omnidirectional the antenna, the higher the localization \revision{accuracy}. Employing the DecaWave\xspace MDEK1001 kit, in~\cite{bettisorbelli2020rangefree} it has been experimentally shown that other range-free radius-based algorithms like \textsc{IoC}\xspace~\cite{xiao2008distributed} and \textsc{IoA}\xspace~\cite{lee2009localization} also exhibit higher accuracy with more omnidirectional antennas. \revision{ A real outdoor implementation of a range-based localization algorithm is presented in~\cite{greco2015localization}, which aims to localize a radio-frequency identification (RFID) ground tag with the help of a UAV. Initially, the RFID tag is detected by the UAV using its RFID reader. Then, the UAV takes hundreds of RSSI measurements and estimates the tag's position using a multilateration procedure. Experimental results show an average localization error of $6 \unit{m}$. Note that this algorithm considers only random paths and random measurements, and does not investigate the relationship between the localization error and the UAV's altitude. A range-based algorithm is experimentally tested in~\cite{grigulo2018experimenting}, in which a UAV regularly broadcasts its current GPS position while flying. The GD\xspace aims to detect a set of at least three equidistant RSSI distance measurements from its center in order to apply trilateration.
Experimental results show an average error of $4 \unit{m}$ under normal conditions, which reduces to $1 \unit{m}$ using GPS corrections provided by a Real Time Kinematic (RTK) algorithm. However, an RTK positioning system is not always available. In~\cite{cisek2017ultra}, a testbed deploying five UWB ground anchors is implemented for evaluating and tracking the 3D position of a UAV equipped with a UWB antenna. The anchors and the UAV are also equipped with GPS receivers with RTK capabilities. The experimental error between UWB and RTK distance measurements ranges from $2$ to $24 \unit{cm}$, while the GPS positioning error alone is $2 \unit{m}$ on average. Another similar set-up employing UWB technology is proposed in~\cite{lazzari2017numerical} to localize a moving UAV. In this scenario, four fixed UWB anchor devices are placed on the ground. Experimental results show an average localization error of $1 \unit{m}$. Notice that, unlike the setting considered in this paper, the last two approaches~\cite{cisek2017ultra, lazzari2017numerical} aim at localizing or tracking the UAV (MA\xspace) instead of the GDs\xspace. Nonetheless, they employ UWB and UAV technologies. } \vspace{-0.1in} \section{Measurements and Ground Errors}\label{sec:mes-er} In this section, we provide analytical bounds on the impact of different measurement errors that may occur on the estimated ground distance between the MA\xspace and a GD\xspace. \vspace{-0.05in} \subsection{Terminology and Notations} Let the {\em slant distance} $s$ denote the 3D distance between the drone acting as MA\xspace and the GD\xspace. We define the {\em \revision{accuracy}} as the maximum error in absolute value, and we let $\epsilon_s$ denote the {\em instrumental \revision{accuracy}}, i.e., the maximum error in estimating the slant distance. Let the point $P$ be the GD\xspace's position on the ground, the point $\widetilde{W}$ be the actual drone's position, and the point $W$ be the scheduled drone's position (see Figure~\ref{fig:ground_error_combined}). The measured slant distance $s'= \overline{\widetilde{W}P}$ can be different from the exact slant distance $s=\overline{WP}$ due to the instrumental error and due to the \revision{accuracy} of the drone's position (i.e., the drone resides at $\widetilde{W}$ and not at the scheduled position $W$). Let the {\em slant error} $E_s$ be the 3D measurement error that affects the measured slant distance $s$. The slant and the instrumental errors are depicted along with $s$ in Figure~\ref{fig:ground_error_combined}. \vspace{-0.05in} \begin{figure}[htbp] \centering \def0.5{1.0} \input{figures/ground_error_combined.pdf_tex} \caption{The ground error: the point $P$ is the GD\xspace.} \label{fig:ground_error_combined} \end{figure}
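As a quick numerical illustration of these definitions (a sketch with arbitrary example values of our own, not data from our testbed), a GD\xspace that trusts the scheduled altitude $h$ can project a measured slant distance to the ground and compare the result with the exact ground distance:
\begin{verbatim}
import math

h = 10.0              # scheduled altitude (m)
d = 15.0              # exact ground distance (m)
s = math.hypot(h, d)  # exact slant distance
e_s = 0.10            # instrumental error (m), UWB-like
s_meas = s + e_s      # measured slant distance s'

# Project s' to the ground assuming the scheduled altitude h.
d_est = math.sqrt(max(s_meas**2 - h**2, 0.0))
print(f"ground error E_d = {abs(d_est - d):.3f} m")  # about 0.12 m
\end{verbatim}
Note that the resulting ground error ($\approx 0.12 \unit{m}$) already exceeds the instrumental error, anticipating the amplification analyzed below.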
Interestingly, the drone resides inside a cylinder instead of a sphere since we consider $\gamma_d$ and $\gamma_h$ independently of each other. Let $\alpha$ in Figure~\ref{fig:ground_error_combined} be the elevation angle in the absence of errors; that is, $\alpha$ is computed assuming the scheduled position of the drone. To localize the GD\xspace, we must convert the 3D slant distance $s$ into the {\em ground distance} $d$, which is a distance derived on the 2D plane. The exact ground distance $d$ is the distance $\overline{W'P}$ between $P$ and the projection $W'$ on the ground of the drone's position $W$. That is, $d$ assumes the drone to be in the scheduled position $W$. However, we do not know $s$, but we know $s'$. Then, let the {\em ground error} $E_d$ be the measurement error $\overline{PP'}$, where $P'$ is the position of $P$ estimated on the ground by using the measured slant distance $s'$ and the scheduled elevation angle $\alpha$, which assumes the exact ground distance $d$ and the scheduled altitude $h$. Finally, let the {\em ground \revision{accuracy}} $\epsilon_d$ be the maximum $E_d$. \begin{table}[htbp] \vspace{-0.1in} \renewcommand{\arraystretch}{1.15} \caption{Summary of Notations for errors and \revision{accuracies}.} \label{tab:nomencltature} \vspace{-0.1in} \centering \begin{tabular}{cl} \hline Symbol & Description \\ \hline $\alpha$ & elevation angle \\ $\epsilon_s$ & instrumental \revision{accuracy} \\ $\gamma_d$ & rolling \revision{accuracy} \\ $\gamma_h$ & altitude \revision{accuracy} \\ $\epsilon_d$ & ground \revision{accuracy} (max error) \\ $E_d$ & ground error \\ $E_s$ & slant error \\ $E_L$ & localization error \\ $E^T_L$ & localization trilateration error \\ $\epsilon^T_L$ & localization trilateration \revision{accuracy} (max error) \\ \hline \end{tabular} \end{table} The ground error $E_d$ is the 3D slant error $E_s$ as it is perceived on the ground. With a single measurement, we only know the relative distance between the MA\xspace and the GD\xspace, thus the GD\xspace is not yet localized. Beyond the ground error, there is the {\em localization error} $E_L$, which is instead the distance from the GD\xspace's estimated position (by any localization algorithm) to the GD\xspace's actual position. This error also depends on the invoked algorithm and its implicit rules to find the GD\xspace's position, and will be investigated further. Table~\ref{tab:nomencltature} summarizes the notations used in this paper. \vspace{-0.05in} \subsection{The Ground Error} In this section, we analytically study the ground error $E_d$ by breaking it up into three independent components $E_d(\epsilon_s)$, $E_d(\gamma_d)$, and $E_d(\gamma_h)$ that, respectively, depend on: \begin{inparaenum} \item the {\em instrumental} \revision{accuracy}, \item the {\em rolling} \revision{accuracy}, and \item the {\em altitude} \revision{accuracy}. \end{inparaenum} We recall that we define {\em \revision{accuracy}} as the maximum error in absolute value. $E_d(\gamma_d)$ and $E_d(\gamma_h)$ model the error in the drone's position. Note that each component depends on an independent hardware part, namely, UWB, GPS, and barometer, and thus it makes sense to study them separately. Whenever we study one component, we assume the other errors to be null. \vspace{-0.05in} \subsubsection{Instrumental error}\label{ss:ic} Let us investigate $E_d(\epsilon_s)$, i.e., the impact of the {\em instrumental error} $e_s$ on $E_d$.
Note that $e_s$ is defined as the difference, positive (overestimation) or negative (underestimation), between the measured distance and the actual distance. Moreover, $\epsilon_s$ is the absolute value of the maximum instrumental error. Accordingly, $|e_s| \le \epsilon_s$. Here we assume $\gamma_d=\gamma_h=0$. Let $s$ be the exact 3D distance between the drone and the object $P$ ($P$ denotes the GD\xspace's position). Then, let $s'=s+e_s$ be the measure of the segment $\overline{WP}$, where $-\epsilon_s \le e_s \le \epsilon_s$. In the following, we geometrically show how the measured slant distance $s'$ is converted into the ground distance. Figure~\ref{fig:ranging_precision} illustrates the reasoning behind the choice of $e_s= \pm \epsilon_s$. We draw a circumference of radius $s'$ centered at the waypoint $W$. This circumference intersects the line through $W$ and $P$ at $Q$ (see Figure~\ref{fig:ranging_precision}). Since the measured slant distance is different from the exact one, i.e., $s' \not = s$, $Q$ does not coincide with $P$, and $Q$ is not at the ground level. Specifically, the segment $\overline{PQ}$ of length $|e_s|=|s'-s|$ lies on the extension of $\overline{WP}$ if $e_s >0$, whereas $\overline{PQ}$ lies on the radius $\overline{WP}$ if $e_s < 0$. \begin{figure}[htbp] \centering \def0.5{0.8} \input{figures/ground_error_instrumental_over.pdf_tex} \vspace{-0.05in} \caption{Overestimation in the instrumental \revision{accuracy} $\epsilon_s$.} \label{fig:ranging_precision} \vspace{-0.05in} \end{figure} Since in general $\epsilon_s \ll s$, we can approximate the circumference of radius $s'$ with its tangent in $Q$. The point $P'$, where the tangent intersects the ground\footnote{Figure~\ref{fig:ranging_precision} shows the intersection $P'$ between the tangent and the ground, which approximates the intersection (white dot) between the circumference and the ground. However, the two intersections become closer and closer when $s$ increases.}, is the estimated position for $P$ according to the measurement $s'$. Thus, recalling that $W'$ is the projection of $W$ on the ground, $\overline{PP'}$ is the error on the ground derived from the slant error $e_s$. By elementary geometric rules applied to the right-angled triangle $PQP'$, we obtain $\overline{PP'} = e_s \cdot \frac{1}{\cos(\alpha)} = e_s \cdot \sqrt{1 + \frac{h^2}{d^2}}$, where $h$ is the drone's altitude, because $\angle{QPP'}$ is equal to the elevation angle $\alpha$. The error $E_d (\epsilon_s)$, when the instrumental error is maximum and the object is at ground distance $d$ from the drone, is given by: \vspace{-0.05in} \begin{equation}\label{eq:ground_distance_instrumental} E_d(\epsilon_s) = \epsilon_s \cdot \frac{1}{\cos(\alpha)} = \epsilon_s \cdot \sqrt{1 + \frac{h^2}{d^2}}. \end{equation} The ground error $E_d(\epsilon_s)$ varies with the distance $d$ on the ground. When $h \not = 0$, the error increases when $d$ decreases (whereas, when $h=0$, the error does not depend on $d$). When $h \not = 0$, the worst case occurs when the drone is directly above the point to be measured (i.e., $W'=P$, $d=0$, $E_d \rightarrow \infty$). From this observation, we can assert that, when the measurements are taken by a UAV rather than a rover, in order to bound $E_d(\epsilon_s)$ it is convenient to add the constraint that all the measurements have to respect a given {\em minimum ground distance} \ensuremath{d\textsubscript{min}}\xspace.
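The blow-up of Eq.~\eqref{eq:ground_distance_instrumental} for small $d$ can be checked numerically. The following sketch (our own illustration, with arbitrary example values) compares the bound with the exact ground error obtained by projecting the perturbed slant distance:
\begin{verbatim}
import math

h, eps_s = 10.0, 0.10  # altitude and instrumental accuracy (m)
for d in (1.0, 5.0, 10.0, 30.0):
    bound = eps_s * math.sqrt(1.0 + (h / d) ** 2)   # Eq. bound
    s = math.hypot(h, d)                            # exact slant
    exact = math.sqrt((s + eps_s)**2 - h**2) - d    # exact projection
    print(f"d={d:5.1f} m  bound={bound:.3f} m  exact={exact:.3f} m")
\end{verbatim}
For $d \gtrsim h$ the bound tracks the exact error closely, while for $d \ll h$ both grow quickly, which motivates the constraint on \ensuremath{d\textsubscript{min}}\xspace.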
\vspace{-0.05in} \subsubsection{Rolling error} In this section, we only consider the rolling error (i.e., $\epsilon_s=\gamma_h=0$). When the drone hovers in position $W=(x,y,z)$, it may not be in $W$, but rather in position $\widetilde{W}$, due to the GPS \revision{accuracy} or bad weather conditions (see Figure~\ref{fig:ranging_precision_gamma_d}). \begin{figure}[htbp] \centering \def0.5{0.8} \input{figures/ground_error_rolling_over.pdf_tex} \vspace{-0.05in} \caption{The rolling \revision{accuracy} $\gamma_d>0$ and ground error.} \label{fig:ranging_precision_gamma_d} \end{figure} To better define the rolling error, we set a 3D Cartesian coordinate system whose origin is the projection $W'=(0,0,0)$ on the ground of the exact drone's position $W$, whose $x$-axis passes through the object to measure $P$, and whose $z$-axis passes through $W$. Thus, $W=(0,0,h)$ and $P=(d,0,0)$. Then, let the actual drone's position be $\widetilde{W}=(e_x,e_y,h)$, with $-\gamma_d \le e_x, e_y \le \gamma_d$, where $\gamma_d$ is the rolling \revision{accuracy}. Obviously, $\widetilde{W}'=(e_x,e_y,0)$ is the projection of $\widetilde{W}$ on the ground, which is inside a circle of radius $\gamma_d$ centered at the origin $W'$. For each point of the circle, it holds that $e_x = \gamma_d \cos(\psi)$ and $e_y = \gamma_d \sin(\psi)$, where $\psi=\angle{\widetilde{W}'W'P}$ and $0 \le \psi \le 2\pi$. The measured slant distance $s'$ between $\widetilde{W}$ and $P$ is given by: \begin{align*} s'&=\sqrt{h^2 + (d-e_x)^2 + e_y^2} \\ &=\sqrt{h^2 + (d-\gamma_d \cos(\psi))^2 + (\gamma_d \sin(\psi))^2} \\ &= \sqrt{h^2 + d^2 - 2d\gamma_d\cos(\psi) + \gamma_d^2 \cos^2(\psi) + \gamma_d^2 \sin^2(\psi) } \\ &= \sqrt{h^2 + d^2 - 2d\gamma_d\cos(\psi) + \gamma_d^2 } \end{align*} Recalling that $h>0$, $d>0$, $\gamma_d \geq 0$, and $0 \le \psi \le 2\pi$, we note that $s'$ is maximum when the term $- 2d\gamma_d\cos(\psi)$ is maximum, i.e., when $\cos(\psi)=-1$. The slant error is then: $$E_s = s'-s= \sqrt{h^2 + d^2 - 2d\gamma_d\cos(\psi) + \gamma_d^2}-\sqrt{h^2+d^2}$$ whose absolute value is maximum when $\cos(\psi)=-1$. In order to project $E_s$ on the ground, we repeat the same construction as in Section~\ref{ss:ic}. We draw a circumference of radius $s'$ centered at the waypoint $W$, which intersects the line through $W$ and $P$ at $Q$. The tangent in $Q$ intersects the ground in the estimated position $P'$.
Applying elementary trigonometry to the right-angled triangle $PQP'$, whose angle $\angle{QPP'}$ is equal to the elevation angle $\alpha$, we obtain \begin{align} E_d(\gamma_d) &= \frac{E_s(\gamma_d)}{\cos(\alpha)} = \frac{\left| s'- s \right|}{\cos(\alpha)} = \frac{|(s')^2 - (s)^2|}{\cos(\alpha)\left( s' + s \right)} \nonumber \\ &= \frac{|\gamma_d^2 - 2d\gamma_d \cos(\psi)|\sqrt{h^2 + d^2}}{\left( s' + s \right)d} = \frac{|\gamma_d^2 - 2d\gamma_d \cos(\psi)|s}{\left( s' + s \right)d} \nonumber \end{align} When $s'>s$ (i.e., $\frac{\pi}{2} < \psi < \frac{3\pi}{2}$), that is, when the drone rolls away from the object, it holds that: \begin{align} \label{eq:ground_distance_gamma_d_roll_away} E_d(\gamma_d) &\le \frac{\gamma_d^2 + 2d\gamma_d}{2 d} = \gamma_d + \frac{\gamma_d^2}{2d} \nonumber \\ \intertext{and assuming $\gamma_d \ll d$} E_d(\gamma_d) &\lesssim \gamma_d \end{align} When $s'< s$ (i.e., $0 \le \psi \le \frac{\pi}{2}$ or $\frac{3\pi}{2} \le \psi \le 2\pi$), that is, when the drone rolls towards the object, since $s+s'> s$, we obtain a weaker bound: \begin{align*} E_d(\gamma_d) & < \frac{|\gamma_d^2 - 2d\gamma_d|}{d} \\ & < 2\gamma_d \end{align*} Now, if $\gamma_d \ll d$ holds, $\frac{s}{s'} \rightarrow 1$. Since $s+s' \ge 2s'$, we have: \begin{align} \label{eq:ground_distance_gamma_d_bound_strict} E_d(\gamma_d) & < \frac{|\gamma_d^2 - 2d\gamma_d|}{2d}\frac{s}{s'} \nonumber \\ & < \gamma_d\frac{s}{ s'} \rightarrow \gamma_d \end{align} We will see in our experiments that indeed the stricter bound in Eq.~\eqref{eq:ground_distance_gamma_d_bound_strict} well approximates the rolling error even when the drone rolls close to the GD\xspace. \vspace{-0.05in} \subsubsection{Altitude error} In this section, we only consider the altitude error (i.e., $\gamma_d=\epsilon_s=0$). When the drone is subject to an uplift (resp., downfall), the measured slant distance $s'$ is overestimated (resp., underestimated). The overestimate case is illustrated in Figure~\ref{fig:gamma_h_error}. \begin{figure}[htbp] \centering \def0.5{0.8} \input{figures/ground_error_altitude_over.pdf_tex} \vspace{-0.05in} \caption{The altitude \revision{accuracy} $\gamma_h> 0$ and ground error.} \label{fig:gamma_h_error} \end{figure} In the uplift case, the measured slant distance $s'$ between $\widetilde{W}$ and $P$ is: $s' = \sqrt{(h+\gamma_h)^2 + d^2}$. Recalling that $h>0$, $d>0$, and $\gamma_h \geq 0$, the slant error is: $$E_s = s'-s= \sqrt{(h+\gamma_h)^2 + d^2} - \sqrt{h^2+d^2}$$ Moreover, \vspace{-0.1in} \begin{align} E_d(\gamma_h) &= \frac{E_s(\gamma_h)}{\cos(\alpha)} = \frac{\left| s'- s \right|}{\cos(\alpha)} = \frac{|\gamma_h^2 + 2h\gamma_h|s}{\left( s' + s \right)d} \nonumber \end{align} Repeating calculations similar to those above, and assuming that the altitude \revision{accuracy} $\gamma_h$ is very small with respect to $h$, and that $\frac{s}{s'} \rightarrow 1$, we find that the ground error can be approximated as: \vspace{-0.1in} \begin{equation}\label{eq:ground_distance_gamma_h_bound} E_d(\gamma_h) \approx \gamma_h \frac{h}{d} \end{equation} \subsubsection{Overall ground error} From the previous discussions we can estimate the overall ground error as stated by the following: \begin{fact} Let $\epsilon_s$, $\gamma_d$, and $\gamma_h$ be, respectively, the instrumental \revision{accuracy}, rolling \revision{accuracy}, and altitude \revision{accuracy} that may affect the slant measurement.
\subsubsection{Overall ground error} From the previous discussions we can estimate the overall ground error as stated by the following: \begin{fact} Let $\epsilon_s$, $\gamma_d$, and $\gamma_h$ be respectively the instrumental \revision{accuracy}, rolling \revision{accuracy}, and altitude \revision{accuracy} that may affect the slant measurement. By projecting the slant distance on the ground, the largest error $E_d$, given the ground distance $d$, is: \vspace{-0.1in} \begin{equation}\label{eq:Ed} E_{d}(\gamma_d, \gamma_h, \epsilon_s) \approx \gamma_d + \frac{h}{d}\gamma_h + \epsilon_s\sqrt{1+\frac{h^2}{d^2}} \end{equation} \end{fact} Analyzing Eq.~\eqref{eq:Ed}, it is clear that the ground error is very large when $d$ is very small. As $d$ increases, the impact of both the instrumental and altitude \revision{accuracies} decreases, but $E_d$ can never be smaller than the rolling \revision{accuracy} $\gamma_d$. In conclusion, the ground error can be bounded by adding a constraint on the minimum ground distance (\ensuremath{d\textsubscript{min}}\xspace) between the drone and the GD\xspace. If the drone ensures that $d \ge \ensuremath{d\textsubscript{min}}\xspace$, then the {\em ground \revision{accuracy}} $\epsilon_d$, i.e., the maximum error on the ground distance, is bounded by: \begin{equation}\label{eq:ground_distance_combined_bound_def} \epsilon_d = \epsilon_d(\gamma_d, \gamma_h, \epsilon_s) \approx \gamma_d + \frac{h}{\ensuremath{d\textsubscript{min}}\xspace}\gamma_h + \epsilon_s\sqrt{1+\frac{h^2}{\ensuremath{d\textsubscript{min}}\xspace^2}} \end{equation} \revision{Our first takeaway is that the ground \revision{accuracy} $\epsilon_d$ can be controlled by monitoring the ratio $h/d$ between altitude and ground distance. } \vspace{5pt} \revision{\paragraph{A2G links and ground error} In this paragraph, we explain how the quality of the A2G communication link between GD\xspace and MA\xspace impacts our results. According to the model in~\cite{al2014optimal}, each A2G link has a certain probability $P(\text{LoS})$ to be in LoS and $P(\text{NLoS})$ to be in NLoS. $P(\text{LoS})$ depends on the elevation angle $\alpha$ between drone and GD\xspace and on the environment type, i.e., sub-urban, urban, dense, and highrise. Clearly, in crowded environments, links are more likely to be a mix of LoS and NLoS. \begin{table}[ht] \vspace{-0.1in} \renewcommand{\arraystretch}{1.15} \caption{The line of sight probabilities $P(\text{LoS})$ in different environments~\cite{al2014optimal}.} \label{tab:lap} \vspace{-0.1in} \centering \begin{tabular}{cc|cccc} \hline $h/\ensuremath{d\textsubscript{min}}\xspace$ & $\alpha$ & sub-urban & urban & dense & highrise \\ \hline $5.67$ & $80^{\circ}$ & $100\%$ & $100\%$ & $100\%$ &$100\%$ \\ $\sqrt{3}$ & $60^{\circ}$ & $100\%$ & $100\%$ & $100\%$ &$60\%$ \\ $1$ & $45^{\circ}$ & $100\%$ & $97\%$ & $85\%$ & $30\%$ \\ ${1}/{2}$ & $26.5^{\circ}$ & $100\%$ & $75\%$ & $30\%$ & $5\%$ \\ ${1}/{3}$ & $20^{\circ}$ & $100\%$ & $40\%$ & $20\%$ & $\to 0\%$ \\ \hline \end{tabular} \vspace{-0.05in} \end{table} The UWB distance measurements are possible as long as the antennas keep an A2G link. Up to $35 \unit{m}$, even if the elevation angle is small, communications can be established since UWB works in both LoS and NLoS~\cite{www-Deca-ieee}. Beyond $35 \unit{m}$, UWB works only in LoS, and hence LoS links must be guaranteed. Suitable values for $h/\ensuremath{d\textsubscript{min}}\xspace$ must therefore be selected such that the elevation angle $\alpha=\arctan(h/\ensuremath{d\textsubscript{min}}\xspace)$ gives LoS links with high probability; a coarse lookup based on Table~\ref{tab:lap} is sketched below.}
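The following Python sketch (our own helper; the values are exactly those of Table~\ref{tab:lap}, without interpolation or the underlying analytical model of~\cite{al2014optimal}, and with the table's ``$\to 0\%$'' taken as $0$) returns the tabulated $P(\text{LoS})$ for a given altitude, ground distance, and environment:

\begin{verbatim}
P_LOS = {  # h/d ratio -> (sub-urban, urban, dense, highrise)
    5.67:  (1.00, 1.00, 1.00, 1.00),
    1.732: (1.00, 1.00, 1.00, 0.60),
    1.0:   (1.00, 0.97, 0.85, 0.30),
    0.5:   (1.00, 0.75, 0.30, 0.05),
    1/3:   (1.00, 0.40, 0.20, 0.00),
}
ENVS = ("sub-urban", "urban", "dense", "highrise")

def p_los(h, d, env):
    """Tabulated P(LoS) at the largest tabulated ratio not exceeding h/d."""
    ratio = h / d
    feasible = [r for r in P_LOS if r <= ratio]
    if not feasible:          # below 1/3: links may be mixed
        return None
    return P_LOS[max(feasible)][ENVS.index(env)]

print(p_los(h=30, d=60, env="urban"))   # ratio 0.5 -> 0.75
\end{verbatim}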
\revision{For example, as reported in Table~\ref{tab:lap}, in a sub-urban environment the ratio $h/\ensuremath{d\textsubscript{min}}\xspace=1/3$ suffices, because the A2G link is in LoS with probability $100\%$ whenever $\alpha \ge 20^{\circ}$. In an urban environment, links are $100\%$ in LoS when $h/\ensuremath{d\textsubscript{min}}\xspace \ge 1/2$ ($\alpha \ge 26.5^{\circ}$). Similarly, in a highrise environment the minimum ratio for $100\%$ LoS links is $h/\ensuremath{d\textsubscript{min}}\xspace=5.67$. Note that for $h/\ensuremath{d\textsubscript{min}}\xspace < 1/3$ ($\alpha < 20^{\circ}$) links could be mixed, and hence UWB ranging may or may not work. Recall that the ground accuracy in Eq.~\eqref{eq:ground_distance_combined_bound_def} can be reduced by selecting a small $h/\ensuremath{d\textsubscript{min}}\xspace$; keeping in mind the maximum UWB NLoS range of $35 \unit{m}$, however, $h/\ensuremath{d\textsubscript{min}}\xspace$ cannot be freely chosen. In our experiments, since we work in a sub-urban, obstacle-free, and flat environment, any ratio $h/\ensuremath{d\textsubscript{min}}\xspace \ge 1/3$ is sufficient to be in LoS. } \vspace{-0.1in} \section{Localization Error for Trilateration Based Algorithms}\label{sec:loc-error} Once a GD\xspace has collected a suitable number of distance measurements from the MA\xspace, it can be localized by invoking any localization algorithm. A very common approach for localization is {\em trilateration}. In Section~\ref{sec:error-trilateration} we analytically derive the {\em localization trilateration error} $E^T_L$ and the {\em localization trilateration \revision{accuracy}} $\epsilon^T_L$, which are incurred by any algorithm based on this method. Subsequently, in Section~\ref{sec:omni-scan-algs}, we discuss the trilateration based algorithms that are considered in our experiments. \begin{figure*}[ht] \centering \subfloat[Linearization of each measurement.]{% \def0.5{0.85} \input{figures/trilateration_error.pdf_tex} \label{fig:trilateration_error} } \subfloat[Same signs estimation.]{% \def0.5{0.85} \input{figures/trilateration_error_sin.pdf_tex} \label{fig:trilateration_error_sin} } \subfloat[Different signs estimation.]{% \def0.5{0.85} \input{figures/trilateration_error_cos.pdf_tex} \label{fig:trilateration_error_cos} } \caption{Trilateration error with different conditions.} \label{fig:trilateration_error_proof} \vspace{-0.1in} \end{figure*} \vspace{-0.05in} \subsection{Trilateration Error}\label{sec:error-trilateration} This section discusses the localization error $E^T_L$ that may affect the estimated GD\xspace's position when the trilateration procedure is applied. Let us briefly recall that the trilateration procedure for estimating the position of the object $P$ takes as input three ground distances $d_1$, $d_2$, and $d_3$ of $P$ from three waypoints $W_1$, $W_2$, and $W_3$, respectively. The procedure returns, as the estimated position of the GD\xspace, the intersection of the three circumferences of radii $d_1$, $d_2$, and $d_3$ centered at the projections $W'_1$, $W'_2$, and $W'_3$ of the waypoints. Due to the ground errors, however, the three circumferences do not intersect at a single point, but delimit a small {\em star} area, as depicted in Figure~\ref{fig:trilateration_error}. In fact, a pair of extreme circumferences is drawn in place of each circumference of radius $d_i$: one whose radius is affected by the maximum positive error ($d_i + E_d$, measurement overestimation) and one whose radius is affected by the maximum negative error ($d_i - E_d$, measurement underestimation).
Assuming that all the ground distances are sufficiently large compared to the ground error, these extreme circumferences can be linearized (i.e., replaced by the tangent perpendicular to the radius) without significantly changing the area. Each non-parallel pair of linearized circumferences intersects at a single point, forming overall $12$ points that correspond to the vertices of the star shape. Note that $P$ is at the center of the star. The trilateration procedure returns as the estimated position, instead of the exact intersection $P$, a point $P'$ in the star. The point $P'$ is selected by means of the least-squares-error method. In fact, given three ground measurements, the estimated position of $P$ is the point $(x_P,y_P)$ that minimizes the sum of the squared residuals, i.e.: \vspace{-0.05in} \begin{equation}\label{eq:trilateration} \begin{array}{r@{}r@{}r@{}l} \text{min} \quad \delta^2_1 + \delta^2_2 + \delta^2_3 \\[\jot] \text{s.t.}\qquad \sqrt{(x_{W'_i}-x_P)^2+(y_{W'_i}-y_P)^2} &{} +\delta_i=\overline{W'_i P} \\ \multicolumn{4}{c}{ \hspace{4.9cm} \textrm{for} \quad i=1,2,3.} \end{array} \vspace{-0.05in} \end{equation} The largest value of the positioning error, i.e., $\overline{PP'}$, called the {\em localization trilateration error} $E^T_L$, or simply {\em trilateration error}, occurs when the estimated position $P'$ is at the furthest vertex of the star shape. In other words, the positioning error is bounded by the distance between the center of the star $P$ (i.e., the actual position of the GD\xspace) and its farthest vertex. As an example, in Figure~\ref{fig:trilateration_error_sin}, the distance between the actual point $P$ and the estimated point $P'$ at the intersection of two measurement underestimations $d_2 (-)$ and $d_3 (-)$ is $\frac{E_d}{\cos(\beta/2)}$, where $\beta$ is one of the three different angles in which the turn angle at $P$ is divided by the lines $\overline{W'_1P}$, $\overline{W'_2P}$, and $\overline{W'_3P}$ (see Figure~\ref{fig:angular_aperture}). In Figure~\ref{fig:trilateration_error_cos}, the distance between $P$ and $P'$ that results from the measurement overestimation $d_1 (+)$ and the measurement underestimation $d_3 (-)$ is depicted. In this case, the distance is $\overline{PP'}=\frac{E_d}{\sin(\overline{\beta}/2)}$. In general, for each vertex of the star, depending on the signs of the estimations ($+$ overestimation, $-$ underestimation) of the corresponding pair of circumferences, we have: $\frac{E_d}{\cos(\beta_i/2)}$ if the signs are the same; and $\frac{E_d}{\sin(\beta_i/2)}$ if the signs are different, where $\beta_1 \le \beta_2 \le \beta_3$ are the three different angles formed at $P$ such that $\sum_i \beta_i = \pi$. In the following, we prove that the farthest vertex occurs when the measurement estimations have different signs and the angle is minimum. \begin{lemma}[\cite{bettisorbelli2018accuracy}] Let $\ensuremath{\beta\textsubscript{min}}\xspace = \min_{i}\{\beta_i\}$, $\ensuremath{\beta\textsubscript{max}}\xspace = \max_{i}\{\beta_i\}$ and $\sum_i \beta_i = \pi$. Then $\sin ( \frac{\ensuremath{\beta\textsubscript{min}}\xspace}{2} ) \leq \cos ( \frac{\ensuremath{\beta\textsubscript{max}}\xspace}{2} )$. \end{lemma} \begin{IEEEproof} Let $\ensuremath{\beta\textsubscript{min}}\xspace = \min_{i}\{\beta_i\}$ and $\ensuremath{\beta\textsubscript{max}}\xspace = \max_{i}\{\beta_i\}$.
Then, we have: $\ensuremath{\beta\textsubscript{max}}\xspace \leq \pi - 2\ensuremath{\beta\textsubscript{min}}\xspace \Rightarrow \frac{\ensuremath{\beta\textsubscript{max}}\xspace}{2} \leq \frac{\pi}{2} - \ensuremath{\beta\textsubscript{min}}\xspace$, from which $\cos ( \frac{\pi}{2} -\ensuremath{\beta\textsubscript{min}}\xspace ) \leq \cos ( \frac{\ensuremath{\beta\textsubscript{max}}\xspace}{2} )$, and thus $\sin(\ensuremath{\beta\textsubscript{min}}\xspace) \leq \cos ( \frac{\ensuremath{\beta\textsubscript{max}}\xspace}{2} )$. \noindent Since $0 \leq \ensuremath{\beta\textsubscript{min}}\xspace \leq \pi/3$, we obtain: \begin{gather} \sin \left( \frac{\ensuremath{\beta\textsubscript{min}}\xspace}{2} \right) \leq \sin(\ensuremath{\beta\textsubscript{min}}\xspace) \leq \cos \left( \frac{\ensuremath{\beta\textsubscript{max}}\xspace}{2} \right) \nonumber \end{gather} Thus, the furthest vertex is at distance $\frac{E_d}{\sin (\frac{\ensuremath{\beta\textsubscript{min}}\xspace}{2})}$ from $P$. \end{IEEEproof} \begin{theorem}[\cite{bettisorbelli2018accuracy}] Given the \revision{accuracies} $\epsilon_s$, $\gamma_d$, and $\gamma_h$, given \ensuremath{d\textsubscript{min}}\xspace, and recalling that $\epsilon_{d}(\gamma_d, \gamma_h, \epsilon_s) \approx |\gamma_d| + \frac{h }{\ensuremath{d\textsubscript{min}}\xspace}|\gamma_h| + |\epsilon_s|\sqrt{1+\frac{h^2}{\ensuremath{d\textsubscript{min}}\xspace^2}}$, the localization trilateration \revision{accuracy}, defined as the maximum trilateration error, is obtained as: \begin{equation} \label{eq:eps_l} \epsilon^T_L(\gamma_d, \gamma_h, \epsilon_s) = \frac{\epsilon_d(\gamma_d, \gamma_h, \epsilon_s) }{\sin \left(\frac{\ensuremath{\beta\textsubscript{min}}\xspace}{2}\right)} \end{equation} \end{theorem} Therefore, from Eq.~\eqref{eq:eps_l}, we learn that, given a certain ground error, the localization error is minimized when $\ensuremath{\beta\textsubscript{min}}\xspace \rightarrow \frac{\pi}{3}=60^{\circ}$. Figure~\ref{fig:precision_vs_angle_distance} reports an example of the trilateration error ${E^T_L}$ computed by varying both $d$ and $\ensuremath{\beta\textsubscript{min}}\xspace$, and assuming only the instrumental error, i.e., $\gamma_d=\gamma_h=0 \unit{m}$ and $\epsilon_s = 0.10 \unit{m}$. As expected, when both $d$ and $\ensuremath{\beta\textsubscript{min}}\xspace$ tend to $0$, ${E^T_L}$ grows quickly. \begin{figure}[htbp] \vspace{-0.1in} \centering \subfloat[The three angles $\beta_i$.]{% \def0.5{0.65} \input{figures/angular_aperture.pdf_tex} \label{fig:angular_aperture} } \subfloat[$E^T_L$.]{% \includegraphics[scale=0.775]{tikz/precision_vs_angle_distance} \label{fig:precision_vs_angle_distance} } \caption{The trilateration error $E^T_L$: when $d$ and $\ensuremath{\beta\textsubscript{min}}\xspace$ are very small, the error is extremely high.} \label{fig:angles} \end{figure} \revision{Analyzing Eq.~\eqref{eq:eps_l}, it is clear that the localization trilateration \revision{accuracy} $\epsilon^T_L$ can be bounded by keeping the minimum angle \ensuremath{\beta\textsubscript{min}}\xspace as large as possible, i.e., close to $60^{\circ}$.}
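For concreteness, Eq.~\eqref{eq:ground_distance_combined_bound_def} and Eq.~\eqref{eq:eps_l} are straightforward to evaluate. The following Python sketch (the helper names and the sample values are ours, for illustration only) computes the ground accuracy and the resulting localization trilateration accuracy:

\begin{verbatim}
import math

def ground_accuracy(h, d_min, eps_s, gamma_d, gamma_h):
    """Ground accuracy eps_d, Eq. (eq:ground_distance_combined_bound_def)."""
    ratio = h / d_min
    return gamma_d + ratio * gamma_h + eps_s * math.sqrt(1 + ratio**2)

def trilateration_accuracy(h, d_min, eps_s, gamma_d, gamma_h, beta_min):
    """Localization trilateration accuracy eps_L^T, Eq. (eq:eps_l)."""
    eps_d = ground_accuracy(h, d_min, eps_s, gamma_d, gamma_h)
    return eps_d / math.sin(beta_min / 2)

# eps_s = 0.10 m (UWB), gamma_d = 0.8 m, gamma_h = 0.15 m, best geometry.
print(trilateration_accuracy(h=20, d_min=40, eps_s=0.10, gamma_d=0.8,
                             gamma_h=0.15, beta_min=math.radians(60)))
# ~1.97 m
\end{verbatim}

Note that, since $\sin(30^{\circ})=1/2$, even with the best geometry the localization accuracy is exactly twice the ground accuracy.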
\revision{Our second takeaway is that a good localization accuracy in trilateration methods can be obtained by keeping the ratio $h/\ensuremath{d\textsubscript{min}}\xspace$, and hence the elevation angle $\alpha$, as small as the LoS communication conditions allow, and by making the minimum angle \ensuremath{\beta\textsubscript{min}}\xspace as close as possible to $60^{\circ}$.} \vspace{-0.05in} \subsection{\textsc{Omni}\xspace and \textsc{Scan}\xspace Localization Algorithms}\label{sec:omni-scan-algs} In this section, we review the trilateration based algorithms \textsc{Omni}\xspace and \textsc{Scan}\xspace considered in our experiments. Based on the above discussion, the localization error for these algorithms is bounded by $\epsilon^T_L$, which is a function of the \revision{accuracies} $\epsilon_s$, $\gamma_d$, and $\gamma_h$, the minimum angle \ensuremath{\beta\textsubscript{min}}\xspace, the altitude $h$, and the minimum distance \ensuremath{d\textsubscript{min}}\xspace. Both algorithms are based on a static path \ensuremath{\Pi}\xspace formed by a series of vertical lines (each called a {\em vertical scan}) connected by horizontal lines. \begin{figure*}[htbp] \centering \small \subfloat[\textsc{Drf}\xspace.]{% \def0.5{0.50} \input{figures/model-drf.pdf_tex} \label{fig:model-drf} } \subfloat[\textsc{IoC}\xspace.]{% \def0.5{0.50} \input{figures/model-xiao.pdf_tex} \label{fig:model-xiao} } \subfloat[\textsc{IoA}\xspace.]{% \def0.5{0.50} \input{figures/model-lee.pdf_tex} \label{fig:model-lee} } \caption{The \textsc{Drf}\xspace, \textsc{IoC}\xspace, and \textsc{IoA}\xspace localization algorithms. In \textsc{IoC}\xspace and \textsc{IoA}\xspace there are two symmetric intersection areas: a third point is required to find and disambiguate the correct intersection area.} \label{fig:intersection_areas} \vspace{-0.1in} \end{figure*} \paragraph{The \textsc{Scan}\xspace Algorithm} \textsc{Scan}\xspace~\cite{koutsonikolas2007path} is one of the first range-based localization algorithms designed for rovers. Each GD\xspace is localized by trilateration using three waypoints. The main drawback is the collinearity of the points used in the estimation phase. Since we wish to avoid such undesirable conditions, in our experiments we perform a single trilateration selecting three non-collinear waypoints from at least two distinct vertical scans. Even in this slightly improved version of \textsc{Scan}\xspace, however, the \ensuremath{\beta\textsubscript{min}}\xspace and \ensuremath{d\textsubscript{min}}\xspace constraints may not be satisfied, possibly resulting in large localization errors. \paragraph{The \textsc{Omni}\xspace Algorithm} \textsc{Omni}\xspace~\cite{bettisorbelli2018range} is the first range-based localization algorithm that takes into account the impact of the drone's altitude on the measurement \revision{accuracy} and on the geometry of the waypoints from which trilateration is performed. It logically tessellates the deployment area into diamonds. Then, each GD\xspace, once it has acquired a sufficient number of waypoints/distances from the drone, performs two trilaterations. The first trilateration is invoked using any three non-collinear waypoints in order to compute the logical diamond in which the GD\xspace resides. Since each diamond is associated with an optimal triple of waypoints which satisfies the minimum angle/distance constraints~\cite{bettisorbelli2018range}, any GD\xspace belonging to such a diamond is finally trilaterated a second time using that triple.
In conclusion, \textsc{Omni}\xspace has been proved to be highly accurate, but it requires two trilaterations, as opposed to the single trilateration performed by \textsc{Scan}\xspace. \vspace{-0.1in} \section{Other Localization Algorithms}\label{sec:algorithms} In this section, we describe four more localization algorithms, namely \textsc{Drf}\xspace, \textsc{IoA}\xspace, \textsc{IoC}\xspace, and \textsc{Drb-C}\xspace, which are not based on trilateration. The first three are {\em range-free}; their localization \revision{accuracy} therefore depends on the quality of the antenna radiation pattern. Recently, Betti et al.~\cite{bettisorbelli2019ground} experimentally showed the poor \revision{accuracy} of \textsc{Drf}\xspace using relatively inexpensive hardware. In this paper, motivated by these results, we extend these algorithms by considering distance measurements to improve their localization \revision{accuracy}. Specifically, as also detailed in the experiments in Section~\ref{sec:ev}, the GD\xspace stores, for each waypoint that it hears, the relative distance between itself and that waypoint. Exploiting this information, we reformulate all the range-free techniques, making them effectively range-based. In this way, we aim to overcome the poor localization \revision{accuracy} resulting from the low quality of the radio antenna, while keeping the original localization procedures. In the following, for each of these extended algorithms, as well as for \textsc{Drb-C}\xspace, we identify the sources of the localization error. However, we do not derive any analytical expression of $E_L$, since the analysis would involve too many variables to be expressed in a closed formula. Nevertheless, we study their error through real experiments in Section~\ref{sec:ev}. \paragraph{The \textsc{Drf}\xspace Algorithm} \textsc{Drf}\xspace~\cite{bettisorbelli2019rangefree} is a lightweight range-free radius-free algorithm designed for drones. This algorithm is based on the notion of {\em chord}. In general, the perpendicular bisector of any chord passes through the center $O$ of the circle; hence, the bisectors of two non-parallel chords intersect at $O$. In Figure~\ref{fig:model-drf}, two chords are identified by the pairs $A_1A_2$ and $A_2B_1$. Theoretically, the circle is the boundary of the GD\xspace's receiving disk, which is centered at the GD\xspace's position $O$. Accordingly, the GD\xspace starts to estimate its position when it detects two chords. The two chords are detected using the HnH technique~\cite{bettisorbelli2020rangefree} on each scan. The detection of chords incurs several problems that eventually affect the localization accuracy. First, recalling that the MA\xspace regularly broadcasts its current position (waypoint) at discrete intervals of time and that two consecutive waypoints are at the {\em inter-waypoint} distance \ensuremath{I_w}\xspace, the endpoints of the chords may not exactly fall on the circumference of the receiving disk, even if the receiving disk is a perfect circle (e.g., $A_2$ and $A_3$ in Figure~\ref{fig:model-drf}). Moreover, the chords can be improperly defined if the antenna pattern has ``holes'' and ``bubbles'', as experienced in the field and reported in~\cite{bettisorbelli2020rangefree}.
\underline{{\em Range-based extension}:} Exploiting the fact that our testbed kit allows us to take distance measurements, the chords can be chosen by selecting three waypoints at a certain fixed distance $d$ from the GD\xspace, relaxing the range-free constraint. In this way, with three waypoints on the same circumference of radius $d$, two chords can be derived. Accordingly, we obtain a localization error which depends only on the length of \ensuremath{I_w}\xspace and on the error $E_d$. A more detailed explanation of the original version of \textsc{Drf}\xspace can be found in~\cite{bettisorbelli2019ground}. \paragraph{The \textsc{IoC}\xspace Algorithm} \textsc{IoC}\xspace~\cite{xiao2008distributed} is a range-free radius-based localization algorithm initially developed for ground MAs\xspace. Like \textsc{Drf}\xspace, the \textsc{IoC}\xspace algorithm exploits the HnH method in order to detect special points used for building a constraint area that bounds the GD\xspace's position. However, differently from \textsc{Drf}\xspace, \textsc{IoC}\xspace also relies on the value of the communication radius $d$. Initially, the GD\xspace detects the pair of endpoints ($A_1$ and $A_2$, in Figure~\ref{fig:model-xiao}) using the HnH method. Subsequently, two more points called pre-arrival and post-departure ($A_0$ and $A_3$, respectively) are determined using the value of \ensuremath{I_w}\xspace, since the MA\xspace sends its current position at discrete intervals. Note that these four points lie on the same straight line. Then, four circles of radius $d$ centered at each of these four points are drawn. Those circles create two symmetrical intersection areas where the GD\xspace may reside. In order to select the correct intersection area, the GD\xspace needs to detect a third point. Finally, the GD\xspace is localized at the ``center'' of the correct intersection area. This definition of center slightly varies depending on the shape of the intersection area, which may have from four to five vertices. \underline{{\em Range-based extension}:} As for \textsc{Drf}\xspace, also in \textsc{IoC}\xspace we exploit the distance measurements for computing all the required points. That is, we select the two waypoints on the same line at distance $d$ from the GD\xspace as $A_1$ and $A_2$, and the preceding and subsequent waypoints as $A_0$ and $A_3$. \paragraph{The \textsc{IoA}\xspace Algorithm} \textsc{IoA}\xspace~\cite{lee2009localization} is a range-free radius-based algorithm very similar to \textsc{IoC}\xspace. Indeed, it builds a similar constrained area using the HnH method and the knowledge of both $d$ and \ensuremath{I_w}\xspace. Once the GD\xspace has detected the two extreme endpoints ($A_1$ and $A_2$, in Figure~\ref{fig:model-lee}), it traces two circles of radius $d$ and $d-\ensuremath{I_w}\xspace$ centered at each of the two points. These circles, which create two annuli, intersect in two distinct and symmetrical intersection areas, so also in this case a third point is required. Finally, the GD\xspace estimates its position at the center of such an area, using simple geometrical rules. \underline{{\em Range-based extension}:} As for the previous algorithms, we select the two extreme endpoints $A_1$ and $A_2$ as two waypoints at distance $d$ from the GD\xspace.
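Both \textsc{IoC}\xspace and \textsc{IoA}\xspace, as well as the \textsc{Drb-C}\xspace algorithm described next, ultimately build on intersecting two circles of known centers and radii, plus a third point for disambiguation. A minimal sketch of this geometric primitive (our own helper, not the algorithms' actual implementation) is:

\begin{verbatim}
import math

def circle_intersections(c1, r1, c2, r2):
    """Return the two intersection points of two circles, or None."""
    (x1, y1), (x2, y2) = c1, c2
    dx, dy = x2 - x1, y2 - y1
    dist = math.hypot(dx, dy)
    if dist == 0 or dist > r1 + r2 or dist < abs(r1 - r2):
        return None                    # no (or degenerate) intersections
    a = (r1**2 - r2**2 + dist**2) / (2 * dist)  # distance from c1 to chord
    hc = math.sqrt(max(r1**2 - a**2, 0.0))      # half-length of the chord
    mx, my = x1 + a * dx / dist, y1 + a * dy / dist
    return ((mx + hc * dy / dist, my - hc * dx / dist),
            (mx - hc * dy / dist, my + hc * dx / dist))

# Two waypoints at measured ground distances 30 m and 40 m (illustrative).
print(circle_intersections((0, 0), 30.0, (50, 0), 40.0))
# -> (18, -24) and (18, 24); a third waypoint selects the correct one.
\end{verbatim}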
\vspace{-0.05in} \paragraph{The \textsc{Drb-C}\xspace Algorithm} \textsc{Drb-C}\xspace~\cite{bettisorbelli2019ground} is a lightweight range-based technique designed for UAVs. The goal of the GD\xspace is to detect two waypoints at distances $d_1$ and $d_2$, and to draw two circumferences centered at these waypoints, of radius $d_1$ and $d_2$, respectively. The GD\xspace then knows it resides on one of the two intersection points of the two circumferences, and a third point is required to disambiguate the correct one. In conclusion, we can note that, differently from the trilateration based algorithms \textsc{Omni}\xspace and \textsc{Scan}\xspace in which the least-squares-error method is employed (see Eq.~\eqref{eq:trilateration}), \textsc{Drb-C}\xspace only demands a few algebraic calculations. Finally, Table~\ref{tab:algorithms_evaluation} summarizes the six algorithms that will be compared in our testbed. \begin{table}[ht] \vspace{-0.1in} \renewcommand{\arraystretch}{1.15} \caption{Summary of the compared algorithms.} \label{tab:algorithms_evaluation} \vspace{-0.05in} \centering \begin{tabular}{llcl} \hline name & method & points & error source \\ \hline \textsc{Omni}\xspace~\cite{bettisorbelli2018range} & trilaterations & $3+3$ & geometry \\ \textsc{Scan}\xspace~\cite{koutsonikolas2007path} & trilateration & $3$ & geometry \\ \textsc{Drb-C}\xspace~\cite{bettisorbelli2019ground} & circles intersection & $2+1$ & center \\ \textsc{Drf}\xspace~\cite{bettisorbelli2019rangefree} & bisector intersection & $3$ & chords \\ \textsc{IoC}\xspace~\cite{xiao2008distributed} & points ``center'' & $2+1$ & center \\ \textsc{IoA}\xspace~\cite{lee2009localization} & points ``center'' & $2+1$ & center \\ \hline \end{tabular} \vspace{-0.1in} \end{table} \vspace{-0.1in} \section{Evaluation on a Real Testbed}\label{sec:ev} In this section we present our experimental evaluation. \revision{Initially, in Section~\ref{sec:uwb-performance} we describe the hardware adopted for our testbed.} In Section~\ref{sec:1st-experiments}, we study the ground error $E_d$. In Section~\ref{sec:2nd-experiments}, we study the localization error ${E^T_L}$ of the trilateration method. Finally, in Section~\ref{sec:3rd-experiments}, we run a campaign of experiments with the goal of comparing the localization error of the different algorithms. \vspace{-0.05in} \subsection{\revision{Performance of UWB Antennas}}\label{sec:uwb-performance} The experiments in Sections~\ref{sec:1st-experiments} and~\ref{sec:2nd-experiments} are done using the DecaWave\xspace EVK1000 kit (see Figure~\ref{img:evk1000}), formed by two UWB antennas based on the DW1000 UWB chip~\cite{www-Deca-dwm1000}. For the experiments in Section~\ref{sec:3rd-experiments}, we rely on a larger set of twelve antennas, using the MDEK1001 kit (see Figure~\ref{img:dwm1001}), based on the same DW1000 UWB chip~\cite{www-Deca-dwm1001}. \revision{ According to DecaWave\xspace, those chips have a $6.5 \unit{GHz}$ center frequency, and a declared and reliable point-to-point range up to $60 \unit{m}$ LoS and $35 \unit{m}$ NLoS~\cite{www-Deca-dwm1001} in a typical use-case. Although the DW1000 chip transmitting power is set to $-41.3 \unit{dBm/MHz}$ and the typical receiver sensitivity is $-93 \unit{dBm/500 MHz}$~\cite{www-Deca-dwm1000}, the received power is also influenced by the antenna polarization. In both DecaWave\xspace kits the antennas are vertically polarized, meaning that the module is intended to be vertically positioned so that another vertically polarized antenna observes an omnidirectional radiation pattern in the azimuth plane~\cite{www-Deca-dwm1001}.
For this reason, following the recommendations provided by DecaWave\xspace in their datasheet, in our experiments we always placed our antennas vertically. The antenna on the drone is also positioned vertically, but upside down, keeping the transceiver at the bottom, to prevent the MA\xspace's body from becoming an obstacle between the GDs\xspace and the MA\xspace itself. \begin{figure}[htbp] \vspace{-0.2in} \centering \hfill \subfloat[EVK1000 kit.]{% \includegraphics[height=3.0cm]{images/evk1000.jpg} \label{img:evk1000} } \hfill \subfloat[MDEK1001 kit.]{% \includegraphics[height=2.25cm]{images/dwm1001.jpg} \label{img:dwm1001} } \hfill \subfloat[An antenna.]{% \includegraphics[height=3.0cm]{images/exp-antenna.jpg} \label{img:exp-antenna} } \vspace{-0.1in} \hfill \subfloat[The drone.]{% \includegraphics[height=3.0cm]{images/exp-drone.jpg} \label{img:exp-drone} } \vspace{-0.07in} \caption{The used DecaWave\xspace kits and the 3DR Solo drone.} \label{fig:kits} \vspace{-0.2in} \end{figure} } \vspace{-0.05in} \subsection{Experiments with Ground Error}\label{sec:1st-experiments} In this section we analyze the ground error employing the DecaWave\xspace EVK1000 kit. We start with pre-arranged antenna experiments in which two antennas are used: one reproduces the GD\xspace to localize, and the other the MA\xspace. The antenna that acts as GD\xspace is fixed on the ground, while the other one (MA\xspace), fixed on a pole, moves according to the specific experiment, emulating the rolling and altitude errors. \revision{We fixed the drone's position on the ground and that of the GD\xspace, measuring their distance with a Bosch digital laser~\cite{www-laser}. The drone's ground GPS position is then taken as the origin $W'=(0,0,0)$ of the local Cartesian coordinate system used during the experiments.} Then, in the subsequent experiments, we replace the pole with a drone hovering at a certain altitude. The goal is to understand how the drone impacts the measurement error. For each experiment, we record at least $30$ slant distances and determine the final computed value at the $95\%$ confidence level. \begin{figure*}[htbp] \centering \subfloat[Instrumental error.]{% \includegraphics[scale=0.75]{tikz/measurement_errors_instrumental} \label{fig:measurement_errors_instrumental} } \subfloat[Rolling error.]{% \includegraphics[scale=0.75]{tikz/measurement_errors_rolling} \label{fig:measurement_errors_rolling} } \subfloat[Altitude error.]{% \includegraphics[scale=0.75]{tikz/measurement_errors_altitude} \label{fig:measurement_errors_altitude} } \subfloat[Combined error.]{% \includegraphics[scale=0.75]{tikz/measurement_errors_combined} \label{fig:measurement_errors_combined} } \hfill \vspace{-0.1in} \subfloat[Experimental $\overline{E_d}$.]{% \includegraphics[scale=0.75]{tikz/measurement_errors_drone} \label{fig:measurement_errors_drone} } \subfloat[Experimental $\overline{E^T_L}$: $d$ varies.]{% \includegraphics[scale=0.75]{tikz/compare_measurements_same_angle} \label{fig:compare_measurements_same_angle} } \subfloat[Experimental $\overline{E^T_L}$: $\ensuremath{\beta\textsubscript{min}}\xspace$ varies.]{% \includegraphics[scale=0.75]{tikz/compare_measurements_same_distance} \label{fig:compare_measurements_same_distance} } \caption{The experimental error and the theoretical error in different cases.} \label{fig:compare_real_precision} \vspace{-0.1in} \end{figure*}
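The confidence values are computed in the standard way; a minimal sketch (our own helper, using a normal approximation and illustrative samples) is:

\begin{verbatim}
import math, statistics

def mean_ci95(samples):
    """Sample mean and 95% confidence half-width (normal approximation)."""
    m = statistics.mean(samples)
    half = 1.96 * statistics.stdev(samples) / math.sqrt(len(samples))
    return m, half

slants = [44.72, 44.81, 44.69, 44.75, 44.78]  # slant distances (m), illustrative
m, half = mean_ci95(slants)
print(f"{m:.2f} +/- {half:.2f} m")
\end{verbatim}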
In the first experiment, we measure the slant distance and compare its projection on the ground with the exact ground distance $d$. We then compute the {\em experimental ground error} $\overline{E_d}$ and compare it with the theoretical error $E_d$. To verify Eq.~\eqref{eq:ground_distance_instrumental}, Eq.~\eqref{eq:ground_distance_gamma_d_roll_away}, and Eq.~\eqref{eq:ground_distance_gamma_h_bound}, we have measured and reported in Figure~\ref{fig:measurement_errors_instrumental}, Figure~\ref{fig:measurement_errors_rolling}, and Figure~\ref{fig:measurement_errors_altitude} the experimental error $\overline{E_d}$ when the instrumental error, the rolling error, and the altitude error separately affect the measurements, respectively. We also report the theoretical ground error bound $E_d$. It is interesting to see that, in each plot, the theoretical error $E_d$ (solid line) almost always upper-bounds the experimental error $\overline{E_d}$ (dashed line). We also measured and reported in Figure~\ref{fig:measurement_errors_combined} the combined error, where all three components affect the error, along with the bound in Eq.~\eqref{eq:ground_distance_combined_bound_def}. The curves almost coincide. In the second experiment, we repeat the previous setting, this time employing a drone. In Figure~\ref{fig:measurement_errors_drone} we report the experimental $\overline{E_d}$ for different altitudes. Since the drone's position is affected by the wind, air density, humidity, the strength of the propellers, and the GPS error, we know that the slant distance is affected as well. Moreover, we know from Eq.~\eqref{eq:ground_distance_combined_bound_def} that the error $\overline{E_d}$ increases when $h$ increases and when $d$ tends to $0 \unit{m}$. In Figure~\ref{fig:measurement_errors_drone}, we also plot the theoretical error $E_d$ in solid lines, fixing $\epsilon_s=0.10 \unit{m}$ and using $\gamma_d=\{0.6, 0.8, 1.2\} \unit{m}$ and $\gamma_h=\{0.1, 0.15, 0.2\} \unit{m}$ for each $h=\{10, 20, 30\} \unit{m}$, respectively, values that empirically fit the experimental curves. Differently from the previous ones, this is the first experiment that approximates a real scenario. It is interesting to note that we can model the combined error curve even in a non-optimal scenario by tuning in advance the parameters of Eq.~\eqref{eq:ground_distance_combined_bound_def}, which provides a good approximation of the error. In conclusion, after this first campaign of experiments, we can confirm that the measurement error is small when either the ground distance between the drone and the GD\xspace is large or the altitude of the drone is low. \vspace{-0.1in} \subsection{Experiments on the Trilateration Error}\label{sec:2nd-experiments} In this section, we describe two more comparative experiments to better understand how the localization error is affected when the trilateration method is applied. From Eq.~\eqref{eq:eps_l}, it is clear that the localization error $E_L$ can be bounded if the three waypoints are sufficiently far from the GD\xspace. In other words, the three points must respect the good-geometry and minimum-distance constraints. \revision{In both experiments we use our 3DR Solo drone as the MA\xspace and place a single GD\xspace at $P=(0, 0, 0)$. Moreover, the drone's initial position $W'$ was set at the GD\xspace's position, i.e., $W' = P$.} In the first experiment, depicted in Figure~\ref{fig:compare_measurements_same_angle}, we plot the {\em experimental localization trilateration error} $\overline{E^T_L}$ between the estimated and the actual position of the GD\xspace.
Here, we fix the best possible minimum angle $\ensuremath{\beta\textsubscript{min}}\xspace=60^{\circ}$ and progressively decrease the ground distance $d$. For each value of $d$, we perform trilateration using three points which satisfy the optimal geometry. As expected, and according to Eq.~\eqref{eq:eps_l}, $\overline{E_L^T}$ is high when $d$ is short, even though the minimum angle is fixed at the best possible value $60^{\circ}$. In the second experiment, shown in Figure~\ref{fig:compare_measurements_same_distance}, we do the opposite by keeping a sufficiently large ground distance $d=40 \unit{m}$ and decreasing $\ensuremath{\beta\textsubscript{min}}\xspace$ to narrow values. Here too we perform trilateration and, according to Eq.~\eqref{eq:eps_l}, the error decreases when $\ensuremath{\beta\textsubscript{min}}\xspace$ increases. \vspace{-0.05in} \subsection{Comparison of Localization Algorithms}\label{sec:3rd-experiments} In this section, we describe the hardware and software architecture of the comparative testbed. The goal is to evaluate the performance of different localization algorithms in-field. In this testbed, we cannot use the previous EVK1000 kit since it is formed by only two antennas, which is definitely not sufficient for evaluating a real scenario in which we have to localize multiple GDs\xspace at once. Instead, we rely on the new MDEK1001 kit from DecaWave\xspace, which comprises a larger set of twelve antennas. In addition, the testbed includes a Raspberry Pi, the main component that auto-pilots the drone via Wi-Fi and sends UWB commands via a single UWB antenna physically connected to it by the serial peripheral interface (SPI). \vspace{-0.05in} \subsubsection{Testbed setup} We set a rectangular deployment area of size $100 \times 100 \unit{m^2}$, and fix a Cartesian coordinate system with origin at the special position {\sc{Home}}\xspace $(0, 0, h_0 = 1 \unit{m})$. Then, we deploy on the ground $n=10$ antennas, each placed at the top of a tripod of height $h_0$. Each antenna (Figure~\ref{img:exp-antenna}), identified by its own ID, is not aware of its position relative to {\sc{Home}}\xspace, even though we know its position in advance. \revision{In fact, as illustrated in Figure~\ref{fig:deployment_area}, the $10$ deployed antennas follow a predefined pattern, i.e., they form a series of equilateral triangles (shown in green) of side length $30 \unit{m}$ in which each vertex is a GD\xspace. Thus, before our experiments we are able to measure, with reasonable accuracy, the relative distance between the GDs\xspace with the help of a digital laser. Finally, the {\sc{Home}}\xspace position is set between antennas ID 4 and ID 5, accurately measured with the same digital laser.} By a drone's {\em mission} at a certain altitude $h$, we refer to a drone (see Figure~\ref{img:exp-drone}) that flies at a fixed altitude $h_0+h$ (see Figure~\ref{fig:ranging_precision_setup}) following a certain static path \ensuremath{\Pi}\xspace. For each algorithm, the trajectory \ensuremath{\Pi}\xspace starts and finishes at {\sc{Home}}\xspace and consists of vertical scans connected by horizontal scans (see Figure~\ref{fig:deployment_area}). Once all the GDs\xspace are deployed, the drone starts its mission flying over the deployment area.
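Such a static scan path is simple to generate as a list of waypoints. The sketch below (our own helper; the spacing values are illustrative, and the waypoints along the short horizontal connectors are omitted for brevity) produces a back-and-forth path of vertical scans with consecutive waypoints spaced by the inter-waypoint distance:

\begin{verbatim}
def scan_path(side, scan_gap, i_w, h):
    """Waypoints of a path of vertical scans over a side x side area."""
    waypoints, x, upward = [], 0.0, True
    while x <= side:
        ys = [i * i_w for i in range(int(side / i_w) + 1)]
        for y in (ys if upward else reversed(ys)):  # alternate scan direction
            waypoints.append((x, y, h))
        x, upward = x + scan_gap, not upward        # jump to the next scan
    return waypoints

# 100 m x 100 m area, scans 20 m apart, one waypoint per meter, h = 30 m.
path = scan_path(side=100.0, scan_gap=20.0, i_w=1.0, h=30.0)
\end{verbatim}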
When the GD\xspace and the MA\xspace are within communication range of each other, the devices start a ToA based distance measurement protocol. Then, the GD\xspace stores the computed distance along with the current MA\xspace's position. In other words, the GD\xspace memorizes the position of the waypoint and the distance associated with it. At the end of the mission, each GD\xspace estimates its position by invoking a localization algorithm. \begin{figure}[htbp] \vspace{-0.1in} \centering \hfill \subfloat[The MA\xspace and GD\xspace $P$.]{% \def0.5{0.8} \input{figures/ranging_precision.pdf_tex} \label{fig:ranging_precision_setup} } \hfill \subfloat[The deployment area.]{% \def0.5{0.5} \input{figures/deployment_area.pdf_tex} \label{fig:deployment_area} } \caption{The experimental testbed on the field.} \label{fig:exps} \vspace{-0.1in} \end{figure} All the compared algorithms require at least three points (see Table~\ref{tab:algorithms_evaluation}) to estimate the position of a GD\xspace. However, each GD\xspace has several stored distance measurements, so it can potentially exploit all of them. In order to better understand how the altitude of the drone and the geometry of the waypoints impact the localization \revision{accuracy}, as already investigated in Section~\ref{sec:2nd-experiments}, we fix two constraints during the selection of the three points: \begin{inparaenum}[(i)] \item the ground distance $d$ between the GD\xspace and the MA\xspace, and \item the geometry angle $\beta$ to keep between the three waypoints. \end{inparaenum} Accordingly, we fix $d=\{20, 30, \ldots, 60\} \unit{m}$ and $\beta=\{0, 15, 30\}^{\circ}$, where $\beta = 0^{\circ}$ means an unconstrained geometry. Moreover, we vary the altitude $h=\{10, 20, 30\} \unit{m}$. Clearly, it is not easy to find three points at an exact distance $d$. Thus, we relax the constraint and search for three points at distance $d \pm \tau$, where $\tau$ indicates a tolerance in our measurements (we fix $\tau=1\unit{m}$) due to the fact that the drone sends its position at discrete intervals of time, i.e., at the inter-waypoint distance \ensuremath{I_w}\xspace. The \ensuremath{I_w}\xspace value is affected by the drone's speed: in our experiments, we have observed $\ensuremath{I_w}\xspace=1 \unit{m}$ at a drone's speed of $10 \unit{m/s}$. \subsubsection{Results} We compare all the algorithms varying the drone's altitude $h$, the minimum distance $d$ between the GDs\xspace and the waypoints, and the waypoint geometry. In \textsc{Omni}\xspace, by construction, we always select the furthest three waypoints that guarantee good geometry (see Figure~\ref{fig:angular_aperture}). The \textsc{Omni}\xspace error is reported as a reference for the other algorithms.
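For reference, the least-squares estimation of Eq.~\eqref{eq:trilateration} used by the trilateration based algorithms can be sketched as follows (our own helper, using SciPy; the waypoints and the slightly noisy ranges are illustrative, with the GD\xspace at $(10, 5)$):

\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def trilaterate(waypoints, dists):
    """Least-squares position estimate, as in Eq. (eq:trilateration).

    waypoints: ground projections W'_i; dists: measured ground distances.
    The triple is assumed to satisfy the d_min and beta_min constraints.
    """
    ws = np.asarray(waypoints, dtype=float)
    ds = np.asarray(dists, dtype=float)
    residuals = lambda p: np.linalg.norm(ws - p, axis=1) - ds  # the delta_i
    return least_squares(residuals, x0=ws.mean(axis=0)).x

ws = [(50.0, 5.0), (-10.0, 39.64), (-10.0, -29.64)]  # three waypoints (m)
ds = [40.2, 39.9, 40.1]                              # noisy ranges (m)
print(trilaterate(ws, ds))          # close to the true position (10, 5)
\end{verbatim}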
\begin{figure}[htbp] \vspace{-0.15in} \subfloat[$h=10 \unit{m}, \beta = 0^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_trilat_h10_a0} \label{fig:comparison_algorithms_trilat_h10_a0} } \subfloat[$h=10 \unit{m}, \beta = 30^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_trilat_h10_a30} \label{fig:comparison_algorithms_trilat_h10_a30} } \hfill \vspace{-0.1in} \subfloat[$h=30 \unit{m}, \beta = 0^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_trilat_h30_a0} \label{fig:comparison_algorithms_trilat_h30_a0} } \subfloat[$h=30 \unit{m}, \beta = 30^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_trilat_h30_a30} \label{fig:comparison_algorithm_trilats_h30_a30} } \caption{Errors for \textsc{Scan}\xspace and \textsc{Omni}\xspace.} \label{fig:comparison_algorithms_tri_err} \vspace{-0.05in} \end{figure} In Figure~\ref{fig:comparison_algorithms_tri_err}, we show the observed errors of \textsc{Scan}\xspace and \textsc{Omni}\xspace along with their theoretical bounds $\epsilon^T_L$ given in Eq.~\eqref{eq:eps_l}, obtained by substituting $\epsilon_s=0.10 \unit{m}$ and the values of $\gamma_d$ and $\gamma_h$ taken from Figure~\ref{fig:measurement_errors_drone}. Obviously, \textsc{Omni}\xspace is better than \textsc{Scan}\xspace because the geometry of the waypoints is enforced. The difference between the theoretical and the observed error is smaller for \textsc{Omni}\xspace than for \textsc{Scan}\xspace when $\beta=0^{\circ}$, and almost the same when $\beta=30^{\circ}$. \begin{figure}[htbp] \vspace{-0.15in} \subfloat[$h=30 \unit{m}, \beta = 0^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h30_a0} \label{fig:comparison_algorithms_h30_a0} } \subfloat[$h=30 \unit{m}, \beta = 30^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h30_a30} \label{fig:comparison_algorithms_h30_a30} } \hfill \vspace{-0.1in} \subfloat[$h=10 \unit{m}, \beta = 0^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h10_a0} \label{fig:comparison_algorithms_h10_a0} } \subfloat[$h=10 \unit{m}, \beta = 30^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h10_a30} \label{fig:comparison_algorithms_h10_a30} } \caption{For fixed $h$ and $\beta$, the algorithms' error when $d$ varies.} \label{fig:comparison_algorithms_loc_err} \vspace{-0.1in} \end{figure} \revision{Figure~\ref{fig:comparison_algorithms_loc_err} compares the localization errors $E_L$ of the algorithms when $d$ varies. The localization error of \textsc{Drf}\xspace, \textsc{IoC}\xspace, and \textsc{IoA}\xspace is greater than that of \textsc{Scan}\xspace and \textsc{Omni}\xspace. As said, the trilateration based algorithms \textsc{Omni}\xspace and \textsc{Scan}\xspace achieve a good localization but pose many constraints (angle, distance) on the selection of waypoints; they also compute the estimated position through a least-squares-error optimization (which is relatively complex). \textsc{Drb-C}\xspace is not as accurate as \textsc{Omni}\xspace or \textsc{Scan}\xspace because it omits the least-squares-error optimization, but it is quite good. The chord based method in \textsc{Drf}\xspace is the least \revision{accurate}. \textsc{IoC}\xspace and \textsc{IoA}\xspace improve over \textsc{Drf}\xspace because their localization techniques use the radius information. The errors are large when $\beta=0^{\circ}$, while they significantly decrease for all the algorithms when $\beta=30^{\circ}$.
This shows that all the algorithms, not only those based on trilateration, benefit from a good waypoint geometry. The experiments with $h=10 \unit{m}$ in Figures~\ref{fig:comparison_algorithms_h10_a0} and~\ref{fig:comparison_algorithms_h10_a30} have a smaller elevation angle, and thus a smaller error, than those with $h=30\unit{m}$ reported in Figures~\ref{fig:comparison_algorithms_h30_a0} and~\ref{fig:comparison_algorithms_h30_a30}. When $h=30 \unit{m}$, all the ratios yield $h/d \ge 1/3$ and, since our experiments are in a sub-urban area, all the measurements are likely in LoS; whereas when $h=10 \unit{m}$, the measurements at $d \ge 30\unit{m}$ are a mix of LoS and NLoS. Nonetheless, we do not notice any special behavior, probably thanks to the UWB multipath immunity. \begin{figure}[htbp] \vspace{-0.15in} \centering \subfloat[$h=10 \unit{m}, d = 30 \unit{m}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h10_d30} \label{fig:comparison_algorithms_h10_d30} } \subfloat[$h=10 \unit{m}, d = 50 \unit{m}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h10_d50} \label{fig:comparison_algorithms_h10_d50} } \caption{For fixed $h$ and $d$, the algorithms' error when $\beta$ varies.} \label{fig:fixed_h_d} \end{figure} Figure~\ref{fig:fixed_h_d} compares the localization error $E_L$ when $h=10 \unit{m}$ and $d=30 \unit{m}$ or $d=50 \unit{m}$, varying the geometry angle $\beta$. From the observed errors, any localization that satisfies $\beta \ge 30^{\circ}$ has a small error, which cannot significantly improve by decreasing the elevation angle (i.e., the ratio $h/d$). The decrease of the error when $d$ increases from $30 \unit{m}$ to $50 \unit{m}$ is large when $0^{\circ} \le \beta \le 30^{\circ}$. Finally, note that in Figure~\ref{fig:comparison_algorithms_h10_d50} it holds $h/d=0.2$, which is below the ratio that guarantees $100\%$ LoS in a sub-urban area in Table~\ref{tab:lap}, but we do not notice a meaningful worsening of the error. Figure~\ref{fig:fixed_ratio} plots the error for different pairs of $h$ and $d$ with the same ratio $h/d$. Precisely, we compare two ratios $h/d$: $0.5$ and $1.0$. Each ratio can be obtained from three different altitude/distance combinations: for example, for $h/d=0.5$ we consider the combinations $h=\{10, 20, 30\} \unit{m}$ and $d=\{20, 40, 60\} \unit{m}$. The improvement in the accuracy is high when the elevation angle decreases from $45^{\circ}$ to $26^{\circ}$. \textsc{Drf}\xspace, the least accurate algorithm in all our experiments, is very sensitive to the change of the elevation angle.
\begin{figure} \vspace{-0.2in} \centering \subfloat[$h/d = 1.0$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_hd_10} \label{fig:comparison_algorithms_hd_10} } \subfloat[$h/d = 0.5$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_hd_05} \label{fig:comparison_algorithms_hd_05} } \caption{For a fixed $h/d$ ratio, the algorithms' error when $\beta$ varies.} \label{fig:fixed_ratio} \end{figure} } \begin{table}[ht] \vspace{-0.1in} \renewcommand{\arraystretch}{1.15} \caption{Localization error of the range-free (RF) and range-based (RB) versions, in meters (m).} \label{tab:comparison_algorithms_rf_rb} \centering \vspace{-0.1in} \begin{tabular}{ll|cccccc} \hline \multicolumn{2}{c}{ } & \multicolumn{2}{c}{\textsc{Drf}\xspace} & \multicolumn{2}{c}{\textsc{IoC}\xspace} & \multicolumn{2}{c}{\textsc{IoA}\xspace} \\ \multicolumn{2}{c}{ } & RF & RB & RF & RB & RF & RB \\ \hline \multirow{3}{*}{$h$ (m)} & $10$ & $52.3$ & $16.6$ & $48.8$ & $10.8$ & $47.2$ & $10.8$ \\ & $20$ & $55.1$ & $19.8$ & $49.1$ & $11.2$ & $48.8$ & $11.0$ \\ & $30$ & $57.7$ & $20.4$ & $48.2$ & $13.8$ & $51.2$ & $13.6$ \\ \hline \end{tabular} \vspace{-0.05in} \end{table} \revision{In conclusion,} Table~\ref{tab:comparison_algorithms_rf_rb} compares the localization error $E_L$ of the range-free (RF) and range-based (RB) versions of the three original range-free algorithms \textsc{Drf}\xspace, \textsc{IoC}\xspace, and \textsc{IoA}\xspace, for different altitudes $h$. In particular, we report the localization error obtained from our previous testbed~\cite{bettisorbelli2020rangefree}, in which the three algorithms were implemented as pure range-free techniques based on the HnH technique (RF columns), along with the average results shown in Figure~\ref{fig:comparison_algorithms_loc_err} (RB columns). As reported in~\cite{bettisorbelli2020rangefree}, on average, the experimental error of those algorithms is very large (almost $60 \unit{m}$) and variable. These experiments show that the error of the original range-free versions is 3--4 times larger than that of the corresponding extended range-based versions that exploit distance measurements. Moreover, in~\cite{bettisorbelli2020rangefree}, about one-third of the GDs\xspace were left unlocalized by \textsc{IoC}\xspace and \textsc{IoA}\xspace, while \textsc{Drf}\xspace localized all of them. These results, and the fact that our antennas are able to take distance measurements via ToA, fully justify our transformation of \textsc{Drf}\xspace, \textsc{IoC}\xspace, and \textsc{IoA}\xspace into range-based algorithms: measurements help. \vspace{-0.1in} \section{Conclusions}\label{sec:concl} In this paper, we analytically studied and experimentally evaluated, through real experiments in the field, the errors that can affect the localization of GDs\xspace using a drone as the MA\xspace. We decomposed the error into measurement error, ground error, and localization error, and provided analytical expressions for these errors. We also linked the ground error to the theory of the A2G communication link via the elevation angle. Our experiments confirm that our analytical study is accurate. Furthermore, the results also show that extending range-free algorithms with range-based measurements significantly increases the localization \revision{accuracy}. \revision{In the future, we plan to extend the analysis to NLoS scenarios. We will investigate the DecaWave\xspace antenna capabilities, also modulating and tuning the transmitting power in different scenarios.
We finally plan to extend our work by proposing a more realistic antenna radiation pattern model. } \vspace{-5pt} \paragraph{Acknowledgments} The authors are grateful to the editor and reviewers for valuable comments that helped us improve the quality of the manuscript. This work was partially supported by Project {\em NALP-SAPR} granted by FSE, Project {\em NALP-SAPR2} granted by the University of Perugia, by NATO grant G4936, by the Intelligent Systems Center (ISC) at Missouri S\&T, and by NSF grants CNS-1545050, CNS-1725755, CNS-1818942, and SCC-1952045. \vspace{-0.05in} \bibliographystyle{IEEEtran} \section{Introduction} In recent years, unmanned aerial vehicles (UAVs), or drones, have received increasing attention from both the research and industry communities. Of particular interest are applications enabled by combining UAVs with the Internet of Things (IoT)~\cite{motlagh2016low,choi2015building}, such as environmental monitoring~\cite{gao2018high}, structural health monitoring~\cite{kang2018autonomous}, precision agriculture~\cite{tsouros2019data}, search and rescue operations~\cite{silvagni2017multipurpose}, and so on. An important requirement of such applications is the ability to accurately localize the position of ground devices (GDs\xspace), making the collected data more meaningful. Since it is costly to equip each GD\xspace in the network with a GPS module, a fixed set of {\em anchor} devices, whose positions are known a priori, is generally used~\cite{priyantha2003anchor}. Moreover, since the anchors use wireless transmissions to localize the other GDs\xspace and their range is often limited, the number of required anchors could dramatically increase with the size of the network, thus increasing the cost of the localization procedure. This problem can be solved by replacing the fixed anchor devices with a single {\em mobile anchor} (MA\xspace) equipped with a GPS unit that periodically broadcasts its position to help nearby GDs\xspace localize themselves. Although there exists some work in the literature on localization based on ground MAs\xspace, such as rovers~\cite{han2016survey}, relatively less has been proposed using flying MAs\xspace, like UAVs or drones~\cite{malhotra2019comprehensive}, which are the focus of this paper. Compared to ground MAs\xspace, flying MAs\xspace are able to reach remote locations, move at a faster speed, and cover a wider area than terrestrial rovers~\cite{bekmezci2013flying}. Due to these advantages, in this paper we concentrate on localization algorithms involving flying MAs\xspace. When using UAVs as MAs\xspace, the distance can be estimated wirelessly \revision{by measuring the time of flight (ToF) between the MA\xspace and the GDs\xspace. For distance measurements, in this paper we adopt the ultra wide band (UWB) technology~\cite{mueller2015fusing}.} Localization algorithms can be broadly categorized as {\em range-free} and {\em range-based} approaches~\cite{han2016survey}. In the range-free algorithms, the position is estimated without any kind of measurement, but only by discovering whether the GD\xspace and the MA\xspace are in range. Among these, the {\em radius-based} approaches assume the knowledge of the transmission radius~\cite{ssu2005localization}, while the {\em radius-free} ones do not~\cite{xiao2008distributed}. Such algorithms are often based on the assumption that the antenna radiation pattern is isotropic, which is unrealistic in general.
In fact, our recent works~\cite{bettisorbelli2019ground, bettisorbelli2020rangefree} have shown that the localization \revision{accuracy} depends on the quality (pattern and radius) of the antenna, and on how much it differs from the assumed isotropic pattern. On the other hand, in the range-based localization algorithms, the position of the GD\xspace is estimated by taking several measurements between it and the MA\xspace. These algorithms are known to be more \revision{accurate} than range-free algorithms, but at the cost of additional specialized hardware. For example, the estimation of the distance exploits techniques like the received signal strength indicator (RSSI), the time of arrival (ToA), or the time difference of arrival (TDoA)~\cite{laaraiedh2011comparison}. Now, in any range-based localization procedure, {\em measurement errors} are unavoidable and can seriously impact the localization \revision{accuracy}. This is particularly relevant when the MA\xspace is a drone, because measurement errors can occur while calculating the distance between the MA\xspace and the GD\xspace. The magnitude of such errors depends on the adopted technology and on the quality of the \revision{air-to-ground (A2G)} link between the MA\xspace and the GD\xspace. For example, the distance measurement error in a fully line of sight (LoS) link using the Wi-Fi technology is about $7$--$10 \unit{m}$; using Bluetooth it can be up to $15 \unit{m}$, while it is only $10 \unit{cm}$ using the UWB technology~\cite{www-Deca-dwm1000}. Additional errors may be caused by non-optimal weather conditions or by the drone's GPS \revision{{\em accuracy}}~\cite{www-gps}. For instance, the 3DR Solo drone used in our experiments has a GPS \revision{accuracy} of $1 \unit{m}$~\cite{www-3DR-GPS}. In general, these errors propagate when combined and projected to the ground to localize the GDs\xspace. The error propagation depends on the specific localization technique, such as trilateration~\cite{thomas2005revisiting}, intersection of points, centroid~\cite{blumenthal2007weighted, bulusu2000gps}, and so on. \paragraph{Our Contributions} In this paper, we first provide bounds on various errors (e.g., instrumental error, rolling error, altitude error) impacting the estimated ground distance between the MA\xspace and the GD\xspace. Then we focus on the commonly used {\em trilateration} based localization, and derive bounds on the propagation of the ground distance errors to the estimated position of the GD\xspace. Finally, we perform extensive {\em in-field} experiments to quantify the localization \revision{accuracy} of several existing state-of-the-art localization algorithms. Specifically, we consider the \textsc{Drb-C}\xspace~\cite{bettisorbelli2019ground} range-based algorithm; the \textsc{Drf}\xspace~\cite{bettisorbelli2019rangefree}, \textsc{IoC}\xspace~\cite{xiao2008distributed}, and \textsc{IoA}\xspace~\cite{lee2009localization} range-free algorithms, extended to distance measurements; and the two trilateration based algorithms \textsc{Scan}\xspace~\cite{koutsonikolas2007path} and \textsc{Omni}\xspace~\cite{bettisorbelli2018range}. Our testbed uses two UWB DecaWave\xspace kits, namely the EVK1000 kit~\cite{www-Deca-dwm1000} and the MDEK1001 kit~\cite{www-Deca-dwm1001}, and a 3DR Solo drone as the MA\xspace. To the best of our knowledge, ours is the {\em first work} that provides an extensive in-field evaluation of the localization \revision{accuracy} of the most relevant algorithms in the literature, in real experimental settings using drones.
Our novel contributions are summarized as follows. \revision{ \begin{itemize} \item We derive bounds on various measurement errors (instrumental, rolling, and altitude) to quantify their impact on the estimated ground distance between the UAV (MA\xspace) and the GD\xspace. \item We validate our theoretical analysis of the ground error with a simple set of static experiments using two UWB antennas. We observe the impact of measurement errors on the trilateration technique. \item Through experiments, we comprehensively compare three range-based and three range-free state-of-the-art localization algorithms using a UAV as MA\xspace, extending the range-free algorithms with distance measurements to significantly improve their localization accuracy. We also implement these algorithms employing a 3DR Solo drone and ten GDs\xspace, in the first such realistic testbed built. \end{itemize} } \vspace{5pt} The rest of the paper is organized as follows. Section~\ref{sec:related} reviews the existing literature on localization approaches relevant to our context. Section~\ref{sec:mes-er} derives expressions to approximate the measurement and ground errors, \revision{and introduces how our results are interpreted in the light of A2G communications}. Section~\ref{sec:loc-error} investigates the localization error affecting the estimated position of the GD\xspace when the trilateration procedure is applied, and describes two localization algorithms based on trilateration that are compared in Section~\ref{sec:ev}. Section~\ref{sec:algorithms} introduces four more localization algorithms not based on trilateration, which are also compared using the testbed; three of them are transformed from range-free to range-based algorithms. Section~\ref{sec:ev} presents a rich set of real experiments in the field aiming to evaluate the localization error of the different localization algorithms. Finally, Section~\ref{sec:concl} offers conclusions with directions of future research. \vspace{-0.1in} \section{Related Works}\label{sec:related} \revision{This section reviews the relevant literature on localization of GDs\xspace using MAs\xspace, as well as efforts on testbed implementations considering UAVs as MAs\xspace.} \vspace{-0.1in} \subsection{\revision{MA\xspace-based Localization Algorithms}} There exist many algorithms for ground MAs\xspace that can be classified as range-free and range-based. \revision{In the range-free localization algorithms, such as \textsc{IoC}\xspace~\cite{xiao2008distributed} (intersection of circles) and \textsc{IoA}\xspace~\cite{lee2009localization} (intersection of annuli), a rover broadcasts its current position at regular time intervals while following a path. From the heard and not-heard rover's positions (informally, the HnH technique), the GD\xspace builds a limited area where it may reside and places itself at the ``center''. (More details about these algorithms are in Section~\ref{sec:algorithms}.)} Usually range-free algorithms have relatively low localization \revision{accuracy}, leading to the development of range-based algorithms such as \textsc{Scan}\xspace and \textsc{Double-Scan}\xspace~\cite{koutsonikolas2007path}. In \textsc{Scan}\xspace and \textsc{Double-Scan}\xspace, the MA\xspace follows a path formed by vertical straight lines interconnected by horizontal lines. However, such algorithms result in a large number of collinear anchor points.
Collinearity can be reduced by increasing the changes of direction in the path, as in \textsc{Hilbert}\xspace~\cite{koutsonikolas2007path} and \textsc{Lmat}\xspace~\cite{jiang2011lmat}. The path generated by \textsc{Lmat}\xspace logically tessellates the deployment area by equilateral triangles so that each GD\xspace falls inside a triangle. The vertices of the triangle where the GD\xspace resides are used to trilaterate the GD\xspace position, thus completely solving the collinearity issue. The above algorithms are designed for ground MAs\xspace. \revision{A few localization algorithms have been proposed for flying MAs\xspace. Since the drone flies at a certain altitude, there are new constraints on the anchor points. In~\cite{perazzo2017drone}, one can find simulation comparisons of the above algorithms extended to flying MAs\xspace, with particular attention to the path length.} \revision{The \textsc{Omni}\xspace~\cite{bettisorbelli2018range} algorithm is the first localization algorithm for drones that selects the anchor points in such a way that a certain accuracy is guaranteed.} A simple and lightweight range-based algorithm called \textsc{Drb-C}\xspace~\cite{bettisorbelli2019ground} localizes the GDs\xspace by determining the correct intersection point of two circles, exploiting a third reference point for disambiguation. Finally, the \textsc{Drf}\xspace~\cite{bettisorbelli2019rangefree} algorithm, the first range-free algorithm for UAVs as MAs\xspace, exploits lightweight geometrical rules for position estimation. \revision{(See Section~\ref{sec:algorithms} for more details on \textsc{Omni}\xspace and \textsc{Drf}\xspace.)} \revision{Very recently, in~\cite{ebrahimi2020autonomous}, a novel framework based on reinforcement learning (RL) has been proposed to enable a UAV to autonomously find a suitable trajectory. This improves the localization accuracy of multiple GDs\xspace while minimizing the flight time, path length, and UAV energy consumption. As usual for RL techniques, an initial training phase is required before the UAV can operate in a real online scenario. However, this work does not report how long this training phase lasts.} \vspace{-0.1in} \subsection{\revision{UAV based Testbeds for Localization Experiments}} Recently, we performed preliminary experiments evaluating the range-free, radius-free \textsc{Drf}\xspace~\cite{bettisorbelli2019rangefree} algorithm using a UAV as MA\xspace and relatively inexpensive antennas~\cite{bettisorbelli2019ground}. Our conclusion was that the performance of \textsc{Drf}\xspace heavily depends on the shape of the antenna radiation pattern; precisely, the closer the antenna is to omnidirectional, the higher the localization \revision{accuracy}. Employing the DecaWave\xspace MDEK1001 kit, it has been experimentally shown in~\cite{bettisorbelli2020rangefree} that other range-free radius-based algorithms, like \textsc{IoC}\xspace~\cite{xiao2008distributed} and \textsc{IoA}\xspace~\cite{lee2009localization}, also exhibit higher accuracy with more omnidirectional antennas. \revision{ A real outdoor implementation of a range-based localization algorithm is presented in~\cite{greco2015localization}, which aims to localize a radio-frequency identification (RFID) ground tag with the help of a UAV. Initially, the RFID tag is detected by the UAV using its RFID reader. Then, the UAV takes hundreds of RSSI measurements and estimates the tag's position using a multilateration procedure.
Experimental results show an average localization error of $6 \unit{m}$. Note that this algorithm considers only random paths and random measurements, and does not investigate the relationship between the localization error and the UAV's altitude. A range-based algorithm is experimentally tested in~\cite{grigulo2018experimenting}, in which a UAV regularly broadcasts its current GPS position while flying. The GD\xspace aims to collect a set of at least three RSSI distance measurements equidistant from its center in order to apply trilateration. Experimental results show an average error of $4 \unit{m}$ under normal conditions, which reduces to $1 \unit{m}$ using GPS corrections provided by a Real Time Kinematic (RTK) algorithm. However, an RTK positioning system is not always available. In~\cite{cisek2017ultra} a testbed deploying five UWB ground anchors is implemented for evaluating and tracking the 3D position of a UAV equipped with a UWB antenna. The anchors and the UAV are also equipped with GPS receivers with RTK capabilities. The experimental error between UWB and RTK distance measurements ranges from $2$ to $24 \unit{cm}$, while the GPS positioning error alone is $2 \unit{m}$ on average. Another similar set-up employing UWB technology is proposed in~\cite{lazzari2017numerical} to localize a moving UAV. In this scenario, four fixed UWB anchor devices are placed on the ground. Experimental results show an average localization error of $1 \unit{m}$. Notice that, differently from the setting considered in this paper, the last two approaches~\cite{cisek2017ultra, lazzari2017numerical} aim at localizing or tracking the UAV (MA\xspace) instead of the GDs\xspace. Nonetheless, they employ UWB and UAV technologies. } \vspace{-0.1in} \section{Measurements and Ground Errors}\label{sec:mes-er} In this section, we provide analytical bounds on the impact of different measurement errors that may affect the estimated ground distance between the MA\xspace and a GD\xspace. \vspace{-0.05in} \subsection{Terminology and Notations} Let the {\em slant distance} $s$ denote the 3D distance between the drone acting as MA\xspace and the GD\xspace. We define the {\em \revision{accuracy}} as the maximum error in absolute value, and we let $\epsilon_s$ denote the {\em instrumental \revision{accuracy}}, i.e., the maximum error in estimating the slant distance. Let the point $P$ be the GD\xspace's position on the ground, the point $\widetilde{W}$ be the actual drone's position, and the point $W$ be the scheduled drone's position (see Figure~\ref{fig:ground_error_combined}). The measured slant distance $s'= \overline{\widetilde{W}P}$ can differ from the exact slant distance $s=\overline{WP}$ due to the instrumental error and due to the \revision{accuracy} of the drone's position (i.e., the drone resides at $\widetilde{W}$ and not at the scheduled position $W$). Let the {\em slant error} $E_s$ be the 3D measurement error that affects the measured slant distance $s$. The slant and the instrumental errors are depicted along with $s$ in Figure~\ref{fig:ground_error_combined}. \vspace{-0.05in} \begin{figure}[htbp] \centering \def0.5{1.0} \input{figures/ground_error_combined.pdf_tex} \caption{The ground error: the point $P$ is the GD\xspace.} \label{fig:ground_error_combined} \end{figure} In reality, the \revision{accuracy} of the drone's position $\widetilde{W}$ depends on its drift with respect to its trajectory, and on the changes of its altitude.
Indeed, a drone is more unstable than a rover, even when it hovers at a given position. We say that the drone {\em rolls} when it drifts in some direction on a fixed horizontal plane, and that it {\em uplifts} or {\em downfalls} when it increases or decreases its altitude, respectively. We denote by $\gamma_d$ and $\gamma_h$ the {\em rolling} and the {\em altitude \revision{accuracy}}, which depend on the GPS and on the barometer, respectively. Figure~\ref{fig:ground_error_combined} depicts the cylinder where the drone may reside due to the rolling and altitude errors. Interestingly, the drone resides inside a cylinder instead of a sphere since we consider $\gamma_d$ and $\gamma_h$ independently of each other. Let $\alpha$ in Figure~\ref{fig:ground_error_combined} be the elevation angle in the absence of errors; that is, $\alpha$ is computed assuming the scheduled position of the drone. To localize the GD\xspace, we must convert the 3D slant distance $s$ into the {\em ground distance} $d$, which is the corresponding distance on the 2D ground plane. The exact ground distance $d$ is the distance $\overline{W'P}$ between $P$ and the projection $W'$ on the ground of the drone's position $W$. That is, $d$ assumes the drone to be in the scheduled position $W$. However, we do not know $s$, but we know $s'$. Then, let the {\em ground error} $E_d$ be the measurement error $\overline{PP'}$, where $P'$ is the position of $P$ estimated on the ground by using the measured slant distance $s'$ and the scheduled elevation angle $\alpha$, which assumes the exact ground distance $d$ and the scheduled altitude $h$. Finally, let the {\em ground \revision{accuracy}} $\epsilon_d$ be the maximum $E_d$. \begin{table}[htbp] \vspace{-0.1in} \renewcommand{\arraystretch}{1.15} \caption{Summary of notations for errors and \revision{accuracies}.} \label{tab:nomencltature} \vspace{-0.1in} \centering \begin{tabular}{cl} \hline symbol & description \\ \hline $\alpha$ & elevation angle \\ $\epsilon_s$ & instrumental \revision{accuracy} \\ $\gamma_d$ & rolling \revision{accuracy} \\ $\gamma_h$ & altitude \revision{accuracy} \\ $\epsilon_d$ & ground \revision{accuracy} (max error) \\ $E_d$ & ground error \\ $E_s$ & slant error \\ $E_L$ & localization error \\ $E^T_L$ & localization trilateration error \\ $\epsilon^T_L$ & localization trilateration \revision{accuracy} (max error) \\ \hline \end{tabular} \end{table} The ground error $E_d$ is the 3D slant error $E_s$ as it is perceived on the ground. With a single measurement, we only know the relative distance between the MA\xspace and the GD\xspace, so the GD\xspace is not yet localized. Beyond the ground error, there is the {\em localization error} $E_L$, which is instead the distance between the GD\xspace's estimated position (by any localization algorithm) and the GD\xspace's actual position. This error also depends on the invoked algorithm and its implicit rules to find the GD\xspace's position, and will be investigated further. Table~\ref{tab:nomencltature} summarizes the notations used in this paper. \vspace{-0.05in} \subsection{The Ground Error} In this section, we analytically study the ground error $E_d$ by breaking it up into three independent components $E_d(\epsilon_s)$, $E_d(\gamma_d)$, and $E_d(\gamma_h)$ that, respectively, depend on: \begin{inparaenum} \item the {\em instrumental} \revision{accuracy}, \item the {\em rolling} \revision{accuracy}, and \item the {\em altitude} \revision{accuracy}.
\end{inparaenum} We recall that we define the {\em \revision{accuracy}} as the maximum error in absolute value. $E_d(\gamma_d)$ and $E_d(\gamma_h)$ model the error in the drone's position. Note that each component depends on an independent hardware part, namely the UWB transceiver, the GPS, and the barometer, and thus it makes sense to study them separately. Whenever we study one component, we assume the other errors to be null. \vspace{-0.05in} \subsubsection{Instrumental error}\label{ss:ic} Let us investigate $E_d(\epsilon_s)$, i.e., the impact of the {\em instrumental error} $e_s$ on $E_d$. Note that $e_s$ is defined as the difference, positive (overestimation) or negative (underestimation), between the measured distance and the actual distance. Moreover, $\epsilon_s$ is the absolute value of the maximum instrumental error. Accordingly, $|e_s| \le \epsilon_s$. Here we assume $\gamma_d=\gamma_h=0$. Let $s$ be the exact 3D distance between the drone and the object $P$ ($P$ denotes the GD\xspace's position). Then, let $s'=s+e_s$ be the measure of the segment $\overline{WP}$, where $-\epsilon_s \le e_s \le \epsilon_s$. In the following, we geometrically show how the measured slant distance $s'$ is converted into the ground distance. Figure~\ref{fig:ranging_precision} illustrates the reasoning behind the choice of $e_s= \pm \epsilon_s$. We draw a circumference of radius $s'$ centered at the waypoint $W$. This circumference intersects the line through $W$ and $P$ at $Q$ (see Figure~\ref{fig:ranging_precision}). Since the measured slant distance is different from the exact one, i.e., $s' \not = s$, $Q$ does not coincide with $P$, and $Q$ is not at the ground level. Specifically, the segment $\overline{PQ}$ of length $e_s=s'-s$ is on the extension of $\overline{WP}$ if $e_s >0$; whereas $\overline{PQ}$ is on the radius $\overline{WP}$ if $e_s < 0$. \begin{figure}[htbp] \centering \def0.5{0.8} \input{figures/ground_error_instrumental_over.pdf_tex} \vspace{-0.05in} \caption{Overestimation in the instrumental \revision{accuracy} $\epsilon_s$.} \label{fig:ranging_precision} \vspace{-0.05in} \end{figure} Since in general $\epsilon_s \ll s$, we can approximate the circumference of radius $s'$ with its tangent at $Q$. The point $P'$, where the tangent intersects the ground\footnote{Figure~\ref{fig:ranging_precision} shows the intersection $P'$ between the tangent and the ground, which approximates the intersection (white dot) between the circumference and the ground. However, the two intersections become closer and closer when $s$ increases.}, is the estimated position for $P$ according to the measurement $s'$. Thus, recalling that $W'$ is the projection of $W$ on the ground, $\overline{PP'}$ is the error on the ground derived from the slant error $e_s$. By elementary geometric rules applied to the right-angled triangle $PQP'$, since $\angle{QPP'}$ is equal to the elevation angle $\alpha$, we obtain $\overline{PP'} = e_s \cdot \frac{1}{\cos(\alpha)} = e_s \cdot \sqrt{1 + \frac{h^2}{d^2}}$, where $h$ is the drone's altitude. The error $E_d (\epsilon_s)$, when the instrumental error is maximum and the object is at ground distance $d$ from the drone, is given by: \vspace{-0.05in} \begin{equation}\label{eq:ground_distance_instrumental} E_d(\epsilon_s) = \epsilon_s \cdot \frac{1}{\cos(\alpha)} = \epsilon_s \cdot \sqrt{1 + \frac{h^2}{d^2}}. \end{equation} The ground error $E_d(\epsilon_s)$ varies with the distance $d$ on the ground.
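To make this dependence concrete, the following minimal Python sketch (ours, added purely for illustration; the parameter values are hypothetical) evaluates Eq.~\eqref{eq:ground_distance_instrumental} at a fixed altitude for several ground distances:
\begin{verbatim}
import math

def ground_error_instrumental(eps_s, h, d):
    # E_d(eps_s) = eps_s / cos(alpha) = eps_s * sqrt(1 + h^2/d^2),
    # where tan(alpha) = h/d is the elevation angle at ground distance d.
    return eps_s * math.sqrt(1.0 + (h / d) ** 2)

# Example: UWB instrumental accuracy eps_s = 0.10 m, altitude h = 20 m.
for d in (5.0, 10.0, 20.0, 40.0):
    e = ground_error_instrumental(0.10, 20.0, d)
    print(f"d = {d:4.0f} m  ->  E_d = {e:.3f} m")
\end{verbatim}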
When $h \not = 0$, the error increases when $d$ decreases (whereas, when $h=0$, the error does not depend on $d$). When $h \not = 0$, the worst case occurs when the drone is directly above the point to be measured (i.e., $W'=P$, $d \rightarrow 0$, $E_d \rightarrow \infty$). From this observation we can assert that, when the measurements are taken by a UAV rather than a rover, in order to bound $E_d(\epsilon_s)$ it is convenient to add the constraint that all the measurements have to respect a given {\em minimum ground distance} \ensuremath{d\textsubscript{min}}\xspace. \vspace{-0.05in} \subsubsection{Rolling error} In this section, we only consider the rolling error (i.e., $\epsilon_s=\gamma_h=0$). When the drone hovers in position $W=(x,y,z)$, it may not be in $W$, but rather in position $\widetilde{W}$, due to the GPS \revision{accuracy} or bad weather conditions (see Figure~\ref{fig:ranging_precision_gamma_d}). \begin{figure}[htbp] \centering \def0.5{0.8} \input{figures/ground_error_rolling_over.pdf_tex} \vspace{-0.05in} \caption{The rolling \revision{accuracy} $\gamma_d>0$ and ground error.} \label{fig:ranging_precision_gamma_d} \end{figure} To better define the rolling error, we set a 3D Cartesian coordinate system whose origin is the projection $W'=(0,0,0)$ on the ground of the exact drone's position $W$, whose $x$-axis passes through the object to measure $P$, and whose $z$-axis passes through $W$. Thus, $W=(0,0,h)$ and $P=(d,0,0)$. Then, let the actual drone's position be $\widetilde{W}=(e_x,e_y,h)$, with $-\gamma_d \le e_x, e_y \le \gamma_d$, where $\gamma_d$ is the rolling \revision{accuracy}. Obviously, $\widetilde{W}'=(e_x,e_y,0)$ is the projection of $\widetilde{W}$ on the ground, which is inside a circle of radius $\gamma_d$ centered at the origin $W'$. For each point of the circle, it holds $e_x = \gamma_d \cos(\psi)$ and $e_y = \gamma_d \sin(\psi)$, where $\psi=\angle{\widetilde{W}'W'P}$ and $0 \le \psi \le 2\pi$. The measured slant distance $s'$ between $\widetilde{W}$ and $P$ is given by: \begin{align*} s'&=\sqrt{h^2 + (d-e_x)^2 + e_y^2} \nonumber \\ &=\sqrt{h^2 + (d-\gamma_d \cos(\psi))^2 + (\gamma_d \sin(\psi))^2} \nonumber \\ &= \sqrt{h^2 + d^2 - 2d\gamma_d\cos(\psi) + \gamma_d^2 \cos^2(\psi) + \gamma_d^2 \sin^2(\psi) } \nonumber \\ &= \sqrt{h^2 + d^2 - 2d\gamma_d\cos(\psi) + \gamma_d^2 }\nonumber \end{align*} Recalling that $h>0$, $d>0$, $\gamma_d \geq 0$ and $0 \le \psi \le 2\pi$, we note that $s'$ is maximum when the term $- 2d\gamma_d\cos(\psi)$ is maximum, i.e., when $\cos(\psi)=-1$. The slant error is then: $$E_s = s'-s= \sqrt{h^2 + d^2 - 2d\gamma_d\cos(\psi) + \gamma_d^2}-\sqrt{h^2+d^2}$$ which is maximum in absolute value when $\cos(\psi)=-1$. In order to project $E_s$ on the ground, we repeat the same construction as in Section~\ref{ss:ic}. We draw a circumference of radius $s'$ centered at the waypoint $W$, which intersects the line through $W$ and $P$ at $Q$. The tangent at $Q$ intersects the ground at the estimated position $P'$.
Applying elementary trigonometry to the right-angled triangle $PQP'$, whose $\angle{QPP'}$ is equal to the elevation angle $\alpha$, \begin{align} E_d(\gamma_d) &= \frac{E_s(\gamma_d)}{\cos(\alpha)} = \frac{\left| s'- s \right|}{\cos(\alpha)} = \frac{|(s')^2 - (s)^2|}{\cos(\alpha)\left( s' + s \right)} \nonumber \\ &= \frac{|\gamma_d^2 - 2d\gamma_d \cos(\psi)|\sqrt{h^2 + d^2}}{\left( s' + s \right)d} = \frac{|\gamma_d^2 - 2d\gamma_d \cos(\psi)|s}{\left( s' + s \right)d} \nonumber \end{align} When $s'>s$ (i.e., $\frac{\pi}{2} < \psi < \frac{3\pi}{2}$), that is, when the drone rolls away from the object, it holds: \begin{align} \label{eq:ground_distance_gamma_d_roll_away} E_d(\gamma_d) &\le \frac{|\gamma_d^2 - 2d\gamma_d|}{2 d } \nonumber \\ \intertext{and assuming $\gamma_d \ll d$} E_d(\gamma_d) &\le \gamma_d \end{align} When $s'< s$ (i.e., $0 \le \psi \le \frac{\pi}{2}$ or $\frac{3\pi}{2} \le \psi \le 2\pi$), that is, when the drone rolls toward the object, since $s+s'> s$, we obtain a weaker bound: \begin{align*} E_d(\gamma_d) & < \frac{|\gamma_d^2 - 2d\gamma_d|}{d} \nonumber \\ & < 2\gamma_d \end{align*} Now, if $\gamma_d \ll d$ holds, $\frac{s}{s'} \rightarrow 1$. Since $s+s' \ge 2s'$, we have: \begin{align} \label{eq:ground_distance_gamma_d_bound_strict} E_d(\gamma_d) & < \frac{|\gamma_d^2 - 2d\gamma_d|}{2d}\frac{s}{s'} \nonumber \\ & < \gamma_d\frac{s}{ s'} \rightarrow \gamma_d \end{align} We will see in our experiments that the stricter bound in Eq.~\eqref{eq:ground_distance_gamma_d_bound_strict} indeed approximates the rolling error well, even when the drone rolls toward the GD\xspace. \vspace{-0.05in} \subsubsection{Altitude error} In this section, we only consider the altitude error (i.e., $\gamma_d=\epsilon_s=0$). When the drone is subject to an uplift (resp., downfall), the measured slant distance $s'$ is overestimated (resp., underestimated). The overestimate case is illustrated in Figure~\ref{fig:gamma_h_error}. \begin{figure}[htbp] \centering \def0.5{0.8} \input{figures/ground_error_altitude_over.pdf_tex} \vspace{-0.05in} \caption{The altitude \revision{accuracy} $\gamma_h> 0$ and ground error.} \label{fig:gamma_h_error} \end{figure} The measured slant distance $s'$ between $\widetilde{W}$ and $P$ is: $s' = \sqrt{(h+\gamma_h)^2 + d^2}$. Recalling that $h>0$, $d>0$, and $\gamma_h \geq 0$, the slant error is: $$E_s = s'-s= \sqrt{(h+\gamma_h)^2 + d^2} - \sqrt{h^2+d^2}$$ Moreover, \vspace{-0.1in} \begin{align} E_d(\gamma_h) &= \frac{E_s(\gamma_h)}{\cos(\alpha)} = \frac{\left| s'- s \right|}{\cos(\alpha)} = \frac{|\gamma_h^2 + 2h\gamma_h|s}{\left( s' + s \right)d} \nonumber \end{align} Repeating calculations similar to those above, and assuming that the altitude \revision{accuracy} $\gamma_h$ is very small with respect to $h$, and that $\frac{s}{s'} \rightarrow 1$, we find that the ground error can be approximated as: \vspace{-0.1in} \begin{equation}\label{eq:ground_distance_gamma_h_bound} E_d(\gamma_h) \approx \gamma_h \frac{h}{d} \end{equation} \subsubsection{Overall ground error} From the previous discussions we can estimate the overall ground error as stated by the following: \begin{fact} Let $\epsilon_s$, $\gamma_d$, and $\gamma_h$ be respectively the instrumental \revision{accuracy}, rolling \revision{accuracy}, and altitude \revision{accuracy} that may affect the slant measurement.
By projecting the slant distance on the ground, the largest error $E_d$ at ground distance $d$ is: \vspace{-0.1in} \begin{equation}\label{eq:Ed} E_{d}(\gamma_d, \gamma_h, \epsilon_s) \approx \gamma_d + \frac{h}{d}\gamma_h + \epsilon_s\sqrt{1+\frac{h^2}{d^2}} \end{equation} \end{fact} Analyzing Eq.~\eqref{eq:Ed}, it is clear that when $d$ is very small, the ground error is very large. Increasing $d$, the impact of both the instrumental and altitude \revision{accuracies} decreases, but $E_d$ cannot be smaller than the rolling \revision{accuracy} $\gamma_d$. In conclusion, the ground error can be bounded by adding a constraint on the minimum ground distance (\ensuremath{d\textsubscript{min}}\xspace) between the drone and the GD\xspace. If the drone ensures that $d \ge \ensuremath{d\textsubscript{min}}\xspace$, then the {\em ground \revision{accuracy}} $\epsilon_d$, i.e., the maximum error on the ground distance, is bounded by: \begin{equation}\label{eq:ground_distance_combined_bound_def} \epsilon_d = \epsilon_d(\gamma_d, \gamma_h, \epsilon_s) \approx \gamma_d + \frac{h}{\ensuremath{d\textsubscript{min}}\xspace}\gamma_h + \epsilon_s\sqrt{1+\frac{h^2}{\ensuremath{d\textsubscript{min}}\xspace^2}} \end{equation} \revision{Our first takeaway is that the ground \revision{accuracy} $\epsilon_d$ can be controlled by bounding the ratio $h/d$ between altitude and ground distance. } \vspace{5pt} \revision{\paragraph{A2G links and ground error} In this paragraph, we explain how the quality of the A2G communication link between GD\xspace and MA\xspace impacts our results. According to the model in~\cite{al2014optimal}, each A2G link has a certain probability $P(\text{LoS})$ to be in LoS and $P(\text{NLoS})$ to be in NLoS. $P(\text{LoS})$ depends on the elevation angle $\alpha$ between drone and GD\xspace and on the environment type, i.e., sub-urban, urban, dense, and highrise. Clearly, in crowded environments, links are more likely to be a mix of LoS and NLoS. \begin{table}[ht] \vspace{-0.1in} \renewcommand{\arraystretch}{1.15} \caption{The line of sight probabilities $P(\text{LoS})$ in different environments~\cite{al2014optimal}.} \label{tab:lap} \vspace{-0.1in} \centering \begin{tabular}{cc|cccc} \hline $h/\ensuremath{d\textsubscript{min}}\xspace$ & $\alpha$ & sub-urban & urban & dense & highrise \\ \hline $5.67$ & $80^{\circ}$ & $100\%$ & $100\%$ & $100\%$ &$100\%$ \\ $\sqrt{3}$ & $60^{\circ}$ & $100\%$ & $100\%$ & $100\%$ &$60\%$ \\ $1$ & $45^{\circ}$ & $100\%$ & $97\%$ & $85\%$ & $30\%$ \\ ${1}/{2}$ & $26.5^{\circ}$ & $100\%$ & $75\%$ & $30\%$ & $5\%$ \\ ${1}/{3}$ & $20^{\circ}$ & $100\%$ & $40\%$ & $20\%$ & $\to 0\%$ \\ \hline \end{tabular} \vspace{-0.05in} \end{table} The UWB distance measurements are possible as long as the antennas keep an A2G link. Up to $35 \unit{m}$, even if the elevation angle is small, communications can be established since UWB works in both LoS and NLoS~\cite{www-Deca-ieee}. Beyond $35 \unit{m}$, UWB only works in LoS, and hence LoS links must be guaranteed. Suitable values of $h/\ensuremath{d\textsubscript{min}}\xspace$, such that the elevation angle $\alpha=\arctan(h/\ensuremath{d\textsubscript{min}}\xspace)$ gives LoS links with high probability, have to be selected. For example, as reported in Table~\ref{tab:lap}, in a sub-urban environment $h/\ensuremath{d\textsubscript{min}}\xspace=1/3$ suffices because the A2G link has $100\%$ probability to be in LoS whenever $\alpha \ge 20^{\circ}$.
In an urban environment, links are $100\%$ in LoS when $h/\ensuremath{d\textsubscript{min}}\xspace \ge 1/2$ ($\alpha \ge 26.5^{\circ}$). Similarly, in a highrise environment the minimum ratio for LoS links is $h/\ensuremath{d\textsubscript{min}}\xspace=5.67$. Note that for $h/\ensuremath{d\textsubscript{min}}\xspace < 1/3$ ($\alpha < 20^{\circ}$) links could be mixed, and hence UWB might or might not work. Recall that the ground accuracy in Eq.~\eqref{eq:ground_distance_combined_bound_def} can be bounded by selecting a small $h/\ensuremath{d\textsubscript{min}}\xspace$. Keeping in mind the maximum UWB NLoS range of $35 \unit{m}$, $h/\ensuremath{d\textsubscript{min}}\xspace$ cannot be chosen freely. However, since in our experiments we work in a sub-urban, obstacle-free, and flat environment, any ratio $h/\ensuremath{d\textsubscript{min}}\xspace \ge 1/3$ is sufficient to be in LoS. } \vspace{-0.1in} \section{Localization Error for Trilateration Based Algorithms}\label{sec:loc-error} Once a GD\xspace has collected a suitable number of distance measurements from the MA\xspace, it can be localized by invoking any localization algorithm. A very common approach for localization is {\em trilateration}. In Section~\ref{sec:error-trilateration} we analytically derive the {\em localization trilateration error} $E^T_L$ and the {\em localization trilateration \revision{accuracy}} $\epsilon^T_L$, which are incurred by any algorithm based on this method. Subsequently, in Section~\ref{sec:omni-scan-algs}, we discuss the trilateration based algorithms that are considered in our experiments. \begin{figure*}[ht] \centering \subfloat[Linearization of each measurement.]{% \def0.5{0.85} \input{figures/trilateration_error.pdf_tex} \label{fig:trilateration_error} } \subfloat[Same signs estimation.]{% \def0.5{0.85} \input{figures/trilateration_error_sin.pdf_tex} \label{fig:trilateration_error_sin} } \subfloat[Different signs estimation.]{% \def0.5{0.85} \input{figures/trilateration_error_cos.pdf_tex} \label{fig:trilateration_error_cos} } \caption{Trilateration error under different conditions.} \label{fig:trilateration_error_proof} \vspace{-0.1in} \end{figure*} \vspace{-0.05in} \subsection{Trilateration Error}\label{sec:error-trilateration} This section discusses the localization error $E^T_L$ that may affect the estimated GD\xspace's position when the trilateration procedure is applied. Let us briefly recall that the trilateration procedure for estimating the position of the object $P$ takes as input three ground distances $d_1$, $d_2$, and $d_3$ of $P$ from three waypoints $W_1$, $W_2$, and $W_3$, respectively. The procedure returns, as the GD\xspace's estimated position $P$, the intersection of the three circumferences corresponding to the radii $d_1$, $d_2$, and $d_3$ centered at the projections $W'_1$, $W'_2$, and $W'_3$ of the waypoints. Due to the ground errors, however, the three circumferences do not intersect at a single point, but they delimit a small {\em star} area, as depicted in Figure~\ref{fig:trilateration_error}. In fact, in place of each circumference of radius $d_i$, a pair of extreme circumferences is drawn: one whose radius is affected by the maximum positive $E_d$ error ($d_i + E_d$, measurement overestimation) and one whose radius is affected by the maximum negative $E_d$ error ($d_i - E_d$, measurement underestimation).
Assuming that all the ground distances are sufficiently large compared to the ground error, these extreme circumferences can be linearized (i.e., replaced by the tangent, which is perpendicular to the radius) without significantly changing the area. Each non-parallel pair of linearized circumferences intersects at a single point, forming overall $12$ different points that correspond to the vertices of the star shape. Note that $P$ is at the center of the star. The trilateration procedure returns as the estimated position, instead of the exact intersection $P$, a point $P'$ in the star. The point $P'$ is selected by means of the least-squares-error method. In fact, given three ground measurements, the estimated position of $P$ is the point $(x_P,y_P)$ that minimizes the sum of the squared residuals, i.e.: \vspace{-0.05in} \begin{equation}\label{eq:trilateration} \begin{aligned} \text{min} \quad & \delta^2_1 + \delta^2_2 + \delta^2_3 \\ \text{s.t.} \quad & \sqrt{(x_{W'_i}-x_P)^2+(y_{W'_i}-y_P)^2} + \delta_i = \overline{W'_i P}, \quad i=1,2,3. \end{aligned} \end{equation} The largest value of the positioning error, i.e., $\overline{PP'}$, called the {\em localization trilateration error} $E^T_L$, or simply {\em trilateration error}, occurs when the estimated position $P'$ is at the furthest vertex of the star shape. In other words, the positioning error is bounded by the distance between the center of the star $P$ (i.e., the actual position of the GD\xspace) and its farthest vertex. As an example, in Figure~\ref{fig:trilateration_error_sin}, the distance between the actual point $P$ and the estimated point $P'$ at the intersection of two measurement underestimations $d_2 (-)$ and $d_3 (-)$ is $\frac{E_d}{\sin(\beta/2)}$, where $\beta$ is one of the three different angles in which the turn angle at $P$ is divided by the lines $\overline{W'_1P}$, $\overline{W'_2P}$, and $\overline{W'_3P}$ (see Figure~\ref{fig:angular_aperture}). In Figure~\ref{fig:trilateration_error_cos}, the distance between $P$ and $P'$ that results from the measurement overestimation, i.e., $d_1 (+)$, and the measurement underestimation, i.e., $d_3 (-)$, is depicted. In this case, the distance $\overline{PP'}=\frac{E_d}{\cos(\overline{\beta}/2)}$. For each vertex of the star, depending on the signs of the estimations ($+$ overestimation, $-$ underestimation) of each pair of circumferences, we have: $\frac{E_d}{\sin(\beta_i/2)}$ if the signs are the same; and $\frac{E_d}{\cos(\beta_i/2)}$ if the signs are different, where $\beta_1 \le \beta_2 \le \beta_3$ are the three different angles formed at $P$ such that $\sum_i \beta_i = \pi$. In the following, we prove that the farthest vertex occurs when the measurement estimations have the same signs and the angle is minimum. \begin{lemma}[\cite{bettisorbelli2018accuracy}] Let $\ensuremath{\beta\textsubscript{min}}\xspace = \min_{i}\{\beta_i\}$, $\ensuremath{\beta\textsubscript{max}}\xspace = \max_{i}\{\beta_i\}$ and $\sum_i \beta_i = \pi$. Then $\sin ( \frac{\ensuremath{\beta\textsubscript{min}}\xspace}{2} ) \leq \cos ( \frac{\ensuremath{\beta\textsubscript{max}}\xspace}{2} )$. \end{lemma} \begin{IEEEproof} Let $\ensuremath{\beta\textsubscript{min}}\xspace = \min_{i}\{\beta_i\}$ and $\ensuremath{\beta\textsubscript{max}}\xspace = \max_{i}\{\beta_i\}$.
Then, we have: $\ensuremath{\beta\textsubscript{max}}\xspace \leq \pi - 2\ensuremath{\beta\textsubscript{min}}\xspace \Rightarrow \frac{\ensuremath{\beta\textsubscript{max}}\xspace}{2} \leq \frac{\pi}{2} - \ensuremath{\beta\textsubscript{min}}\xspace$, from which $\cos ( \frac{\pi}{2} -\ensuremath{\beta\textsubscript{min}}\xspace ) \leq \cos ( \frac{\ensuremath{\beta\textsubscript{max}}\xspace}{2} )$, and thus $\sin(\ensuremath{\beta\textsubscript{min}}\xspace) \leq \cos ( \frac{\ensuremath{\beta\textsubscript{max}}\xspace}{2} )$. \noindent Since $0 \leq \ensuremath{\beta\textsubscript{min}}\xspace \leq \pi/3$, it yields: \begin{gather} \sin \left( \frac{\ensuremath{\beta\textsubscript{min}}\xspace}{2} \right) \leq \sin(\ensuremath{\beta\textsubscript{min}}\xspace) \leq \cos \left( \frac{\ensuremath{\beta\textsubscript{max}}\xspace}{2} \right) \nonumber \end{gather} Thus, the furthest vertex is at distance $\frac{E_d}{\sin (\frac{\ensuremath{\beta\textsubscript{min}}\xspace}{2})}$ from $P$. \end{IEEEproof} \begin{theorem}[\cite{bettisorbelli2018accuracy}] Given the \revision{accuracies} $\epsilon_s$, $\gamma_d$, and $\gamma_h$, given \ensuremath{d\textsubscript{min}}\xspace, and recalling that $\epsilon_{d}(\gamma_d, $ $\gamma_h, \epsilon_s) \approx |\gamma_d| + \frac{h }{\ensuremath{d\textsubscript{min}}\xspace}|\gamma_h| + |\epsilon_s|\sqrt{1+\frac{h^2}{\ensuremath{d\textsubscript{min}}\xspace^2}}$, the localization trilateration \revision{accuracy}, defined as the maximum trilateration error, is obtained as: \begin{equation} \label{eq:eps_l} \epsilon^T_L(\gamma_d, \gamma_h, \epsilon_s) = \frac{\epsilon_d(\gamma_d, \gamma_h, \epsilon_s) }{\sin \left(\frac{\ensuremath{\beta\textsubscript{min}}\xspace}{2}\right)} \end{equation} \end{theorem} Therefore, from Eq.~\eqref{eq:eps_l}, we learn that, given a certain ground error, the localization error is minimized when $\ensuremath{\beta\textsubscript{min}}\xspace \rightarrow \frac{\pi}{3}=60^{\circ}$. Figure~\ref{fig:precision_vs_angle_distance} reports an example of the trilateration error ${E^T_L}$ computed by varying both $d$ and $\ensuremath{\beta\textsubscript{min}}\xspace$, and assuming only the instrumental error, i.e., $\gamma_d=\gamma_h=0 \unit{m}$ and $\epsilon_s = 0.10 \unit{m}$. As expected, when both $d$ and $\ensuremath{\beta\textsubscript{min}}\xspace$ tend to $0$, ${E^T_L}$ grows quickly. \begin{figure}[htbp] \vspace{-0.1in} \centering \subfloat[The three angles $\beta_i$.]{% \def0.5{0.65} \input{figures/angular_aperture.pdf_tex} \label{fig:angular_aperture} } \subfloat[$E^T_L$.]{% \includegraphics[scale=0.775]{tikz/precision_vs_angle_distance} \label{fig:precision_vs_angle_distance} } \caption{The trilateration error $E^T_L$: when $d$ and $\ensuremath{\beta\textsubscript{min}}\xspace$ are very small, the error is extremely high.} \label{fig:angles} \end{figure} \revision{Analyzing Eq.~\eqref{eq:eps_l}, it is clear that the localization trilateration \revision{accuracy} $\epsilon^T_L$ can be bounded by keeping the minimum angle $\ensuremath{\beta\textsubscript{min}}\xspace$ as large as possible, i.e., as close as possible to $60^{\circ}$.}
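To make the interplay between Eq.~\eqref{eq:ground_distance_combined_bound_def} and Eq.~\eqref{eq:eps_l} concrete, the following minimal Python sketch (ours, added for illustration; the accuracy values are hypothetical, not measured) evaluates the trilateration accuracy for a few minimum angles:
\begin{verbatim}
import math

def ground_accuracy(gamma_d, gamma_h, eps_s, h, d_min):
    # eps_d ~= gamma_d + (h/d_min)*gamma_h + eps_s*sqrt(1 + (h/d_min)^2)
    r = h / d_min
    return gamma_d + r * gamma_h + eps_s * math.sqrt(1.0 + r * r)

def trilateration_accuracy(gamma_d, gamma_h, eps_s, h, d_min, beta_min_deg):
    # eps_L^T = eps_d / sin(beta_min / 2)
    eps_d = ground_accuracy(gamma_d, gamma_h, eps_s, h, d_min)
    return eps_d / math.sin(math.radians(beta_min_deg) / 2.0)

# Hypothetical accuracies: gamma_d = 1 m, gamma_h = 0.2 m, eps_s = 0.10 m,
# with altitude h = 20 m and minimum ground distance d_min = 40 m.
for beta in (60, 30, 15):
    e = trilateration_accuracy(1.0, 0.2, 0.10, 20.0, 40.0, beta)
    print(f"beta_min = {beta:2d} deg  ->  eps_L^T = {e:.2f} m")
\end{verbatim}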
\revision{Our second takeaway is that a good localization accuracy in trilateration methods can be obtained by keeping the ratio $h/\ensuremath{d\textsubscript{min}}\xspace$, and hence the elevation angle $\alpha$, as small as the LoS communication conditions allow, and by making the minimum angle \ensuremath{\beta\textsubscript{min}}\xspace as close as possible to $60^{\circ}$.} \vspace{-0.05in} \subsection{\textsc{Omni}\xspace and \textsc{Scan}\xspace Localization Algorithms}\label{sec:omni-scan-algs} In this section, we review the trilateration based algorithms \textsc{Omni}\xspace and \textsc{Scan}\xspace considered in our experiments. Based on the above discussion, the localization error for these algorithms is bounded by $\epsilon^T_L$, which is a function of the \revision{accuracies} $\epsilon_s$, $\gamma_d$, and $\gamma_h$, the minimum angle \ensuremath{\beta\textsubscript{min}}\xspace, the altitude $h$, and the minimum distance \ensuremath{d\textsubscript{min}}\xspace. Both algorithms are based on a static path \ensuremath{\Pi}\xspace formed by a series of vertical lines (each called a {\em vertical scan}) connected by horizontal lines. \begin{figure*}[htbp] \centering \small \subfloat[\textsc{Drf}\xspace.]{% \def0.5{0.50} \input{figures/model-drf.pdf_tex} \label{fig:model-drf} } \subfloat[\textsc{IoC}\xspace.]{% \def0.5{0.50} \input{figures/model-xiao.pdf_tex} \label{fig:model-xiao} } \subfloat[\textsc{IoA}\xspace.]{% \def0.5{0.50} \input{figures/model-lee.pdf_tex} \label{fig:model-lee} } \caption{The \textsc{Drf}\xspace, \textsc{IoC}\xspace, and \textsc{IoA}\xspace localization algorithms. In \textsc{IoC}\xspace and \textsc{IoA}\xspace there are two symmetric intersection areas: a third point is required to find and disambiguate the correct intersection area.} \label{fig:intersection_areas} \vspace{-0.1in} \end{figure*} \paragraph{The \textsc{Scan}\xspace Algorithm} \textsc{Scan}\xspace~\cite{koutsonikolas2007path} is one of the first range-based localization algorithms designed for rovers. Each GD\xspace is localized employing trilateration using three waypoints. The main drawback is the collinearity between points in the estimation phase. Since we wish to avoid such undesirable conditions, in our experiments we perform a single trilateration, selecting three non-collinear waypoints from at least two distinct vertical scans. In this slightly improved version of \textsc{Scan}\xspace, the \ensuremath{\beta\textsubscript{min}}\xspace and \ensuremath{d\textsubscript{min}}\xspace constraints may not be satisfied, resulting in large localization errors. \paragraph{The \textsc{Omni}\xspace Algorithm} \textsc{Omni}\xspace~\cite{bettisorbelli2018range} is the first range-based localization algorithm that takes into account the impact of the drone's altitude on the measurement \revision{accuracy} and on the geometry of the waypoints from which trilateration is performed. It logically tessellates the deployment area into diamonds. Then, each GD\xspace, once it has acquired a sufficient number of waypoints/distances from the drone, performs two trilaterations. The first trilateration is invoked using any three non-collinear waypoints in order to compute the logical diamond in which the GD\xspace resides. Since each diamond is associated with an optimal triple of waypoints which satisfy the minimum angle/distance constraints~\cite{bettisorbelli2018range}, any GD\xspace belonging to such a diamond can finally be trilaterated a second time using that triple.
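For reference, the least-squares step of Eq.~\eqref{eq:trilateration}, which both \textsc{Scan}\xspace and \textsc{Omni}\xspace rely on, can be implemented as in the following minimal sketch (ours, added for illustration; the waypoint coordinates and noise level are hypothetical):
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def trilaterate(waypoints, distances):
    # Find (x_P, y_P) minimizing sum_i delta_i^2, where delta_i is the
    # gap between the measured ground distance and |W'_i - P|.
    W = np.asarray(waypoints, dtype=float)  # projections W'_i, shape (3, 2)
    d = np.asarray(distances, dtype=float)  # measured ground distances
    res = lambda p: np.hypot(W[:, 0] - p[0], W[:, 1] - p[1]) - d
    return least_squares(res, x0=W.mean(axis=0)).x

# Three non-collinear waypoints around a GD at (10, 5), with noisy ranges.
W = [(0.0, 0.0), (40.0, 0.0), (20.0, 35.0)]
rng = np.random.default_rng(0)
d = [np.hypot(10.0 - x, 5.0 - y) + rng.normal(0.0, 0.1) for x, y in W]
print(trilaterate(W, d))  # close to [10. 5.]
\end{verbatim}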
In conclusion, \textsc{Omni}\xspace has been proved to be highly accurate, but it requires two trilaterations, as opposed to the single trilateration performed by \textsc{Scan}\xspace. \vspace{-0.1in} \section{Other Localization Algorithms}\label{sec:algorithms} In this section, we describe four more localization algorithms, namely \textsc{Drf}\xspace, \textsc{IoA}\xspace, \textsc{IoC}\xspace, and \textsc{Drb-C}\xspace, not based on trilateration. The first three are {\em range-free}; therefore, their localization \revision{accuracy} depends on the quality of the antenna radiation pattern. Recently, Betti et al.~\cite{bettisorbelli2019ground} experimentally showed the poor \revision{accuracy} of \textsc{Drf}\xspace using relatively inexpensive hardware. In this paper, motivated by these results, we extend these algorithms by considering distance measurements to improve the localization \revision{accuracy}. Specifically, as also detailed in the experiments in Section~\ref{sec:ev}, the GD\xspace stores, for each waypoint that it hears, the relative distance between itself and that waypoint. Exploiting this information, we reformulate all the range-free techniques, making them actually range-based. This way, we mean to overcome the poor localization \revision{accuracy} resulting from the low quality of the radio antenna, while still keeping the original localization procedures. In the following, for each of these extended algorithms, as well as for \textsc{Drb-C}\xspace, we identify the sources of the localization error. However, we do not derive any analytical expression of $E_L$, since the analysis would involve too many variables to be expressed in a closed formula. Nevertheless, we study their error through real experiments in Section~\ref{sec:ev}. \paragraph{The \textsc{Drf}\xspace Algorithm} \textsc{Drf}\xspace~\cite{bettisorbelli2019rangefree} is a lightweight range-free radius-free algorithm designed for drones. This algorithm is based on the notion of {\em chord}. In general, the perpendicular bisector of any chord passes through the center $O$ of the circle itself. Hence, the perpendicular bisectors of two non-parallel chords intersect exactly at $O$. In Figure~\ref{fig:model-drf}, two chords are identified by the pairs $A_1A_2$ and $A_2B_1$. Theoretically, the circle is identified by the receiving disk of the GD\xspace, which is centered at $O$. Accordingly, the GD\xspace starts to estimate its position when it detects two chords. The two chords are detected using the HnH technique~\cite{bettisorbelli2020rangefree} on each scan. The detection of chords incurs several problems that eventually affect the localization accuracy. First, recalling that the MA\xspace regularly broadcasts its current position (waypoint) at discrete intervals of time and that two consecutive waypoints are at the {\em inter-waypoint} distance \ensuremath{I_w}\xspace, the endpoints of the chords may not exactly fall on the circumference of the receiving disk, even if the receiving disk is a perfect circle (e.g., $A_2$ and $A_3$ in Figure~\ref{fig:model-drf}). Moreover, the chords can be improperly defined if the antenna pattern has ``holes'' and ``bubbles'', as experienced in the field and reported in~\cite{bettisorbelli2020rangefree}.
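To illustrate the underlying construction, the following minimal Python sketch (ours, added for illustration; the coordinates are hypothetical) recovers the center $O$ from three points assumed to lie on the receiving circle, by intersecting the perpendicular bisectors of the two chords they define:
\begin{verbatim}
import numpy as np

def center_from_chords(a, b, c):
    # Points on a circle centered at O satisfy |p - O|^2 = r^2; subtracting
    # the equations pairwise gives the bisector conditions
    # (b - a) . O = (|b|^2 - |a|^2) / 2 and (c - b) . O = (|c|^2 - |b|^2) / 2,
    # which are solvable when the two chords are not parallel.
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    M = np.array([b - a, c - b])
    rhs = 0.5 * np.array([b @ b - a @ a, c @ c - b @ b])
    return np.linalg.solve(M, rhs)

# Three waypoints on a circle of radius 5 centered at the GD in (2, 2).
print(center_from_chords((2.0, 7.0), (7.0, 2.0), (-1.0, -2.0)))  # -> [2. 2.]
\end{verbatim}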
\underline{{\em Range-based extension}:} Exploiting the fact that our kit allows us to take distance measurements, the chords can be chosen by selecting three waypoints at a certain fixed distance $d$ from the GD\xspace, relaxing the range-free constraint. In this way, with three waypoints on the same circumference of radius $d$, two chords can be derived. Accordingly, we can obtain a localization error which depends only on the length of \ensuremath{I_w}\xspace and on the error $E_d$. A more detailed explanation of the original version of \textsc{Drf}\xspace can be found in~\cite{bettisorbelli2019ground}. \paragraph{The \textsc{IoC}\xspace Algorithm} \textsc{IoC}\xspace~\cite{xiao2008distributed} is a range-free radius-based localization algorithm initially developed for ground MAs\xspace. Like \textsc{Drf}\xspace, the \textsc{IoC}\xspace algorithm exploits the HnH method in order to detect special points used for building a constraint area that bounds the GD\xspace's position. However, differently from \textsc{Drf}\xspace, \textsc{IoC}\xspace also relies on the value of the communication radius $d$. In fact, initially the GD\xspace detects the pair of endpoints ($A_1$ and $A_2$ in Figure~\ref{fig:model-xiao}) using the HnH method. Subsequently, two more points, called pre-arrival and post-departure ($A_0$ and $A_3$, respectively), are determined using the value of \ensuremath{I_w}\xspace, since the MA\xspace sends its current position at discrete intervals. Note that these four points lie on the same straight line. Then, four circles of radius $d$ centered at each of these four points are drawn. Those circles create two symmetrical intersection areas where the GD\xspace may reside. In order to select the correct intersection area, the GD\xspace needs to detect a third point. Finally, the GD\xspace is localized at the ``center'' of the correct intersection area. The definition of center varies slightly depending on the shape of the intersection area, which may have four or five vertices. \underline{{\em Range-based extension}:} As for \textsc{Drf}\xspace, also in \textsc{IoC}\xspace we exploit the distance measurements for computing all the required points. That is, we select the two waypoints on the same line at distance $d$ from the GD\xspace as $A_1$ and $A_2$, and the preceding and subsequent waypoints as $A_0$ and $A_3$. \paragraph{The \textsc{IoA}\xspace Algorithm} \textsc{IoA}\xspace~\cite{lee2009localization} is a range-free radius-based algorithm very similar to \textsc{IoC}\xspace. Indeed, it builds a similar constrained area using the HnH method and the knowledge of both $d$ and \ensuremath{I_w}\xspace. Once the GD\xspace has detected the two extreme endpoints ($A_1$ and $A_2$ in Figure~\ref{fig:model-lee}), it traces two circles of radii $d$ and $d-\ensuremath{I_w}\xspace$ centered at each of the two points. These circles create two annuli, which intersect in two distinct and symmetrical areas, so also in this case a third point is required. Finally, the GD\xspace estimates its position at the center of such an area, using simple geometrical rules. \underline{{\em Range-based extension}:} As for the previous algorithms, we select the two extreme endpoints $A_1$ and $A_2$ as two waypoints at distance $d$ from the GD\xspace. \vspace{-0.05in} \paragraph{The \textsc{Drb-C}\xspace Algorithm} \textsc{Drb-C}\xspace~\cite{bettisorbelli2019ground} is a lightweight range-based technique designed for UAVs.
The goal of the GD\xspace is to detect two waypoints at distances $d_1$ and $d_2$, and to draw two circumferences centered at these waypoints, of radii $d_1$ and $d_2$, respectively. Then, the GD\xspace knows that it resides at one of the two intersection points of the circumferences, and a third point is required to disambiguate the correct one. In conclusion, we can note that, differently from the trilateration based algorithms \textsc{Omni}\xspace and \textsc{Scan}\xspace in which the least-squares-error method is employed (see Eq.~\eqref{eq:trilateration}), \textsc{Drb-C}\xspace only demands a few algebraic calculations. Finally, Table~\ref{tab:algorithms_evaluation} summarizes the six algorithms that will be compared in our testbed. \begin{table}[ht] \vspace{-0.1in} \renewcommand{\arraystretch}{1.15} \caption{Summary of the compared algorithms.} \label{tab:algorithms_evaluation} \vspace{-0.05in} \centering \begin{tabular}{llcl} \hline name & method & points & error source \\ \hline \textsc{Omni}\xspace~\cite{bettisorbelli2018range} & trilaterations & $3+3$ & geometry \\ \textsc{Scan}\xspace~\cite{koutsonikolas2007path} & trilateration & $3$ & geometry \\ \textsc{Drb-C}\xspace~\cite{bettisorbelli2019ground} & circles intersection & $2+1$ & center \\ \textsc{Drf}\xspace~\cite{bettisorbelli2019rangefree} & bisector intersection & $3$ & chords \\ \textsc{IoC}\xspace~\cite{xiao2008distributed} & points ``center'' & $2+1$ & center \\ \textsc{IoA}\xspace~\cite{lee2009localization} & points ``center'' & $2+1$ & center \\ \hline \end{tabular} \vspace{-0.1in} \end{table} \vspace{-0.1in} \section{Evaluation on a Real Testbed}\label{sec:ev} In this section we present our experimental evaluation. \revision{Initially, in Section~\ref{sec:uwb-performance} we describe the hardware adopted for our testbed.} In Section~\ref{sec:1st-experiments}, we study the ground error $E_d$. In Section~\ref{sec:2nd-experiments}, we study the localization error ${E^T_L}$ of the trilateration method. Finally, in Section~\ref{sec:3rd-experiments}, we run a campaign of experiments with the goal of comparing the localization error of the different algorithms. \vspace{-0.05in} \subsection{\revision{Performance of UWB Antennas}}\label{sec:uwb-performance} The experiments in Sections~\ref{sec:1st-experiments} and~\ref{sec:2nd-experiments} are done using the DecaWave\xspace EVK1000 kit (see Figure~\ref{img:evk1000}), formed by two UWB antennas which are based on the DW1000 UWB chip~\cite{www-Deca-dwm1000}. For the experiments done in Section~\ref{sec:3rd-experiments}, we rely on a larger set of twelve antennas, using the MDEK1001 kit (see Figure~\ref{img:dwm1001}), based on the same DW1000 UWB chip~\cite{www-Deca-dwm1001}. \revision{ According to DecaWave\xspace, those chips have a $6.5 \unit{GHz}$ center frequency, and have a declared and reliable point-to-point range up to $60 \unit{m}$ LoS and $35 \unit{m}$ NLoS~\cite{www-Deca-dwm1001} in a typical use-case. Although the DW1000 chip transmitting power is set to $-41.3 \unit{dBm/MHz}$, and the typical receiver sensitivity is $-93 \unit{dBm/500 MHz}$~\cite{www-Deca-dwm1000}, the received power is influenced by the antenna polarization. In both DecaWave\xspace kits the antennas are vertically polarized, meaning that the module is intended to be vertically positioned to let another vertically polarized antenna observe an omnidirectional radiation pattern in the azimuth plane~\cite{www-Deca-dwm1001}.
For this reason, following the recommendations provided by DecaWave\xspace in their datasheet, in our experiments we always placed our antennas vertically. The antenna placed on the drone is positioned vertically but upside down, keeping the transceiver at the bottom, to prevent the MA\xspace's body from becoming an obstacle between the GDs\xspace and the MA\xspace itself. \begin{figure}[htbp] \vspace{-0.2in} \centering \hfill \subfloat[EVK1000 kit.]{% \includegraphics[height=3.0cm]{images/evk1000.jpg} \label{img:evk1000} } \hfill \subfloat[MDEK1001 kit.]{% \includegraphics[height=2.25cm]{images/dwm1001.jpg} \label{img:dwm1001} } \hfill \subfloat[An antenna.]{% \includegraphics[height=3.0cm]{images/exp-antenna.jpg} \label{img:exp-antenna} } \vspace{-0.1in} \hfill \subfloat[The drone.]{% \includegraphics[height=3.0cm]{images/exp-drone.jpg} \label{img:exp-drone} } \vspace{-0.07in} \caption{The used DecaWave\xspace kits and the 3DR Solo drone.} \label{fig:kits} \vspace{-0.2in} \end{figure} } \vspace{-0.05in} \subsection{Experiments with Ground Error}\label{sec:1st-experiments} In this section we analyze the ground error employing the DecaWave\xspace EVK1000 kit. We start with pre-arranged antenna experiments in which two antennas are used (one acting as the GD\xspace to localize, the other as the MA\xspace). The antenna that acts as GD\xspace is fixed on the ground, while the other one (MA\xspace), fixed on a pole, moves according to the specific experiment, emulating the rolling and altitude errors. \revision{We fixed the drone's position on the ground and that of the GD\xspace, measuring their distance with a Bosch digital laser~\cite{www-laser}. The drone's ground GPS position is then taken as the origin $W'=(0,0,0)$ of the local Cartesian coordinate system used during the experiments.} Then, in the subsequent experiments, we replace the pole with a drone hovering at a certain altitude. The goal is to understand how the drone impacts the measurement error. For each experiment, we record at least $30$ slant distances and determine the final reported value at the $95\%$ confidence level.
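This reduction of the repeated slant readings to a single value with a $95\%$ confidence interval can be sketched in Python as follows (our own minimal illustration; the synthetic readings are hypothetical):
\begin{verbatim}
import numpy as np
from scipy import stats

def mean_with_ci(samples, confidence=0.95):
    # Mean and Student-t confidence half-width of repeated slant readings.
    x = np.asarray(samples, dtype=float)
    half = stats.sem(x) * stats.t.ppf((1.0 + confidence) / 2.0, df=x.size - 1)
    return x.mean(), half

rng = np.random.default_rng(1)
readings = 20.0 + rng.normal(0.0, 0.10, size=30)  # 30 slant readings (m)
m, half = mean_with_ci(readings)
print(f"{m:.3f} m +/- {half:.3f} m")
\end{verbatim}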
\begin{figure*}[htbp] \centering \subfloat[Instrumental error.]{% \includegraphics[scale=0.75]{tikz/measurement_errors_instrumental} \label{fig:measurement_errors_instrumental} } \subfloat[Rolling error.]{% \includegraphics[scale=0.75]{tikz/measurement_errors_rolling} \label{fig:measurement_errors_rolling} } \subfloat[Altitude error.]{% \includegraphics[scale=0.75]{tikz/measurement_errors_altitude} \label{fig:measurement_errors_altitude} } \subfloat[Combined error.]{% \includegraphics[scale=0.75]{tikz/measurement_errors_combined} \label{fig:measurement_errors_combined} } \hfill \vspace{-0.1in} \subfloat[Experimental $\overline{E_d}$.]{% \includegraphics[scale=0.75]{tikz/measurement_errors_drone} \label{fig:measurement_errors_drone} } \subfloat[Experimental $\overline{E^T_L}$: $d$ varies.]{% \includegraphics[scale=0.75]{tikz/compare_measurements_same_angle} \label{fig:compare_measurements_same_angle} } \subfloat[Experimental $\overline{E^T_L}$: $\ensuremath{\beta\textsubscript{min}}\xspace$ varies.]{% \includegraphics[scale=0.75]{tikz/compare_measurements_same_distance} \label{fig:compare_measurements_same_distance} } \caption{The experimental error and the theoretical error in different cases.} \label{fig:compare_real_precision} \vspace{-0.1in} \end{figure*} In the first experiment, we measure the slant distance and compare its projection on the ground with the exact ground distance $d$. Consequently, we compute the {\em experimental ground error} $\overline{E_d}$ and compare it with the theoretical error $E_d$. To verify Eq.~\eqref{eq:ground_distance_instrumental}, Eq.~\eqref{eq:ground_distance_gamma_d_roll_away}, and Eq.~\eqref{eq:ground_distance_gamma_h_bound}, we have measured and reported in Figure~\ref{fig:measurement_errors_instrumental}, Figure~\ref{fig:measurement_errors_rolling}, and Figure~\ref{fig:measurement_errors_altitude} the experimental error $\overline{E_d}$ when the instrumental, rolling, and altitude errors separately affect the GD\xspace on the ground, respectively. We also report the theoretical ground error bound $E_d$. It is interesting to see that, in each plot, the measurement error $E_d$ (solid line) almost always upper-bounds the experimental error $\overline{E_d}$ (dashed line). We also measured and reported in Figure~\ref{fig:measurement_errors_combined} the combined error, where all three components act at once, along with the bound in Eq.~\eqref{eq:ground_distance_combined_bound_def}. The curves almost coincide. In the second experiment, we repeat the previous setting, employing a drone this time. In Figure~\ref{fig:measurement_errors_drone} we report the experimental $\overline{E_d}$ for different altitudes. Since the drone's position is affected by wind, air density, humidity, the strength of the propellers, and GPS error, the slant distance is affected as well. Moreover, we know from Eq.~\eqref{eq:ground_distance_combined_bound_def} that the error $\overline{E_d}$ increases when $h$ increases and when $d$ tends to $0 \unit{m}$. In Figure~\ref{fig:measurement_errors_drone}, we also plot the theoretical error $E_d$ in solid lines, fixing $\epsilon_s=0.10 \unit{m}$ and using $\gamma_d=\{0.6, 0.8, 1.2\} \unit{m}$ and $\gamma_h=\{0.1, 0.15, 0.2\} \unit{m}$ for each $h=\{10, 20, 30\} \unit{m}$, respectively, values that empirically fit the experimental curves. Differently from the previous ones, this is the first experiment that approximates a real scenario. It is interesting to note that we can model the combined error curve even in a non-optimal scenario by tuning in advance the parameters in Eq.~\eqref{eq:ground_distance_combined_bound_def}, which provides a good approximation of the error. In conclusion, based on this first campaign of experiments, we can confirm that the measurement error is small when either the ground distance between the drone and the GD\xspace is large or the altitude of the drone is low. \vspace{-0.1in} \subsection{Experiments on the Trilateration Error}\label{sec:2nd-experiments} In this section, we describe two more comparative experiments to better understand how the localization error can be affected when the trilateration method is applied. From Eq.~\eqref{eq:eps_l}, it is clear that the localization error $E_L$ can be bounded if the three waypoints are sufficiently far from the GD\xspace. In other words, the three points must respect good geometry and minimum distance constraints. \revision{In both experiments we used our 3DR Solo drone as the MA\xspace and placed a single GD\xspace at $P=(0, 0, 0)$. Moreover, the drone's initial position $W'$ was set at the GD\xspace's position, i.e., $W' = P$.} In the first experiment, depicted in Figure~\ref{fig:compare_measurements_same_angle}, we plot the {\em experimental localization trilateration error} $\overline{E^T_L}$ between the estimated and the actual position of the GD\xspace.
Here, we fix the best possible minimum angle $\ensuremath{\beta\textsubscript{min}}\xspace=60^{\circ}$ and progressively decrease the ground distance $d$. For each value of $d$, we perform trilateration using three points which satisfy the optimal geometry. As expected, and according to Eq.~\eqref{eq:eps_l}, $\overline{E_L^T}$ is high when $d$ is short, even though the minimum angle is fixed at the best possible value $60^{\circ}$. In the second experiment, shown in Figure~\ref{fig:compare_measurements_same_distance}, we do the opposite by keeping a sufficiently large ground distance $d=40 \unit{m}$ and decreasing $\ensuremath{\beta\textsubscript{min}}\xspace$ to narrow angles. Here too we perform trilateration and, in agreement with Eq.~\eqref{eq:eps_l}, the error decreases when $\ensuremath{\beta\textsubscript{min}}\xspace$ increases. \vspace{-0.05in}
\subsection{Comparison of Localization Algorithms}\label{sec:3rd-experiments}
In this section, we describe the hardware and software architecture of the comparative testbed. The goal is to evaluate the performance of different localization algorithms in-field. In this testbed, we cannot use the previous EVK1000 kit since it includes only two antennas, which is not sufficient for evaluating a real scenario in which multiple GDs\xspace must be localized at once. Instead, we rely on the larger set of antennas of the newer MDEK1001 kit from DecaWave\xspace, which comprises a set of twelve antennas. In addition, the testbed includes a Raspberry Pi, the main component that auto-pilots the drone via Wi-Fi and sends UWB commands through a single UWB antenna physically connected to it via the Serial Peripheral Interface (SPI). \vspace{-0.05in}
\subsubsection{Testbed setup}
We set a deployment area of $100 \times 100 \unit{m^2}$, and fix a Cartesian coordinate system with origin at the special position {\sc{Home}}\xspace $(0, 0, h_0 = 1 \unit{m})$. Then, we deploy on the ground $n=10$ antennas, each placed at the top of a tripod of height $h_0$. Each antenna (Figure~\ref{img:exp-antenna}), identified by its own ID, is not aware of its position relative to {\sc{Home}}\xspace, even though we know that position in advance. \revision{In fact, as illustrated in Figure~\ref{fig:deployment_area}, the 10 deployed antennas respect a predefined pattern, i.e., they form a series of equilateral triangles (shown in green) with sides of length $30 \unit{m}$, in which each vertex is a GD\xspace. Thus, we are able to measure with reasonable accuracy, before our experiments, the relative distances between the GDs\xspace with the help of a digital laser. Finally, the {\sc{Home}}\xspace position, set between antennas ID 4 and ID 5, is accurately measured with the same digital laser.} By a drone's {\em mission} at a certain altitude $h$, we refer to a drone (see Figure~\ref{img:exp-drone}) that flies at a fixed altitude $h_0+h$ (see Figure~\ref{fig:ranging_precision_setup}) following a certain static path \ensuremath{\Pi}\xspace. For each algorithm, the trajectory \ensuremath{\Pi}\xspace starts and finishes at {\sc{Home}}\xspace and consists of vertical scans connected by horizontal scans (see Figure~\ref{fig:deployment_area}). Once all the GDs\xspace are deployed, the drone starts its mission flying over the deployment area.
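The ranging exchange described next is based on ToA: schematically, single-sided two-way ranging recovers the distance from the round-trip and reply times. The Python sketch below is a simplified illustration of this principle only, not the DecaWave\xspace firmware:
\begin{verbatim}
C = 299_792_458.0   # speed of light (m/s)

def twr_distance(t_round: float, t_reply: float) -> float:
    """Single-sided two-way ranging: the one-way time of flight is half
    of the round-trip time minus the responder's known reply delay."""
    tof = (t_round - t_reply) / 2.0
    return C * tof

# Example: a 150 ns round trip with a 16 ns reply delay -> ~20 m
print(f"{twr_distance(150e-9, 16e-9):.2f} m")   # 20.09 m
\end{verbatim}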
When a GD\xspace and the MA\xspace are within each other's communication range, the devices start a ToA-based distance measurement protocol. Then, the GD\xspace stores the computed distance along with the current MA\xspace's position. In other words, the GD\xspace memorizes the waypoint position and the distance measured at it. At the end of the mission, each GD\xspace estimates its position by invoking a localization algorithm.
\begin{figure}[htbp] \vspace{-0.1in} \centering
\hfill \subfloat[The MA\xspace and GD\xspace $P$.]{% \def0.5{0.8} \input{figures/ranging_precision.pdf_tex} \label{fig:ranging_precision_setup} }
\hfill \subfloat[The deployment area.]{% \def0.5{0.5} \input{figures/deployment_area.pdf_tex} \label{fig:deployment_area} }
\caption{The experimental testbed on the field.} \label{fig:exps} \vspace{-0.1in} \end{figure}
All the compared algorithms require at least three points (see Table~\ref{tab:algorithms_evaluation}) to compute and estimate the position of a GD\xspace. However, each GD\xspace has several stored distance measurements, so it can potentially exploit all of them. In order to better understand how the altitude of the drone and the geometry of the waypoints impact the localization \revision{accuracy}, as already investigated in Section~\ref{sec:2nd-experiments}, we fix two constraints during the selection of the three points (a sketch of this selection rule is given below): \begin{inparaenum}[(i)] \item the ground distance $d$ between the GD\xspace and the MA\xspace, and \item the geometry angle $\beta$ to keep between the three waypoints. \end{inparaenum} Accordingly, we fix $d=\{20, 30, \ldots, 60\} \unit{m}$ and $\beta=\{0, 15, 30\}^{\circ}$, where $\beta = 0^{\circ}$ means an unconstrained geometry. Moreover, we vary the altitude $h=\{10, 20, 30\} \unit{m}$. Clearly, it is not easy to find three points at an exact distance $d$. Thus, we relax the constraint and search for three points at distance $d \pm \tau$, where $\tau$ indicates a tolerance in our measurements (we fix $\tau=1\unit{m}$), needed because the drone reports its position at discrete intervals, i.e., at the inter-waypoint distance \ensuremath{I_w}\xspace. The \ensuremath{I_w}\xspace value is affected by the drone's speed: in our experiments, we observed $\ensuremath{I_w}\xspace=1 \unit{m}$ at a drone's speed of $10 \unit{m/s}$.
\subsubsection{Results}
We compare all the algorithms varying the drone's altitude $h$, the minimum distance $d$ between the GDs\xspace and the waypoints, and the waypoint geometry. In \textsc{Omni}\xspace, by construction, we always select the three furthest waypoints that guarantee good geometry (see Figure~\ref{fig:angular_aperture}). The \textsc{Omni}\xspace error is reported as a reference for the other algorithms.
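As anticipated, the waypoint selection under the distance and geometry constraints can be sketched in Python as follows; the function name and the bearing-based angle test are our own illustrative choices, not the testbed code:
\begin{verbatim}
import numpy as np
from itertools import combinations

def select_waypoints(wps, gd, d, tau=1.0, beta_min_deg=30.0):
    """Return three stored waypoints whose ground distance from the GD lies
    in [d - tau, d + tau] and whose pairwise bearing separation, as seen
    from the GD, is at least beta_min (beta_min = 0 means unconstrained)."""
    wps, gd = np.asarray(wps, float), np.asarray(gd, float)
    vec = wps[:, :2] - gd[:2]                    # ground-plane vectors GD -> waypoint
    dist = np.linalg.norm(vec, axis=1)
    ok = np.flatnonzero(np.abs(dist - d) <= tau) # distance constraint d +/- tau
    for i, j, k in combinations(ok, 3):
        bear = np.degrees(np.arctan2(vec[[i, j, k], 1], vec[[i, j, k], 0]))
        seps = [abs((a - b + 180.0) % 360.0 - 180.0)
                for a, b in combinations(bear, 2)]
        if min(seps) >= beta_min_deg:            # geometry constraint
            return wps[[i, j, k]]
    return None                                  # no compliant triple stored

wp_log = [(20, 0, 10), (0, 20, 10), (-20, 0, 10), (14, 14, 10), (0, -20, 10)]
print(select_waypoints(wp_log, gd=(0, 0, 0), d=20.0))
\end{verbatim}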
\begin{figure}[htbp] \vspace{-0.15in}
\subfloat[$h=10 \unit{m}, \beta = 0^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_trilat_h10_a0} \label{fig:comparison_algorithms_trilat_h10_a0} }
\subfloat[$h=10 \unit{m}, \beta = 30^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_trilat_h10_a30} \label{fig:comparison_algorithms_trilat_h10_a30} } \hfill \vspace{-0.1in}
\subfloat[$h=30 \unit{m}, \beta = 0^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_trilat_h30_a0} \label{fig:comparison_algorithms_trilat_h30_a0} }
\subfloat[$h=30 \unit{m}, \beta = 30^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_trilat_h30_a30} \label{fig:comparison_algorithm_trilats_h30_a30} }
\caption{Errors for \textsc{Scan}\xspace and \textsc{Omni}\xspace.} \label{fig:comparison_algorithms_tri_err} \vspace{-0.05in} \end{figure}
In Figure~\ref{fig:comparison_algorithms_tri_err}, we show the observed errors of \textsc{Scan}\xspace and \textsc{Omni}\xspace along with their theoretical bounds $\epsilon^T_L$ given in Eq.~\eqref{eq:eps_l}, obtained by substituting $\epsilon_s=0.10 \unit{m}$ and the values of $\gamma_d$ and $\gamma_h$ taken from Figure~\ref{fig:measurement_errors_drone}. As expected, \textsc{Omni}\xspace is better than \textsc{Scan}\xspace because the geometry of the waypoints is enforced. The difference between the theoretical and the observed error is smaller for \textsc{Omni}\xspace than for \textsc{Scan}\xspace when $\beta=0^{\circ}$, and almost the same when $\beta=30^{\circ}$.
\begin{figure}[htbp] \vspace{-0.15in}
\subfloat[$h=30 \unit{m}, \beta = 0^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h30_a0} \label{fig:comparison_algorithms_h30_a0} }
\subfloat[$h=30 \unit{m}, \beta = 30^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h30_a30} \label{fig:comparison_algorithms_h30_a30} } \hfill \vspace{-0.1in}
\subfloat[$h=10 \unit{m}, \beta = 0^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h10_a0} \label{fig:comparison_algorithms_h10_a0} }
\subfloat[$h=10 \unit{m}, \beta = 30^{\circ}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h10_a30} \label{fig:comparison_algorithms_h10_a30} }
\caption{For fixed $h$ and $\beta$, the algorithms' error when $d$ varies.} \label{fig:comparison_algorithms_loc_err} \vspace{-0.1in} \end{figure}
\revision{Figure~\ref{fig:comparison_algorithms_loc_err} compares the localization errors $E_L$ of the algorithms when $d$ varies. The localization error of \textsc{Drf}\xspace, \textsc{IoC}\xspace, and \textsc{IoA}\xspace is greater than that of \textsc{Scan}\xspace and \textsc{Omni}\xspace. As noted, the trilateration-based algorithms \textsc{Omni}\xspace and \textsc{Scan}\xspace achieve a good localization but impose several constraints (angle, distance) on the selection of waypoints; they also compute the estimated position by performing a least-squares-error optimization (which is computationally more complex). \textsc{Drb-C}\xspace is not as accurate as \textsc{Omni}\xspace or \textsc{Scan}\xspace because it omits the least-squares-error optimization, but it remains quite good. The chord-based method in \textsc{Drf}\xspace is the least \revision{accurate}. \textsc{IoC}\xspace and \textsc{IoA}\xspace improve over \textsc{Drf}\xspace because their localization techniques use the radius information. The errors are large when $\beta=0^{\circ}$, while they significantly decrease for all the algorithms when $\beta=30^{\circ}$.
This shows that all the algorithms, and not only those based on trilateration, benefit from a good waypoint geometry. The experiments with $h=10 \unit{m}$ in Figures~\ref{fig:comparison_algorithms_h10_a0} and~\ref{fig:comparison_algorithms_h10_a30} have a smaller elevation angle, and thus a smaller error, than those with $h=30\unit{m}$ reported in Figures~\ref{fig:comparison_algorithms_h30_a0} and~\ref{fig:comparison_algorithms_h30_a30}. When $h=30 \unit{m}$, all the ratios satisfy $h/d \ge 1/3$, and since our experiments are in a sub-urban area, all the measurements are likely in LoS; whereas when $h=10 \unit{m}$, the measurements at $d \ge 30\unit{m}$ mix LoS and NLoS. Nonetheless, we do not notice any special behavior, probably thanks to the UWB multipath immunity.
\begin{figure}[htbp] \vspace{-0.15in} \centering
\subfloat[$h=10 \unit{m}, d = 30 \unit{m}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h10_d30} \label{fig:comparison_algorithms_h10_d30} }
\subfloat[$h=10 \unit{m}, d = 50 \unit{m}$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_h10_d50} \label{fig:comparison_algorithms_h10_d50} }
\caption{For fixed $h$ and $d$, the algorithms' error when $\beta$ varies.} \label{fig:fixed_h_d} \end{figure}
Figure~\ref{fig:fixed_h_d} compares the localization error $E_L$ when $h=10 \unit{m}$ and $d=30 \unit{m}$ or $d=50 \unit{m}$, varying the geometry angle $\beta$. From the observed errors, any localization that satisfies $\beta \ge 30^{\circ}$ has a small error, and does not significantly improve when the elevation angle (i.e., the ratio $h/d$) decreases. The decrease of the error when $d$ increases from $30 \unit{m}$ to $50 \unit{m}$ is large when $0^{\circ} \le \beta \le 30^{\circ}$. Finally, note that in Figure~\ref{fig:comparison_algorithms_h10_d50} it holds $h/d=0.2$, which is below the ratio that guarantees $100\%$ LoS in a sub-urban area in Table~\ref{tab:lap}, but we do not notice a meaningful worsening of the error. Figure~\ref{fig:fixed_ratio} plots the error for different pairs of $h$ and $d$ with the same ratio $h/d$. Precisely, we compare two ratios $h/d$: $0.5$ and $1.0$. Each ratio is obtained from three different altitude/distance combinations. For example, for $h/d=0.5$ we consider the combinations $h=\{10, 20, 30\} \unit{m}$ and $d=\{20, 40, 60\} \unit{m}$. The improvement in the accuracy is high when the elevation angle decreases from $45^{\circ}$ to $26^{\circ}$. \textsc{Drf}\xspace, the least accurate algorithm in all our experiments, is very sensitive to the change of the elevation angle.
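For completeness, the least-squares-error step used by the trilateration-based algorithms mentioned above can be sketched in Python as follows; this is a generic linearized formulation on the ground plane, under our assumptions, and not necessarily the exact implementation used in the testbed:
\begin{verbatim}
import numpy as np

def trilaterate(waypoints_xy, ground_ranges):
    """Least-squares 2-D position of a GD from >= 3 waypoint ground
    positions and projected ground distances: subtracting the first
    circle equation from the others gives the linear system A x = b."""
    P = np.asarray(waypoints_xy, float)      # shape (n, 2), n >= 3
    d = np.asarray(ground_ranges, float)
    A = 2.0 * (P[1:] - P[0])
    b = (d[0]**2 - d[1:]**2) + np.sum(P[1:]**2 - P[0]**2, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example: waypoints ~40 m from the GD with ~120 degrees of separation
wps = [(40.0, 0.0), (-20.0, 34.6), (-20.0, -34.6)]
gd_true = np.array([3.0, 4.0])
d = [np.linalg.norm(np.array(w) - gd_true) + 0.1 for w in wps]  # biased ranges
print(trilaterate(wps, d))   # close to (3, 4)
\end{verbatim}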
\begin{figure} \vspace{-0.2in} \centering
\subfloat[$h/d = 1.0$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_hd_10} \label{fig:comparison_algorithms_hd_10} }
\subfloat[$h/d = 0.5$.]{% \includegraphics[scale=0.75]{tikz/comparison_algorithms_hd_05} \label{fig:comparison_algorithms_hd_05} }
\caption{For fixed $h/d$ ratio, the algorithms' error when $\beta$ varies.} \label{fig:fixed_ratio} \end{figure} }
\begin{table}[ht] \vspace{-0.1in} \renewcommand{\arraystretch}{1.15}
\caption{Error between range-free and range-based algorithms, in meters (m).} \label{tab:comparison_algorithms_rf_rb} \centering \vspace{-0.1in}
\begin{tabular}{ll|cccccc} \hline
\multicolumn{2}{c}{ } & \multicolumn{2}{c}{\textsc{Drf}\xspace} & \multicolumn{2}{c}{\textsc{IoC}\xspace} & \multicolumn{2}{c}{\textsc{IoA}\xspace} \\
\multicolumn{2}{c}{ } & RF & RB & RF & RB & RF & RB \\ \hline
\multirow{3}{*}{$h$ (m)} & $10$ & $52.3$ & $16.6$ & $48.8$ & $10.8$ & $47.2$ & $10.8$ \\
& $20$ & $55.1$ & $19.8$ & $49.1$ & $11.2$ & $48.8$ & $11.0$ \\
& $30$ & $57.7$ & $20.4$ & $48.2$ & $13.8$ & $51.2$ & $13.6$ \\ \hline
\end{tabular} \vspace{-0.05in} \end{table}
\revision{In conclusion,} Table~\ref{tab:comparison_algorithms_rf_rb} compares the localization error $E_L$ between the range-free (RF) and range-based (RB) versions of the three original range-free algorithms \textsc{Drf}\xspace, \textsc{IoC}\xspace, and \textsc{IoA}\xspace, for different altitudes $h$. In particular, we report the localization error obtained from our previous testbed~\cite{bettisorbelli2020rangefree}, in which the three algorithms were implemented as pure range-free techniques based on the HnH technique (RF columns), along with the average results shown in Figure~\ref{fig:comparison_algorithms_loc_err} (RB columns). As reported in~\cite{bettisorbelli2020rangefree}, on average, the experimental error of those algorithms is very large (almost $60 \unit{m}$) and variable. These experiments show that the error of the original range-free versions is 3--4 times larger than that of the corresponding extended range-based versions that exploit distance measurements. Moreover, in~\cite{bettisorbelli2020rangefree}, about one-third of the GDs\xspace were left unlocalized by \textsc{IoC}\xspace and \textsc{IoA}\xspace, while \textsc{Drf}\xspace localized all of them. These results, and the fact that our antennas are able to take distance measurements via ToA, fully justify our transformation of \textsc{Drf}\xspace, \textsc{IoC}\xspace, and \textsc{IoA}\xspace into range-based algorithms, confirming that distance measurements help. \vspace{-0.1in}
\section{Conclusions}\label{sec:concl}
In this paper, we analytically studied and experimentally evaluated, through real experiments in the field, the errors that can affect the localization of GDs\xspace using a drone as the MA\xspace. We decomposed the error into measurement error, ground error, and localization error, and provided analytical expressions for these errors. We also linked the ground error with the theory of the A2G communication link via the elevation angle. Our experiments confirm that our analytical study is accurate. Furthermore, the results also show that extending range-free algorithms with range-based measurements significantly increases the localization \revision{accuracy}. \revision{In the future, we plan to extend the analysis to NLoS scenarios. We will investigate the DecaWave\xspace antennas' capabilities, and also modulate and tune the transmission power in different scenarios.
We finally plan to extend our work by proposing a more realistic antenna radiation pattern. } \vspace{-5pt}
\paragraph{Acknowledgments} The authors are grateful to the editor and the reviewers for the valuable comments that helped us improve the quality of the manuscript. This work was partially supported by Project {\em NALP-SAPR} granted by FSE, Project {\em NALP-SAPR2} granted by the University of Perugia, by NATO grant G4936, by the Intelligent Systems Center (ISC) at Missouri S\&T, and by NSF grants CNS-1545050, CNS-1725755, CNS-1818942, and SCC-1952045. \vspace{-0.05in}
\bibliographystyle{IEEEtran}
\begin{abstract}
Aditya-L1 is India's first solar mission, with the Visible Emission Line Coronagraph (VELC) consisting of three spectral channels taking high-resolution spectroscopic observations of the inner corona up to 1.5~R$_\odot$ at 5303 \AA, 7892 \AA, and 10747 \AA. In this work, we present the strategy for the slit-width optimization of VELC using synthetic line profiles, taking into account the instrument characteristics and coronal conditions for log(T) varying from 6 to 6.5. The synthetic profiles are convolved with simulated instrumental scattered light and noise to estimate the signal-to-noise ratio (SNR), which will be crucial for designing future observation plans. We find that the optimum slit width for VELC is 50 $\mu$m, providing sufficient SNR for observations in different solar conditions. We also analyze the effect of the plasma temperature on the SNR at different heights in the VELC field of view for the optimized slit-width, and study the expected effect of the presence of a CME on the spectral channel observations. This analysis will help to plan the science observations of VELC in different solar conditions.
\tiny \fontsize{8}{11}\helveticabold { \section{Keywords:} Corona, Coronagraph, Spectroscopy, Emission lines, Instrumentation} \end{abstract}
\section{Introduction}
Observations of the inner corona (up to 3 R$_\odot$) in white light and emission lines have been made during total solar eclipses over the years, yielding a detailed description of the corona \citep{Baumbach37, Hulst50, Habbal2007ApJ, Habbal2010ApJ, Habbal_2014, Boe2018Freezin}. During total solar eclipses, spectroscopic investigations utilizing the emission lines detected the presence of oscillations and fast magnetohydrodynamic (MHD) waves in the solar atmosphere \citep{Singh1997SoPh, Pasachoff2002SoPh, Sakurai2002SoPh, Singh2011SoPh, Samanta2016SoPh}. As eclipses last only a couple of minutes, regular observations of the inner corona will help in improving our understanding of the solar atmosphere. They may also shed light on the development of the small- and large-scale transients that lead to severe space weather. The availability of observations in emission lines such as H$\alpha$ 6563 \AA, Fe IX 4359 \AA, Fe X 6374 \AA, Fe XI 7892 \AA, Fe XIII 10747 \AA, Fe XIV 5303 \AA, and Ni XV 6702 \AA\ can provide useful thermodynamic diagnostics of different processes occurring in the solar corona \citep{Habbal2011ApJ}. It should be noted that current ground- and space-based instruments lack coronal observations in the majority of these wavelengths. The Coronal Multi-channel Polarimeter (CoMP) instrument \citep{COMP2008SoPh} at the Mauna Loa Solar Observatory (MLSO) provided regular spectro-polarimetric observations of the inner corona up to 1.5 R$_\odot$ at 10747 \AA\ and 10798 \AA\ until 2018. The detection of Alfv\'enic waves in the solar corona using the CoMP data provided support for wave-based models of coronal heating \citep{Tomczyk2007Sci, Morton2015NatCo, Morton2016ApJ, Morton2019NatAs}. The observations by this instrument also provided the first global magnetic map of the solar corona \citep{Yang2020Sci}. Another ground-based instrument, the Norikura coronagraph, located at Norikura, Japan, provided spectroscopic observations of the off-limb corona in emission lines corresponding to Fe X, Fe XI, Fe XIII, and Fe XIV.
The emission line-intensity ratios give an estimate of the temperature, while the line widths can be used to estimate the thermal and non-thermal structure of the emitting plasma. Such high-resolution spectroscopic observations from Norikura and CoMP revealed thermal and non-thermal variations in coronal structures and velocities, and complex variations in line-intensity ratios that indicate the distribution of multi-thermal plasma and turbulence in these structures \citep{Singh_2003, Singh2004ApJ, Singh2006SoPh, Singh_2006, Krishna2013ApJ, McIntosh2012ApJ, Morton2016ApJ, Fan2018SoPh, Tiwari2019ApJ, Pant2019ApJ}. The variations in temperature and non-thermal velocity in coronal structures may give insight into the processes involved in heating the corona and accelerating the solar wind. Therefore, continuous spectroscopic monitoring of the solar corona in such emission lines is required. Among the existing space-based spectrograph instruments, the EUV Imaging Spectrometer (EIS) on Hinode takes observations of the solar corona and upper transition region in the wavelength ranges of 170--210 \AA\ and 250--290 \AA\ \citep{EIS2007SoPh}. Another spacecraft, the Interface Region Imaging Spectrograph (IRIS), takes simultaneous imaging and spectroscopic observations of the photosphere, chromosphere, transition region, and corona in the three pass-bands 1332--1358 \AA, 1389--1407 \AA, and 2783--2834 \AA\ \citep{IRIS2014SoPh}. The Spectral Imaging of the Coronal Environment (SPICE) instrument \citep{SPICE2020} on board the recently launched Solar Orbiter is an imaging spectrometer capable of observing the corona in the extreme ultraviolet (EUV) pass-bands 70.4 nm -- 79.0 nm and 97.3 nm -- 104.9 nm. It should be noted that EIS, IRIS, and SPICE perform observations confined to a small field of view (FOV). The Visible Emission Line Coronagraph (VELC) on board Aditya-L1 \citep{ADITYA2017} will take simultaneous imaging and spectroscopic observations of the inner solar corona in three visible and one infra-red (IR) pass-band from 1.05--1.5 R$_\odot$ \citep{Singh13, VELC17, IAUS2017}. VELC is equipped with a multi-slit spectrograph to study the solar corona with high spatial and temporal resolution in the three emission lines centered at 5303 \AA\ (Fe XIV), 7892 \AA\ (Fe XI), and 10747 \AA\ (Fe XIII), mostly used during eclipse observations. The continuous observations provided by VELC will help attain its scientific objectives, including the diagnostics of the coronal plasma for temperature and velocity, thereby improving our understanding of the process of coronal heating and the acceleration of the solar wind. Moreover, the spectro-polarimetric capability of VELC in the 10747 \AA\ emission line will help to directly estimate the magnetic field of the solar corona. Therefore, it is necessary to understand the performance of the instrument beforehand to plan the observations after launch. In this article we present the characterization of the VELC spectral channels using synthetic spectra generated with CHIANTI 8.0 for different coronal conditions, taking into account the instrument parameters. In Section \ref{sec:synthSpectra} we present the process involved in synthesizing the spectra and converting them to simulated observations. We present a detailed analysis and the results, including the optimization of the slit-width of VELC, followed by the instrument performance for different conditions, in Section \ref{sec:results}. Section \ref{sec:summary} summarizes the analysis, followed by a discussion.
\section{Synthesizing Spectra} \label{sec:synthSpectra}
As discussed before, VELC will take spectroscopic observations of the inner corona at three emission wavelengths centered at 5303 \AA, 7892 \AA, and 10747 \AA. We used the CHIANTI 8.0 atomic database \citep{CHIANTI1997, Zanna2015A&A} to generate emission spectra for these three ionisation states of iron. It should be noted that the mechanism responsible for the emission corona is atomic transitions, unlike the scattering of photospheric light in the case of the K and F corona. Also, it has been found that the F corona dominates beyond 2.5--3 R$_\odot$ \citep{Morgan07}, which is beyond the field of view (FOV) of the VELC spectroscopic channels. The emission lines under consideration are about 100 times brighter than the background K continuum \citep{Stix02}. We have included the continuum while preparing the synthetic spectra with CHIANTI. The contribution of the instrument to the full width at half maximum (FWHM) of the spectra is calculated using the following relation:
\begin{equation} \label{eq:fwhm} \mathrm{FWHM_{instr}} = \sqrt{\Bigg (\frac{\mathrm{dispersion}}{\mathrm{pixel \; scale}} \times \mathrm{slit \; scale} \Bigg )^{2} + (\mathrm{dispersion})^{2}}, \end{equation}
where the dispersion corresponds to 28.4 m\AA/pixel, 31 m\AA/pixel, and 227.3 m\AA/pixel for the three channels, respectively \citep{Singh2019}. The pixel sizes of 6.5 $\mu$m for the visible spectral channels and 25 $\mu$m for the IR channel result in pixel scales of 1.25 arcsec pixel$^{-1}$ and 4.8 arcsec pixel$^{-1}$, respectively. As can be seen from the optical layout of VELC \citep[Figure 1 in][]{kumar2018optical}, light from the set of 4 slits placed in the slit plane is incident on the spectrograph. We vary the slit-width as 20 $\mu$m, 40 $\mu$m, and 60 $\mu$m and investigate its effect on the instrument output (see Section \ref{sec:results}). The density and emission measure for the different scenarios are supplied as inputs to CHIANTI. The synthetic spectra are computed using the IDL procedures of CHIANTI, {\it ch\_synthetic.pro} and {\it make\_chianti\_spec.pro}. The instrumental FWHM calculated using Equation \ref{eq:fwhm} is provided as an input to {\it make\_chianti\_spec.pro} to apply the instrument-induced broadening to the synthetic line. The peak intensity obtained, in physical units of photons cm$^{-2}$ sr$^{-1}$ s$^{-1}$ \AA$^{-1}$, is then converted to the equivalent photoelectrons generated in each pixel per second using the following relation,
\begin{equation} \label{eq-peakPhot} \mathrm{ph\_elecs_{peak}} = \mathrm{(peak \; intensity) \times area \times (solid \; angle) \times dispersion \times efficiency} {\;} \mathrm{ph.electrons/pixel/s}, \end{equation}
where the area of the VELC primary is obtained taking the diameter of the primary mirror as 195 mm \citep{kumar2018optical}. The solid angle subtended is calculated taking the slit scale and the pixel scale along the horizontal and vertical dimensions, respectively. The efficiency of the spectral channels at visible and IR wavelengths is taken to be $\sim$5\% and $\sim$4\%, respectively \citep{Singh2019}. The instrument contribution to the scattered intensity and the noise arising from the detector are also included in the simulated spectra. Scatter studies for the continuum channel of VELC (observing at 5000 \AA\ with a 10 \AA\ pass-band) were done using the Advanced System Analysis Program (ASAP) by \citet{Venkata17}.
As VELC has narrow-band filters for the three channels, we scaled these scattered intensity values taking into account the pass-band of the individual channels. The scatter of the continuum was scaled to the spectral channels using
\begin{equation} \label{eq-scatter} \mathrm{scatter_{spec}} = \frac{\mathrm{dispersion}}{10}\times\frac{\mathrm{slit \; scale \times (pixel \; scale) _{spectral}}}{\mathrm{(pixel \; scale)^{2} _{continuum}}} \times \mathrm{(Scatter_{continuum})} {\;} \mathrm{ph.electrons/pixel/s}, \end{equation}
which gives the number of scattered photoelectrons generated in a pixel of the spectral channel every second. The pixel scale for the continuum channel is 2.5 arcsec pixel$^{-1}$. The constant factor in Equation \ref{eq-scatter} is obtained using the conversion procedure followed in \citet{Patel2018}. The scatter is assumed to be circularly symmetric and is added to the synthetic spectra obtained using CHIANTI in Equation \ref{eq-peakPhot}. This can be considered a worst-case scenario, as scatter is inversely related to the square of the incident wavelength. The final spectra obtained are then used for the further analysis.
\section{Analysis and Results} \label{sec:results}
The dark noise associated with the individual detectors is added to the synthetic spectra of each channel. For the visible spectral channels using CMOS detectors, the dark noise ({\it D}) is up to 15 electrons, whereas in the high-gain mode of the IR detector it is 42 electrons, with readout noise ({\it R}) of 2 and 80 electrons, respectively \citep[VELC team]{Singh2019}. The photon noise ({\it p}) is calculated as the square root of the total photoelectrons generated, including the coronal signal photons obtained in the spectra and the scattered photons. The resulting signal-to-noise ratio (SNR) is then calculated as
\begin{equation} \label{SNR} \mathrm{SNR} = \frac{S}{\sqrt{p^2 + D^2 + R^2}}, \end{equation}
where $S$ is the number of signal photoelectrons, which in the cases analysed is determined from the synthetic spectral signal. The SNR is calculated at the peak intensity of the simulated spectra and also at $\pm$0.5 \AA\ from the peak intensity wavelength. The signal strength at $\pm$0.5 \AA\ indicates whether there is sufficient signal near the wings of the spectral line to obtain a good fit.
\subsection{Slit-width Optimization} \label{sec-swopt}
It was reported by \citet{Singh2019} that the slit-width of VELC needs to be increased to achieve the desired science goals. To obtain an optimised value of the slit-width, we used the synthetic spectra for analysis. The synthetic spectra were generated for a quiet-Sun condition, considering an average coronal temperature of 10$^{6.25}$ K for heights ranging from 1.1 R$_\odot$ to 1.5 R$_\odot$ in steps of 0.1 R$_\odot$. The emission measure and electron density for the simulation, using the coronal parameters \citep{Baumbach37, allen1973book} at these five heights, are: \begin{itemize} \item log(EM) = [27, 26.3, 25.8, 25.36, 25] \item log(n$_e$) = [8.2, 7.85, 7.6, 7.38, 7.2]. \end{itemize} The spectra were generated for the three channels at the five heights, including the scattered intensity and the noises. The SNRs per pixel per second at the peak intensity and at $\pm$0.5 \AA\ from the peak wavelength, calculated for the different slit-widths, are tabulated in Table \ref{tab:diffTemp}.
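For concreteness, the conversion chain of Eqs.~\eqref{eq:fwhm}, \eqref{eq-peakPhot}, and \eqref{SNR} can be evaluated numerically as in the following Python sketch for the 5303 \AA\ channel; the peak spectral intensity and the scatter level are placeholders standing in for the CHIANTI output and the scaled ASAP scatter, not tabulated instrument values:
\begin{verbatim}
import numpy as np

ARCSEC = np.pi / (180 * 3600)              # radians per arcsec

# 5303 A channel parameters from the text, assuming a 50 um slit
dispersion  = 28.4e-3                      # A per pixel
pixel_scale = 1.25                         # arcsec per pixel (6.5 um pixels)
slit_scale  = 50.0 / 6.5 * pixel_scale     # arcsec subtended by the slit
area        = np.pi * (19.5 / 2.0) ** 2    # cm^2 (195 mm primary)
efficiency  = 0.05                         # channel efficiency
D, R        = 15.0, 2.0                    # dark and readout noise (e-)

# Eq. (1): instrument-induced FWHM
fwhm_instr = np.hypot(dispersion / pixel_scale * slit_scale, dispersion)

# Eq. (2): peak photoelectrons per pixel per second; the peak intensity
# below is a placeholder standing in for the CHIANTI spectral output
peak_intensity = 1e13                      # photons cm^-2 sr^-1 s^-1 A^-1
solid_angle = slit_scale * pixel_scale * ARCSEC ** 2
S = peak_intensity * area * solid_angle * dispersion * efficiency

# Eq. (4): SNR with photon noise from the signal plus scattered light
scatter = 152.0                            # e-/pixel/s, placeholder level
p = np.sqrt(S + scatter)                   # photon noise
snr = S / np.sqrt(p ** 2 + D ** 2 + R ** 2)
print(f"FWHM = {fwhm_instr*1e3:.0f} mA, S = {S:.0f} e-/pix/s, SNR = {snr:.1f}")
\end{verbatim}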
\begin{table}[!ht] \centering \begin{tabular}{ccccccc} \hline \\
& Distance (R$_\odot$) & 1.1 & 1.2 & 1.3 & 1.4 & 1.5 \\ \hline \\
\multicolumn{7}{c}{Slit width = 20 $\mu$m} \\ \hline \\
\multirow{4}{*}{5303 \AA} & Peak & 659 & 182 & 80 & 41 & 25 \\
& Peak + scatter & 720 & 223 & 110 & 65 & 43 \\
& SNR (peak) & 21.39 & 8.56 & 4.34 & 2.39 & 1.51 \\
& SNR ($\pm$0.5 \AA) & 6.71 & 2.22 & 1.02 & 0.56 & 0.37 \\ \hline \\
\multirow{4}{*}{7892 \AA} & Peak & 53 & 17 & 9 & 5 & 3 \\
& Peak + scatter & 120 & 62 & 42 & 31 & 23 \\
& SNR (peak) & 3.15 & 1.08 & 0.58 & 0.33 & 0.19 \\
& SNR ($\pm$0.5 \AA) & 1.5 & 0.53 & 0.31 & 0.18 & 0.13 \\ \hline \\
\multirow{4}{*}{10747 \AA} & Peak & 14349 & 5439 & 2820 & 1619 & 1041 \\
& Peak + scatter & 16212 & 6681 & 3731 & 2344 & 1593 \\
& SNR (peak) & 113.02 & 64.06 & 41.63 & 27.82 & 19.64 \\
& SNR ($\pm$0.5 \AA) & 95.48 & 52.01 & 33.02 & 21.68 & 15.31 \\ \hline \\
\multicolumn{7}{c}{Slit width = 40 $\mu$m} \\ \hline \\
\multirow{4}{*}{5303 \AA} & Peak & 1287 & 355 & 155 & 80 & 48 \\
& Peak + scatter & 1409 & 436 & 215 & 128 & 84 \\
& SNR (peak) & 31.79 & 13.76 & 7.35 & 4.23 & 2.71 \\
& SNR ($\pm$0.5 \AA) & 11.5 & 4.09 & 1.95 & 1.05 & 0.66 \\ \hline \\
\multirow{4}{*}{7892 \AA} & Peak & 104 & 34 & 17 & 9 & 6 \\
& Peak + scatter & 237 & 123 & 82 & 61 & 46 \\
& SNR (peak) & 5.69 & 2.09 & 1.08 & 0.58 & 0.39 \\
& SNR ($\pm$0.5 \AA) & 2.65 & 0.98 & 0.51 & 0.29 & 0.18 \\ \hline \\
\multirow{4}{*}{10747 \AA} & Peak & 28018 & 10620 & 5506 & 3161 & 2033 \\
& Peak + scatter & 31744 & 13104 & 7328 & 4610 & 3137 \\
& SNR (peak) & 162.34 & 95.41 & 64.55 & 45.02 & 32.97 \\
& SNR ($\pm$0.5 \AA) & 137.81 & 77.52 & 50.88 & 34.54 & 25.15 \\ \hline \\
\multicolumn{7}{c}{Slit width = 60 $\mu$m} \\ \hline \\
\multirow{4}{*}{5303 \AA} & Peak & 1861 & 513 & 224 & 115 & 69 \\
& Peak + scatter & 2043 & 635 & 313 & 186 & 123 \\
& SNR (peak) & 39.04 & 17.45 & 9.62 & 5.64 & 3.67 \\
& SNR ($\pm$0.5 \AA) & 15.76 & 6.28 & 2.94 & 1.59 & 1.03 \\ \hline \\
\multirow{4}{*}{7892 \AA} & Peak & 152 & 50 & 24 & 14 & 9 \\
& Peak + scatter & 351 & 183 & 121 & 92 & 68 \\
& SNR (peak) & 7.78 & 2.99 & 1.51 & 0.89 & 0.58 \\
& SNR ($\pm$0.5 \AA) & 3.63 & 1.37 & 0.71 & 0.39 & 0.29 \\ \hline \\
\multirow{4}{*}{10747 \AA} & Peak & 40476 & 15342 & 7954 & 4567 & 2937 \\
& Peak + scatter & 46065 & 19068 & 10686 & 6741 & 4593 \\
& SNR (peak) & 196.93 & 117.28 & 80.66 & 57.37 & 42.81 \\
& SNR ($\pm$0.5 \AA) & 168.1 & 95.62 & 63.60 & 43.80 & 32.37 \\ \hline
\end{tabular}
\caption{Optimization of the slit-width using the SNR calculations for slit-widths of 20 $\mu$m, 40 $\mu$m, and 60 $\mu$m for log(T) = 6.25. The parameters peak and peak+scatter are in units of photoelectrons/pixel/second.} \label{tab:diffTemp} \end{table}
It can be seen from Table \ref{tab:diffTemp} that as the slit-width is increased, there is an improvement in the SNR of all the channels. The increased SNR is also observed at the wings of the spectral line used for the analysis. However, the number of photons incident on the detector also increases, which poses the challenge of attaining sufficient SNR without saturating the detector. The CMOS sensors used for the visible spectral channels have a full well capacity of $\sim$30000 electrons in both high and low gain, whereas for the InGaAs IR detector it is $\sim$30300 electrons in high-gain mode \citep{Singh2019}.
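These capacities, together with the count rates tabulated above, set the usable exposure times; a quick saturation check can be sketched in Python as follows (the rates are taken from Table~\ref{tab:diffTemp} and Table~\ref{tab:50mic}, and dark-current accumulation is neglected):
\begin{verbatim}
FULL_WELL = {"CMOS": 30_000, "InGaAs": 30_300}        # electrons

def well_filling(rate_e_per_s: float, exposure_s: float, detector: str) -> float:
    """Fraction of the full well reached for a given photoelectron
    rate and exposure time."""
    return rate_e_per_s * exposure_s / FULL_WELL[detector]

# IR channel at 1.1 Rsun with the 500 ms spectro-polarimeter exposure:
print(f"{well_filling(46065, 0.5, 'InGaAs'):.2f}")    # 60 um slit -> ~0.76 ("~77%")
print(f"{well_filling(39080, 0.5, 'InGaAs'):.2f}")    # 50 um slit -> ~0.64 ("~65%")
\end{verbatim}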
The IR channel is also equipped with a spectro-polarimeter mode, which will operate in high-gain mode with a fixed exposure time of 500 ms. When operated in this mode, it can be noted from Table \ref{tab:diffTemp} that for a slit-width of 60 $\mu$m the incident photons amount to $\sim$23033 electrons at 1.1 R$_\odot$, which is $\sim$77\% of the full well capacity. Thus, the slit-width needs to be $\leq 60\,\mu$m. Therefore, we tested the SNR with a 50 $\mu$m slit-width, keeping the rest of the parameters the same as above. The results are tabulated in Table \ref{tab:50mic}.
\begin{table}[] \centering \begin{tabular}{ccccccc} \hline \\
\multicolumn{7}{c}{Slit width = 50 $\mu$m} \\ \hline \\
& Distance (R$_\odot$) & 1.1 & 1.2 & 1.3 & 1.4 & 1.5 \\ \hline \\
\multirow{4}{*}{5303 \AA} & Peak & 1582 & 437 & 190 & 98 & 59 \\
& Peak + scatter & 1734 & 539 & 265 & 157 & 104 \\
& SNR (peak) & 35.71 & 15.76 & 8.55 & 4.98 & 3.23 \\
& SNR ($\pm$0.5 \AA) & 13.64 & 4.99 & 2.46 & 1.35 & 0.82 \\ \hline \\
\multirow{4}{*}{7892 \AA} & Peak & 128 & 42 & 20 & 12 & 7 \\
& Peak + scatter & 294 & 153 & 101 & 77 & 56 \\
& SNR (peak) & 6.77 & 2.55 & 1.26 & 0.77 & 0.45 \\
& SNR ($\pm$0.5 \AA) & 3.16 & 1.15 & 0.61 & 0.34 & 0.23 \\ \hline \\
\multirow{4}{*}{10747 \AA} & Peak & 34423 & 13047 & 6765 & 3884 & 2498 \\
& Peak + scatter & 39080 & 16152 & 9042 & 5695 & 3878 \\
& SNR (peak) & 180.95 & 107.19 & 73.23 & 51.66 & 38.24 \\
& SNR ($\pm$0.5 \AA) & 154.01 & 87.22 & 57.71 & 39.51 & 29.02 \\ \hline
\end{tabular}
\caption{SNR for the three spectral channels of VELC for the optimized slit-width of 50 $\mu$m for log(T) = 6.25.} \label{tab:50mic} \end{table}
The following can be noticed from Table \ref{tab:50mic}: \begin{enumerate}[(i)] \item With a slit-width of 50 $\mu$m, the IR spectro-polarimeter mode receives an incident photon count of 19540 at 1.1 R$_\odot$ while having sufficient SNR at the same time. \item The slit-width of 50 $\mu$m leads to a photon count of $\sim$65\% of the full well capacity for the IR channel at 1.1 R$_\odot$ in spectro-polarimeter mode. This is a sufficient margin to account for flaring conditions or intensity enhancements in coronal structures. \item The SNR at the peak and wings decreases with height for all the channels. This implies that subsequent frames may be added post facto to enhance the SNR. \item The SNR for the Fe XI (7892 \AA) channel is lower than for the other two channels for the selected simulation parameters. \end{enumerate}
\subsection{Effect of Temperature on SNR}
As the corona contains plasma of different temperatures, after optimizing the slit-width to 50 $\mu$m we analysed the effect of different plasma temperatures on the performance of the spectral channels. The spectra are synthesized for the three channels at a height of 1.1 R$_\odot$, taking an electron density of 10$^{8.2}$ cm$^{-3}$ and EM = 10$^{27}$ cm$^{-5}$. The temperature is varied from log(T) = 6.0 to log(T) = 6.5 in steps of 0.1. The scattered intensity and the noise introduced by the instrument are also added accordingly. The results are tabulated in Table \ref{tab:50Temp}. It is noticed that the three channels show their maximum SNR at different temperatures. This directly implies the importance of these lines for temperature diagnostics of the corona. It can be noted that the 7892 \AA\ channel shows good SNR for relatively cool plasma compared to the other channels, as the Fe XI ion formation temperature is lower than that of the other two ions under consideration.
The SNR values for the three channels in Table \ref{tab:50Temp} also reveal that combining the studies using these three lines will make it possible to investigate plasma over a wide range of temperatures.
\begin{table}[] \centering \begin{tabular}{cccccccc} \hline \\
\multicolumn{8}{c}{Slit width = 50 $\mu$m} \\ \hline \\
& log(T) & 6.0 & 6.1 & 6.2 & 6.3 & 6.4 & 6.5 \\ \hline \\
\multirow{4}{*}{5303 \AA} & Peak & 11 & 249 & 1237 & 1391 & 390 & 68 \\
& Peak + scatter & 163 & 401 & 1389 & 1543 & 542 & 220 \\
& SNR (peak) & 0.55 & 9.92 & 30.75 & 33.04 & 14.04 & 3.21 \\
& SNR ($\pm$0.5 \AA) & 0.05 & 1.76 & 9.98 & 13.81 & 6.02 & 1.48 \\ \hline \\
\multirow{4}{*}{7892 \AA} & Peak & 1025 & 1043 & 342 & 36 & 2 & 1 \\
& Peak + scatter & 1191 & 1209 & 508 & 202 & 168 & 167 \\
& SNR (peak) & 28.95 & 29.24 & 14.31 & 2.21 & 0.13 & 0.06 \\
& SNR ($\pm$0.5 \AA) & 12.63 & 14.97 & 7.04 & 0.98 & 0.05 & 0.05 \\ \hline \\
\multirow{4}{*}{10747 \AA} & Peak & 3183 & 23479 & 42559 & 19635 & 2495 & 214 \\
& Peak + scatter & 7840 & 28136 & 47216 & 24292 & 7152 & 4871 \\
& SNR (peak) & 45.23 & 147.76 & 202.14 & 134.21 & 38.21 & 4.81 \\
& SNR ($\pm$0.5 \AA) & 25.65 & 117.75 & 172.51 & 110.26 & 23.67 & 2.37 \\ \hline
\end{tabular}
\caption{Effect of different plasma temperatures on the SNRs of the VELC spectral channels, estimated at 1.1~R$_\odot$.} \label{tab:50Temp} \end{table}
\begin{figure}[!ht] \centering \centerline{\hspace*{0.05\textwidth} \includegraphics[width=0.33\textwidth,clip=]{0green_peak.pdf} \hspace*{0.002\textwidth} \includegraphics[width=0.33\textwidth,clip=]{0red_peak.pdf} \includegraphics[width=0.33\textwidth,clip=]{0ir_peak.pdf} } \centerline{ \hspace{0.175\textwidth} \color{black}{(a)} \hspace{0.3\textwidth} \color{black}{(b)} \hspace{0.3\textwidth} \color{black}{(c)} \hfill}
\caption{Synthetic spectra for VELC at 1.1 R$_\odot$ for (a) 5303 \AA, (b) 7892 \AA, and (c) 10747 \AA\ for a 50 $\mu$m slit-width at their respective line formation temperatures. Photons due to instrument scattering are added to the spectra.} \label{fig:50mic_peak} \end{figure}
We then synthesized the lines at their respective peak formation temperatures and proceeded as above. The temperatures chosen are close to the peak line formation temperatures for these lines, which are 10$^{6.27}$~K, 10$^{6.1}$ K, and 10$^{6.2}$ K for 5303 \AA, 7892 \AA, and 10747 \AA, respectively \citep{allen1973book}. We synthesized these three spectral lines at heights ranging from 1.1 to 1.5 R$_\odot$, with the emission measure and electron density as mentioned in Section \ref{sec-swopt}, for a slit-width of 50 $\mu$m, considering the scatter and noise additions as in the previous cases. Figure \ref{fig:50mic_peak} shows such synthetic spectra expected to be observed by VELC at 1.1 R$_\odot$. The spectra also include the added scatter and noise values from the instrument at this height. The results of this analysis for the mentioned heights are summarised in Table \ref{tab:50micpeak}, which reports the SNR at the line peak as well as at the wings. On comparison with Table \ref{tab:50mic}, it can be seen that the observed line intensity has sufficient SNR up to larger distances when the lines are formed at their peak temperatures, for all the spectral channels. For 5303 \AA\ and 7892 \AA, when the SNR becomes $\leq$ 5 it can be enhanced by pixel or frame binning as required.
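Since the noise contributions in Eq.~\eqref{SNR} are independent between frames, co-adding $N$ frames (or binning $N$ pixels) improves the SNR by $\sqrt{N}$; the number of frames needed to reach a target SNR can therefore be estimated as in the following sketch:
\begin{verbatim}
import math

def frames_needed(snr_target: float, snr_single: float) -> int:
    """Co-adding N frames with independent noise scales the SNR by
    sqrt(N), so N = ceil((target / single-frame SNR)^2)."""
    return math.ceil((snr_target / snr_single) ** 2)

# 5303 A at 1.5 Rsun: single-second SNR ~3.53 at the line peak (Table 4);
# reaching SNR ~10 for a robust profile fit would take about:
print(frames_needed(10.0, 3.53), "one-second frames")   # -> 9
\end{verbatim}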
\begin{table}[] \centering \begin{tabular}{ccccccc} \hline \\
\multicolumn{7}{c}{Slit width = 50 $\mu$m} \\ \hline \\
& Distance (R$_\odot$) & 1.1 & 1.2 & 1.3 & 1.4 & 1.5 \\ \hline \\
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}5303 \AA\\[1ex] log(T) = 6.27 \end{tabular}} & Peak & 1611 & 486 & 209 & 110 & 65 \\
& Peak + scatter & 1763 & 570 & 284 & 169 & 110 \\
& SNR (peak) & 36.1 & 17.19 & 9.22 & 5.51 & 3.53 \\
& SNR ($\pm$0.5 \AA) & 13.64 & 4.99 & 2.46 & 1.35 & 0.82 \\ \hline \\
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}7892 \AA\\[1ex] log(T) = 6.1 \end{tabular}} & Peak & 1043 & 334 & 161 & 88 & 56 \\
& Peak + scatter & 1209 & 445 & 242 & 153 & 105 \\
& SNR (peak) & 29.24 & 14.07 & 8.15 & 4.94 & 3.31 \\
& SNR ($\pm$0.5 \AA) & 14.97 & 6.31 & 3.40 & 2.03 & 1.32 \\ \hline \\
\multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}10747 \AA\\[1ex] log(T) = 6.2 \end{tabular}} & Peak & 42559 & 16095 & 8331 & 4777 & 3070 \\
& Peak + scatter & 47216 & 19200 & 10608 & 6588 & 4450 \\
& SNR (peak) & 202.14 & 120.42 & 82.90 & 59.04 & 44.13 \\
& SNR ($\pm$0.5 \AA) & 172.51 & 98.70 & 65.88 & 45.52 & 33.66 \\ \hline
\end{tabular}
\caption{SNR for the VELC spectral channels for a slit-width of 50 $\mu$m, taking the peak line formation temperature for each channel.} \label{tab:50micpeak} \end{table}
These cases were simulated assuming an isothermal corona with the electron density calculated using the Baumbach model. As this coronal density model is based on white-light eclipse observations, which include contributions from different temperatures, we used a temperature distribution to calculate the signal variation with height for the three channels. We then compared the densities obtained from the Baumbach model with the values given by the following relation:
\begin{equation} \label{eq-scaleht} n_e = n_0\, e^{-\left(\frac{r-1.1}{H}\right)}, \end{equation}
where n$_0$ is the electron density estimated at 1.1 R$_\odot$ using the Baumbach model and $H$ is the scale height, which depends on the temperature. Using the two estimates of the densities, we performed a chi-square analysis to obtain the best match between the two models (Figure \ref{fig:chi_comp}a). We found that for log(T) = 6.1 the density estimates from the two methods match well. Taking this as the peak temperature, with a width of 0.3 in log(T), we generated 150 random numbers with a Gaussian distribution, as shown in Figure \ref{fig:chi_comp}(b), to perform a Markov chain Monte Carlo simulation \citep[MCMC;][]{Hastings1970}, taking the quiet-Sun densities and EM at the heights used previously, with the 50 $\mu$m slit-width.
\begin{figure}[!ht] \centering \centerline{\hspace*{0.05\textwidth} \includegraphics[width=0.5\textwidth,clip=]{chisq.pdf} \hspace*{0.002\textwidth} \includegraphics[width=0.52\textwidth,clip=]{temp_hist.pdf} } \centerline{ \hspace{0.3\textwidth} \color{black}{(a)} \hspace{0.4\textwidth} \color{black}{(b)} \hfill}
\caption{(a) Chi-square minimisation output as a function of temperature; (b) temperature distribution with peak at log(T) = 6.1 used for the MCMC simulation.} \label{fig:chi_comp} \end{figure}
We then estimated the photoelectrons generated in each pixel every second, with temperatures drawn from this distribution. The variation of the photoelectrons with height is shown in Figure \ref{fig:dist_height} for the three channels. It can be noted that there is a cluster of points near the top at each height for all channels.
These values can be compared with the number of photoelectrons calculated at the line formation temperature of each channel, as specified in Table \ref{tab:50micpeak}. Also, when the points in the clusters in Figure \ref{fig:dist_height} are compared with the counts tabulated in Table \ref{tab:50Temp}, we can infer that the spread near the peak counts in each channel is due to the contribution from temperatures close to the peak line formation temperatures of the respective lines. Thus, even though the counts range from zero to thousands of photoelectrons per second, the maximum contribution will come from the coronal structures emitting at and near the peak temperatures of the VELC channels.
\begin{figure}[!ht] \centering \centerline{\hspace*{0.05\textwidth} \includegraphics[width=0.33\textwidth,clip=]{green_dist.pdf} \hspace*{0.002\textwidth} \includegraphics[width=0.33\textwidth,clip=]{red_dist.pdf} \includegraphics[width=0.33\textwidth,clip=]{ir_dist.pdf} } \centerline{ \hspace{0.175\textwidth} \color{black}{(a)} \hspace{0.3\textwidth} \color{black}{(b)} \hspace{0.3\textwidth} \color{black}{(c)} \hfill}
\caption{Photoelectron variation using synthetic spectra for VELC at different heights for (a) 5303 \AA, (b) 7892 \AA, and (c) 10747 \AA\ for a 50 $\mu$m slit-width with the temperature distribution. It should be noted that the counts for the IR channel are about an order of magnitude greater than for the other two channels, as the IR sensor pixels are larger than the visible sensor pixels.} \label{fig:dist_height} \end{figure}
\subsection{Effect of CME on Spectra}
We also analysed the performance of the instrument for a case in which a CME passes through the spectrograph slits, with the 50 $\mu$m slit-width. It should be noted that the ground-based coronagraph MLSO/KCor has a FOV similar to that of VELC. We used a CME observed by KCor on 2016-01-01. A reference image with the slit locations in the VELC spectral-channel FOV superimposed in yellow on the KCor image is shown in Figure \ref{fig:cmecase}(a). Due to the difference in size between the visible (2560 $\times$ 2160 pixels) and IR (640 $\times$ 512 pixels) channel detectors, the FOV covered is slightly different for the two. A circularly symmetric uniform coronal density based on the Baumbach model \citep{Baumbach37} was used for synthesizing the spectra. Since the white-light intensity is proportional to the electron density, we estimated the electron density enhancement by taking the ratio of the CME and non-CME images. The non-CME image used here was the average intensity image of the images prior to the CME occurrence. The ratio provided the enhancement at the CME location with respect to the background corona. It was found that for this CME the maximum enhancement attained was $\sim$4 times above the background corona. This electron density was then used to synthesize the spectra at these locations with a 1 second exposure time. The emission measure at 1.1 R$_\odot$ during the CME was taken as 10$^{28.9}$ cm$^{-5}$, decreasing to 10$^{25}$ cm$^{-5}$ at a height of 1.5 R$_\odot$, with intermediate values interpolated using a third-degree polynomial. It has been observed that the temperature of a CME can range from 10$^{5.5}$ to 10$^{6.5}$ K \citep{Susino2016ApJ}; hence, an average temperature of 10$^{6.25}$ K was taken during the simulation of this case. The coronagraph slits are kept in the reference position, with the center of the Sun lying midway between slits 2 and 3.
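The density-enhancement estimate from the KCor images can be sketched in Python as a simple image ratio; the arrays below are synthetic stand-ins, and the real KCor calibration steps are omitted:
\begin{verbatim}
import numpy as np

def cme_enhancement(cme_img, pre_cme_imgs):
    """White-light intensity scales with electron density, so the ratio of
    the CME frame to the average pre-CME frame estimates the local density
    enhancement over the background corona."""
    background = np.mean(pre_cme_imgs, axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(background > 0, cme_img / background, 1.0)

# Synthetic check: a patch brightened 4x is recovered as a ~4x enhancement
rng = np.random.default_rng(0)
pre = rng.normal(100.0, 1.0, size=(5, 64, 64))
cme = pre.mean(axis=0).copy()
cme[20:30, 20:30] *= 4.0
ratio = cme_enhancement(cme, pre)
print(round(ratio.max(), 2))   # ~4.0; used to rescale n_e before re-synthesis
\end{verbatim}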
\begin{figure} \centering \centerline{\hspace*{0.05\textwidth} \includegraphics[width=0.5\textwidth,clip=]{ref_kcor3.pdf} \hspace*{0.001\textwidth} \includegraphics[width=0.5\textwidth,clip=]{green_cme1.pdf} } \centerline{ \hspace{0.23\textwidth} \color{black}{(a)} \hspace{0.5\textwidth} \color{black}{(b)} \hfill} \vspace{0.01\textwidth} \centerline{\hspace*{0.05\textwidth} \includegraphics[width=0.5\textwidth,clip=]{red_cme1.pdf} \hspace*{0.002\textwidth} \includegraphics[width=0.5\textwidth,clip=]{ir_cme1.pdf} } \centerline{ \hspace{0.23\textwidth} \color{black}{(c)} \hspace{0.5\textwidth} \color{black}{(d)} \hfill}
\caption{(a) VELC slit locations over-plotted in yellow on the KCor image of 2016-01-01 for the VELC FOV in the IR channel; (b), (c), and (d) synthesized spectra for the four slits of VELC for 5303 \AA, 7892 \AA, and 10747 \AA, respectively. It can be noticed that where the CME is present, the intensity of the spectra is enhanced.} \label{fig:cmecase} \end{figure}
The synthetic spectra, as they would be observed by VELC in the 5303 \AA, 7892 \AA, and 10747 \AA\ channels, are shown in Figure \ref{fig:cmecase}(b), (c), and (d), respectively. Here, the center of slit 4 (from the left) is at a heliocentric distance of 1.11 R$_\odot$. It can be seen that at the location of the CME there is an enhancement of the peak intensity, reflecting the increased electron density, in all three channels of VELC. Comparing the peak electron counts in the presence of the CME with Table \ref{tab:50mic}, we found increments of $\sim$4, $\sim$5, and $\sim$150 times for the 5303 \AA, 7892 \AA, and 10747 \AA\ channels, respectively. For the two visible channels, the electron count is sufficiently below the full well capacity of the detectors even after adding the scattered photons. However, for the IR channel the counts exceed the full well capacity of the detector. This is due to the larger pixel size of the IR channel detector. This implies that, even though exposures of more than 1 second can be set for the visible channels, the exposure should be less than a second for the IR channel to observe CMEs without saturating the detector. For this purpose, when a CME is detected on board using the on-board CME detection logic \citep{Patel2018}, the spectroscopic channels will be configured with a predetermined set of exposures. We plan to use the flag provided by the on-board CME detection algorithm for the continuum channel of VELC to switch the IR observations to low-gain mode with a reduced exposure time for CME observations.
\subsection{SNR Requirements for Specific Cases}
\subsubsection{Magnetometry}
One of the aims of VELC is the measurement of the coronal magnetic field using the IR channel. As the magnetic field in active regions in the corona is of the order of a few tens of Gauss \citep{Lin2000ApJ}, we synthesized the intensity of the weakest Stokes profile, V, for the IR channel, keeping the EM and density the same as in the previous cases, for a slit-width of 50 $\mu$m and a temperature of 10$^{6.2}$~K. The radial variation of the V signal for a magnetic field strength of 10 G is presented in Table \ref{tab:stokesv}. It can be noted that the V/I percentage of polarization for the instrument amounts to $\sim$0.13\%. Taking into account the Poisson noise, at 1.1 R$_\odot$ the SNR will be $\approx$6.
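Since the Stokes-V signal is photon-noise limited, its SNR grows as the square root of the integration time; the time needed to reach a given target SNR can thus be estimated with the simple scaling below (a rough sketch based on the per-second SNR quoted above):
\begin{verbatim}
def integration_time(snr_target: float, snr_1s: float) -> float:
    """Photon-limited SNR grows as sqrt(t): t = (target / per-second SNR)^2."""
    return (snr_target / snr_1s) ** 2

t = integration_time(1000.0, 6.0)         # Stokes-V SNR ~6 per second at 1.1 Rsun
print(f"{t:.0f} s  (~{t / 3600:.1f} h)")  # ~27778 s, i.e. several hours
\end{verbatim}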
To perform the magnetic field measurement, the SNR for Stokes-V should be of the order of $\sim$1000. Acquiring this SNR requires integration times above one hour. Images with nominal exposure times will be recorded to avoid detector saturation and summed on the ground to produce sufficient SNR to provide critical measurements of the magnetic field in coronal active regions.
\begin{table}[] \centering \begin{tabular}{ccccccc} \hline \\
& Distance (R$_\odot$) & 1.1 & 1.2 & 1.3 & 1.4 & 1.5 \\ \hline
& V & 35 & 13 & 7 & 4 & 3 \\
& I & 26164 & 9894 & 5121 & 2937 & 1887 \\
& \% of polarization (V/I) & 0.13194976 & 0.13195072 & 0.13195114 & 0.13195138 & 0.13195152 \\ \hline
\end{tabular}
\caption{Radial variation of the Stokes-V signal in the VELC IR channel for a 50 $\mu$m slit-width.} \label{tab:stokesv} \end{table}
\subsubsection{Doppler Maps}
VELC will be used to produce Doppler maps to study the line-of-sight plasma motions in the solar corona. This will also be helpful to study the turbulence generated in the corona. We used the synthetic spectrum at a height of 1.4 R$_\odot$, where the SNR for 5303 \AA\ is $\approx$5 for the quiet-Sun case, as mentioned in Table \ref{tab:50micpeak}. We then imposed a Doppler velocity of 5 km s$^{-1}$, resulting in a shift of $\approx$88~m\AA\ of the line peak from the rest wavelength position. We added the noise introduced in Section \ref{sec:results} randomly along the spectral line to mimic a near-realistic observation. The final spectrum was then fitted with a Gaussian profile and the Doppler velocity was measured. Figures \ref{fig:doppler}(a) and (b) show the synthetic spectra at 1.4 R$_\odot$ and with 20\% of the former's counts, respectively. The vertical dashed line marks the position of the rest wavelength. The SNR variation within the line is shown in panels (c) and (d) of the same figure. It can be seen that when the SNR at the peak of the spectrum is $\approx$5, a reliable Doppler speed, close to the imposed one, is measured in the synthetic data. On the other hand, when the SNR at the line peak is close to 1, a significant deviation can be seen in the measured Doppler speed (Figure \ref{fig:doppler}b). We found that an SNR of at least 5 is good enough to measure Doppler speeds as low as 5 km s$^{-1}$ using the green channel of VELC when all the noise sources are included in the line profile.
\begin{figure} \centering \centerline{\hspace*{0.05\textwidth} \includegraphics[width=0.5\textwidth,clip=]{test_spectra_2.pdf} \hspace*{0.002\textwidth} \includegraphics[width=0.5\textwidth,clip=]{test_spectra_3.pdf} } \centerline{ \hspace{0.23\textwidth} \color{black}{(a)} \hspace{0.5\textwidth} \color{black}{(b)} \hfill} \vspace{0.01\textwidth} \centerline{\hspace*{0.05\textwidth} \includegraphics[width=0.5\textwidth,clip=]{test_spectra_snr2.pdf} \hspace*{0.002\textwidth} \includegraphics[width=0.5\textwidth,clip=]{test_spectra_snr3.pdf} } \centerline{ \hspace{0.23\textwidth} \color{black}{(c)} \hspace{0.5\textwidth} \color{black}{(d)} \hfill}
\caption{Synthetic green line with noise and Doppler shift added (a) at 1.4 R$_\odot$ and (b) at 20\% of the intensity of (a). The vertical dashed line shows the rest wavelength.
Panels (c) and (d) show the SNR variation within the spectral line for (a) and (b), respectively, where the horizontal dot-dashed line marks SNR = 1.} \label{fig:doppler} \end{figure}
\section{Summary and Discussions} \label{sec:summary}
VELC on board Aditya-L1 will provide a unique opportunity to simultaneously image and perform spectroscopic observations of the inner solar corona in three visible and one IR pass-band. It will be used for spectroscopic diagnostics of the corona up to 1.5 R$_\odot$ using the three emission lines at 5303 \AA, 7892 \AA, and 10747 \AA. It is necessary to simulate the performance of the instrument beforehand, which will be useful for designing the observation plan after launch. In this work we used synthetic spectral data to characterize the spectral channels of VELC for different solar conditions. We synthesized the spectra for the three channels using the CHIANTI atomic database, taking into account the instrument characteristics. We also added the contributions of the instrument, including the scattered intensity and the detector noise, to the synthesized spectra. The scattered intensity available for the continuum channel of the instrument was scaled with the spectral channel parameters. The final spectra were then analysed using the signal-to-noise ratio calculated at the line center wavelength and at $\pm$0.5 \AA\ from the line center. We simulated the synthetic spectra assuming isothermal conditions, with an average coronal temperature of 10$^{6.25}$ K, for the three channels and estimated the SNR of the emission lines at coronal heights from 1.1 to 1.5~R$_\odot$. For the simulation, the slit width was varied as 20 $\mu$m, 40 $\mu$m, and 60 $\mu$m. We found that on increasing the slit width the SNR at the peak intensity and at the wings increased for all three channels. It was found that for a slit width of 60 $\mu$m the IR detector fills up to 77\% of its capacity with the 500 ms exposure in the high-gain mode to be used for spectro-polarimetry. In order to keep a modest margin for this particular mode, the analysis was done taking the slit width as 50 $\mu$m. It was found that a 50 $\mu$m slit width leads to 65\% filling of the IR detector in spectro-polarimeter mode while providing sufficient SNR at the same time. Thus, based on the requirement of this particular mode, we believe that a slit-width of 50 $\mu$m for the VELC spectral channels will be sufficient to study different regions of the solar atmosphere. It can be seen from Table \ref{tab:50mic} that the SNR for 7892 \AA\ is relatively poor compared to the other two channels. This is because different lines have different formation temperatures. Therefore, we also studied the effect of different coronal plasma temperatures on the performance of the VELC spectral channels for the optimized slit-width. We synthesized the spectra for the three channels at 1.1 R$_\odot$, varying the temperature from log(T) = 6.0 to log(T) = 6.5 in steps of 0.1. We found that the SNR at the line peak varies with temperature, such that the maximum SNR for 5303 \AA, 7892 \AA, and 10747 \AA\ is observed at log(T) of 6.3, 6.1, and 6.2, respectively. These temperatures are close to the respective line formation temperatures. A similar increase in SNR is observed in the wings of the synthesized spectral lines. We then analysed the instrument's performance with the 50 $\mu$m slit-width, synthesizing the spectra at their line formation temperatures. The SNR was then estimated at the line peak and the wings for coronal heights from 1.1 to 1.5 R$_\odot$.
From the results of this analysis presented in Table \ref{tab:50micpeak}, it can be observed that there is sufficient SNR in all the channels for the line peak as well as the wings, although it decreases at larger heights. The IR channel has very good SNR even at larger heights due to its bigger pixel size compared to the other two. As these analyses were based on isothermal coronal conditions, we also used an MCMC simulation, taking a Gaussian distribution of temperature peaking at 10$^{6.1}$ K with a width of 10$^{0.3}$ K, to estimate the expected signal at different heights in the VELC FOV. We found that there is a spread in the estimated counts based on the temperature distribution, with a cluster corresponding to temperatures close to the line formation temperature of the individual channels (Figure \ref{fig:dist_height}). The overall analysis shows that for the optimized slit width of 50 $\mu$m, and considering the peak line formation temperature of each line, a reliable signal can be obtained for all the channels. At larger heights, when the SNR becomes $\leq$5, pixel binning could be considered to enhance the signal with respect to the background. There is also an option to increase the exposure time without saturating the detector, which could also boost the signal. For the study of very fast transients requiring short exposure times, the observations could be taken at high cadence. Such short-exposure frames could be further binned to increase the SNR. We also performed a study of a CME case, approximating the enhanced electron density due to the CME of 2016-01-01 from KCor images, which have a FOV similar to that of VELC. We found that the visible spectral channels of VELC could be operated for CME observations with exposure times of more than 1 s, whereas the IR channel will be operated in low-gain mode with reduced exposure time, following the trigger provided by the on-board CME detection algorithm. More CME cases could be analysed in the future with different enhancements, which could help in planning the spectral diagnostics of CMEs using VELC. We also considered the SNR requirements from the VELC point of view for two specific science cases, magnetometry and Doppler mapping. For the measurement of the magnetic field using the IR channel we calculated the Stokes-V intensity, thereby estimating the minimum integration time required. For the Doppler mapping, we used the green line at a height of 1.4 R$_\odot$, where the SNR is $\approx$5. We also considered a case where the intensity of the line is 20\% of the above-mentioned case. For the two cases we added the noise and a Doppler shift, which was later measured by fitting a Gaussian. We found that a minimum SNR of 5 will be required for Doppler mapping using VELC. It should be noted that increasing the slit width results in broadening of the spectral lines and hence a decrease in the spectral resolution. \citet{Singh2019} studied the effect of increasing the slit width for the three spectral channels of VELC. Their analysis also indicates that the slit width should be increased to enhance the SNR while at the same time optimising the spectral resolution to meet the science requirements. These estimates will be helpful to identify the expected observed spectra, including the instrument contribution, which could be de-convolved during the processing to get the true line profile.
The effect of spacecraft drift and jitter, as studied by \citet{ranganathan2019polarimeter}, has not been considered for the studies presented here but will be included in our future work. Such complete spectral information will be helpful for the development of the data pipeline and for extracting the signal from the background. In this work we have presented a few of the solar conditions. A similar study could be extended to other solar features, such as loops, plumes, coronal holes, etc., which result in different ambient coronal conditions. Such extensive studies covering different cases could help in preparing an optimised plan for maximising the science output of the instrument. It should be noted that this study could also be extended to future missions that include spectrographs, for preparing the target science cases that could be addressed with the instrument capabilities. \section*{Conflict of Interest Statement} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. \section*{Author Contributions} MA, VM, and AKS generated the synthetic spectra for different solar conditions using the CHIANTI database, taking the instrument parameters into account. RP and VM converted the synthesized spectra to the instrument's synthetic observations. RP carried out the estimations and prepared the manuscript. KS and DB planned the analysis from the observational point of view. VP provided essential inputs to analyse the results. All authors took part in the discussion. \section*{Acknowledgments} We would like to acknowledge IIA, ISRO and ARIES for providing the necessary facilities and computational resources. We thank the VELC instrument team for providing the instrument parameters when required. We also thank the CHIANTI team for making the database available. RP and AKS would like to thank Samrat Sen for discussions regarding the analysis. RP, MA and VM are supported by DST. CHIANTI is a collaborative project involving the University of Cambridge (UK), the NASA Goddard Space Flight Center (USA), the George Mason University (GMU, USA) and the University of Michigan (USA). \bibliographystyle{frontiersinSCNS_ENG_HUMS}
\section{Introduction} Let $X$ be a connected, reduced and complete scheme over a perfect field $k$ and let $x \in X$ be a $k$-rational point. In \cite{No1}, Nori introduced a $k$-group scheme $\pi^N(X,x)$ associated to essentially finite vector bundles on $X$. In \cite{No2}, Nori extended the definition of $\pi^N(X,x)$ to connected and reduced $k$-schemes. In \cite{BPS}, Biswas, Parameswaran and Subramanian defined the notion of the {\it $S$-fundamental group scheme} $\pi^S(X, x)$ for $X$ a smooth projective curve over any algebraically closed field $k$. This was generalized to higher dimensional connected smooth projective $k$-schemes and studied extensively by Langer in \cite{La, La2}. Let $C$ be a connected smooth projective curve defined over an algebraically closed field $k$. Fix a locally free sheaf $E$ on $C$ of rank $\geq 2$ and an integer $d \geq 2$. Let $\mc Q$ denote the Quot scheme parameterizing torsion quotients of $E$ of degree $d$. It is a smooth and projective variety over $k$. In this article we shall compute the $S$-fundamental group scheme of $\mc Q$. We mention some of the earlier results where fundamental group schemes were computed. In \cite{BH} it is proved that for a smooth projective surface $X$, the etale fundamental group $\pi^{\et}(Hilb_X^n,nx)$ of the Hilbert scheme of $n$ points ($n\geq 2$) is isomorphic to $\pi^{\et}(X,x)_{\rm ab}$. The main result in \cite{PS-surface} is to generalize this to the $S$-fundamental group scheme. In \cite{La2} it is proved that $\pi^S({\rm Alb}(C),0)$ is isomorphic to $\pi^S(C,c)_{\rm ab}$. Let $S_d$ ($d\geq 2$) be the permutation group of $d$ symbols and denote $S^dC := C^d/S_d$. In \cite{PS-curve} the authors prove that $\pi^S(S^dC,d[c])$ is isomorphic to $\pi^S(C,c)_{\rm ab}$. Once we have such a result for the $S$-fundamental group scheme, one deduces similar results for the Nori and etale fundamental group schemes. \noindent {\bf Notation}. From now on, unless mentioned otherwise, we will be working with the following assumptions. Let $k$ be an algebraically closed field. Let $C$ be an irreducible smooth projective curve over $k$. Let $E$ be a locally free sheaf on $C$ of rank $\geq 2$. Fix an integer $d \geq 2$. Let $\mc Q$ denote the Quot scheme parameterizing torsion quotients of $E$ of degree $d$. There is a Hilbert-Chow map $\phi:\mc Q\to S^dC$, the definition of which is recalled in section \ref{section-Hilbert-Chow}. The $S$-fundamental group scheme is defined in Definition \ref{cat-nf}. The main result we prove in this article is the following. \begin{theorem*}[Theorem \ref{m-t}] For any closed point $q \in \mc Q$, there is an isomorphism of affine $k$-group schemes $${\phi_*^S} : \pi^S(\mc Q, q) \xrightarrow{\sim} \pi^S(S^dC, \phi(q)).$$ \end{theorem*} \noindent From this the following corollary follows easily. \begin{corollary*}[Corollary \ref{m-c}] For any closed point $q \in \mc Q$, there are isomorphisms of affine $k$-group schemes ${\phi_*^N} : \pi^N(\mc Q, q) \xrightarrow{\sim} \pi^N(S^dC, \phi(q))$ and ${\phi_*^{\et}} : \pi^{\et}(\mc Q, q) \xrightarrow{\sim} \pi^{\et}(S^dC, \phi(q)).$ \end{corollary*} \noindent In view of \cite{PS-curve} it follows that \begin{corollary}$\pi^{?}(\mc Q,q)\cong \pi^{?}(C,c)_{\rm ab}$ for $?=S,N,\et$. \end{corollary} The key ingredient in the proof of the above theorem is the following corollary regarding the scheme theoretic fiber of the Hilbert-Chow morphism.
Related to this, in \cite[Proposition 5.9]{Kleiman-et-al}, the authors give a constructive proof that each fiber of the Hilbert-Chow map has the same reduction as the product of certain Quot schemes, which, a priori, need not be reduced. We prove the following. Let $D$ be a divisor corresponding to a closed point of $S^dC$. Let $\mc Q_D$ denote the scheme theoretic fiber over the point $D$. Then we have the following result. \begin{corollary*}[Corollary \ref{cor-fiber}] The fiber $\mc Q_D$ is reduced, irreducible and normal. \end{corollary*} \noindent Further, there is a smooth projective rational variety $S_d$ and a birational map $g_d:S_d\to \mc Q_D$, see Proposition \ref{birational}. This allows us to conclude easily, using Grauert's theorem, that every numerically flat bundle on $\mc Q$ is the pullback of a numerically flat bundle along $\phi$. In the rest of this article, $E$ will be a locally free sheaf of rank $\geq 2$ and $d$ will be an integer $\geq 2$. \subsection*{Acknowledgements} We thank Arjun Paul for useful discussions. We thank the authors of \cite{Kleiman-et-al} for their interest. We thank the referee for a very careful reading of this article and for helpful comments. \section{Hilbert-Chow morphism}\label{section-Hilbert-Chow} In this section we recall the definition of the Hilbert-Chow morphism $\phi:\mc Q\to S^dC$. For Hilbert schemes of points, one has a Hilbert-Chow morphism, see \cite[Chapter 7, section 1]{FGAex} for a detailed discussion. Here we describe how to get such a morphism for the Quot schemes we consider. This construction appears in other places, for example, see the introduction in \cite{Bis-Dh-Hu}. The map $\phi$ that we define here is the same as the map $\xi$ defined in \cite{Kleiman-et-al}, in the situation that they work in. There, however, the authors give a more explicit description. Let $p_1:C\times \mc Q\to C$ and $p_2:C\times \mc Q\to \mc Q$ denote the projections and let \begin{equation}\label{hc-e1} 0\to K\to p_1^*E\to B\to 0 \end{equation} denote the universal quotient on $C\times \mc Q$. Since $B$ and $p_1^*E$ are flat over $\mc Q$, it follows that $K$ is flat over $\mc Q$. Let $q\in \mc Q$ denote a closed point. Restricting this quotient to $C\times q$ we get the exact sequence \begin{equation}\label{hc-e2} 0\to K\vert_{C\times q}\to E\to B\vert_{C\times q}\to 0\,. \end{equation} It follows that $K\vert_{C\times q}$ is a locally free sheaf on $C$. From Nakayama's lemma it follows that $K$ is a locally free sheaf on $C\times \mc Q$ of rank $r:={\rm rank}\, E$. Taking the determinant of the inclusion in \eqref{hc-e1} we get an exact sequence $$0\to {\rm det}(K)\to {\rm det}(p_1^*E)\to \mc F\to 0\,.$$ To show that $\mc F$ is flat over $\mc Q$ it suffices to show that the restriction of this sequence to $C\times q$ remains exact on the left. But this is clear, as the restriction of this sequence to $C\times q$ is precisely the sequence obtained by taking the determinant of the inclusion in \eqref{hc-e2}, which remains exact on the left. Thus, on $C\times \mc Q$ we get a quotient $$0\to {\rm det}(K)\otimes{\rm det}(p_1^*E)^{-1}\to \mc O\to \mc F\otimes {\rm det}(p_1^*E)^{-1}\to 0\,.$$ This defines a morphism \begin{equation}\label{defn-hc} \phi:\mc Q\to S^dC\,. \end{equation} In the following sections we will study the fibers of this morphism. \section{Locus where $\phi$ is smooth} Consider the map $\phi:\mathcal{Q}\rightarrow S^{d}C$.
Let $D$ denote the divisor $\sum^{k}_{i=1}d_{i}[c_{i}]$ and consider a quotient $q$ \begin{equation}\label{thick-points} E\xrightarrow{q} \mathcal{O}_{D}\rightarrow 0\,. \end{equation} \begin{lemma}\label{GS-L2} Given a quotient $q$ as above, there is a line bundle $L$ and a surjection $E\rightarrow L\rightarrow 0$ such that $q$ factors as $$E\rightarrow L\rightarrow \mc O_D \,.$$ \end{lemma} \begin{proof} Let $L'$ be any line bundle on $C$. Then we have the exact sequence \[ 0 \to L'(-D) \to L' \to L'|_{D} \to 0\,. \] Applying the functor $\text{Hom}(E,-)$ to the above exact sequence, we get \[ 0 \to \text{Hom}(E,L'(-D)) \to \text{Hom}(E,L') \to \text{Hom}(E,L'|_{D}) \to \text{Ext}^{1}(E,L'(-D))\,. \] Now $\text{Ext}^{1}(E,L'(-D))\cong H^{1}(E^{\vee}\otimes L'(-D))$, so for $L'$ of sufficiently high degree we get $\text{Ext}^{1}(E,L'(-D))=0$, that is, when ${\rm deg}\,L'\gg0$ we have an exact sequence \[ 0 \to \text{Hom}(E,L'(-D)) \to \text{Hom}(E,L') \to \text{Hom}(E,L'|_{D}) \to 0\,. \] In other words, for any homomorphism $E\rightarrow L'|_{D}$, there is a morphism $E\rightarrow L'$ such that the following diagram commutes: \[ \begin{tikzcd} E \arrow[r] \arrow[d] & L'|_{D}\cong \mc O_D \\ L '\arrow[ur] & \end{tikzcd} \] In particular, taking the quotient $q:E\to \mc O_D$, there is a line bundle $L'$ such that $q$ factors as $E\to L'\to \mc O_D$. Let $L$ be the image of $E$ in $L'$. Then, we have a surjection $$E\rightarrow L\rightarrow \mc O_{D}\,,$$ which proves the lemma. \end{proof} \begin{lemma}\label{GS-L3} The map $\phi:\mathcal{Q}\rightarrow S^{d}C$ is smooth at $q$ (corresponding to the quotient in equation \eqref{thick-points}). \end{lemma} \begin{proof} We will show that the map of Zariski tangent spaces $T_{q}\mathcal{Q}\rightarrow T_{\phi(q)}S^{d}C$ is surjective. Let $T:={\rm Spec}\,k[\epsilon]/(\epsilon^2)$ and let $t_0$ denote the closed point of $T$. We will show that for any map $$T\xrightarrow{v} S^{d}C$$ such that the image of the closed point of $T$ is $\phi(q)$, there is a map $T\xrightarrow{v'} \mathcal{Q}$ such that the closed point maps to $q$ and the following diagram commutes \[ \begin{tikzcd} & \mathcal{Q} \arrow[d,"\phi"] \\ T \arrow[r,"v"] \arrow[ru,dashed,"v'"] & S^{d}C \end{tikzcd} \] By the universal property of $S^{d}C$, the morphism $v$ corresponds to a quotient over $C\times T$ given by $$\mathcal{O}_{C\times T}\xrightarrow{J_v} \mathcal{O}_\mathcal{D}\rightarrow 0$$ such that $\mathcal{O}_{\mathcal{D}}$ is $T$-flat and the restriction of $J_v$ to $C\times {\rm Spec}\,k$ is equivalent to the quotient $[\mathcal{O}_C\rightarrow \mathcal{O}_{D}]$ (which corresponds to the point $\phi(q)$). Note that $\mc O_{\mc D}$ is an Artinian ring. Let $f_1:C\times T\to C$ and $f_2:C\times T\to T$ denote the projections. We fix a line bundle $L$ over $C$ as in Lemma \ref{GS-L2}. We have $1\otimes J_v:f_1^*L\to f_1^*L\otimes \mc O_{\mc D}$. Define a quotient $J_{v'}$ over $C\times T$ as the composition \begin{equation}\label{lift-tv} J_{v'}:f^{*}_{1}E\rightarrow f^{*}_{1}L\xrightarrow{1\otimes J_v} f^{*}_{1}L\otimes \mc O_{\mc D}\cong \mc O_{\mc D}\,. \end{equation} Clearly, $f^{*}_{1}L\otimes \mc O_{\mc D}$ is $T$-flat and $J_{v'}|_{C\times \{\text{Spec }k\}}$ is equivalent to $q$ by Lemma \ref{GS-L2}. Hence, $J_{v'}$ induces a morphism $v':T\rightarrow \mathcal{Q}$. Next we show that $\phi\circ v'=v$. Let us denote the kernel of $J_{v'}$ by $E_{v'}$.
Thus, we have an exact sequence $$0\to E_{v'}\stackrel{\iota}{\longrightarrow} f_1^*E\stackrel{J_{v'}}{\longrightarrow} \mc O_{\mc D}\to 0\,.$$ From the $T$-flatness of $\mc O_{\mc D}$ we conclude that $E_{v'}\vert_{C\times t_0}$ is locally free. Now using Nakayama's lemma we conclude that $E_{v'}$ is a locally free sheaf on $C\times T$. Recall from the definition of $\phi$ that the map $\phi\circ v'$ is given by the following quotient on $C\times T$ $$0\to {\rm det}(E_{v'})\otimes {\rm det}(f_1^*E)^{-1}\xrightarrow{{\rm det}(\iota)} \mc O_{C\times T}\to \mc F\to 0\,.$$ Passing to the local rings at $(c,t_0)$ and using equation \eqref{lift-tv}, it is easily checked that $\mc F\cong \mc O_{\mc D}$ and that the above quotient is exactly $J_v:\mc O_{C\times T}\to \mc O_{\mc D}$. This completes the proof of the lemma. \end{proof} \begin{proposition}\label{Generic Smoothness} In every fiber of $\phi$ there is a point at which $\phi$ is a smooth morphism. \end{proposition} \begin{proof} Let $D$ be the divisor corresponding to a point $x\in S^dC$. Fix a line bundle $L$ which is a surjective quotient of $E$. Then the composite $q:E\to L\to L\otimes \mc O_D$ is a quotient such that $\phi(q)=x$. The proposition now follows from Lemma \ref{GS-L3}. \end{proof} \section{The space $S_d$}\label{S_d} Let the rank of the vector bundle $E$ be $r$. We will inductively define $(S_{d},A_{d})$, where $A_{d}$ is a vector bundle of rank $r$ defined over $C\times S_{d}$. Define $S_{0}={\rm Spec}\,k$ and $A_{0}=E$. To define $(S_{j},A_{j})$ we assume that we have defined $(S_{j-1},A_{j-1})$. Let $i_{j-1}:\{c\}\times S_{j-1}\hookrightarrow C\times S_{j-1}$ be the natural closed immersion. Define $S_{j}:=\mathbb{P}(i^{*}_{j-1}A_{j-1})$ and let $f_{j,j-1}:S_{j}\rightarrow S_{j-1}$ be the structure morphism. Finally let $F_{j,j-1}:=id_C\times f_{j,j-1}:C\times S_{j}\rightarrow C\times S_{j-1}$. Let $p_{1,j}$ and $p_{2,j}$ be the projections from $C\times S_{j}$ to $C$ and $S_{j}$, respectively. For each $j$, we have the following diagram \[ \begin{tikzcd} \{c\}\times S_{j} \arrow[r,hook,"i_j"] \arrow[rd, "="]& C\times S_{j} \arrow[r,"F_{j,j-1}"] \arrow[d,"p_{2,j}"] & C\times S_{j-1} \arrow[d,"p_{2,j-1}"] \\ & S_j \arrow[r,"f_{j,j-1}"] & S_{j-1} \arrow[u, bend right=90, "i_{j-1}", labels=below right] \end{tikzcd} \] Let $\mathcal{O}_{j}(1)$ be the universal line bundle over $S_{j}$. Then over $C\times S_{j}$ we have the quotient \begin{align*} F^{*}_{j,j-1}A_{j-1}\rightarrow & (i_{j})_{*}i^{*}_{j}F^{*}_{j,j-1}A_{j-1} \\ = & (i_{j})_{*}f^{*}_{j,j-1}i_{j-1}^*A_{j-1} \\ \rightarrow & (i_{j})_{*}\mathcal{O}_{j}(1)\,. \end{align*} Define $A_{j}$ to be the kernel of the above quotient. Since $(i_j)_*\mc O_j(1)$ is flat over $S_j$, restricting the exact sequence \begin{equation}\label{univ-seq-A_j} 0\to A_j\to F^*_{j,j-1}A_{j-1}\to (i_j)_*\mc O_j(1)\to 0 \end{equation} to $C\times s$ we see that $A_j\vert_{C\times s}$ is torsion-free and hence locally free. It follows from Nakayama's lemma that $A_j$ is locally free on $C\times S_j$. Thus, we have defined $(S_j,A_j)$. It is easy to see, for example, using equation \eqref{e10} in the proof of the next Lemma, that closed points of $S_d$ are in 1-1 correspondence with filtrations \begin{equation}\label{e8} E_d\subset E_{d-1} \subset E_{d-2} \subset \cdots \subset E_0=E \end{equation} where each $E_j$ is a locally free sheaf of rank $r$ on $C$ and $E_j/E_{j+1}$ is a skyscraper sheaf of rank one supported at $c\in C$.
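Observe that each $f_{j,j-1}:S_j\to S_{j-1}$ is a $\mathbb{P}^{r-1}$-bundle, since $i^*_{j-1}A_{j-1}$ is locally free of rank $r$. In particular, $S_d$ is a smooth projective rational variety and, by induction on $j$, $$\dim S_d=\dim S_{d-1}+(r-1)=d(r-1)\,,$$ which is the dimension count used below.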
\section{Birationality of $S_d$ and $\mc Q_{d[c]}$}\label{subsection-birational} Define the following morphisms for $j>i$: \begin{align*} f_{j,i}&=f_{j,j-1}\circ \ldots \circ f_{i+1,i}:S_{j}\to S_{i},\\ F_{j,i}&=F_{j,j-1}\circ \ldots \circ F_{i+1,i}:C\times S_{j}\to C\times S_{i}\,. \end{align*} Note that both of these morphisms are flat. Let $V\subset \mc Q_{d[c]}$ be the open subset whose points parameterize quotients of the type $E\to \mc O_C/\mf m^d_{C,c}$. \begin{lemma}\label{fibre-l2} There exists a morphism $g_{d}:S_{d}\rightarrow \mc Q_{d[c]}$ such that \begin{enumerate}[(i)] \item $g_d$ is surjective on closed points, \item $g_d^{-1}(V)\to V$ is a bijection, \item $S_d\setminus g_d^{-1}(V)\to \mc Q_{d[c]}\setminus V$ has positive dimensional fibers. \end{enumerate} \end{lemma} \begin{proof} We will define a quotient of $p_1^*E$ on $C\times S_{d}$. Using the flatness of $F_{d,i}$, we have inclusions (recall the definition of $A_j$ from \eqref{univ-seq-A_j}) \begin{equation}\label{e10} A_{d}\subset F_{d,d-1}^{*}A_{d-1}\subset F_{d,d-2}^{*}A_{d-2} \subset \ldots \subset F_{d,1}^{*}A_{1}\subseteq p^{*}_{1}E. \end{equation} Define \begin{align*} B_{j}^d&:=p^{*}_{1}E/F_{d,j}^{*}A_{j}\\ &\cong F_{d,j}^{*}(p^{*}_{1}E/A_j)\,. \end{align*} For each $j$ there is an exact sequence on $C\times S_d$ \begin{equation}\label{eq1} 0 \to F_{d,j-1}^{*}A_{j-1}/F_{d,j}^{*}A_j \to B_{j}^d \to B_{j-1}^d \to 0\,. \end{equation} On $C\times S_j$ we have the quotient \eqref{univ-seq-A_j} $$0\to A_j\to F_{j,j-1}^*A_{j-1}\to F_{j,j-1}^{*}A_{j-1}/A_j\cong (i_j)_*(\mc O_j(1))\to 0\,.$$ As $F_{j,j-1}^{*}A_{j-1}/A_j$ is $S_j$-flat, its pullback along $F_{d,j}$, that is, $F_{d,j-1}^{*}A_{j-1}/F_{d,j}^{*}A_j$, is $S_{d}$-flat. When restricted to $C\times s$ for $s\in S_{d}$, it is a degree one torsion sheaf supported at $c$. By induction on $j$, using equation \eqref{eq1}, one sees that $B_j^d$ is $S_d$-flat and the restriction of $B_j^d$ to $C\times s$ is a torsion sheaf of degree $j$ supported at $c$. In particular, \begin{equation}\label{e2} 0\to A_d\to p^{*}_{1}E\rightarrow B^{d}_{d} \to 0 \end{equation} is a quotient such that $B^{d}_{d}$ is $S_{d}$-flat and for each $s\in S_{d}$, $B^{d}_{d}|_{C\times \{s\}}$ is a torsion sheaf of degree $d$ supported at $c$. By the universal property of $\mathcal{Q}$, we have a morphism $$S_{d}\rightarrow \mathcal{Q},$$ such that the set theoretic image of the composition $$S_{d}\rightarrow \mathcal{Q}\xrightarrow{\phi} S^{d}C$$ is the point $d[c]$. Since $S_{d}$ is reduced, the scheme theoretic image is the scheme $\{d[c]\}\hookrightarrow S^{d}C$. In other words, we get that the above morphism factors as \[ \begin{tikzcd} S_{d} \arrow[r,"g_{d}"] & \mc Q_{d[c]} \arrow[r,hook ] & \mathcal{Q}\,. \end{tikzcd} \] A closed point of $S_d$ corresponds to a filtration as in \eqref{e8}. Under $g_d$ this point maps to the quotient $E\to E/E_d$. Conversely, given a closed point of $\mc Q_{d[c]}$ it is clear that we can find a closed point of $S_d$ which maps to it. This proves (i). Suppose $E_d\subset E$ is such that $E/E_d\cong \mc O/\mf{m}^d_{C,c}$. Then for every $0\leq j\leq d$ there is a unique $E_j$ such that $E_d\subset E_j\subset E$ and $E/E_j\cong \mc O/\mf{m}^j_{C,c}$. From this one easily concludes (ii). For a closed point in $\mc Q_{d[c]} \setminus V$, corresponding to a quotient $E\to \mc F_d$, we have ${\rm rank}(\mc F_d\otimes \mc O/\mf{m}_{C,c})\geq 2$. We can construct infinitely many chains $\mc F_d\to \mc F_{d-1} \to \ldots \to \mc F_1$.
Therefore, the fiber over such a closed point is positive dimensional. This proves $(iii)$. \end{proof} \begin{corollary} The fiber $\mc Q_{d[c]}$ is irreducible of dimension $d(r-1)$. \end{corollary} \begin{proof} Since $S_d$ is irreducible, it is clear that $\mc Q_{d[c]}$ is irreducible. It is clear that the dimension of $S_d$ is $d(r-1)$. Thus, the dimension of $\mc Q_{d[c]}$ is at most $d(r-1)$. On the other hand, the dimension of the fiber of $\phi$ over a general point is $d(r-1)$. This shows that the dimension of $\mc Q_{d[c]}$ is at least $d(r-1)$. \end{proof} \begin{corollary} The codimension of $\mc Q_{d[c]}\setminus V$ in $\mc Q_{d[c]}$ is $\geq 2$. \end{corollary} \begin{proof} As $S_d$ and $\mc Q_{d[c]}$ have the same dimension and $S_d$ is irreducible, this follows easily using $(iii)$ in Lemma \ref{fibre-l2}. \end{proof} \begin{corollary}\label{fiber-R1} The fiber $\mc Q_{d[c]}$ satisfies Serre's condition $(R_1)$. \end{corollary} \begin{proof} Since the map $\phi$ is smooth at points $v\in V$, it follows that $\mc O_{\mc Q_{d[c]},v}$ is a regular local ring for all $v\in V$. Further, from the preceding corollary, $V$ contains all points of codimension one. The corollary follows. \end{proof} Next we will show that $g_d$ is birational. Let \begin{equation*} p^{*}_{1}E\rightarrow B' \end{equation*} be the restriction of the universal quotient $B$ over $C\times \mc Q$ to the subscheme $C\times\mc Q_{d[c]}$. Let $i$ denote the inclusion $$i:{\rm Spec}\,(\mathcal{O}_{C,c}/\mf{m}^{d}_{C,c})\times \mc Q_{d[c]}\hookrightarrow C\times \mc Q_{d[c]}\,.$$ \begin{lemma}\label{fibre-l3} There is a coherent sheaf $F_d$ over ${\rm Spec}\,(\mathcal{O}_{C,c}/\mf{m}^{d}_{C,c})\times \mc Q_{d[c]}$ such that $B'=i_*F_d$. \end{lemma} \begin{proof} It is enough to show that $p^{*}_{1}(E\otimes \mathcal{O}(-dc))$ is contained in the kernel of $p_1^*E\to B'$. Denote the kernel by $A'$. Let $0\to E'\xrightarrow{h} E$ be an inclusion of locally free sheaves of the same rank on a scheme $Y$. Let $\mc I$ denote the ideal sheaf determined by ${\rm det}(h)$. Then it is easy to see that $\mc IE\subset h(E')\subset E$. Thus, it suffices to find the ideal sheaf corresponding to the following exact sequence \begin{equation}\label{univ-exact-Q_p} 0 \to A' \xrightarrow{h} p^{*}_{1}E \to B' \to 0 \end{equation} on $C\times \mc Q_{d[c]}$. By the definition of $\phi$, the map $\mc Q_{d[c]}\to \mc Q\xrightarrow{\phi} S^{d}C$ is given by the quotient $$0\to {\rm det}(A')\xrightarrow{{\rm det}(h)} {\rm det}(p_1^*E) \to \mc F\to 0$$ on $C\times \mc Q_{d[c]}$. But since the image of $\mc Q_{d[c]}$ under this morphism is precisely $d[c]$, it follows that this quotient is isomorphic to the quotient $$p_1^*\mc O_C\to p_1^*(\mc O_C/\mf m^d_{C,c})\,.$$ It is clear that the ideal sheaf $\mc I$ corresponding to the exact sequence \eqref{univ-exact-Q_p} is $p_1^*(\mc O_C(-dc))$. The lemma now follows. \end{proof} Define subschemes $$D_j:={\rm Spec}\,(\mathcal{O}_{C,c}/\mf{m}^{j}_{C,c})\times V\stackrel{\alpha_j}{\hookrightarrow} C\times V\,.$$ By the previous lemma there is a sheaf $F_d$ on $D_d$ such that $B'|_{C\times V}\cong (\alpha_d)_*(F_d)$. Clearly $F_d$ is flat over $V$ since $B'$ is. \begin{lemma}\label{fibre-l4} $F_d$ is a line bundle over $D_{d}$. \end{lemma} \begin{proof} By the definition of $V$, for each $v\in V$ we have $(F_d)_v\cong \mc O_C/\mf m^d_{C,c}$. Using the $V$-flatness of $F_d$ and Nakayama's lemma, we see that $F_d$ is a line bundle. \end{proof} \begin{corollary} The restriction $F_j:=F_d\vert_{D_j}$ is a line bundle on $D_j$.
\end{corollary} \begin{remark}\label{comm-alg-rem} We will use the following fact in the proof of the next theorem. Let $A$ and $B$ be rings and let $M$ be an $A\otimes_kB$ module. Let $B\to C$ be a ring homomorphism. Then $$M\otimes_{A\otimes_kB}(A\otimes_kC)\cong M\otimes_{A\otimes_kB}(A\otimes_kB)\otimes_BC\cong M\otimes_BC\,.$$ In particular, if $0\to N'\to N\to M\to 0$ is a short exact sequence of $A\otimes_kB$ modules, where $M$ is flat as a $B$-module, then it remains exact when we apply the functor $-\otimes_{A\otimes_kB}(A\otimes_kC)$. \end{remark} \begin{proposition}\label{birational} The restriction $g_d:g_d^{-1}(V)\to V$ is an isomorphism. \end{proposition} \begin{proof} We will use induction on $j$ to define maps $V\to S_j$. Define $A'_0$ on $C\times V$ to be $p_1^*E$. For $j\geq 1$ define sheaves $A_j'$ on $C\times V$ as follows \begin{equation}\label{e5} 0\to A_j'\to p_1^*E\to (\alpha_j)_*(F_j)\to 0\,. \end{equation} Observe that we have a commutative diagram \[ \xymatrix{ 0\ar[r] & A_{j}'\ar[r]\ar[d] & p_1^*E \ar[r]\ar@{=}[d] & (\alpha_{j})_*(F_{j})\ar[r]\ar[d] & 0\\ 0\ar[r] & A_{j-1}'\ar[r] & p_1^*E \ar[r] & (\alpha_{j-1})_*(F_{j-1})\ar[r] & 0 } \] The kernel of the right vertical arrow is $(\alpha_1)_*(F_1)$. Thus, there is an exact sequence of sheaves on $C\times V$, \begin{equation}\label{exact-1-V} 0\to A_{j}'\to A_{j-1}'\xrightarrow{\delta_{j-1}} (\alpha_1)_*(F_1)\to 0\,. \end{equation} Note that $S_1=\mb P(E_c)$. Thus, to give a map from $V$ to $S_1$ we need to give a line bundle quotient of $E_c\otimes O_V$. Restricting the universal quotient $p_1^*E\to (\alpha_d)_*(F_d)$ on $C\times V$ to $c\times V$ we get the quotient $E_c\otimes \mc O_V\to F_1\,.$ This defines a morphism $h_1:V\to S_1$. On $C\times S_1$ one has the exact sequence \eqref{univ-seq-A_j}. Using Remark \ref{comm-alg-rem} we see that the pullback of this along $id_C\times h_1$ gives the following exact sequence on $C\times V$, \begin{equation*} 0\to (id_C\times h_1)^*A_1\to p_1^*E\xrightarrow{\delta_0} (\alpha_1)_*(F_1)\to 0\,. \end{equation*} We see that $A_1'=(id_C\times h_1)^*A_1$. Let us assume that we have constructed maps $h_{j-1}:V\to S_{j-1}$ such that the pullback of \eqref{univ-seq-A_j} along $id_C\times h_{j-1}$ yields the exact sequence \begin{equation}\label{e1} 0\to A_{j-1}'\to A_{j-2}'\xrightarrow{\delta_{j-2}} (\alpha_1)_*(F_1)\to 0\,. \end{equation} Consider the diagram \[ \begin{tikzcd} \{c\}\times V \arrow[r,hook,"\alpha_1"] \arrow[rd, "="]& C\times V \arrow[r,"id_C\times h_{j-1}"] \arrow[d] & C\times S_{j-1} \arrow[d] \\ & V \arrow[r,"h_{j-1}"] & S_{j-1} \arrow[u, bend right=90, "i_{j-1}", labels=below right] \end{tikzcd} \] To give a map $V\to S_{j}$ we need to give a line bundle quotient of $$h_{j-1}^*i_{j-1}^*A_{j-1}\cong (\alpha_1)^*(id_C\times h_{j-1})^*A_{j-1}\cong (\alpha_1)^*A'_{j-1},$$ where the last isomorphism follows from \eqref{e1}. Restricting \eqref{exact-1-V} to $c\times V$, we get a line bundle quotient $$(\alpha_1)^*A'_{j-1}\to F_1\,.$$ This defines a morphism $h_j:V\to S_j$. Pulling back \eqref{univ-seq-A_j} along $id_C\times h_j$, and using Remark \ref{comm-alg-rem} and \eqref{e1}, we get the following short exact sequence on $C\times V$ $$0\to (id_C\times h_j)^*A_j\to A'_{j-1}\xrightarrow{\delta_{j-1}} (\alpha_1)_*(F_1)\to 0\,.$$ Now equation \eqref{exact-1-V} shows that $A'_j\cong (id_C\times h_j)^*A_j$ and there is an exact sequence \begin{equation*} 0\to A_{j}'\to A_{j-1}'\xrightarrow{\delta_{j-1}} (\alpha_1)_*(F_1)\to 0\,.
\end{equation*} Thus, inductively we have constructed a map $h_d:V\to S_d$. To show that the composite $V\xrightarrow{h_d}S_d\xrightarrow{g_d} \mc Q_{d[c]}$ is an isomorphism onto $V$, it suffices to show that the pullback of the universal quotient on $C\times \mc Q_{d[c]}$ along $id_C\times (g_d\circ h_d)$ is the restriction of the universal quotient to $C\times V$. Recall from \eqref{e5} that the universal quotient on $C\times V$ is $$0\to A'_d\to p_1^*E\to B'\to 0\,.$$ By the definition of the map $g_d$, the pullback of the universal quotient along $id_C\times g_d$ is the quotient (recall \eqref{e2}) $$0\to A_d\to p_1^*E\to B^d_d\to 0\,.$$ From the definition of $h_d$, one easily checks that the pullback along $id_C\times h_d$ of the filtration in \eqref{e10} is the following filtration on $C\times V$, \[ A'_d\subset A'_{d-1}\subset\cdots\subset p_1^*E\,. \] Thus, it follows that the pullback along $id_C\times h_d$ of \eqref{e2} is $$0\to A'_d\to p_1^*E\to B'\to 0\, ,$$ which is the universal quotient on $C\times V$. This proves that $g_d\circ h_d$ is the identity on $V$. By Lemma \ref{GS-L3} the morphism $\phi$ is smooth at a point $v\in V$. This shows that the local ring $\mc O_{V,v}$ is a domain. Thus, we have maps $\mc O_{V,v}\to \mc O_{S_d,h_d(v)}\to \mc O_{V,v}$ such that the composite is the identity. Since both rings have the same dimension, the kernel of $\mc O_{S_d,h_d(v)}\to \mc O_{V,v}$ is forced to be $0$, which shows that the local rings are isomorphic. This proves the proposition. \end{proof} \section{Normality of all fibers} For a point $D=\sum d_i[c_i]\in S^dC$, denote by $\mc Q_{D}$ the scheme theoretic fiber of $\phi$ over the closed point corresponding to $D$. \begin{proposition}\label{fibres are of same dimension} The fiber $\mc Q_{D}$ is irreducible of dimension $d(r-1)$. \end{proposition} \begin{proof} We define a morphism $\prod \mc Q_{d_i[c_i]}\to \mc Q_D$ as follows. Let $p_{j}$ be the projections $C\times \prod \mc Q_{d_i[c_i]} \to C\times \mc Q_{d_j[c_j]}$ and $p$ be the projection $C\times \prod \mc Q_{d_i[c_i]} \to C$. Let $B_{d_i[c_i]}$ denote the universal quotient over $C\times \mc Q_{d_i[c_i]}$. Then over $C\times \prod \mc Q_{d_i[c_i]}$, we define the quotient $$p^*E\to \bigoplus p^*_{i}B_{d_i[c_i]}\,.$$ Clearly, $\bigoplus p^*_iB_{d_i[c_i]}$ is flat, and hence this induces a morphism \begin{equation}\label{theta_D} \theta_D:\prod \mc Q_{d_i[c_i]}\to \mc Q \end{equation} which is bijective onto the closed points of $\mc Q_D$. Therefore, $\mc Q_D$ is irreducible. Since the dimension of the general fiber of $\phi$ is $d(r-1)$, we get $$d(r-1)\leq {\rm dim}\,\mc Q_D\leq\sum {\rm dim }\,\mc Q_{d_i[c_i]}=\sum d_i(r-1)=d(r-1)\,.$$ This proves the proposition. \end{proof} \begin{corollary} The map $\phi$ is flat. \end{corollary} \begin{proof} This follows using \cite[Chapter III, Exercise 10.9]{Ha}. \end{proof} \begin{corollary} The fiber $\mc Q_{d[c]}$ is reduced, irreducible and normal. In particular, it is integral. \end{corollary} \begin{proof} Since $\phi$ is flat and $\mc Q$ is smooth, it follows from \cite[\href{https://stacks.math.columbia.edu/tag/045J}{Tag 045J}]{Stk} (or see the Corollary to \cite[Theorem 23.3]{Mat}) that the fiber $\mc Q_{d[c]}$ is Cohen-Macaulay. Thus, the fiber satisfies Serre's condition $(S_2)$. Now from Corollary \ref{fiber-R1} it follows that the fiber satisfies $(R_0)$ and $(S_1)$ and so it is reduced. Since it satisfies $(R_1)$ and $(S_2)$ it is normal. \end{proof} \begin{lemma} $\mc Q_{D}\cong\prod \mc Q_{d_i[c_i]}$.
\end{lemma} \begin{proof} The map $\theta_D$ in \eqref{theta_D} sits in a commutative diagram \[ \xymatrix{ \prod\mc Q_{d_i[c_i]}\ar[r]^{\theta_D}\ar[d] & \mc Q\ar[d]\\ \prod S^{d_i}C\ar[r] & S^dC } \] From the above diagram it is clear that $\theta_D$ factors to give a map $$\tilde\theta_D:\prod\mc Q_{d_i[c_i]}\to \mc Q_D\,.$$ We want to give a map in the other direction. Let $p_D:C\times \mc Q_D \to C$ be the first projection. Let us denote the restriction of the universal quotient to $C\times \mc Q_D$ by $$p^*_D E \to B_D\,.$$ There are integers $e_i$ such that the quotient $B_D$ is supported on the following closed subscheme of $C\times \mc Q_D$ $$\bigsqcup_i\,{\rm Spec}\, (\mc O_C/\mf{m}_{C,c_i}^{e_i})\times \mc Q_D\,.$$ Let $\mf{j}_i:{\rm Spec}\, (\mc O_C/\mf{m}_{C,c_i}^{e_i})\times \mc Q_D\hookrightarrow C\times \mc Q_D$ denote the closed immersion. Let $$B_{d_i[c_i]}:=\mf{j}_{i*}\Big(B_{D}\vert_{{\rm Spec}\, (\mc O_C/\mf{m}_{C,c_i}^{e_i})\times \mc Q_D}\Big)\,.$$ Clearly, since $B_D$ is flat over $\mc Q_D$, each $B_{d_i[c_i]}$ is also flat over $\mc Q_D$. We define the quotients $$p^*_D E\to B_D \to B_{d_i[c_i]}\,,$$ each of which defines a morphism $\mc Q_D\to \mc Q_{d_i[c_i]}$. Together these define a morphism $\gamma_D:\mc Q_D\to \prod \mc Q_{d_i[c_i]}$. One easily checks that the pullback along $id_C\times (\tilde\theta_D\circ \gamma_D)$ of the universal quotient is $p^*_D E\to B_D$. This shows that $\tilde \theta_D\circ \gamma_D$ is the identity. Arguing as in the last paragraph of the proof of Proposition \ref{birational}, the lemma is proved. \end{proof} \begin{corollary}\label{cor-fiber} The fiber $\mc Q_D$ is reduced, irreducible and normal. \end{corollary} \section{Main Theorem} \begin{definition}\label{cat-nf} Let $X$ be a connected, projective and reduced $k$-scheme. Let $\mathcal{C}^{\rm nf}(X)$ denote the full subcategory of coherent sheaves whose objects are coherent sheaves $E$ on $X$ satisfying the following two conditions: \begin{enumerate} \item $E$ is locally free, and \item for any smooth projective curve $C$ over $k$ and any morphism $f : C \longrightarrow X$, the vector bundle $f^*E$ is semistable of degree $0$. \end{enumerate} \end{definition} We call the objects of the category $\mc C^{\rm nf}(X)$ {\it numerically flat vector bundles} on $X$. Fix a $k$-valued point $x \in X$. Let ${\rm Vect}_k$ be the category of finite dimensional $k$-vector spaces. Let $T_x : \mc C^{\rm nf}(X) \longrightarrow {\rm Vect}_k$ be the fiber functor defined by sending an object $E$ of $\mc C^{\rm nf}(X)$ to its fiber $E_x \in {\rm Vect}_k$ at $x$. Then $(\mc C^{\rm nf}(X), \otimes, T_x, \mathcal{O}_X)$ is a neutral Tannakian category \cite[Proposition 5.5, p.~2096]{La}. The affine $k$-group scheme $\pi^S(X, x)$ Tannaka dual to this category is called the {\it S-fundamental group scheme} of $X$ with base point $x$ \cite[Definition 6.1, p.~2097]{La}. A vector bundle $E$ is said to be {\it finite} if there are distinct non-zero polynomials $f, g \in \mathbb{Z}[t]$ with non-negative coefficients such that $f(E) \cong g(E)$. \begin{definition} A vector bundle $E$ on $X$ is said to be {\it essentially finite} if there exist two numerically flat vector bundles $V_1, V_2$ and finitely many finite vector bundles $F_1, \ldots, F_n$ on $X$ with $V_2 \subseteq V_1 \subseteq \bigoplus\limits_{i=1}^n F_i$ such that $E \cong V_1/V_2$. \end{definition} Let ${\rm EF}(X)$ be the full subcategory of coherent sheaves whose objects are essentially finite vector bundles on $X$.
Fix a closed point $x \in X$ and let $T_x : {\rm EF}(X) \longrightarrow {\rm Vect}_k$ be the fiber functor defined by sending an object $E \in {\rm EF}(X)$ to its fiber $E_x$ at $x$. Then the quadruple $({\rm EF}(X), \bigotimes, T_x, \mathcal{O}_X)$ is a neutral Tannakian category. The affine $k$-group scheme $\pi^N(X, x)$ Tannaka dual to this category is referred to as the {\it Nori-fundamental group scheme} of $X$ with base point $x$, see \cite{No1} for more details. In \cite[Proposition 8.2]{La} it is proved that the $S$-fundamental group scheme of projective space is trivial. In \cite{Mehta-Hogadi} it is proved that the $S$-fundamental group scheme is a birational invariant of smooth projective varieties. Let $S_{d_i,c_i}$ be the space defined in section \ref{S_d} by taking $d=d_i$ and $c=c_i$. In view of the discussion in section \ref{subsection-birational} there is a birational map $$\eta_D:=(\tilde \theta_D\circ \prod g_{d_i}):\prod S_{d_i,c_i}\to \prod \mc Q_{d_i[c_i]}\to \mc Q_D\,.$$ \begin{proposition}\label{prop-num-flat-trivial} A numerically flat bundle on $\mc Q_D$ is trivial. \end{proposition} \begin{proof} Let $W$ be a numerically flat bundle on $\mc Q_D$ of rank $r$. Since $\mc Q_D$ is normal, $\eta_D$ is birational, and $\prod S_{d_i,c_i}$ is a smooth rational variety, we have \begin{align*} W &\cong \eta_{D*}\eta_D^*W\\ &\cong \eta_{D*}(\mc O)^{\oplus r}\\ &\cong \mc O_{\mc Q_D}^{\oplus r}\,. \end{align*} In the above we have used the result of \cite{Mehta-Hogadi}. This proves the proposition. \end{proof} We now prove the main result of this article. \begin{theorem}\label{m-t} Let $k$ be an algebraically closed field. Let $C$ be an irreducible smooth projective curve over $k$. Let $E$ be a locally free sheaf on $C$ of rank $\geq 2$. Fix an integer $d \geq 2$. Let $\mc Q$ denote the Quot scheme parameterizing torsion quotients of $E$ of degree $d$. Let $S^dC$ denote the $d$th symmetric product of $C$ and let $\phi:\mc Q\to S^dC$ denote the Hilbert-Chow map (see section \ref{section-Hilbert-Chow}). Then the induced map $\phi^S_*:\pi^S(\mc Q,q)\to \pi^S(S^dC,\phi(q))$ is an isomorphism. \end{theorem} \begin{proof} Since the fibers of $\phi$ are projective integral varieties and $\phi$ is flat, it follows that $\phi_*(\mc O_{\mc Q})=\mc O_{S^dC}$. Now applying \cite[Lemma 8.1]{La} we see that $\phi^S_*$ is faithfully flat. To prove that $\phi^S_*$ is a closed immersion we will use \cite[Proposition 2.21(b)]{DMOS}, which we recall for the convenience of the reader. For an affine algebraic group scheme $G$ over $k$, let ${\rm Rep}_k(G)$ denote the category of finite dimensional representations of $G$ on $k$-vector spaces. Let $\theta : G \to G'$ be a homomorphism of affine group schemes over $k$ and let \begin{equation}\label{eqn-hom-f} \widetilde{\theta} : {\rm Rep}_k(G') \to {\rm Rep}_k(G) \end{equation} be the functor given by sending $\rho' : G' \to {\rm GL}(V)$ to $\rho'\circ \theta : G \to {\rm GL}(V)$. An object $\rho : G \to {\rm GL}(V)$ in ${\rm Rep}_k(G)$ is said to be a {\it subquotient} of an object $\eta : G \to {\rm GL}(W)$ in ${\rm Rep}_k(G)$ if there are two $G$-submodules $V_1 \subset V_2$ of $W$ such that $V \cong V_2/V_1$ as $G$-modules. Let $\theta : G \to G'$ be a homomorphism of affine algebraic groups over $k$. Then $\theta$ is a closed immersion if and only if every object of ${\rm Rep}_k(G)$ is isomorphic to a subquotient of an object of the form $\widetilde{\theta}(V')$, for some $V' \in {\rm Rep}_k(G')$.
Let $W$ be a numerically flat bundle on $\mc Q$ which corresponds to a finite dimensional representation $\rho:\pi^S(\mc Q,q)\to {\rm GL}(V)$. We will show that there is a numerically flat bundle $W'$ on $S^d(C)$ such that $W\cong \phi^*W'$. This precisely means that there is a representation $\rho':\pi^S(S^d(C),\phi(q))\to {\rm GL}(V)$ such that $\rho=\rho'\circ \phi_*^S$. By the criterion in the preceding paragraph, it then follows that $\phi_*^S$ is a closed immersion. By Grauert's theorem \cite[Corollary 12.9]{Ha} and Proposition \ref{prop-num-flat-trivial}, it follows that if $W$ is a numerically flat bundle on $\mc Q$ then $\phi_*(W)$ is a locally free sheaf on $S^dC$ and the natural map $\phi^*\phi_*(W)\to W$ is an isomorphism. It follows easily that $\phi_*(W)$ is numerically flat. This is easily checked because given a morphism $f:X\to Y$ between two projective varieties and a morphism from a projective curve $C\to Y$, we can always find a cover $C'\to C$ such that the composite $C'\to C\to Y$ lifts to a map $C'\to X$. This proves that $\phi^S_*$ is a closed immersion. \end{proof} From the $S$-fundamental group scheme we recover the Nori fundamental group scheme as the inverse limit of its finite quotients. Similarly, the etale fundamental group scheme can be recovered as the inverse limit of its finite and reduced quotients. Thus, we get the following corollary. (See \S 5.5 in \cite{PS-surface} for more details.) \begin{corollary}\label{m-c} The induced map $\phi^N_*:\pi^N(\mc Q,q)\to \pi^N(S^dC,\phi(q))$ is an isomorphism. The induced map $\phi^{\et}_*:\pi^{\et}(\mc Q,q)\to \pi^{\et}(S^dC,\phi(q))$ is an isomorphism. \end{corollary}
\section{Introduction} Pulsar-timing arrays (PTAs) are networks of millisecond pulsars that can be used as gravitational-wave detectors on galactic scales \cite{1978SvA....22...36S, 1979ApJ...234.1100D}. The measurement is based on the precise determination of arrival times of radio pulses, which feature spatially correlated fluctuations in the presence of gravitational waves (GWs). The primary source of GWs in the PTA band (roughly $1-100$ nHz) is expected to be a stochastic gravitational-wave background (SGWB) produced by a population of inspiralling super-massive black hole binaries (SMBHBs) \cite{Rajagopal:1994zj,Jaffe:2002rt}. Other sources include cosmic strings \cite{Olmez:2010bi, Sousa:2013aaa, Miyamoto:2012ck, Kuroyanagi:2012jf}, phase transitions \cite{Caprini:2010xv}, and relic GWs from inflation \cite{Starobinsky:1979ty, Zhao:2013bba}. Current PTA collaborations include the European PTA (EPTA) \cite{Desvignes:2016yex}, the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) \cite{Brazier:2019mmu}, and the Parkes Pulsar Timing Array (PPTA) \cite{Kerr:2020qdo}, which together form the International Pulsar Timing Array (IPTA) \cite{Perera:2019sca}. While a detection of an SGWB is yet to be claimed, increasingly tight upper limits on the SGWB amplitude are being set, and NANOGrav has recently reported evidence for a stochastic red-noise process common to all pulsars. These limits have important consequences for the sources of the GW background (see, e.g., Refs. \cite{Taylor:2016ftv} and \cite{Chen:2018znx}). If the origin of a stochastic background is cosmological, or if it is sourced by a large population of distant objects, the standard assumption of statistical isotropy is expected to be a good one. In this regime, the correlation between pulsar timing residuals depends only on their angular separation and is given by the Hellings and Downs curve \cite{1983ApJ...265L..39H}. However, some degree of statistical anisotropy is expected. For instance, the finite number of SMBHBs or the presence of a nearby bright source can lead to an anisotropic background \cite{Ravi:2012bz, Cornish:2013aba, Sesana:2008mz}. Characterizing the anisotropy of the background can therefore be a powerful probe of this astrophysical population. Another relevant feature of the SGWB that can be probed with PTAs is its polarization. Analogously to electromagnetism, one can define the Stokes parameters for GWs; the focus of this work is the circular-polarization component. While the chirality of a background produced by many distant sources is expected to vanish, it may be non-negligible if a small number of bright sources dominates, since SMBHBs produce chiral GWs when the orbit is observed face-on. In this scenario, anisotropies in the circular polarization would also be produced, as is the case for the intensity. Previous work has addressed the circular polarization of the SGWB in the context of measurements with ground- and space-based interferometers \cite{Seto:2006dz, Seto:2006hf, Seto:2007tn, Seto:2008sr}, astrometry \cite{Qin:2018yhy}, and PTAs \cite{Kato:2015bye, Belgacem:2020nda}. Ref.~\cite{Kato:2015bye} studied the effect of intensity and circular-polarization anisotropies on the correlation between pulsar timing residuals. They propose a way to infer the spherical-harmonic coefficients of the circular-polarization anisotropy using linear combinations of two pulsar pairs, although {\it a priori} knowledge of the background is required.
By describing the pulsar positions in the spherical-harmonic basis, Refs.~\cite{Hotinli:2019tpc} and \cite{Belgacem:2020nda} showed that intensity and circular-polarization anisotropies correspond to even and odd bipolar spherical harmonic multipoles and can therefore be measured independently. However, this bipolar-spherical-harmonic formalism relies on the assumption that one has a complete map of the sky at hand. Since the expansion is in the angular positions of the pulsars, this would require a dense and uniform distribution of pulsars, which is incompatible with the current observational capabilities of PTAs. Here we propose an estimator based on the real-space correlation function that can naturally separate the intensity and circular-polarization contributions. Since circular polarization is defined as the imaginary part of the cross-correlation between the $+$ and $\times$ GW polarizations, it corresponds to an imaginary contribution to the correlation between pulsars in the frequency domain. Hence, we construct an estimator for the real and imaginary parts of the correlation function. We follow a similar approach to Ref.~\cite{Anholm:2008wy}, but generalize the detection statistic to account for intensity and circular-polarization anisotropies. Furthermore, we extend the result beyond the weak-signal regime, since any detection of circular-polarization anisotropy will require a level of sensitivity beyond the validity of this approximation. This paper is organized as follows. We begin by outlining the general effect of GWs on pulsar timing residuals in Sec.~\ref{sec:tim}. In Sec.~\ref{sec:corr} we discuss the imprint of the SGWB on the correlation of timing residuals and define the Stokes parameters for GWs. We present the estimators for intensity and circular polarization in Sec.~\ref{sec:est}, apply this estimator to the simple case of detecting a circular-polarization dipole in Sec.~\ref{sec:dipole}, and conclude in Sec.~\ref{sec:concl}. \section{Timing Residuals}\label{sec:tim} A gravitational wave (GW) propagating between the Earth and a pulsar will change the observed arrival time of pulses. The fractional frequency shift for a given pulsar, which we will label by $a$, at a position $\hat{p}_a$ on the sky, induced by a metric perturbation $h_{ij}(t,\hat{\Omega})$ from a GW propagating in the $\hat{\Omega}$ direction, is given by \cite{1979ApJ...234.1100D, Anholm:2008wy} \begin{equation} z_a(t, \hat{p}_a, \hat{\Omega}) = \frac{1}{2}\frac{\hat{p}^i_a\hat{p}^j_a}{1+\hat{\Omega}\cdot\hat{p}_a}\Delta h_{ij}, \end{equation} where $\Delta h_{ij} \equiv h_{ij}(t_e,\hat{\Omega}) - h_{ij}(t_p,\hat{\Omega})$ is the difference between the metric perturbation at the solar system barycenter, with coordinates $(t_e, \vec{x}_e)$, and at the pulsar, with coordinates $(t_p, \vec{x}_p)$. We choose a coordinate system in which the origin is at the center of the solar system and the pulsar is some distance $L_a$ away, such that $t_e=t$, $\vec{x}_e = 0$, $t_p=t-L_a$ and $\vec{x}_p = L_a\hat{p}_a$. In the transverse-traceless gauge, the metric perturbation at each point can be written as the following superposition of plane waves \cite{Allen:1997ad} \begin{equation} h_{ij}(t, \hat{\Omega}) = \sum_{A=+,\times} \int_{-\infty}^{\infty} df \ h_A(f,\hat{\Omega}) e^{A}_{ij}(\hat{\Omega}) e^{2\pi if(t-\hat{\Omega}\cdot \vec{x})}, \end{equation} where the index $A$ labels the $+$ and $\times$ polarizations and $f$ is the frequency of the GW.
The Fourier amplitudes $h_A(f, \hat{\Omega})$ are complex functions that satisfy \begin{equation} h^{*}_A(f, \hat{\Omega}) = h_A(-f, \hat{\Omega}), \end{equation} and the polarization tensors $e^{A}_{ij}(\hat{\Omega})$ are given by \begin{equation} \begin{split} e^+_{ij}(\hat{\Omega}) &= \hat{m}_i\hat{m}_j - \hat{n}_i\hat{n}_j, \\ e^{\times}_{ij}(\hat{\Omega}) &= \hat{m}_i\hat{n}_j + \hat{n}_i\hat{m}_j, \end{split} \end{equation} where $\hat{m}$ and $\hat{n}$ are orthogonal unit vectors perpendicular to $\hat{\Omega}$. The total frequency shift induced by a stochastic background composed of GWs of all frequencies and coming from all directions is then given by \begin{equation} \begin{split} z_a(t) =& \sum_{A=+,\times} \int_{-\infty}^{\infty} df \int d \hat{\Omega}\ h_A(f,\hat{\Omega}) F^A_a(\hat{\Omega}) e^{-2\pi ift} \\ &\times \left( 1 - e^{2\pi ifL_a(1+\hat{\Omega}\cdot \hat{p}_a)} \right), \label{eq:timeres} \end{split} \end{equation} where $a$ labels each pulsar and the antenna beam pattern is defined as \begin{equation} F^A_a(\hat{\Omega}) = \frac{\hat{p}^i_a \hat{p}^j_a e^A_{ij}(\hat{\Omega})}{2(1+ \hat{\Omega}\cdot \hat{p}_a)}. \end{equation} By taking the Fourier transform, we can finally write the timing residual in the frequency domain as \begin{equation} z_a(f) = \int d^2 \hat{\Omega} \left( 1 - e^{2\pi ifL_a(1+\hat{\Omega}\cdot \hat{p}_a)} \right) \sum_A h_A(f, \hat{\Omega}) F^A_a(\hat{\Omega}). \label{eq:residual_freq} \end{equation} The explicit expressions for the antenna beam patterns are shown in App.~\ref{app:orf}. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{ORFs.pdf} \caption{Generalized overlap reduction functions in the computational frame for intensity and circular polarization as a function of pulsar angular separation. The monopole is shown on the left panel and the dipole is shown on the right.} \label{fig:ORFs} \end{figure*} \section{Correlations}\label{sec:corr} The presence of a stochastic gravitational-wave background induces correlations between the timing residuals of pulsars. Here we consider a background that is Gaussian and stationary, but that can be polarized and anisotropic. In this case, the correlation function between different GW polarizations can be written as \begin{equation} \avg{h^*_A(f, \hat{\Omega}) h_{A'}(f', \hat{\Omega}')} = \delta^2(\hat{\Omega}, \hat{\Omega}') \delta(f-f') \mathcal{P}_{AA'}(f, \hat{\Omega}), \label{eq:GWB2PCF} \end{equation} where $\mathcal{P}_{AA'}$ is the power spectrum and includes both frequency and angular dependence. In analogy with electromagnetism, it is convenient to define the Stokes parameters for GWs as \cite{Kato:2015bye} \begin{equation} \begin{split} I &= \frac{1}{2}\avg{|h_+|^2 + |h_{\times}|^2}, \\ Q &= \frac{1}{2}\avg{|h_+|^2 - |h_{\times}|^2}, \\ U &= \text{Re}\avg{h^*_+ h_{\times}} = \frac{1}{2}\avg{h^*_+h_{\times} + h^*_{\times}h_+},\\ V &= \text{Im}\avg{h^*_+ h_{\times}} = \frac{1}{2i}\avg{h^*_+h_{\times} - h^*_{\times}h_+}. \label{eq:stokes} \end{split} \end{equation} With these definitions, the correlation function given in Eq.~\ref{eq:GWB2PCF} can be decomposed as \begin{equation} \mathcal{P}^{AA'} (f, \hat{\Omega}) = \begin{pmatrix} I(f, \hat{\Omega}) + Q(f, \hat{\Omega}) & U(f, \hat{\Omega}) - iV(f, \hat{\Omega}) \\ U(f, \hat{\Omega}) + iV(f, \hat{\Omega}) & I(f, \hat{\Omega}) - Q(f, \hat{\Omega}) \end{pmatrix}. \end{equation} In this work, we will consider only intensity and circular polarization and therefore assume that $Q=U=0$.
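For concreteness, the following minimal Python sketch (illustrative only, not a pipeline fragment) evaluates the polarization tensors and the antenna beam patterns $F^{+}_a$ and $F^{\times}_a$ defined above for arbitrary pulsar and propagation directions. The particular $(\hat{m},\hat{n})$ basis chosen for each direction is an assumption of the example.
\begin{verbatim}
# Minimal sketch: polarization tensors and antenna beam patterns F^+, F^x.
# The (m, n) basis below is one conventional choice for each direction
# Omega; it is an assumption of this example, not fixed by the text.
import numpy as np

def mn_basis(omega):
    """Orthonormal vectors m, n perpendicular to the propagation direction."""
    th = np.arccos(omega[2])
    ph = np.arctan2(omega[1], omega[0])
    m = np.array([np.sin(ph), -np.cos(ph), 0.0])
    n = np.array([np.cos(th) * np.cos(ph),
                  np.cos(th) * np.sin(ph), -np.sin(th)])
    return m, n

def beam_patterns(p, omega):
    """Antenna beam patterns for pulsar direction p and GW direction omega."""
    m, n = mn_basis(omega)
    e_plus = np.outer(m, m) - np.outer(n, n)
    e_cross = np.outer(m, n) + np.outer(n, m)
    denom = 2.0 * (1.0 + np.dot(omega, p))   # vanishes only for omega = -p
    return p @ e_plus @ p / denom, p @ e_cross @ p / denom

# example: pulsar along x, GW propagating along z
print(beam_patterns(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])))
\end{verbatim}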
With Eq.~\ref{eq:timeres}, we can write the timing-residual correlation between pulsars $a$ and $b$. Substituting the definitions of the Stokes parameters in Eq.~\ref{eq:stokes} and keeping only intensity and circular polarization, we get \begin{widetext} \begin{equation} \begin{split} \avg{z^*_a(f) z_b(f')} = \int d^2 \hat{\Omega}\ \kappa_{ab}(f, \hat{\Omega}) \delta(f-f') \Big\{I(f, \hat{\Omega})(F^{+*}_{a}F^{+}_{b} + F^{\times*}_{a}F^{\times}_{b})+ iV(f, \hat{\Omega})(F^{+*}_{a}F^{\times}_{b} - F^{\times*}_{a}F^{+}_{b}) \Big\}, \end{split} \end{equation} \end{widetext} where we have defined \begin{equation} \kappa_{ab}(f, \hat{\Omega}) \equiv \left(1 - e^{2\pi ifL_a(1+\hat{\Omega}\cdot \hat{p}_a)}\right)\left( 1 - e^{2\pi ifL_b(1+\hat{\Omega}\cdot \hat{p}_b)} \right). \end{equation} We assume that the frequency and angular dependence of the intensity and circular-polarization power spectra are separable and expand the angular dependence in spherical harmonics as follows \begin{equation} X(f,\hat{\Omega}) = X(f) \sum_{\ell m} c^X_{\ell m}Y_{\ell m}(\hat{\Omega}), \end{equation} where $X = I,V$. Similarly to Ref.~\cite{Kato:2015bye}, we can define the intensity and circular-polarization overlap reduction functions (ORFs) as \begin{equation} {}^{(ab)}\Gamma^X = \sum_{\ell m} c^X_{\ell m} {}^{(ab)}\Gamma^X_{\ell m}, \end{equation} where we have defined \begin{equation} \begin{split} {}^{(ab)}\Gamma^I_{\ell m} &= \int d^2 \hat{\Omega}\ Y_{\ell m}(\hat{\Omega}) \kappa_{ab}(f, \hat{\Omega}) \left(F^{+*}_{a}F^{+}_{b} + F^{\times*}_{a}F^{\times}_{b} \right), \\ {}^{(ab)}\Gamma^V_{\ell m} &= \int d^2 \hat{\Omega}\ Y_{\ell m}(\hat{\Omega}) \kappa_{ab}(f, \hat{\Omega}) \left(F^{+*}_{a}F^{\times}_{b} - F^{\times*}_{a}F^{+}_{b} \right). \end{split} \end{equation} Note that we have adopted a slightly different convention for the definitions of the Stokes parameters, but the equations presented here are consistent with Ref.~\cite{Kato:2015bye}. We take the standard assumptions that $L_a = L_b$ and that $fL_a \gg 1$ \cite{2018JPhCo...2j5002M, Mingarelli:2014xfa}, such that \begin{equation} \kappa_{ab}(f, \hat{\Omega}) \rightarrow (1+\delta_{ab}), \end{equation} and therefore the functions ${}^{(ab)}\Gamma^X_{\ell m}$ are independent of frequency. This approximation is equivalent to neglecting the pulsar term in the cross-correlations, while keeping both the Earth and pulsar terms in the auto-correlations. Notice that including the pulsar term adds a small change in phase, which corresponds to an imaginary term in the pulsar correlations, and would therefore contaminate the circular-polarization measurement. Substituting the definitions above into the pulsar timing-residual correlation, we get \begin{equation} \avg{z^*_a(f) z_b(f')} = \delta(f-f') \left[ I(f) ^{(ab)}\Gamma^I + i V(f) ^{(ab)}\Gamma^V\right]. \label{eq:res_corr} \end{equation} We emphasize here that the quantity $V(f) ^{(ab)}\Gamma^V$ is defined to be real and that circular polarization therefore induces an imaginary contribution to the correlation function. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{SNR_RMS3.pdf} \caption{Detectability of the circular-polarization dipole as a function of the pulsar white noise. The panel on the left shows the signal-to-noise ratio as a function of the noise RMS for PTAs composed of 10, 25, and 50 pulsars sampled from a uniform spatial distribution. The panel on the right shows the positions of the pulsars on the sky. Note that the inclusion of pulsars is additive: the blue curve on the left panel includes only the blue dots in the map on the right panel, the yellow curve includes the 10 pulsars in blue plus the 15 pulsars in yellow, and the red curve includes all points in the map (blue+yellow+red).} \label{fig:SNR_RMS} \end{figure*}
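To illustrate how these ORFs can be evaluated in practice, the following minimal Python sketch computes ${}^{(ab)}\Gamma^{I}_{\ell m}$ and ${}^{(ab)}\Gamma^{V}_{\ell m}$ for a pulsar pair by direct grid quadrature over the sky, in the Earth-term limit $\kappa_{ab}\to 1$ for $a\neq b$. It is illustrative only: the grid resolution and polarization-basis choice are ours, and overall normalization conventions may differ from those of App.~\ref{app:orf}.
\begin{verbatim}
# Minimal sketch: generalized ORFs by grid quadrature over the sky,
# Earth-term limit (kappa_ab -> 1 for a != b). Normalization conventions
# and grid resolution are choices of this example.
import numpy as np
from scipy.special import sph_harm

def beams(p, th, ph):
    """Antenna patterns F^+ and F^x on a grid of propagation directions."""
    om = np.stack([np.sin(th) * np.cos(ph),
                   np.sin(th) * np.sin(ph), np.cos(th)])
    m = np.stack([np.sin(ph), -np.cos(ph), np.zeros_like(ph)])
    n = np.stack([np.cos(th) * np.cos(ph),
                  np.cos(th) * np.sin(ph), -np.sin(th)])
    pm, pn, pom = (np.tensordot(p, v, axes=1) for v in (m, n, om))
    denom = 2.0 * (1.0 + pom)
    return (pm**2 - pn**2) / denom, 2.0 * pm * pn / denom

def orf(pa, pb, ell, emm, nth=400, nph=400):
    """Gamma^I_lm and Gamma^V_lm for the pulsar pair (pa, pb)."""
    th, ph = np.meshgrid(np.linspace(1e-4, np.pi - 1e-4, nth),
                         np.linspace(0.0, 2.0 * np.pi, nph, endpoint=False),
                         indexing="ij")
    Fpa, Fxa = beams(pa, th, ph)
    Fpb, Fxb = beams(pb, th, ph)
    Y = sph_harm(emm, ell, ph, th)        # scipy order: (m, l, phi, theta)
    dA = np.sin(th) * (np.pi / nth) * (2.0 * np.pi / nph)
    gI = np.sum(Y * (Fpa * Fpb + Fxa * Fxb) * dA)
    gV = np.sum(Y * (Fpa * Fxb - Fxa * Fpb) * dA)
    return gI, gV

pa = np.array([0.0, 0.0, 1.0])
pb = np.array([np.sin(1.0), 0.0, np.cos(1.0)])  # separation of 1 radian
print(orf(pa, pb, 0, 0))   # Gamma^V_00 vanishes for any pair
print(orf(pa, pb, 1, 0))
\end{verbatim}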
\section{Estimators for intensity and circular polarization}\label{sec:est} In this section, we introduce the estimators for the intensity and circular polarization of the SGWB. The key result is the signal-to-noise ratio of the circular-polarization statistic when the optimal filter and inverse-variance weighting of pulsar pairs are employed \cite{Allen:1997ad, Anholm:2008wy}. We consider the covariance between pulsars in the presence of a non-zero SGWB, thereby accounting for the variance of the background itself \cite{Romano:2020sxq}. We assume that the timing-residual signal from a given pulsar can be written as a sum of the contributions from the stochastic background and measurement noise as follows \begin{equation} \begin{split} s_a(f) = z_a(f) + n_a(f). \label{eq:signal} \end{split} \end{equation} We consider a simplified noise model in which the intrinsic pulsar red noise is omitted, and $n_a$ is assumed to be uncorrelated white Gaussian noise, characterized by the power spectra \begin{equation} \avg{n^*_a(f)n_b(f')} = \frac{1}{2}\delta_{ab} \delta(f-f') N_a(|f|), \label{eq:noise_power} \end{equation} where $N_a(f)=2\sigma^2_a \Delta t$ is the one-sided noise power spectrum of the $a$-th pulsar, $\sigma^2_a$ is its noise variance, and $\Delta t$ is the sampling period. The intensity and circular-polarization contributions to the background can be identified through the measurement of the real and imaginary parts of the cross-correlation between timing residuals. The estimators for the two components can therefore be written as \begin{equation} \begin{split} \hat{I}_{ab} =& \frac{1}{2}\int^{\infty}_{-\infty} df \int^{\infty}_{-\infty} df' \delta_T(f-f') \big[s^*_a(f) s_b(f') \\&+ s_a(f) s^*_b(f')\big] Q^I_{ab}(f'),\\ \hat{V}_{ab} =& \frac{1}{2i}\int^{\infty}_{-\infty} df \int^{\infty}_{-\infty} df' \delta_T(f-f') \big[s^*_a(f) s_b(f') \\&- s_a(f) s^*_b(f')\big] Q^V_{ab}(f'), \label{eq:estimators} \end{split} \end{equation} where $\delta_T(f) = \sin(\pi f T)/\pi f$, and $Q^I_{ab}$ and $Q^V_{ab}$ are filters to be determined. We will focus our discussion on the circular-polarization estimator, but note that the optimal estimator for the intensity can be derived in an identical manner. Our goal is to derive an expression for the filters that maximize the signal-to-noise ratio, and to combine the measurements from each pulsar pair in an optimal way. Due to the frequency integration, we require the filter for the intensity estimator to satisfy $Q^I_{ab}(f) = Q^I_{ab}(-f)$ and the filter for the circular-polarization estimator to satisfy $-Q^V_{ab}(f) = Q^V_{ab}(-f)$. The mean of the circular-polarization estimator is given by \begin{equation} \begin{split} \avg{\hat{V}_{ab}} =& T \int_{-\infty}^{\infty} df\ V(f) {}^{(ab)}\Gamma^V Q^V_{ab}(f), \label{eq:opt_signal} \end{split} \end{equation} and its covariance is shown in App.~\ref{app:cov}. In order to shorten the notation, we write the variance for a pulsar pair given in Eq.~\ref{eq:cov} as \begin{equation} \sigma^2_{ab} = \frac{T}{2}\int_{-\infty}^{\infty} df \left[Q^V_{ab}(f)\right]^2 \mathcal{C}_{ab}(f), \end{equation} where $\mathcal{C}_{ab} \equiv \mathcal{C}_{ab, ab}$.
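For orientation, the following schematic Python sketch implements a discrete version of these quadratic estimators in the finite-$T$ limit, where $\delta_T(f-f')$ collapses the double integrals to single frequency sums. It is illustrative only: the filters are left arbitrary and the input time series are plain white noise.
\begin{verbatim}
# Schematic discrete version of the quadratic estimators above.
# Filters QI (even in f) and QV (odd in f) are left arbitrary on the
# positive-frequency grid; inputs here are white noise, for illustration.
import numpy as np

def quadratic_estimators(sa_t, sb_t, dt, QI, QV):
    """Return (I_hat, V_hat) for one pulsar pair from residual time series."""
    sa = np.fft.rfft(sa_t) * dt          # approximate continuous-time FT
    sb = np.fft.rfft(sb_t) * dt
    T = dt * len(sa_t)
    df = 1.0 / T
    cross = np.conj(sa) * sb             # s_a^*(f) s_b(f) on f >= 0
    # factor 2 accounts for the negative-frequency halves of the integrals
    I_hat = 2.0 * np.sum(cross.real * QI) * df
    V_hat = 2.0 * np.sum(cross.imag * QV) * df
    return I_hat, V_hat

rng = np.random.default_rng(2)
n, dt = 260, 14.0 * 86400.0              # ~10 yr of 2-week cadence samples
sa, sb = rng.standard_normal(n), rng.standard_normal(n)
QI = np.ones(n // 2 + 1)                 # placeholder filters, rfft grid
QV = np.ones(n // 2 + 1)
print(quadratic_estimators(sa, sb, dt, QI, QV))
\end{verbatim}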
The optimal filter can be found by defining the inner product in the space of complex-valued functions \cite{1997rggr.conf..373A} \begin{equation} (A,B) = \int_{-\infty}^{\infty} df A^*(f) B(f) \mathcal{C}_{ab}(f). \end{equation} By writing $\avg{\hat{V}_{ab}}$ and $\sigma^2_{ab}$ as an inner product, it can be shown using the Schwarz inequality that the optimal filter is given by \begin{equation} Q_{ab}(f) = \chi \frac{V(f) {}^{(ab)}\Gamma^V} {\mathcal{C}_{ab}(f)}, \end{equation} where the constant $\chi$ is chosen so as to set the mean of the circular polarization estimator. The measurement from each pulsar pair, accounting for correlations, can then be optimally combined as follows (see, e.g., Ref.~\cite{Romano:2020sxq}) \begin{equation} \hat{V}_{\text{opt}} = \frac{\sum\limits_{a}\sum\limits_{b<a}\lambda_{ab} \hat{V}_{ab}}{\sum\limits_{a}\sum\limits_{b<a}\lambda_{ab}}, \end{equation} where \begin{equation} \lambda_{ab} = \sum\limits_{c}\sum\limits_{d<c} (C^{-1})_{ab,cd} \end{equation} and the variance of the optimal estimator is given by \begin{equation} \sigma^2_{\text{opt}} = \frac{1}{\sum\limits_{a}\sum\limits_{b<a}\lambda_{ab}}. \label{eq:opt_noise} \end{equation} \section{Case Study: Detecting a circular polarization dipole}\label{sec:dipole} To showcase the proposed circular polarization estimator, we consider a simple scenario in which the gravitational wave background is dominated by a monopole and a dipole that points in the $\hat{z}$ direction. The background is therefore assumed to be \begin{equation} X(f,\hat{\Omega}) = X(f) \left(c^X_{00}Y_{00} + c^X_{10}Y_{10}\right) \end{equation} in the cosmic rest-frame. We note that the cross-correlation of pulsars is insensitive to an isotropic circularly polarized background, or, in other words, $^{(ab)}\Gamma^V_{00} = 0$. The inclusion of the $\ell,m=0$ term in the $V$ component is therefore superfluous. We assume the fiducial value of the dipole coefficient to be 15\% of the monopole for both $I$ and $V$. We further assume that the frequency spectrum for both intensity and circular polarization can be written as \begin{equation} X(f) = \frac{1}{16\pi} A_{\text{X}}^2\left(\frac{f}{f_{1\text{yr}}}\right)^{2\alpha_X -1}. \end{equation} We choose the fiducial values $A_I = A_V = 10^{-15}$ and $\alpha_I=\alpha_V=-2/3$. We focus on estimating the amplitude of the circular polarization of the SGWB. In the dipole toy-model we consider here, the coefficient $c_{10}$ is degenerate with the amplitude of the GW spectrum. Hence, we choose to normalize the circular polarization estimator such that $\avg{\hat{V}_{ab}} = c^V_{10}A^2_{V}$. The constant $\chi$ is therefore given by \begin{equation} \chi=\frac{c^V_{10}A^2_V}{T}\left(\int_{-\infty}^{\infty} df \frac{ \left(V(f)^{(ab)}\Gamma^V_{10}\right)^2}{\mathcal{C}_{ab}} \right)^{-1}. \end{equation} We generate random pulsar positions uniformly distributed on the sky and assume that all pulsars have identical white noise. The frequency integrals shown throughout this work are, in practice, taken between minimum and maximum frequencies $f_{\text{min}}$ and $f_{\text{max}}$. The frequency range is determined by the total observing time $T$ and the cadence time $\Delta t$ (the time between consecutive pulsar observations). We assume an observing time of 10 yr, which corresponds to $f_{\text{min}} \sim 3\times 10^{-9}$Hz, and a cadence of 2 weeks, resulting in $f_{\text{max}}\sim8\times 10^{-7}$Hz. We obtain the signal-to-noise ratio of the circular polarization amplitude in two different ways.
First, we compute it analytically, using the covariance matrix given in Eq.~\ref{eq:cov} and the variance of the optimal estimator in Eq.~\ref{eq:opt_noise}. We then validate our results on simulated timing residuals that we generate using the method described in App.~\ref{app:sims}. The estimator defined in Eq.~\ref{eq:estimators} is then applied to the simulated data, and the signal and noise are given by the mean and standard deviation of the optimal estimator across various realizations of the simulated PTAs. We confirmed that our estimator is unbiased and that the variance matches our predicted value. We therefore choose to present only the predicted signal-to-noise ratio in the results shown in this work. The signal-to-noise ratio of the circular polarization estimator is shown as a function of noise RMS in Fig.~\ref{fig:SNR_RMS}, for a network of 10, 25, and 50 pulsars. We can see three distinct regimes: weak-signal (noise-dominated), intermediate-signal, and high-signal (SGWB-dominated). The high-signal regime seen here corresponds to a ``cosmic variance'' limit of the circular polarization measurements, which results in $S/N \sim 3$ for a PTA with 50 pulsars. The second data release by the IPTA~\cite{Perera:2019sca} includes 65 pulsars with RMS values between $\sim 0.2$ and $14\,\mu$s, roughly corresponding to the intermediate-signal portion of Fig.~\ref{fig:SNR_RMS} if such a dipolar background were present. The results in Fig.~\ref{fig:SNR_RMS} correspond to a single realization of pulsars on the sky. However, the expected signal-to-noise ratio depends on the positions of the pulsars relative to the SGWB dipole. A different realization of the pulsar positions would therefore lead to a different signal-to-noise estimate. We show in Fig.~\ref{fig:SNR_Np} the distributions of the signal-to-noise ratios for different numbers of pulsars in a PTA. We consider a measurement in the high-signal regime and assume all pulsars have an RMS of 1~ns. As expected, the distribution is the widest for the smallest PTA considered, with a median value around $S/N\sim 1$. As we increase the number of pulsars, the distribution narrows and reaches a signal-to-noise ratio of nearly $S/N\sim 3$ for the largest array considered. In practice, the distribution of pulsars on the sky is not uniform and is clustered around the galactic plane. While the assumption of a uniform spatial distribution of pulsars overestimates the signal-to-noise ratio of the SGWB, Fig.~\ref{fig:SNR_Np} captures a sensible range of values for the dipole toy-model of GW anisotropy. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{SNR_Np2.pdf} \caption{Signal-to-noise ratio of the amplitude of a circular polarization dipole as a function of the number of pulsars. The distributions correspond to the intrinsic variation in $S/N$ due to different realizations of the pulsar spatial positions. The pulsars were generated assuming a uniform distribution of angles on the sky.} \label{fig:SNR_Np} \end{figure} \section{Conclusions}\label{sec:concl} Over the last decade, PTA consortia have been gathering increasingly precise timing data and are beginning to dip into an astrophysically interesting region of the SGWB parameter space. The prospect of detection in the near future and the expansion of both telescope-centered programs \cite{Bailes:2018azh, Ng:2017djg} and new PTAs \cite{Susobhanan:2020zmm} motivate us to consider what the next targets are once an isotropic background is measured.
In this work, we address the circular polarization of the SGWB. Since the intensity and circular polarization of the background enter the timing residual correlation through its real and imaginary parts, respectively, we construct an estimator that extracts these two parts of the cross-correlation between pulsar timing residuals and thereby naturally distinguishes the two contributions. We compute the optimal filter and the combination of pulsar pairs that maximizes the signal-to-noise ratio of the circular polarization component. As a case study, we consider a simple toy-model in which the SGWB is dominated by a monopole plus a dipole whose coefficient is 15\% of the monopole's and which points in the $\hat{z}$ direction. We then compute the expected signal-to-noise ratio of the circular polarization amplitude under different assumptions about the number of pulsars and their white noise. In order to validate our results, we also apply the proposed estimator to simulated timing residuals and recover the amplitude of the injected circular polarization dipole. By varying the pulsar noise RMS, we show where the three (weak-signal, intermediate-signal, and high-signal) regimes fall in the dipole toy-model. Under idealized observational assumptions, we show that the dipole can be recovered with a significance $\sim 3\sigma$ in an array that contains 50 pulsars in the SGWB-dominated high-signal regime. Finally, we show how the sensitivity of the PTA to the circular polarization amplitude varies with the locations of the pulsars on the sky. We compute the distribution of signal-to-noise ratios across multiple realizations for increasing numbers of pulsars. The circular polarization of the SGWB can offer additional information about the astrophysical sources of the background and about more exotic parity-violating physics. In particular, it may be an important tool to distinguish scenarios in which the background is dominated by many distant sources from those dominated by a smaller number of nearby sources. This contribution is naturally present in PTA data sets, and extracting the circular polarization component may therefore be an important step in maximizing the information recovered from timing data. \acknowledgments We thank Tristan L. Smith, Andrea Lommen, Chiara Mingarelli, and José Luis Bernal for useful discussions. GSP was supported by the National Science Foundation Graduate Research Fellowship under Grant No.\ DGE1746891. This work was supported at Johns Hopkins by NSF Grant No.\ 1818899 and the Simons Foundation.
\section{\bf Introduction} \hspace*{0,5cm} Bernstein algebras form a class of nonassociative algebras whose origin lies in genetics. Historically, they were introduced by Lyubich \cite{Lyu} and Holgate \cite{Ho} as an algebraic formulation of the problem of classifying the stationary evolution operators in genetics. In this way, Bernstein algebras represent populations reaching equilibrium after the first generation. Since then the theory has evolved into an independent branch of nonassociative algebras, and much research has been done on the topic from various points of view (see, for instance, \cite{Be, Co, Gonza1, Ly, Mi, Ouattara, Wa, Wo}).\\ One of the main questions on the structure of Bernstein algebras, posed by Lyubich and solved by Odoni and Stratton \cite{Od} as well as by Baeza \cite{Ba} and Grishkov \cite{Gr}, asserts that the barideal of a finite-dimensional nuclear Bernstein algebra is nilpotent. The analogous question in the finitely generated case was proposed by Grishkov \cite{Gr}, and its affirmative solution was settled by Peresi \cite{Pe} and Krapivin \cite{Kra}. Consequently, finitely generated nuclear Bernstein algebras must be finite-dimensional, as was directly established by Suazo \cite{Su} using a different approach.\\ On the other hand, it is well known that one of the most satisfactory developments of the theory of some varieties of algebras, such as associative, Jordan, alternative and Lie algebras, is the structure theory of algebras with chain conditions, and there is presently a substantial bibliography on this subject. Concerning Bernstein algebras, a detailed treatment was given in \cite{BZ} for Bernstein algebras satisfying chain conditions on ideals. Among many other results in that paper, it was notably proved that for a Bernstein algebra which is Jordan or nuclear, each of the N\oe therian and Artinian hypotheses implies finite-dimensionality of the algebra and so nilpotency of the barideal. Hence, the Lyubich conjecture is still valid in the N\oe therian and Artinian cases.\\ In the present article we pursue the study, initiated in \cite{BZ}, of Bernstein algebras satisfying chain conditions. After a first section devoted to preliminaries, we show in Section 2 that an arbitrary Bernstein algebra $A$ satisfying the ascending or descending chain condition on subalgebras is finite-dimensional. Then we prove in Section 3 that a Bernstein algebra $A$ is N\oe therian (Artinian) if and only if its barideal $N=\ker(\omega)$ is, thus generalizing an earlier result by Krapivin \cite{Kra} about the finitely generated case. Section 4 deals with Bernstein algebras having locally nilpotent barideals, as an extension of both Jordan and nuclear Bernstein algebras. Specifically, we study whether a N\oe therian (Artinian) Bernstein algebra $A$ with a locally nilpotent barideal $N$ is finite-dimensional. The answer is positive in the N\oe therian case and negative in the Artinian case. This question is connected with a result due to Zhevlakov \cite{Zh} on general locally nilpotent nonassociative algebras, for which we provide an independent proof. As a special case, we derive that a commutative nilalgebra of nilindex 3 which is N\oe therian or Artinian is finite-dimensional. In the final section, we improve and extend some results of Micali and Ouattara \cite{Mi} to the N\oe therian and Artinian cases.\\ Various examples are presented throughout this work to serve as motivation and illustration for our results.
\section{\bf Preliminaries} \quad In this section we briefly summarize notation, terminology and classical properties of Bernstein algebras and arbitrary nonassociative algebras. Throughout this paper, we will fix an infinite ground field $K$ of characteristic different from 2 and 3, and let $A$ be an algebra over $K$, not necessarily associative or finite-dimensional. If there exists a nonzero homomorphism of algebras $\omega:A \rightarrow K$, then the ordered pair $(A, \omega)$ is called a {\it baric algebra} and $\omega$ is its {\it weight function}. For every $e\in A$ with $\omega(e)\neq 0$, we have $A=Ke\oplus N$, where $N=\ker(\omega)$ is an ideal of $A$, called the {\it barideal} of $A$. A {\it baric ideal} of $A$ is an ideal $I$ of $A$ with $I\subseteq N$. Then the quotient algebra $A/I$ is a baric algebra with weight function $\overline{\omega}$ defined by $\overline{\omega}(x+I)=\omega(x)$.\\ \hspace*{0,5cm} A {\it Bernstein algebra} is a commutative baric algebra $(A, \omega)$ satisfying the identity $(x^2)^2=\omega(x)^2x^2$. A Bernstein algebra has a unique weight function $\omega$. If $x\in A$ and $\omega(x)=1$, then $e=x^2$ is a nontrivial idempotent of $A$ which gives rise to the {\it Peirce decomposition} $A=Ke\oplus U\oplus V$, where $N=U\oplus V$, and \begin{equation} U=\{u\in A\; /\; eu=\frac 12\ u\},\quad V=\{v\in A\; /\; ev=0\}. \end{equation} Besides, the Peirce components multiply according to the relations \begin{equation} U^2\subseteq V, \; UV\subseteq U, \; V^2\subseteq U, \; UV^2=0. \end{equation} A Bernstein algebra $A=Ke\oplus U\oplus V$ is never unital unless $\dim(A)=1$, and cannot be associative except when $U=0$. However, Bernstein algebras may be power-associative, that is, such that each element generates an associative subalgebra. Recall that a commutative algebra $A$ is a {\it Jordan algebra} if the identity $x(x^2 y)=x^2(xy)$ holds in $A$. Bernstein-Jordan algebras play a crucial role in the theory of Bernstein algebras. It is well-known that the following four conditions are equivalent for a Bernstein algebra $A=Ke\oplus U\oplus V$ (see, for instance, \cite{Gonza1, Wa}):\\ (a) $A$ is a Jordan algebra. \qquad\qquad\quad \ (b) $A$ is power-associative.\\ (c) $x^3=\omega(x)x^2$ for all $x\in A$. \qquad \qquad (d) $V^2=0$ and $(uv)v=0$ for all $u\in U$ and $v\in V$. \\ Therefore, the elements of the barideal $N=\ker(\omega)$ in a Bernstein-Jordan algebra $(A, \omega)$ satisfy $x^3=0$ and so the Jacobi identity \begin{equation} (xy)z+(yz)x+(zx)y=0. \end{equation} \hspace*{0,5cm} An important tool in Bernstein algebras is the ideal $ann_U(U)=\{u\in U \; /\; uU=0\}$ of $A$, which is independent of the selected idempotent $e$ and satisfies $ann_U(U)(U\oplus U^2)=0$ and $V^2\subseteq ann_U(U)$ (see, for instance, \cite[Theorem 3.4.19]{Ly}). It is worth emphasizing the fundamental role played by this ideal $ann_U(U)$ in the connection between Bernstein algebras and Jordan algebras, since the quotient algebra $A/ann_U(U)$ is a Bernstein-Jordan algebra (see, for example, \cite{Gonza1, Hen, Mi}). A Bernstein algebra $A$ is called {\it nuclear} if $A^2=A$, or equivalently, $U^2=V$ for an arbitrary idempotent $e$; in this case, we have $ann_U(U)N=0$. Every Bernstein algebra $A=Ke\oplus U\oplus V$ gives rise to a nuclear Bernstein subalgebra $A^2=Ke\oplus U\oplus U^2$.
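To fix ideas, we include a minimal illustration of these notions (ours, not taken from the cited references). Consider the two-dimensional algebra $A=Ke\oplus Ku$ with nonzero products $e^2=e$ and $eu=\frac 12 u$, and weight function $\omega(\alpha e+\beta u)=\alpha$. For $x=\alpha e+\beta u$ a direct computation gives $$x^2=\alpha^2 e+\alpha\beta u, \qquad (x^2)^2=\alpha^4 e+\alpha^3\beta u=\alpha^2 x^2=\omega(x)^2 x^2,$$ so $(A,\omega)$ is a Bernstein algebra. Its Peirce decomposition relative to $e$ has $U=Ku$ and $V=0$, the barideal $N=Ku$ satisfies $N^2=0$, and conditions (a)--(d) above hold trivially, so this example is also Jordan and power-associative.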
Further information about algebraic properties of Bernstein algebras, as well as their possible genetic interpretation, can be found in \cite{Ly, Ouattara, Reed, Wo}.\\ \hspace*{0,5cm} We now let $A$ be an arbitrary algebra over $K$. Following the notation of \cite{russe}, we will consider the {\it powers} $A^i$ and the {\it right principal powers} $A^{<i>}$ of $A$ defined recursively by $A^1=A^{<1>}=A, \; A^i=\sum\limits_{r+s=i} A^r A^s$ and $A^{<i>}=A^{<i-1>}A$. The algebra $A$ is called {\it nilpotent} if $A^n=0$ for some $n$, and {\it right nilpotent} if $A^{<n>}=0$. It is well known that these two notions of nilpotency are equivalent in the commutative case. We define the plenary powers $A^{(i)}$ of $A$ by setting $A^{(1)}=A^2$ and $A^{(i)}=(A^{(i-1)})^2$. The algebra $A$ is said to be {\it solvable} when $A^{(n)}=0$ for some $n$.\\ \hspace*{0,5cm} On the other hand, when $A$ is a commutative algebra, the linear mappings $L_a: A \rightarrow A$ defined by $L_a(x)=ax$ generate a subalgebra of $End_K(A)$, denoted by ${\mathcal M}_*(A)$ and called the {\it multiplication ideal} of $A$. The subalgebra of $End_K(A)$ generated by ${\mathcal M}_*(A)$ and the identity endomorphism id$_A$ will be denoted by ${\mathcal M}(A)$ and called the {\it multiplication algebra} of $A$. If $B$ is a subalgebra of $A$, we write ${\mathcal M}_*^A(B)$ for the subalgebra of ${\mathcal M}_*(A)$ generated by the operators $L_b$, where $b\in B$. The unital algebra ${\mathcal M}^A(B)$ is defined analogously.\\ \hspace*{0,5cm} For any subset $S\subseteq A$, we will adopt the notations $<S>$ and $K\langle S\rangle$, which mean respectively the subspace of $A$ spanned by $S$ and the free unital associative (noncommutative) algebra over $K$ generated by $S$. Since the ideal of $A$ generated by $S$ consists of finite sums of elements $f(x)$, where $f\in {\mathcal M}(A)$ and $x\in S$, it is customary to denote it by ${\mathcal M}(A)S$. \\ \hspace*{0,5cm} Returning to Bernstein algebras, recall that in a Bernstein algebra the principal powers $N^{<i>}$ are ideals \cite[page 113]{Ly}. Moreover, the barideal $N$ satisfies the equation $(x^2)^2=0$, but is not in general nilpotent. However, $N$ is always solvable, since $N^{(3)}=0\;$ \cite[Theorem 2.11]{Be} (see, also, \cite{Jac}). \section{\large Chain conditions for subalgebras} \hspace*{0,5cm} In our preceding work \cite{BZ} it was proved that a Bernstein algebra $A$ that is Jordan or nuclear is necessarily finite-dimensional whenever it is N\oe therian or Artinian. In addition, a counter-example was given to show that the hypothesis that $A$ be Jordan or nuclear is essential in this result. In the following we are going to relax the Jordan and nuclear assumptions in order to state a result for general Bernstein algebras satisfying the ascending (descending) chain condition for subalgebras instead of ideals. An arbitrary algebra $A$ satisfies the ascending chain condition a.c.c. (descending chain condition d.c.c.) on subalgebras if it has no infinite strictly ascending (descending) chains of subalgebras. It is easily seen that the a.c.c. (d.c.c.) for subalgebras is equivalent to the maximal (minimal) condition for subalgebras, that is, every non-empty set of subalgebras has a maximal (minimal) element. Moreover, in an algebra satisfying the a.c.c. for subalgebras, all subalgebras are finitely generated. In the literature on general nonassociative algebras, there are some results treating the maximal condition for subalgebras.
For instance, Kubo constructed in \cite{Kubo} infinite-dimensional associative, Jordan and Lie algebras satisfying the maximal condition for subalgebras (see also \cite{Amayo}). In \cite[Theorem 3, page 91]{russe} it is established that a Jordan nil-algebra satisfying the maximal condition for subalgebras is nilpotent, and therefore finite-dimensional. For Bernstein algebras, we may formulate the following result, which is valid for both the a.c.c. and d.c.c. conditions on subalgebras. \begin{Theorem} For a Bernstein algebra $A$, the following conditions are equivalent:\\ {\rm (i)} $A$ satisfies the a.c.c. (d.c.c.) condition for subalgebras; \\ {\rm (ii)} $A$ satisfies the a.c.c. (d.c.c.) condition for subalgebras contained in $N=\ker(\omega)$;\\ {\rm (iii)} $A$ is finite-dimensional. \end{Theorem} {\it Proof. } Since the implications (iii) $\Rightarrow$ (i) $\Rightarrow$ (ii) are obvious, it is enough to demonstrate that (ii) implies (iii). If (ii) holds, then $A$ satisfies a fortiori the a.c.c. (d.c.c.) condition for ideals contained in $\ker(\omega)$, and hence it is N\oe therian (Artinian) in view of \cite[Proposition 2.1]{BZ}. It follows from \cite[Proposition 3.1]{BZ} that the Bernstein-Jordan algebra $A/ann_U(U)$ is finite-dimensional. Now, since $\left(ann_U(U)\right)^2=0$, every subspace of $ann_U(U)$ is a subalgebra of $A$ contained in $\ker(\omega)$. It follows from the hypothesis that $ann_U(U)$ is finite-dimensional, which completes the proof. \ep \section{\large The barideal of a Bernstein algebra} \hspace*{0,5cm} Krapivin established in \cite{Kra} that a Bernstein algebra $(A, \omega)$ is finitely generated if and only if its barideal $N=\ker(\omega)$ is finitely generated (as an algebra). Hence, it is legitimate to ask the analogous question for the N\oe therian and Artinian cases. An arbitrary algebra is said to be N\oe therian (Artinian) if it satisfies the ascending chain condition a.c.c. (descending chain condition d.c.c.) on ideals, that is, every ascending (descending) sequence of ideals is stationary. Before proceeding in this direction, we require some preparation. The key ingredient is the deep link exhibited in \cite{BZ} between Bernstein algebras and modules over associative (noncommutative) algebras. In detail, let $A=Ke\oplus U\oplus V$ be a Bernstein algebra, and consider the free unital associative (noncommutative) algebra $K\langle V\rangle$ generated by the set $V$. Then the ideal $ann_U(U)$ becomes a left module over $K\langle V\rangle$ by setting $$(v_1*\ldots *v_k).u=v_1(\ldots (v_ku)\ldots ), \mbox{ for all } v_1, \ldots, v_k\in V \mbox{ and } u\in ann_U(U).$$ This $K\langle V\rangle$-module $ann_U(U)$ contains much information about the Bernstein algebra $A$. For instance, the submodules of this module are just the ideals of $A$ contained in $ann_U(U)$. Moreover, the finiteness behavior of the Bernstein algebra $A$ was profitably studied in terms of its attached $K\langle V\rangle$-module $ann_U(U)$. In particular, it was established that the Bernstein algebra $A$ is finitely generated (resp. N\oe therian, Artinian) if and only if $A/ann_U(U)$ is finite-dimensional and the $K\langle V\rangle$-module $ann_U(U)$ is finitely generated (resp. N\oe therian, Artinian). \hspace*{0,5cm} Now, we are in a position to prove the following result, which extends the Krapivin theorem \cite{Kra} to both the N\oe therian and Artinian contexts. \begin{Theorem} Let $A=Ke\oplus U\oplus V$ be a Bernstein algebra with barideal $N=\ker(\omega)=U\oplus V$.
Then the following conditions are equivalent: \\ {\rm (i)} $A$ is N\oe therian (Artinian); \\ {\rm (ii)} $N$ is N\oe therian (Artinian). \end{Theorem} {\it Proof. } The implication ${\rm (ii)} \Rightarrow {\rm (i)}$ is trivial, since a Bernstein algebra $A$ is N\oe therian (Artinian) if and only if it satisfies a.c.c. (d.c.c.) on baric ideals of $A$ \cite[Proposition 2.1]{BZ}.\\ ${\rm (i)} \Rightarrow {\rm (ii)}$: Assume that $A$ is N\oe therian. Then by \cite[Proposition 3.4]{BZ}, $A/ann_U(U)$ is finite-dimensional and the $K\langle V\rangle$-module $ann_U(U)$ is N\oe therian. If $I$ is an ideal of $N$, then $I\cap ann_U(U)$ is an ideal of $A$ contained in $ann_U(U)$, because $ex=\frac 12 x$ for all $x\in I\cap ann_U(U)\subseteq U$. Hence, $I\cap ann_U(U)$ is a submodule of the $K\langle V\rangle$-module $ann_U(U)$.\\ Now, let $(I_n)$ be an increasing sequence of ideals of $N$. Then $(I_n\cap ann_U(U))$ is an increasing sequence of submodules of the $K\langle V\rangle$-module $ann_U(U)$, and $(I_n+ann_U(U))/ann_U(U)$ is an increasing sequence of subspaces of the quotient space $A/ann_U(U)$. Then both chains must stabilize, and by a standard argument, the sequence $(I_n)$ is stationary. \\The Artinian case is treated analogously. \ep \section{\large Bernstein algebras and locally nilpotent nonassociative algebras} \hspace*{0,5cm} Jordan and nuclear Bernstein algebras are important types of Bernstein algebras. The barideal $N=\ker(\omega)$ in a Bernstein-Jordan algebra $(A, \omega)$ satisfies the identity $x^3=0$, hence by \cite[page 114]{russe} $N$ is {\it locally nilpotent}, that is, every finitely generated subalgebra of $N$ is nilpotent (see, also, \cite{Co}). Let us establish the analogous fact for nuclear Bernstein algebras: \begin{Proposition} Let $A$ be a nuclear Bernstein algebra. Then the barideal $N$ of $A$ is locally nilpotent. \end{Proposition} {\it Proof. } Consider the Bernstein-Jordan algebra $\overline{A}=A/ann_U(U)$ and let $\pi : A \longrightarrow A/ann_U(U)$ be the canonical surjection. We know that the barideal $N/ann_U(U)$ of the Bernstein-Jordan algebra $A/ann_U(U)$ is locally nilpotent. Let $S$ be the subalgebra of $N$ generated by elements $a_1, \dots, a_n\in N$. Then the subalgebra $T=\pi(S)$ of $N/ann_U(U)$ generated by $\pi(a_1), \dots, \pi(a_n)$ is nilpotent, say $T^{<k>}=0$. It follows that $S^{<k>}\subseteq ann_U(U)$. Hence, $S^{<k+1>}=S^{<k>}S\subseteq ann_U(U)S\subseteq ann_U(U)N=0$, which means that $S$ is nilpotent. It follows that $N$ is locally nilpotent. \ep \begin{Note} {\rm Let $A=Ke\oplus U\oplus V$ be an arbitrary Bernstein algebra. Then the subspace $U\oplus U^2$ is an ideal of $A$ which is locally nilpotent. To see this, it suffices to consider the nuclear Bernstein subalgebra $A^2=Ke\oplus U\oplus U^2$, whose barideal is $U\oplus U^2$.} \end{Note} By virtue of \cite[Theorem 2.3]{BZ}, for a Bernstein algebra which is Jordan or nuclear, each of the N\oe therian and Artinian conditions implies finite-dimensionality of the algebra and so nilpotency of the barideal. This result leads us to raise the following more general question: Let $(A, \omega)$ be a Bernstein algebra such that its barideal $N=\ker(\omega)$ is locally nilpotent. If $A$ is N\oe therian or Artinian, is it finite-dimensional?
In the light of our Theorem 3.1, the above question is closely linked to the following question on general locally nilpotent nonassociative algebras, which seems to be of independent interest: Is a locally nilpotent algebra which is N\oe therian or Artinian finite-dimensional? Searching the wide literature on general nonassociative algebras, we found a noteworthy article \cite{Zh}, published posthumously in 1972 by the eminent algebraist Zhevlakov (1939--1972), which gives a positive answer in the N\oe therian case and constructs a counter-example in the Artinian case. For the sake of completeness, we provide below an alternative proof of this result, substantially different from the proof of Zhevlakov mentioned in \cite[Note 1]{Zh}, which we had obtained before discovering Zhevlakov's paper. \begin{Theorem} Let $N$ be a nonassociative (possibly noncommutative) algebra over a field $K$. Assume that $N$ satisfies the ascending chain condition on two-sided ideals. If $N$ is locally nilpotent, then $N$ is finite-dimensional. \end{Theorem} {\it Proof. } By the ascending chain condition, $N$ is finitely generated as an ideal of itself, say by elements $e_1,\dots,e_r$: $N={\cal M}(N)e_1+\dots+{\cal M}(N)e_r$. We denote by $F$ the subalgebra generated by $e_1,\dots,e_r$. Then $F$ is nilpotent, and by a straightforward argument, $F$ is finite-dimensional. Choose $m\geq 2$ such that the power $F^m=0$. Clearly, $N=F+N^2$, and by a simple induction one may show that \begin{equation} N^i\subseteq F^i+N^{i+1}\mbox{ for each } i\geq 1.\end{equation} Indeed, if the inclusion (4) is true for each $j\leq i$, let us show that $N^{i+1}\subseteq F^{i+1}+N^{i+2}$. We have: $$N^{i+1}=\sum\limits_{j_1+j_2=i+1} N^{j_1}N^{j_2}\subseteq \sum\limits_{j_1+j_2=i+1} (F^{j_1}+N^{j_1+1})(F^{j_2}+N^{j_2+1}).$$ Now, the following relations are easy to verify: $$F^{j_1}F^{j_2}\subseteq F^{j_1+j_2}=F^{i+1}, \;\;\;\;\; F^{j_1}N^{j_2+1}\subseteq N^{j_1}N^{j_2+1}\subseteq N^{j_1+j_2+1}=N^{i+2},$$ $$N^{j_1+1}F^{j_2}\subseteq N^{j_1+1}N^{j_2}\subseteq N^{j_1+1+j_2}=N^{i+2},\;\;\;\;\; N^{j_1+1}N^{j_2+1}\subseteq N^{j_1+j_2+2}=N^{i+3}\subseteq N^{i+2},$$ from which we get the desired inclusion $N^{i+1}\subseteq F^{i+1}+N^{i+2}$.\\ As a consequence of (4), it follows that $N^i\subseteq F+N^{i+1}$ for each $i\geq 1$. \\Hence, $N=F+N^2=F+N^3=\dots=F+N^m$. Now, since $N^m$ is an ideal of $N$, the ascending chain condition again provides $a_1,\dots,a_t$ in $N^m$ such that $N^m={\cal M}(N)a_1+\dots+{\cal M}(N)a_t$. Without loss of generality, we can assume that each $a_j$ is a nonassociative product $x_1\dots x_m$ (with some distribution of parentheses) of $m$ factors $x_i\in N$. Writing $x_k=u_k+v_k$ $(k=1,\dots,m)$, where $u_k\in F$, $v_k\in N^m$, and using the fact that $F^m=0$, we obtain that each $a_j$ is a sum of elements of the form $y_1\dots y_m$ (with some distribution of parentheses), where $y_{1},\dots,y_{m}\in N$ are such that at least one of them, say $y_{d}$, belongs to $N^m$. Decomposing $y_{d}$ in $N^m={\cal M}(N)a_1+\dots+{\cal M}(N)a_t$, we get that $y_{1}\dots y_{d}\dots y_{m}$ is a sum of products $h_1\dots h_{s-1}a_ih_{s+1}\dots h_p$ (with some distribution of parentheses), where $h_1,\dots,h_{s-1}, h_{s+1},\dots, h_p\in N$ and $i\in \{1,\dots,t\}$. As a consequence, each $a_j$ can be expressed in the form \begin{equation} a_j =\sum h_1^{j,i}\dots h_{s_{j,i}-1}^{j,i}\;a_i\; h_{s_{j,i}+1}^{j,i}\dots h_{p_{j,i}}^{j,i}, \end{equation} where $h_k^{j,i}\in N,\; p_{j,i}\geq s_{j,i}\geq 1.$ \\ The subalgebra $H$ generated by the finite set $\{a_1,\dots, a_t\} \cup \{h_k^{j,i}\}$ is nilpotent. Let $H^l=0$ ($l\geq 2$).
We may now substitute expression (5) into itself $l$ times, so that each $a_j$ becomes a sum of products of at least $l$ elements of $H$, and conclude that $a_j=0$. This implies that $N^m=0$, and therefore $N=F$, which completes the proof. \ep\\ A special class of locally nilpotent algebras consists of the commutative nilalgebras of nilindex at most 3, natural examples of which are barideals of Bernstein-Jordan algebras and of train algebras of rank 3 (see, for instance, \cite{Zitan}). They satisfy the identity $x^3=0$ and so also the Jacobi identity $(xy)z+(yz)x+(zx)y=0$. It is well known that they are Jordan algebras (see \cite[Lemma 1]{Ba}, \cite[Lemma 2.2]{Jacobi}, \cite{Gut}, \cite[page 114]{russe}). They appear in the literature as {\it Jacobi-Jordan algebras} in \cite{Agore,Jacobi} and as {\it mock-Lie algebras} in \cite{Pasha}. In addition, any such algebra $N$ is solvable and $N^{(4)}=0$ \cite[Lemma 3.1]{Zitan}. On the other hand, since such algebras are locally nilpotent, we infer the following immediate consequence of Theorem 4.3. \begin{Corollary} Let $N$ be a commutative algebra satisfying the identity $x^3=0$. If $N$ satisfies the ascending chain condition on ideals, then $N$ is finite-dimensional. \end{Corollary} The above corollary is certainly not new, and it is quite easy to prove it directly. Indeed, since the ideal $N$ is a N\oe therian solvable Jordan algebra, it follows from a result of Medvedev and Zelmanov \cite{Med} that $N$ is nilpotent. Hence, by a simple reasoning, one may prove that $N$ is finite-dimensional.\\ \hspace*{0,5cm} The analogue of Theorem 4.3 for the descending chain condition is false, as already shown by Zhevlakov \cite{Zh}: \begin{Example} {\rm (Zhevlakov) Let $N$ be a countably-dimensional algebra with basis $\left\{e_n\right\}_{n\in \N^*}$ and nonzero products $$ e_ie_j=e_{\min(i,j)-1} \;\mbox{ for }i, j\geq 2.$$ By \cite[Note 2]{Zh}, $N$ is a locally nilpotent commutative algebra which is Artinian and satisfies $N^2=N$.} \end{Example} We offer another counter-example below: \begin{Example} {\rm Let $N$ be an infinite-dimensional commutative algebra with basis $\left\{e_n\right\}_{n\in \N^*}$ and nonzero multiplication table given by $ e_n^2=e_{n-1}$ for $n\geq 2$. \\ Let $S$ be a nonzero subalgebra of $N$, and let $x=\alpha_1e_1+\dots+\alpha_ke_k\in S$, with $\alpha_k\neq 0$. By considering the plenary powers $x^{[r]}\; (1\leq r\leq k)$ defined by $x^{[1]}=x$ and $x^{[r]}=(x^{[r-1]})^2$, and noting that the only nonzero products of basis elements are the squares $e_n^2=e_{n-1}$, a simple calculation gives $x^{[2]}=\alpha_2^2e_1+\dots+\alpha_k^2e_{k-1},\;\ldots,\; x^{[k-1]}=\alpha_{k-1}^{2^{k-2}}e_1+\alpha_k^{2^{k-2}} e_2, \;x^{[k]}=\alpha_{k}^{2^{k-1}}e_1$. It follows that $e_1,\dots, e_k\in S$. If $n=\max\,\{k \; / \; \alpha_k\neq 0 \mbox{ for some } x=\alpha_1e_1+\dots+\alpha_ke_k\in S \} $ is finite, then $S=<e_1, \ldots, e_n>$, which is also an ideal of $N$. In the opposite case, we get $S=N$. Therefore, each proper subalgebra (ideal) of $N$ coincides with a subspace $<e_1, \ldots, e_n>$ for some $n\geq 1$. Clearly, no infinite strictly descending chain of ideals of $N$ can exist, and so $N$ is Artinian. On the other hand, it is not hard to observe that $N$ is locally nilpotent, since every finitely generated subalgebra is contained in some $<e_1, \ldots, e_n>$, which is nilpotent (of index $n+1$). Obviously, $N$ is not N\oe therian, because the ascending chain $(<e_1, \ldots, e_n>)_{n\geq 1}$ of ideals does not stabilize.} \end{Example} Although the analogue of Theorem 4.3 fails in the Artinian case, we may prove the following Artinian version of Corollary 4.4, whose proof also holds in the N\oe therian case. \begin{Corollary} Let $N$ be a commutative algebra satisfying the identity $x^3=0$.
If $N$ satisfies the descending chain condition on ideals, then $N$ is finite-dimensional. \end{Corollary} {\it Proof. } When $N^2=0$, each subspace of $N$ is an ideal of $N$, and therefore $N$ must be finite-dimensional by the Artinian hypothesis. Now, if $N^2\neq 0$, then the former case shows that $N/N^2$ is finite-dimensional. Finally, we apply \cite[Lemma 3.2]{Zitan}, stating that any commutative algebra $N$ satisfying the identity $x^3=0$ and such that $N/N^2$ is finite-dimensional must be finite-dimensional. \ep\\ Let us now return to Bernstein algebras with locally nilpotent barideals. Applying Theorem 3.1 together with Theorem 4.3, we deduce immediately the next consequence: \begin{Corollary} Let $A$ be a Bernstein algebra with locally nilpotent barideal $N$. If $A$ is N\oe therian, then $A$ is finite-dimensional. \end{Corollary} As announced, we now give a counter-example to the Artinian version of Corollary 4.8. We emphasize that the locally nilpotent algebras $N$ treated in Examples 4.5 and 4.6 cannot be embedded in a Bernstein algebra $A$, since in either case the identity $\left(x^2\right)^2=0$ is not valid in $N$. For this reason, we appeal to the following example taken from \cite[Example 3.10]{BZ}: \begin{Example} {\rm Let $A$ be the Bernstein algebra with denumerable basis $\{e, v_1, u_1, u_2, u_3, \dots\}$ and nonzero products $$e^2=e,\ eu_i=\frac 12 u_i\ (i\geq 1),\ u_iv_1=u_{i-1} \ (i\geq 2).$$ The weight function $\omega: A \longrightarrow K$ is defined by $\omega(e)=1,\ \omega(v_1)=\omega(u_i)=0 \ (i\geq 1)$ and the Peirce components are $U=<u_1, u_2, \ldots> $ and $V=<v_1>$. We know from \cite[Example 3.10]{BZ} that $A$ is Artinian. Moreover, it is clear that the barideal $N=U\oplus V$ is locally nilpotent. We point out that the Bernstein algebra $A$ is not N\oe therian, and moreover, it is neither nuclear nor Jordan.} \end{Example} \section{\large On the nilpotence} \hspace*{0,5cm} In this final section we revisit some results of Micali and Ouattara \cite{Mi} with the aim of improving and generalizing them to the N\oe therian and Artinian situations. \\ First, we start with the following result, which was proved in \cite[Lemmas 4.3 and 4.4]{Mi} when the Bernstein algebra $A$ was assumed to be finitely generated. \begin{Lemma} Let $A=Ke \oplus U \oplus V$ be a Bernstein algebra which is N\oe therian or Artinian, and let $N=U\oplus V$ be its barideal. Let $I$ be a subspace of $A$. \\ {\rm (i)} If $NI=I$, then $I\subseteq ann_U(U)$ and $I$ is an ideal of $A$.\\ {\rm (ii)} $NI=I$ if and only if $VI=I$. \end{Lemma} {\it Proof. } In view of \cite[Proposition 3.1]{BZ}, since $A$ is N\oe therian or Artinian, the Bernstein-Jordan algebra $A/ann_U(U)$ is finite-dimensional. Hence, its barideal $N/ann_U(U)$ is nilpotent, so $N^k\subseteq ann_U(U)$ for some integer $k$. The remainder of the proof follows as in \cite[Lemmas 4.3 and 4.4]{Mi}. More precisely:\\ (i) Evidently, $I=NI\subseteq N$. Now, $I=NI=N(NI)=\dots \subseteq N^k$, yielding $I\subseteq ann_U(U)$. Furthermore, since $I\subseteq ann_U(U)\subseteq U$, we have $eI=I$, and by the condition $NI=I$, we deduce that $I$ is an ideal of $A$.\\ (ii) If $NI=I$, then the above assertion gives $I\subseteq ann_U(U)$. It follows that $I=NI=(U\oplus V)I=VI$, because $UI=0$. \\ Conversely, if $VI=I$, then $I=V(V(\dots V(VI)\dots))\subseteq N^k\subseteq ann_U(U)$. Therefore, $UI=0$ and so $NI=(U\oplus V)I=VI=I$.
\ep\\ \hspace*{0,5cm} With Lemma 5.1 in hand, the result of \cite[Th\'eor\`eme 4.7]{Mi} can be improved by removing the superfluous hypothesis that $A$ be finitely generated. Namely: \begin{Theorem} Let $A=Ke \oplus U \oplus V$ be an Artinian Bernstein algebra. Then the following conditions are equivalent:\\ {\rm (i)} The ideal $N=U\oplus V$ is nilpotent;\\ {\rm (ii)} The associative algebra ${\cal M}_*^N(V)$ is nilpotent;\\ {\rm (iii)} $I=0$ is the unique subspace of $A$ satisfying $VI=I$. \end{Theorem} {\it Proof. } As in the proof of \cite[Th\'eor\`eme 4.7]{Mi}, the implications ${\rm (i)} \Rightarrow {\rm (ii)} \Rightarrow {\rm (iii)}$ are always true, even if $A$ is not Artinian, while the implication (iii) $\Rightarrow$ (i) follows the same path as in \cite[Th\'eor\`eme 4.7]{Mi}, applying our Lemma 5.1 instead of \cite[Lemma 4.3]{Mi}. In detail:\\ ${\rm (i)}\Rightarrow {\rm (ii)}$: Since $N$ is nilpotent, so is the multiplication ideal ${\cal M}_*(N)$ (see \cite[Chapter II, Theorem 2.3]{Schafer}). In particular, the subalgebra ${\cal M}_*^N(V)$ is nilpotent.\\ ${\rm (ii)}\Rightarrow {\rm (iii)}$: Let $I$ be a subspace of $A$ with $VI=I$, so that $I\subseteq N$ by Lemma 5.1. Since $I=V(V(\dots V(VI)\dots))$ and ${\cal M}_*^N(V)$ is nilpotent, then $I=0$.\\ ${\rm (iii)}\Rightarrow {\rm (i)}$: Since $A$ is Artinian, the descending chain of ideals $(N^i)_{i\geq 1}$ of $A$ must stabilize. Hence, $N^r=N^{r+1}$ for some integer $r\geq 1$, or equivalently $N^r=NN^r$. It follows from Lemma 5.1(ii) that $N^r=VN^r$, implying $N^r=0$ by hypothesis. \ep \begin{Note}{\rm We point out that the implication ${\rm (ii)}\Rightarrow {\rm (i)}$ of Theorem 5.2 is already true when the Bernstein algebra $A$ is finitely generated \cite[Lemma 4.1]{Mi}, so it holds automatically in the N\oe therian case, since N\oe therian Bernstein algebras are finitely generated \cite[Corollary 3.6]{BZ}. Nevertheless, the implication ${\rm (iii)} \Rightarrow {\rm (i)}$ fails when $A$ is not Artinian, even if it is finitely generated or N\oe therian, as the following example illustrates.} \end{Note} \begin{Example} {\rm Let $A$ be the infinite-dimensional Bernstein algebra considered in \cite[Example 3.11]{BZ}, with basis $\{e, v_2, u_1, u_2, u_3, \ldots\} $ and nonzero products $$e^2=e,\; eu_i=\frac 12 u_i \; (i\geq 1), \;\; u_iv_2=u_{i+1} \; (i\geq 1). $$ Then $A=Ke\oplus U \oplus V $, where $U=<u_1, u_2, \ldots> $ and $V=<v_2> $.\\ Let $I$ be a subspace of $A$ such that $VI=I$. Assume that $I\neq 0$ and choose an element $a=(\alpha_1u_1+\dots+\alpha_{p}u_p)+\alpha v_2 \in I$, with $\alpha_p\neq 0$ and $p$ minimal. Since $a\in VI$, there exists $b=(\beta_1u_1+\dots+\beta_{q}u_q)+\beta v_2\in I$ with $\beta_q\neq0$ and $a=v_2b=\beta_1u_2+\dots+\beta_qu_{q+1}$. Then $p=q+1$, whereas the minimality of $p$ forces $p\leq q$; this contradiction yields $I=0$. However, the ideal $N=U\oplus V$ is not nilpotent. In fact, this Bernstein algebra $A$ is finitely generated and N\oe therian but not Artinian \cite[Example 3.11]{BZ}.} \end{Example} \quad We close our paper with the following comment. The Grishkov conjecture \cite{Gr} asserts that if $A=Ke\oplus U\oplus V$ is a finitely generated Bernstein algebra that is nuclear, then the barideal $N=U\oplus V$ is nilpotent. This question has been answered affirmatively by Peresi \cite{Pe} and Krapivin \cite{Kra}.
Nevertheless, the proof of this result presented in \cite[Th\'eor\`eme 4.10]{Mi} is not correct, because it relies on \cite[Th\'eor\`eme 4.7]{Mi}, which requires the additional assumption that $A$ be Artinian.

{\bf \large Acknowledgment}: The author wishes to thank Professor Nadia Boudi for a number of useful discussions and for reading an earlier draft of this paper.
\section{INTRODUCTION} Recently, significant progress has been made in understanding the structure of 4-dimensional supersymmetric gauge theories. Building on earlier work \cite{ads,cern} and using arguments based on symmetry, holomorphy, and weak-coupling limits, it has been possible to reach remarkable conclusions about the non-perturbative structure of these theories \cite{svac,dual}. Particularly striking results have been achieved in $N = 2$ theories using these methods \cite{sw}. One of the goals of this recent work has been to understand the structure of the moduli spaces of vacua in supersymmetric gauge theories. In ref.~\cite{ads} a methodology was developed for describing the classical space of vacua in terms of coordinates constructed from holomorphic gauge-invariant polynomials in the matter fields. However, in most of the literature this methodology is applied on a case-by-case basis, with little insight given as to its general applicability. The purpose of this paper is to give a simple but rigorous proof that the moduli space of vacua can be precisely described in this simple way. Many of the results we obtain are contained in the existing literature \cite{wb,witten,mumford}. The main contribution of the present work is that we give a unified description of these results which properly takes into account ``fine points'' such as sets of measure zero and singularities. These points are important because they often correspond to physical features such as enhanced gauge symmetry. Our point of departure is the observation that a supersymmetric gauge theory with gauge group $G$ is invariant under the complexified gauge group $G^{\rm c}$. From this point of view, the usual $D$-flatness conditions can be viewed as a $G^{\rm c}$ gauge artifact. By using a gauge in which $G^{\rm c}$ invariance is preserved, we show that in the absence of a superpotential {\em every} constant value of the matter fields is $G^{\rm c}$ gauge-equivalent (in an extended sense that we make precise) to a solution of the $D$-flatness conditions. This gives the result that the space of classical vacua is \begin{equation} \label{vspace} \scr M_0 = \scr F /\!\!/ G^{\rm c}, \end{equation} where $\scr F$ is the space of all constant matter field configurations and the quotient denoted by $/\!\!/$ identifies any $G^{\rm c}$ orbits that have common limit points. This gives a manifestly holomorphic description of the space $\scr M_0$. In fact, we can use this result to prove (using elementary results of algebraic geometry) that the space of vacua can be described by the set of all gauge-invariant holomorphic polynomials. These polynomials form an algebra generated by a finite number of monomials subject to (finitely many) defining constraints, as in ref.~\cite{ads}. That is, $\scr M_0$ is an algebraic variety. These results generalize simply to the case where a superpotential is present. In that case every constant field configuration that extremizes the superpotential is $G^{\rm c}$ gauge-equivalent (in the extended sense) to a classical vacuum and the space of classical vacua is given by eq.~\en{vspace}, where $\scr F$ is the space of stationary points of the superpotential. This space $\scr F$ is by definition an algebraic variety, which is sufficient to show that $\scr M_0$ is a variety in this case as well. This paper is organized as follows. In Section II, we derive our principal results on the structure of the space of vacua; in Section III, we give several illustrative examples.
Section IV contains a discussion of related work and our conclusions. In the Appendix, we give a simple proof that the space $\scr M_0$ is a variety. \section{CLASSICAL VACUA} \subsection{Quotient space} The lagrangian of a supersymmetric gauge theory can be written% \footnote{We use the conventions of Wess and Bagger \cite{wb}.} \begin{equation} \scr L = \int\mkern-5mu {\rm d}^2\theta\, {\rm d}^2\mybar\theta\, \Phi^\dagger {\rm e}^V \Phi + \left( \frac 1{4g^2} \int\mkern-5mu {\rm d}^2\theta\, \mathop{\rm tr}(W^\alpha W_\alpha) + \int\mkern-5mu {\rm d}^2\theta\, W (\Phi) + {\rm h.c.} \right), \end{equation} where $\Phi$ are chiral matter fields transforming in some (in general reducible) representation of the gauge group $G$, $V$ is a vector superfield taking values in the Lie algebra of $G$, and $W (\Phi)$ is a superpotential. This lagrangian is invariant under a large group of gauge transformations \begin{equation} \Phi \mapsto g \cdot \Phi, \qquad {\rm e}^V \mapsto g^{-1\dagger} {\rm e}^V g^{-1}, \end{equation} where $g = {\rm e}^{i\Lambda}$ and $\Lambda$ is a chiral superfield in the Lie algebra of $G$. In particular, $\Lambda$ can be a complex scalar, so that this includes $G^{\rm c}$ transformations. Conventionally, one fixes Wess--Zumino gauge, which breaks $G^{\rm c}$ invariance leaving only ``ordinary'' $G$ gauge invariance. We will instead use a gauge in which $V$ takes the form \begin{equation} V_A = C_A - \theta \sigma^\mu \mybar\theta v_{\mu A} + i\theta\th \mybar\theta\mybar\lambda_A - i \mybar\theta\mybar\theta \theta\lambda_A + \sfrac 12 \theta\th \mybar\theta\mybar\theta D_A, \end{equation} where $A$ is a $G$ adjoint index. This leaves a residual $G^{\rm c}$ gauge freedom. It is straightforward to derive the $D$-flatness conditions in this gauge, which read \begin{equation} \label{Dflat} \frac{\partial}{\partial C_A} \left( \phi^\dagger {\rm e}^C \phi \right) = 0, \end{equation} where $\phi$ is the scalar component of $\Phi$. This immediately shows that any $\phi$ that satisfies the $D$-flatness conditions~\en{Dflat} for some $C$ is $G^{\rm c}$ gauge-equivalent to the field $\hat\phi = {\rm e}^{C / 2} \phi$, which satisfies \begin{equation} \label{newDflat} 0 = \frac{\partial}{\partial \hat C_A} \left. \left(\hat\phi^\dagger {\rm e}^{\hat C} \hat\phi \right) \right|_{\hat C = 0} = \frac{\partial}{\partial \hat C_A} \left. \nu({\rm e}^{\hat C / 2} \hat\phi) \right|_{\hat C = 0}, \end{equation} where \begin{equation} \nu(\phi) \equiv \phi^\dagger \phi. \end{equation} Eq.~\en{newDflat} is just the usual $D$-flatness condition in Wess--Zumino gauge. Since $\nu(\phi)$ is $G$-invariant, we see that the fields that satisfy these $D$-flatness conditions are precisely those for which $\nu(\phi)$ is stationary with respect to $G^{\rm c}$.% \footnote{Essentially the same result is derived in ref.~\cite{wb}. Similar arguments have been discussed recently by H. Georgi, and by J. March--Russell (unpublished).} The points for which this condition is satisfied lie on closed $G$ orbits (since $G$ is compact) that we will refer to as $D$-{\em orbits\/}. We now consider the case where the superpotential vanishes, and show that {\em every} constant field configuration $\phi_0$ is $G^{\rm c}$ gauge-equivalent to a solution $\hat\phi$ of the Wess--Zumino gauge $D$-flatness condition eq.~\en{newDflat}. To make our results precise, we need a slightly generalized notion of $G^{\rm c}$ gauge-equivalence.
We say that two constant field configurations $\phi_1$ and $\phi_2$ are $G^{\rm c}$ equivalent in the {\em extended} sense if there is a sequence $\{ g_n \}$ of elements in $G^{\rm c}$ such that \begin{equation} \label{limit} \lim_{n \mathop{\rightarrow} \infty} g_n \cdot \phi_1 = \phi_2. \end{equation} In order for this to define an equivalence we must also impose the same condition with the roles of $\phi_1$ and $\phi_2$ reversed; we must also impose transitivity, {\em i.e\/}.\ $\phi_1$ and $\phi_2$ are equivalent if there is a $\phi_3$ that is equivalent to both $\phi_1$ and $\phi_2$. We call the set of all fields that are equivalent in this sense to a field $\phi$ the {\em extended} $G^{\rm c}$ {\em orbit} of $\phi$. These definitions are physically sensible because any gauge-invariant function takes the same value on all the field configurations in an extended orbit, so that the points of such an orbit are physically indistinguishable. With these definitions, the result to be proven can be concisely stated: every extended $G^{\rm c}$ orbit contains a $D$-orbit. This immediately implies that the space of classical vacua is given by \begin{equation} \label{modresult} \scr M_0 = \scr F /\!\!/ G^{\rm c}, \end{equation} where $\scr F$ is the space of all constant matter field configurations, and the extended quotient by $G^{\rm c}$ is defined using the equivalence defined above. This result is intuitively satisfying since it is closely analogous to the result for non-supersymmetric theories that (in a theory with no potential) every constant field configuration lies in a gauge equivalence class of vacua. The proof of this assertion is extremely simple. Fix an arbitrary $\phi_0$. Since the function $\nu(\phi)$ is positive semidefinite and is less than or equal to $\nu(\phi_0)$ only on a compact ball in $\phi$-space, it must take on a minimum value at some point in the closure of the ordinary $G^{\rm c}$ orbit that contains $\phi_0$. Thus, there is a $\hat\phi$ such that \begin{equation} \hat\phi = \lim_{n \mathop{\rightarrow} \infty} g_n \cdot \phi_0, \end{equation} which minimizes $\nu$ on the closure of the orbit. Clearly, $\hat\phi$ lies in the extended $G^{\rm c}$ orbit containing $\phi_0$. Furthermore, $\nu(\hat\phi)$ must be stationary with respect to $G^{\rm c}$ transformations, since otherwise we could construct a different sequence that converges to a new value of $\hat\phi$ with smaller $\nu(\hat\phi)$ by making a $G^{\rm c}$ transformation of the original sequence. Thus, $\hat{\phi}$ is in a $D$-orbit.% \footnote{A different argument for essentially the same conclusion is given in ref.~\cite{wb}.} This result makes it intuitively clear why the space of classical vacua can be parameterized by the set of gauge-invariant holomorphic polynomials in the fields $\phi$, as advocated in ref.~\cite{ads}. Such polynomials are constant on extended $G^{\rm c}$ orbits, and it seems natural that there are ``enough'' polynomials to distinguish any two distinct extended orbits. In the appendix, we show that this intuition can be made rigorous using fairly elementary results from algebraic geometry. We prove that the space $\scr M_0$ has as coordinates a set of gauge-invariant polynomials subject to finitely many defining relations. In the language of algebraic geometry, $\scr M_0$ is the algebraic variety defined by the ring of all invariant polynomials on $\Phi$. The argument above can be extended immediately to the case where there is a superpotential present. 
In that case, the fields must extremize the superpotential \begin{equation} \label{fdef} R_j(\phi) \equiv \frac{\partial W(\phi)}{\partial \phi_j} = 0 \end{equation} as well as satisfying the $D$-flatness conditions. It is easy to see that if any point in an extended $G^{\rm c}$ orbit satisfies (\ref{fdef}) then all other points in that extended orbit also satisfy this equation. We can thus simply restrict $\phi$ to satisfy eq.~\en{fdef} and proceed as above. The result is that the space of vacua is given by eq.~\en{modresult}, where $\scr F$ is the space of fields that extremize the superpotential. (See also ref.~\cite{wb}.) It is straightforward to describe the classical moduli space of vacua in theories with a superpotential as a variety. The results proven in the appendix show that the moduli space can be parameterized by the gauge-invariant polynomials on the set of fields that extremize the superpotential. This means that in addition to the defining relations, there are extra relations on the polynomials stating that any gauge-invariant combination of the $R$'s defined in eq.~\en{fdef} with the $\phi$'s must vanish. We will give an example of this construction in Section III. \subsection{Observations on orbit structure} We now collect some observations about the structure of extended orbits. The main results of this paper do not depend on these observations, but we include them to clarify the significance of the extended $G^{\rm c}$ orbits. We first show that there is exactly one $D$-orbit in every extended orbit. This shows that the classical moduli space can be precisely identified with the set of solutions to the Wess--Zumino gauge $D$-flat conditions with points in the same $G$ orbit identified, and provides a simple connection between our approach and the conventional treatment. We then discuss the relationship between extended orbits and points of enhanced symmetry. We show that in any extended orbit that contains more than one ordinary $G^{\rm c}$ orbit, points in the $G^{\rm c}$ orbit containing the $D$-orbit have more gauge symmetry than points in other orbits of the same extended orbit. To show that there is a unique $D$-orbit in every extended orbit, we begin by showing that every stationary point $\hat\phi$ of $\nu(\phi)$ on an ordinary $G^{\rm c}$ orbit $O$ lies in a $D$-orbit which is a global minimum of $\nu$ in $O$. Along any exponential curve \begin{equation} \label{limcurve} \phi(t) = {\rm e}^{tC / 2} \phi_0, \end{equation} because $\nu$ is positive semidefinite we have\footnote{We thank H. Georgi for this observation.} \begin{equation} \label{eq:second} \frac{\partial^2}{\partial t^2} \nu(\phi(t)) = \nu(C \phi(t)) \ge 0. \end{equation} Eq.~\en{eq:second} can vanish for finite $t$ only if $C \phi(t) = 0$, which is only possible when $\phi (t)= \phi_0$ for all $t$. Every element of $G^{\rm c}$ can be written in the form \begin{equation} g = {\rm e}^{C} \cdot u \end{equation} where $C$ is Hermitian and $u \in G$. Thus, every point in $O$ can be reached by an exponential curve starting at a point in the same $D$-orbit as $\phi_0$, and $\nu$ is monotonically increasing along every such curve. This proves that the $D$-orbit is a global minimum of $\nu$ in $O$. In fact, because the $D$-orbit is compact, it is not hard to see that the set of points in $O$ where $\nu$ is less than or equal to any fixed number $x$ is a compact set. 
This implies that any limit of a sequence in $O$ which does not lie in $O$ would have a divergent value of $\nu$, so that $O$ is a closed orbit containing all its limit points. We cannot immediately conclude from this that $\hat\phi$ minimizes $\nu$ on the extended orbit $X$, since there are in general directions in $X$ which do not correspond to $G^{\rm c}$ transformations.% \footnote{As an example of the type of difficulty which may arise, we mention that there are examples where a point $\hat\phi$ is the limit of a sequence of points $g_n \cdot \phi_0$, and yet there is no exponential curve ${\rm e}^{tC}\phi_0$ that approaches $\hat\phi$.} We can however use the fact that the action of $G^{\rm c}$ is algebraic to show that every extended orbit contains a unique $D$-orbit. We have shown that every ordinary $G^{\rm c}$ orbit which contains a $D$-orbit is closed. The proof of statement (i) in the Appendix shows that for any two disjoint closed sets which are invariant under $G^{\rm c}$, there exists a gauge invariant polynomial which takes different values on the two sets. Thus, two distinct closed orbits cannot lie in a single extended orbit. This clearly implies that each extended orbit contains a unique $D$-orbit. Note that the above proof does not hold when the group $G$ is not compact. A simple example is an abelian theory whose matter fields have charges with irrational ratios.% \footnote{We thank A. Nelson for suggesting this example.} In this case, the gauge group $G$ is not compact, and a single extended orbit contains multiple $D$-orbits. We now discuss the connection between extended orbits and enhanced gauge symmetry. On any ordinary $G^{\rm c}$ orbit, the invariant subgroup of $G^{\rm c}$ is the same (up to conjugation) at all points on the orbit. However, in an extended orbit $X$ the $G^{\rm c}$ orbit containing the $D$-orbit contains points with more gauge symmetry than the points in other orbits in $X$. This can be seen intuitively by noting that if a sequence of points in one ordinary $G^{\rm c}$ orbit $O$ approaches a point in another orbit $\hat O$, then the direction in which the limit is approached corresponds to an extra invariance of the limit point. Due to the complications mentioned above, it is easier to make this result precise using algebraic arguments. As noted in the Appendix, every orbit can be written as a finite union and intersection of algebraic sets. This implies that in the situation above, since $\hat{O}$ must be contained in the closure of $O$, the dimension of $\hat O$ must be strictly smaller than that of $O$. Based on this, one might suppose that every extended orbit corresponds to a vacuum with enhanced gauge symmetry. However, in theories with no flat directions there is a single extended $G^{\rm c}$ orbit which contains multiple ordinary $G^{\rm c}$ orbits, but there is clearly no extra gauge symmetry. One might also conjecture that one can identify points with extra gauge symmetry from the singularity structure of the resulting variety, but we will give several examples which show that this is not possible. \section{EXAMPLES} \subsection{Supersymmetric QED} Our first example is supersymmetric QED, a theory with gauge group $G = U(1)$, a matter field $Q$ with charge $1$, and a matter field $\widetilde Q$ with charge $-1$. We use this simple example to illustrate the structure of the extended $G^{\rm c}$ orbits.
The classical moduli space in this case is parameterized by \begin{equation} A \equiv Q \widetilde Q \end{equation} so the moduli space can be identified with the set of all complex numbers $\bf C$. To understand the $G^{\rm c}$ orbit structure, note that $U(1)^c$ is simply the multiplicative group of non-zero complex numbers. The action of $G^{\rm c}$ in this case is therefore \begin{equation} (Q, \widetilde Q) \mapsto (\alpha Q, \alpha^{-1} \widetilde Q) \end{equation} with $\alpha \ne 0$. The extended orbit corresponding to a value $A \ne 0$ is the set of points \begin{equation} (Q, \widetilde Q) = (q, A / q) \end{equation} with $q \ne 0$, which all lie on an ordinary $G^{\rm c}$ orbit. On the other hand, the extended orbit with $A = 0$ contains three ordinary $G^{\rm c}$ orbits: \begin{equation} (Q, \widetilde Q) = (q, 0),\ (0, \widetilde q),\ \hbox{or}\ (0, 0) \end{equation} with $q, \widetilde q \ne 0$. The orbit $(0,0)$ is a limit point of the other two orbits. Note that the point $A = 0$ is a point of enhanced symmetry, but the moduli space is completely non-singular there. The structure of the classical moduli space in this theory is extremely simple, but it illustrates many of the features we have described above. Generic extended $G^{\rm c}$ orbits ($A \ne 0$) contain a single ordinary $G^{\rm c}$ orbit which contains a single $D$-orbit. At points of enhanced symmetry ($A = 0$), the extended orbit contains multiple ordinary $G^{\rm c}$ orbits, of which only one contains a $D$-orbit. In this case, the $G^{\rm c}$ orbit which contains the $D$-orbit has enhanced gauge symmetry, while the other orbits do not. In this extended orbit the orbit containing a $D$-orbit is closed and contains all its limit points, while the remaining orbits contain curves approaching the $D$-orbit. \subsection{A $U(1) \times U(1)$ model} We now consider a chiral theory with a more interesting classical moduli space. The gauge group is $U(1) \times U(1)$, and the matter fields have charges \begin{equation} Q \sim (2, 0), \quad R \sim (-2, 1), \quad S \sim (1, -1), \quad T \sim (-1, 0). \end{equation} The gauge-invariant polynomials are generated by \begin{equation} A = Q R^2 S^2, \quad B = Q T^2, \quad C = QRST, \end{equation} which satisfy the defining relation \begin{equation} AB = C^2. \end{equation} This simple two-dimensional variety is an example of a {\em quadric surface\/}. The only singular point on this variety is the point $A = B = C = 0$. (This can be seen by noting that when $A \neq 0$ the variables $(A,C)$ are good coordinates and when $B \neq 0$, $(B,C)$ are good coordinates.) This classical moduli space has a one-parameter family of nontrivial extended orbits. For every $B \ne 0$, there is an extended orbit with coordinates $A = C = 0$ which contains three ordinary $G^{\rm c}$ orbits \begin{equation} (Q, R, S, T) = (B / t^2, 0, s, t),\ (B / t^2, r, 0, t),\ \hbox{or}\ (B / t^2, 0, 0, t). \end{equation} where $r, s, t \ne 0$. The orbit with $R = S = 0$ contains a $D$-orbit which has enhanced gauge symmetry. (The second $U(1)$ is unbroken.) Note that the variety is not singular on the vacua corresponding to these orbits. It is also amusing to note that the extended orbit structure is not symmetric under interchanging $A$ and $B$, even though the variety is. This again illustrates that the presence of enhanced gauge symmetry cannot in general be detected from the structure of the variety. 
\subsection{Supersymmetric QCD with $N_F = N$} Our final example illustrates how one can obtain a simple description of the moduli space in the presence of a superpotential. Consider supersymmetric QCD, $SU(N)$ gauge theory with chiral superfields $Q^{aj}$ ($j = 1, \ldots, N_F;\ a = 1, \ldots, N$) in the fundamental representation and chiral superfields $\widetilde Q_{ak}$, ($k = 1, \ldots, N_F$) in the antifundamental representation. We consider here the special case $N_F = N > 2$. According to the discussion above (or from ref.~\cite{ads}), the classical space of vacua can be parameterized by the variables \begin{eqnarray} M^j{}_k &\equiv& Q^{aj} \widetilde Q_{ak}, \nonumber\\ B &\equiv& \frac 1{N!} \epsilon_{a_1 \cdots a_N} \epsilon_{j_1 \cdots j_N} Q^{a_1 j_1} \cdots Q^{a_N j_N}, \\ \widetilde B &\equiv& \frac 1{N!} \epsilon^{a_1 \cdots a_N} \epsilon^{k_1 \cdots k_N} \widetilde Q_{a_1 k_1} \cdots \widetilde Q_{a_N k_N}, \nonumber \end{eqnarray} subject to the constraint \begin{equation} B \widetilde B = \mathop{\rm det}(M). \end{equation} We wish to consider the theory in the presence of a superpotential \begin{equation} W = b B + \widetilde b \widetilde B \end{equation} with $b, \widetilde b \ne 0$. (We do not add a mass term.) According to the discussion in the main part of the paper, the moduli space in the presence of the superpotential is given by imposing the additional constraints that all gauge-invariant polynomials which can be constructed from \begin{equation} R_{aj} \equiv \frac{\partial W}{\partial Q^{aj}}, \qquad \widetilde R^{ak} \equiv \frac{\partial W}{\partial \widetilde Q_{ak}} \end{equation} vanish. We must therefore impose \begin{eqnarray} \label{cone} R_{aj} Q^{ak} &=& 0, \\ \label{ctwo} \widetilde R^{aj} \widetilde Q_{ak} &=& 0, \\ \label{cthree} R_{aj} \widetilde R^{ak} &=& 0, \\ \label{cfour} \epsilon^{a_1 \cdots a_N} R_{a_1 j_1} \cdots R_{a_r j_r} \widetilde Q_{a_{r+1} k_{r+1}} \cdots \widetilde Q_{a_N k_N} &=& 0, \qquad (r = 1, \cdots, N) \\ \label{cfive} \epsilon_{a_1 \cdots a_N} \widetilde R^{a_1 k_1} \cdots \widetilde R^{a_r k_r} Q^{a_{r+1} j_{r+1}} \cdots Q^{a_N j_N} &=& 0, \qquad (r = 1, \cdots, N). \end{eqnarray} Expressed in terms of the $A$'s and $B$'s, eqs.~\en{cone} and \en{ctwo} give \begin{equation} B = \widetilde B = 0. \end{equation} The left-hand-sides of eqs.~\en{cfour} and \en{cfive} for $r > 1$ have non-zero baryon number and therefore vanish when expressed in terms of the $M$'s and $B$'s when $B = \widetilde B = 0$. For $r = 1$, we obtain \begin{eqnarray} \label{ccone} \epsilon_{j_1 \cdots j_N} M^{j_2}{}_{k_2} \cdots M^{j_N}{}_{k_N} &=& 0, \\ \label{cctwo} \epsilon^{k_1 \cdots k_N} M^{j_2}{}_{k_2} \cdots M^{j_N}{}_{k_N} &=& 0. \end{eqnarray} Eq.~\en{cthree} gives the constraint \begin{equation} \epsilon_{j_1 \cdots j_N} \epsilon^{k_1 \cdots k_N} M^{j_2}{}_{k_2} \cdots M^{j_N}{}_{k_N} = 0, \end{equation} which is clearly implied by eqs.~\en{ccone} and \en{cctwo} above. Thus, the classical moduli space is the space of $M$'s subject to eqs.~\en{ccone} and \en{cctwo}. To understand the meaning of these constraints, note that we can use the $U(N)_+ \times U(N)_-$ global symmetry of the model to diagonalize $M$. It is then easy to see that eqs.~\en{ccone} and \en{cctwo} impose the same constraint, namely that the rank of $M$ be at most $N - 2$. This is therefore the defining constraint of the classical moduli space. 
\section{DISCUSSION} We have shown that in classical supersymmetric gauge theories, every matter field $\phi$ that extremizes the superpotential is related by a (limit of a) complex gauge transformation to a vacuum. Furthermore, we have proven that the space $\scr M_0$ of classical vacua has a natural structure as an algebraic variety. There is a related approach to describing the classical space of vacua that follows from the observation that the usual gauge-fixed $D$-flatness equations precisely describe the symplectic reduction of $\scr F$ by $G$. This point of view was used by Witten \cite{witten} to discuss $N = 2$ abelian gauge theories in two dimensions. The symplectic quotient of a complex space by $G$ is closely related to the holomorphic quotient by $G^{\rm c}$, which is the natural domain of geometric invariant theory. Our result in IIB connecting the space of extended orbits to the space of $D$-orbits makes this connection precise for the cases of physical interest. The approach taken in the present paper has the virtue that the quotient space structure emerges naturally and directly as a result of the underlying complexified gauge symmetry. Furthermore, the explicit description of the structure of extended orbits allows us to rigorously describe the quotient space as an algebraic variety without the application of sophisticated mathematical theorems. Several aspects of the picture that we have presented in this paper have also been considered by others. A closely related argument for the existence of fields minimizing the $D$-term potential appears in ref.~\cite{wb}. A local holomorphic description of the space of vacua was given in ref.~\cite{pr}. During the completion of the present work, we learned that J. March--Russell has also studied the relationship between the $D$-flat equations and $G^{\rm c}$ orbits, and that H. Georgi has also recently made progress in this direction. It should be emphasized that the descriptions of $\scr M_0$, both as an extended quotient space and as an algebraic variety, give the precise structure of the space of vacua including isolated special points and singularities. This is important, since such ``fine points'' often have physical significance. For example, we have shown that there is a close connection between vacua with enhanced gauge symmetry and orbits of the complexified gauge group which do not contain all their limit points. At such vacua, the moduli space is often singular. These singularities continue to play an important role in the quantum theory, where they may change structure or disappear by being blown up \cite{svac}. It seems natural to pursue a further understanding of the classical and quantum moduli spaces of vacua using this geometrical point of view. \section*{APPENDIX: PROOF THAT $\scr M_0$ IS A VARIETY} \newcommand{ {\bf C}}{ {\bf C}} \newcommand{\ring}[2]{{\bf C} [{#1}_1, \ldots,{#1}_{#2}]} \newtheorem{thm}{Theorem} In this appendix we give a proof that for any gauge group $G$ and matter fields $\phi$ in any representation of $G$ the classical moduli space $\scr F /\!\!/ G^{\rm c}$ can be parameterized by a finite set of gauge-invariant polynomials $P_a(\phi)$ subject to a finite number of relations. Specifically, we show that $\scr F /\!\!/ G^{\rm c}$ is the natural algebraic variety associated with the ring of {\em all} invariant polynomials in $\phi$. 
The proof is valid when there is a superpotential present, in which case the space $\scr F$ is the set of values for the fields $\phi$ at which the superpotential is stationary. The presence of a superpotential simply imposes additional relations on the polynomials $P_a$, as described in Section IIA. In fact, the result holds for any theory where $\scr F$ can be described as a variety in terms of a set of fields transforming linearly under $G$ and satisfying a set of algebraic equations. The proof we give here is essentially a distillation of results contained in a related proof in ref.~\cite{mumford}. Our goal in presenting this proof here is to make this result accessible to the physics community by giving a self-contained derivation using fairly elementary methods. We will use the language of algebraic geometry but we will only use a few basic definitions and results from this subject. We begin by reviewing those concepts and results that we will use, all of which can be found on the first few pages of any standard textbook (such as Hartshorne \cite{hartshorne}). The set $A$ of points $(x_1,\ldots,x_n)$ in the complex vector space ${\bf C}^n$ satisfying a system of polynomial equations $f_\alpha(x_1,\ldots,x_n)= 0$ is called an {\em algebraic set\/}. The algebraic sets define a special topology on $ {\bf C}^n$ called the {\em Zariski topology\/}. In the Zariski topology the closed sets are the algebraic sets. Open sets are those sets whose complement is closed. All of the usual statements of topology hold in the Zariski topology; {\em e.g.}, the intersection of a finite number of closed sets is closed, {\em etc\/}. We will distinguish sets closed in the Zariski topology from sets closed in the usual topology by using the terms Z-closed and closed respectively. It is easy to see that every Z-closed set is closed and thus that every Z-open set is open. A {\em constructable} set is a set which can be constructed from Z-closed and Z-open sets with a finite number of operations such as unions or intersections. Constructable sets have the nice property that every point in their Z-closure is also in their closure. (This can be shown, for example, by first proving the assertion for an algebraic curve (1-dimensional variety) and then proceeding by induction, reducing the dimension of the initial variety by one by imposing the constraint that an additional equation vanishes.) Associated with every algebraic set $A$ there is a ring of polynomials $I(A)$ that consists of all polynomials in the variables $x_i$ that vanish at all points of $A$. $I(A)$ is an {\em ideal} (invariant subring) of the ring $\ring{x}{n}$ of all polynomials in the $x_i$'s. The {\em Hilbert basis theorem} states that $I(A)$ always has a finite number of generators, so that $A$ can always be described as the set of points on which a finite set of polynomials vanishes. An algebraic set $A$ is {\em irreducible} when it cannot be written as a union $A = B \cup C$ of two algebraic sets that are proper subsets of $A$. An irreducible algebraic set is an {\em affine variety}. A Z-open subset (with respect to the induced topology) of an affine variety is a {\em quasi-affine variety\/}. We refer to both as simply varieties. Every variety $A$ has associated with it a ring $R(A)$ of rational functions without poles on $A$. It can be shown that for an affine variety $A$, $R(A)$ is just $\ring{x}{n} / I(A)$, the polynomials in the $x_i$'s subject to the relations defined by $I(A)$. 
The essential point of algebraic geometry is that all the geometric information about the variety $A$ is encoded in the algebraic structure of the ring $R(A)$. Thus, in algebraic geometry the fundamental objects are commutative rings rather than geometric objects. The simplest example of how $R(A)$ encodes geometric information about $A$ is given by the algebraic description of points in $A$. From the above definitions, it is clear that any Z-closed subset $B$ in $A$ can be associated with an ideal $I(B) \supset I(A)$. Thus, $I(B)$ naturally corresponds to an ideal $I(B) / I(A)$ in $R(A)$. Conversely, every nontrivial ideal $I$ of $A$ (an ideal that is neither $\{ 0 \}$ nor $A$) can be associated with a closed, non-empty algebraic set, the {\em zero set} $Z(I)$ of $I$. Using another theorem due to Hilbert (the {\em Nullstellensatz}), it can be shown that the points in $A$ are in 1-1 correspondence with the ideals $I \subset R(A)$ that are {\em maximal} in the sense that there exists no larger ideal $I' \supset I$ other than $I' = R$. An {\em algebraic map} (or {\em morphism}) is a map from a variety $A \subset\{(x_1, \ldots,x_n)\}$ to another variety $B \subset\{(y_1, \ldots,y_m)\}$ that can be described by writing the $y_i$'s as rational functions of the $x_i$'s with denominators that are nonvanishing everywhere on $A$. Such a map gives rise to a ring homomorphism $R(B) \mathop{\rightarrow} R(A)$. It can be shown that the image of a variety under an algebraic map is always a constructable set. This concludes our brief review of concepts from algebraic geometry. In terms of this language, the statement that we wish to prove is the following: \medskip\noindent{\bf Theorem\ } {\em Given a group $G^{\rm c}$ acting on a variety $A$, there is a 1-1 correspondence between $A /\!\!/ G^{\rm c}$ and the set of points in the affine variety $A^G$ defined by the ring $R_G$ of $G$-invariant elements in $R =R(A)$.} \medskip \noindent We are making the technical assumptions (which are always valid in the relevant physical theories) that $A$ is an affine variety in a complex vector space ${\bf C}^n$ on which $G^{\rm c}$ acts linearly, and that $G$ is the product of a semi-simple Lie group with a torus $U(1)^k$. Thus, $G^{\rm c}$ is itself a variety (a so-called {\em algebraic group}), and the action of $G^{\rm c}$ on $A$ is described by an algebraic map $\tau : G \times A \mathop{\rightarrow} A$. The $G^{\rm c}$ orbits in $A$ are the image under $\tau$ of $G \times \{p\}$ where $p$ is a point in $A$; therefore each orbit is a constructable set. (In fact, it can be shown that each orbit is a variety but we will not need that condition.) Implicit in the statement of the theorem is the result that $A^G$ is an affine variety. This follows from the fact that $R_G$ is finitely generated, which is a consequence of the Hilbert basis theorem and the fact (used and proven in the proof below) that every ideal $I \subset R_G$ generates an ideal $M$ in $R$ with $M \cap R_G = I$. It will be convenient for us to think of $A$ as lying in the complex vector space with coordinates $x_1, \ldots, x_n$. We can then take a set of generators for $R_G$ to be some set $P_1, \ldots, P_\ell$ of $G$-invariant polynomials in the $x_i$'s. There is a natural map $\pi: A \mathop{\rightarrow} A^G$ that can be defined by simply evaluating the polynomials $P_a$ at a point $x \in A$. 
Since the polynomials are invariants, this map is constant on orbits of $G^{\rm c}$, so for any point $p \in A^G$ the preimage $\pi^{-1} (p)$ is a union of disjoint orbits. Furthermore, by continuity $\pi$ must be constant on extended $G^{\rm c}$ orbits in $A$, so it induces a well-defined map from $A /\!\!/ G^{\rm c}$ to $A^G$. It should be noted that the variety $A^G$ is a simple example of a general class of varieties that are the subject of a deep and beautiful area of mathematics called geometric invariant theory \cite{mumford}. Fortunately, in the specific case we are interested in here we can prove the desired result without using any particularly sophisticated or delicate methods from algebraic geometry. \medskip \noindent {\bf Proof of Theorem:} We prove two basic statements, of which the theorem is a consequence: \begin{enumerate} \item[(i)] For $p \in A^G$, $\pi^{-1}(p)$ contains at most a single extended $G^{\rm c}$ orbit. \item[(ii)] $\pi$ is onto. \end{enumerate} It will be useful to define a {\em Reynolds operator} $E:R \rightarrow R_G$, which is a projection onto the subring $R_G$ of invariants. Because $R$ is a direct sum of finite dimensional irreducible representations of $G$, such an operator always exists. Important properties of the Reynolds operator are that it is linear, and that that for any $f \in R_G$ and $g \in R$, $E(fg) = f E(g)$. To prove (i), we begin by noting that every extended orbit is Z-closed. This follows from the fact that every orbit is constructable, which implies that the Z-closure and closure of each orbit are identical. Now, suppose that there were two distinct extended orbits $O$ and $O'$ in $\pi^{-1}(p)$. Since $O$ and $O'$ are disjoint, the ideal $I(O) + I(O')$ in $R$ generated by $I(O)$ and $I(O')$ must be all of $R$. (To see this, note that the ideal $I(O) + I(O')$ cannot be contained in any maximal ideal of $R$ or the corresponding point would be in both $O$ and $O'$.) Thus, $1 \in I(O) + I(O')$, and we can write, for some $f \in I(O)$ and $f' \in I(O')$, $1 = f + f'$. But then we have $1=E (1) = E(f) + E(f')$. We now claim that $E(f) \in I(O) \cap R_G$ and $E(f') \in I(O') \cap R_G$. This follows from the fact that the ideals $I (O)$ and $I (O')$ are invariant under $G^{\rm c}$ (since the extended orbits are invariant) and therefore can be written as a direct sum of linear spaces on which $G^{\rm c}$ acts irreducibly. We have thus shown that $E(f)$ is an invariant function that takes the value 0 on $O$ and 1 on $O'$. Thus, $\pi(O) \ne \pi(O')$, completing the proof of (i). To show that $\pi$ is onto, we fix a point $p \in A^G$ and show that there exists a nontrivial ideal $M$ in $R$ with zero set $Z(M) = \pi^{-1}(p)$. We define $M$ to be the ideal in $R$ generated by the maximal ideal $I(p) \subset R_G$. $M$ satisfies $Z(M) = \pi^{-1}(p)$ by construction, but we must prove that $M$ is not all of $R$, so that it is nontrivial. To do this, note that every $g \in M$ can be written as $g = \sum e_i f_i$, where the $\{ e_i \}$ generate $I(p)$ and $f_i \in R$. If $g$ is invariant, we have $g = E(g) = \sum e_i E(f_i) \in I(p)$, which shows that $R_G \cap M = I(p)$. Thus, $M$ is nontrivial, proving (ii). \section*{Acknowledgments} We thank H. Georgi for sharing closely related work with us, and for clarifying a useful point in our presentation. Thanks to M. Artin and D. Vogan for helping us to navigate the periphery of geometric invariant theory. We also thank M. Bershadsky, J. March--Russell, S. Mathur, H. Murayama, A. Nelson, L. Randall, V. 
Sadov, and I. Singer for helpful conversations. This work was supported in part by funds provided by the U.S. Department of Energy under cooperative agreements DE-FC02-94ER40818 and DE-AC02-76ER03069, and by the divisions of Applied Mathematics of the U.S. Department of Energy under contracts DE-FG02-88ER25065 and DE-FG02-88ER25066, and by National Science Foundation grant PHY89-04035.
1,116,691,497,073
arxiv
\section{Introduction} This paper addresses spontaneous symmetry breaking in non-relativistic quantum systems of finite size. Strictly speaking, spontaneous symmetry breaking can only occur in infinite systems. Then the ground state exhibits a lesser degree of symmetry than the Hamiltonian itself, i.e., the ground state is invariant under a symmetry group ${\cal H}$ that is a proper subgroup of the Hamiltonian's symmetry group ${\cal G}$. The low-energy excitations are strongly constrained by symmetry and given in terms of (weakly interacting) Nambu-Goldstone modes. These can be calculated within an effective field theory (EFT) that is solely based on the pattern of symmetry breaking. From a technical point of view, the EFT is a nonlinear $\sigma$ model with fields that parameterize the coset space ${\cal G}/{\cal H}$~\cite{weinberg1968,coleman1969,callan1969}. Examples for spontaneous symmetry breaking are the breaking of spin-rotational symmetry in a ferromagnet, the breaking of translational symmetry in a crystal lattice, and the breaking of chiral symmetry in quantum chromodynamics (QCD). In these examples, the Nambu-Goldstone modes are magnons, phonons, and pions, respectively, and EFTs have been developed for all these cases~\cite{Gasser1984,Leutwyler1994,Leutwyler1996,roman1999,hofmann1999,Baer2004,kampfer2005}, see Refs.~\cite{weinbergbook,brauner2010} for reviews. In finite systems, the ground state exhibits the full symmetry of the Hamiltonian, and spontaneous symmetry breaking becomes evident in symmetry-unrestricted mean-field calculations~\cite{aberg1990,Nazarewicz1994,Yannouleas2007}. It is then a major effort (and complication!) to restore the symmetry with the help of projection techniques. The expressions ``obscured symmetry breaking''~\cite{Koma1994} or ``emergent symmetry breaking''~\cite{Yannouleas2007} (which we adopt here) have been proposed for such systems. There are two distinct cases for which emergent symmetry breaking plays a role. First, numerical simulations of infinite physical systems are usually limited to a finite volume, and it is then important to understand the finite-size corrections. Some rigorous results are known in this case~\cite{Horsch1988,Koma1994}, and finite-size corrections to partition functions and thermodynamical observables have been worked out within EFTs for simulations of QCD on finite lattices~\cite{Leutwyler1987,Gasser1988}, and for spin systems~\cite{Hasenfratz1993}. Genuinely finite systems constitute the second and probably physically most interesting case. Prominent examples are the emergence of superfluidity in trapped ultracold Bose gases~\cite{matsumoto2002,Enomoto2006}, pairing in atomic nuclei (both breaking a $U(1)$ phase symmetry), and non-spherical shapes of molecules and atomic nuclei (both breaking $O(3)$ rotational symmetry in the limit of infinite system size). Here, the techniques for constructing EFTs for spontaneous symmetry breaking need to be modified, and the interest is in spectra and transitions rather than in thermodynamical observables. The EFT for a finite system with emergent symmetry breaking is, of course, related to the EFT for the corresponding infinite system with spontaneous symmetry breaking. The symmetry must be realized nonlinearly, and the Nambu-Goldstone fields parameterize the coset space ${\cal G}/{\cal H}$~\cite{weinberg1968,coleman1969,callan1969}. 
In the infinite system, the proper Nambu-Goldstone fields depend on space and time and exhibit fluctuations of small amplitudes and long wavelengths. A purely time-dependent (and spatially constant) mode is forbidden because it would relate states of inequivalent Hilbert spaces. In the finite system, however, this zero mode, i.e., the spatially constant mode of the Nambu-Goldstone field, must be singled out and treated separately. That mode undergoes large-amplitude fluctuations and upon quantization restores the symmetry. In the finite system, the small-amplitude fluctuations, i.e., the proper Nambu-Goldstone modes with nontrivial temporal and spatial dependence, must likewise be quantized. The theoretical implementation of this program is not trivial and is demonstrated for two interesting and important cases. We first consider as an example the emergent breaking of a $U(1)$ phase symmetry in finite superfluids such as ultracold bosonic atom gases or atomic nuclei. In this case, the proper Nambu-Goldstone modes and the global phase rotations do not couple in leading order, and both modes have the same energy scale. This facilitates the description. Second, we consider the emergent breaking of $SO(3)$ symmetry to its $SO(2)$ subgroup. This case describes the low-energy physics of nonspherical objects with axial symmetry such as linear molecules and deformed atomic nuclei. The case is more complicated and interesting due to the interactions between global rotations and proper Nambu-Goldstone modes, and the energy scale of the rotational mode differs from the energy scale of the proper Nambu-Goldstone modes. Our detailed presentation of these two problems makes clear how to develop EFTs for systems with emergent symmetry breaking in general. In this paper we construct EFTs for finite systems with emergent symmetry breaking, and we focus particularly on deformed atomic nuclei. Such nuclei are traditionally described by generalized collective models~\cite{bohr_1952,bohrmottelson_1953,eisenberg,bmbook,rowe} or the interacting boson model~\cite{arima1975,iachello}. For more microscopic approaches to collective motion, we refer the reader to Refs.~\cite{aberg1990,frauendorf2001,niksic2011}. For deformed nuclei, the presented EFT generalizes the simpler construction of an effective theory proposed recently~\cite{papenbrock2011}. Based on symmetry principles alone, our model-independent approach re-derives some of the well-known results for collective nuclear models. We expect that extensions of the EFT approach could be useful in addressing well-known and long-standing limitations of the collective models such as, e.g., the significant overprediction of electromagnetic interband transitions, see Refs.~\cite{garrett2001,rowe} for recent reviews of this problem. Our procedure is patterned after the case of the infinite ferromagnet~\cite{Leutwyler1994,roman1999, kampfer2005}. We generalize the expression for unitary transformations in the coset space by including purely time-dependent variables. These account for the dynamics of the finite system. The resulting generators define the Nambu-Goldstone modes as well as the zero modes as classical fields. From these we construct the building blocks of the effective Lagrangian $L$ using arguments of invariance and energy scaling. Quantization of the Hamiltonian obtained by a Legendre transformation of $L$ then determines the spectrum. 
\section{Emergent breaking of U(1) phase symmetry} \label{emer} Superfluids can be viewed as breaking $U(1)$ phase symmetry. Examples are infinite Bose-Einstein condensates (BEC) or the paired states of a BCS superconductor. In their mean-field description, these systems exhibit a coherent phase at the expense of a well-defined particle number. In finite systems the particle number is, of course, a good quantum number. While the results we derive in this Section are well know (see, e.g. Ref.~\cite{wen}), their derivation exhibits novel aspects and paves the way for the description of deformed nuclei within an EFT. In finite superfluids the low-lying excitations are governed by two energy scales. These are the chemical potential $\mu$ (the energy needed to add a single boson to the system), and the energy $\Omega$ of long wave-length excitations. These scales are different for noninteracting and for interacting systems. We consider harmonically trapped bosons as an example. For noninteracting bosons, the proper thermodynamic limit is defined~\cite{dalfovo1999} by keeping the product of particle number $N$ and the third power of the trap frequency constant. That frequency, in turn, defines $\mu$ for the condensate. Hence $\mu \sim \Omega \sim N^{-1/3}$ scale similarly. As $N \to \infty$, the ground states of systems with different particle numbers $N$ become quasi degenerate, superpositions of such states (i.e., states with a constant phase and undefined $N$) describe the superfluid, and the $U(1)$ phase symmetry is broken in the thermodynamic limit. For finite uniform systems, both energies scale as $\mu\sim\Omega\sim N^{-2/3}$~\cite{huang1987}, and the arguments apply likewise. For interacting Bose gases of volume $V$, the chemical potential $\mu$ typically approaches in the thermodynamic limit the nonzero value $\mu \sim g N/V$ where $g$ measures the strength of the interaction~\cite{dalfovo1999}. Similarly, for BCS superconductors, adding a pair roughly costs an energy of the order of the Fermi energy. The latter becomes constant in the thermodynamic limit. In both cases the breaking of $U(1)$ symmetry can be understood in the framework of the grand canonical ensemble. The system is coupled to a particle reservoir with external chemical potential $\mu_{\rm ext}$, and the Hamiltonian $H-\mu_{\rm ext}N$ is minimized. Adjusting $\mu_{\rm ext}$ such that $\mu_{\rm ext}\approx\mu$ for $N \gg 1$ introduces a quasi degeneracy between states of different particle numbers, and a superposition of these states then breaks $U(1)$ symmetry. However, the canonical and the grand canonical ensemble yield different results. For nonextensive quantities the differences decrease as $N^{-1/2}$ for $N \to \infty$. It is only within this uncertainty that an isolated finite system can be viewed as equivalent to a finite system coupled to a particle reservoir. Technically, the introduction of a finite external chemical potential $\mu_{\rm ext}$ breaks time-reversal invariance, and the resulting effective theory differs from the case $\mu_{\rm ext} = 0$. As we will see, the latter can be recovered from the former by simply setting $\mu_{\rm ext} = 0$ in the leading-order equations we derive below. For nonzero $\mu_{\rm ext}$, the low-energy scales of interest are $\mu -\mu_{\rm ext}$ and $\Omega$, and we assume that both are similar in size. In the case of a broken $U(1)$ symmetry we have ${\cal G} = U(1)$ and ${\cal H} = 1$. 
The Nambu-Goldstone fields parameterize the coset ${\cal G}/{\cal H}\sim {\cal G}$, which is the group itself. The fields induce local phase transformations, and in a finite system the relevant operator is \begin{equation} U(\alpha,\beta) = e^{i\alpha(t)}e^{iV^{1/2}\beta(t,\vec{x})} \ . \label{uni} \end{equation} Here, $\beta$ is the Nambu-Goldstone field, the angle $\alpha$ is the zero mode that needs to be singled out, and $V$ is the volume. For a proper Nambu-Goldstone field we have $\int_V {\rm d}^3x \beta = 0$. Following Refs.~\cite{weinbergbook,brauner2010}, we build the invariants of our theory from the derivatives ($\nu = x, y, z$) \begin{eqnarray} -i U^{-1} \partial_t U &=& \dot\alpha +V^{1/2} \dot\beta \ , \\ -i U^{-1} \partial_\nu U &=& V^{1/2} \partial_\nu \beta \end{eqnarray} which are in the Lie algebra of ${\cal G}$. Here the dot denotes the time derivative. Under a global phase transformation with angle $\gamma$, the operator $U$ becomes $e^{i\gamma}U(\alpha,\beta) = U(\alpha+\gamma,\beta)$. Thus, $\dot\alpha$, $\dot\beta$, and $\partial_\nu \beta$ are invariant under global phase transformations. Note also that $\beta$ is a truly ``intrinsic'' degree of freedom because it is unaffected by a global phase transformation. We use these invariants to construct the leading-order terms in the effective Lagrangian, taking account of energy scales. The scale associated with the $\alpha$ degree of freedom is $\mu - \mu_{\rm ext}$, that associated with $\beta$ is $\Omega$. Assuming also invariance under rotations, we have the invariants \begin{eqnarray} L_0 &\equiv& C_1\mu_{\rm ext}\int\limits_V {\rm d}^3x \dot\alpha = C_1V\mu_{\rm ext}{\dot\alpha} \ , \nonumber \\ L_1 &\equiv& {C_1\over 2} \int\limits_V {\rm d}^3x \dot\alpha^2 = {C_1V\over 2}{\dot\alpha}^2 \ , \nonumber\\ L_2 &\equiv& {C_2 \over 2} \int\limits_V {\rm d}^3x \dot\beta^2 \ , \ L_3 \equiv {D \over 2} \int\limits_V {\rm d}^3x \left(\nabla \beta \right)^2 \ . \end{eqnarray} Here, $C_1$, $C_2$, and $D$ are constants that have to be determined from low-energy data. The Lagrangian is \begin{equation} \label{eff} L = {C_1 V \over 2}{\dot\alpha}^2 + C_1 V \mu_{\rm ext} \dot\alpha + \int\limits_V {\rm d}^3x \left( {C_2\over 2} \dot\beta^2 -{D\over 2} \left(\nabla\beta\right)^2 \right) \ . \end{equation} The conserved quantity corresponding to invariance under global phase transformations is the particle number \begin{equation} \label{nparticle} N\equiv p_\alpha \equiv {\partial L\over \partial\dot\alpha} \ . \end{equation} This conserved quantity can be derived via the Noether theorem, and $p_\alpha$ is the canonical momentum of $\alpha$. We expand $\beta(t,\vec{x}) = \sum_j \beta_j \phi_j(\vec{x})$ in a set of orthonormal complex functions $\phi_j(\vec{x})$, $j=1, 2, \ldots$ with $\int_V {\rm d}^3 x \phi_j(\vec{x}) = 0$ (absence of zero modes for the Nambu-Goldstone field). As an example, we choose the eigenfunctions of a free particle in a spherical cavity of volume $V$ with von Neumann boundary conditions. The Lagrangian becomes \begin{equation} L = {C_1 V\over 2} {\dot\alpha}^2 + C_1 V \mu_{\rm ext}\dot\alpha + \sum_{j>0}\left({C_2\over 2}{\dot\beta}_j^2 - {D k_j^2\over 2}\beta_j^2 \right) \ . \end{equation} Here, $k_j^2$ denotes the squared momentum of the spherical wave $\phi_j(\vec{x})$. 
A Legendre transformation with $p_{\beta j} \equiv \partial L / \partial {\dot\beta}_j$ and Eq.~(\ref{nparticle}) yield the Hamiltonian \begin{equation} H = {\left(p_\alpha - C_1V\mu_{\rm ext}\right)^2\over 2C_1V} + \sum_{j>0} \left({p_{\beta j}^2\over 2C_2} + {D k_j^2 \over 2}\beta_j^2 \right) \ . \end{equation} We quantize $H$ by putting $p_\alpha = -i\partial_\alpha$, and $p_{\beta j} = -i \partial_{\beta_j}$. Then $p_\alpha e^{iN\alpha} = N e^{iN\alpha}$. The intrinsic degrees of freedom $\beta_j$ yield harmonic-oscillator spectra with energies \begin{equation} \label{evib} \omega_j \equiv k_j \left({D\over C_2}\right)^{1/2} \ . \end{equation} In the long-wave-length limit we have $k_j \sim V^{-1/3}$, and the measurement of the low-energy collective excitations of the superfluid determines the ratio $D/C_2$. The amplitude of the Nambu-Goldstone modes (i.e., the oscillator length) is \begin{equation} \label{osclen} l_j\equiv \left(C_2Dk_j^2\right)^{-1/4} \ , \end{equation} and this dimensionless quantity is assumed to be small, $l_j \ll 1$. For fixed particle number $N$, the $\alpha$-dependent part of the Hamiltonian yields the energy $E_N \equiv \left(N-C_1V\mu_{\rm ext} \right)^2 / (2C_1V)$. For $N \gg 1$ the energy difference of a system with $N + 1$ and one with $N$ particles is \begin{equation} \label{chempot} E_{N+1}-E_N \approx {N \over C_1 V} - \mu_{\rm ext} \ . \end{equation} We note that $E_{N+1}-E_N = \mu-\mu_{\rm ext}$ on physical grounds. The emergent breaking of $U(1)$ requires $E_{N+1}-E_N\approx 0$, and Eq.~(\ref{chempot}) relates the constant $C_1$ to the chemical potential and to the density of the system. We see that the chemical potential and the frequencies and amplitudes of the quantized collective vibrations determine the low-energy constants of the EFT. In addition to the collective vibrations with frequencies~(\ref{evib}) one finds approximately equidistant levels with spacing $\mu - \mu_{\rm ext}$ belonging to superfluids with different particle numbers. In superfluid atomic nuclei, these harmonic excitations belonging to different numbers of paired nucleons are known as pairing vibrations~\cite{Bes1966}. In summary we have shown that in finite superfluids, the spectra of systems with different particle numbers are related to each other, and this is a model-independent prediction of the EFT. We turn to higher-order corrections and establish our power counting. The energy scales used in the construction of $L$ in Eq.~(\ref{eff}) are (i) the scale $\mu - \mu_{\rm ext}$ associated with a change in particle number (we have assumed $\dot\alpha \sim \mu-\mu_{\rm ext}$) and (ii) the scale $\Omega$ of the collective vibrations. We have assumed that both scales are of similar size, $\mu - \mu_{\rm ext} \sim \Omega$. Moreover, in the low-energy domain the amplitudes of the Nambu-Goldstone modes $\beta_j$ given by Eq.~(\ref{osclen}) obey $l_j \ll 1$. Thus, we have \begin{eqnarray} \label{scalingu1} \beta_j \sim l_j \ , \ {\dot\beta}_j \sim \Omega l_j \ , C_2 \sim \Omega^{-1} l_j^{-2} \ , \ p_{\beta j} \sim l_j^{-1} \ . \end{eqnarray} In the construction of terms of higher order, we consider only powers of single derivatives because higher-order derivatives of the Nambu-Goldstone field $\beta$ and of the zero mode $\alpha$ can be eliminated via perturbative field redefinitions~\cite{damour1991,grosseknetter1994}. The EFT has a breakdown scale $\Lambda$ with $\Lambda \gg \Omega, |\mu_{\rm ext} -\mu|$. 
At this scale, $\dot\alpha$ (and $\dot\beta$, $\partial_\nu \beta$, $\beta$) are a factor $\sqrt{\Lambda/|\mu_{\rm ext}-\mu|}$ (and $\sqrt{\Lambda/\Omega}$, respectively) larger than in the low-energy domain. At the breakdown scale, a Lagrangian term of the form $C_{pqr}{\dot\alpha}^p{\dot\beta}^q \beta^r$ must scale as $\Lambda$ when written in terms of such velocities and fields. This determines the scaling of $C_{pqr}$. When evaluated for velocities and fields in the low-energy domain, that term yields a contribution to $L$ of order $\Lambda (|\mu_{\rm ext} - \mu| / \Lambda)^{p/2} (\Omega/\Lambda)^{q/2+r/2}$. Similar considerations apply for terms containing powers of the spatial derivatives $\partial_\nu\beta$. For energies below the breakdown scale, all these higher-order corrections are perturbatively small. Thus, our procedure yields a perturbative expansion in powers of $|\mu_{\rm ext}-\mu|/\Lambda$ and $\Omega/\Lambda$. \section{Emergent breaking of rotational symmetry} For finite objects with an axially symmetric ground-state deformation, the emergent symmetry breaking is from the rotation group to the subgroup of axial symmetry. It is, therefore, useful to recall the breaking of spin-rotational symmetry $O(3)$ to $O(2)$ in infinite ferromagnets~\cite{Leutwyler1994,roman1999, Baer2004}. In the ground state, all spins point in the same direction, violating the spin-rotational symmetry of the Hamiltonian. Ground states with macroscopically different spin directions have zero overlap and define inequivalent Hilbert spaces. As a consequence, the low-lying spectrum of the ferromagnet is dominated by Nambu-Goldstone modes, i.e., spin waves of long wave length that locally induce small rotations of the aligned spins. For a ferromagnet of finite size~\cite{Hasenfratz1993}, the formerly inequivalent Hilbert spaces are connected by non-vanishing tunneling matrix elements, the ground states belonging to different spin directions have nonzero overlap, and there exist states that are superpositions of these ground states. Such states are, for instance, the Wigner $D$-functions for rotational motion. Physically that implies that the ferromagnet may rotate about an axis perpendicular to the direction of the aligned spins. An analogous situation occurs in linear molecules and in axially symmetric deformed even-even nuclei. In the limit of infinitely large size, the low-lying parts of the spectra of these systems would be determined by Nambu-Goldstone modes. But finite linear molecules or finite axially symmetric nuclei may rotate about an axis perpendicular to the symmetry axis. In all three cases, rotational motion occurs as a consequence of emergent symmetry breaking due to the finite size of the system. Rotational motion is, therefore, distinctly different from the Nambu-Goldstone modes. It plays the same role as the zero mode $\alpha$ in Eq.~(\ref{uni}) for the emergent breaking of $U(1)$ phase symmetry. For a deformed nucleus with $A \gg 1$ nucleons, we can quantify these statements. The linear extension of the system is $\propto A^{1/3}$. The moment of inertia is proportional to mass $\times$ (length)$^2 \propto A^{5/3}$. With increasing $A$, the frequency of rotational motion tends to zero like $A^{- 5/3}$, faster than the wave number $\propto A^{- 1/3}$ of the massless modes (analogues of the Nambu-Goldstone modes in the infinite system). 
Hence, there exists a regime of $A$ values where the scale $\xi$ for rotational motion is small compared to the scale $\omega$ for vibrational motion and where $\omega$ in turn is small compared to the breakdown scale $\Lambda$ of the EFT defined by pair-breaking excitations~\cite{dean2003}. In the regime $\xi \ll \omega \ll \Lambda$, it is meaningful to consider rotational motion as a small (in energy) correction to the Goldstone theorem. That is the regime we study here. For instance, in rare-earth nuclei we have $\xi \approx 80$~keV from the lowest energy spacing in a rotational band, and $\omega \approx 1$~MeV from the lowest ``vibrational'' band head, while $\Lambda \approx 2-3$~MeV is the cost of pair breaking. The condition $\omega \ll \Lambda$ is met only marginally. Corresponding considerations apply to linear molecules such as CO$_2$. Here the scale of rotational energies is $\xi \approx$~1~cm$^{-1}$, that of vibrational energies is $\omega \approx$~500~cm$^{-1}$, while the breakdown scale $\Lambda\approx$~10,000~cm$^{-1}$ is defined by electronic excitations. The conditions $\xi \ll \omega \ll \Lambda$ are very well fulfilled. Axially symmetric deformed even-even nuclei consist of nucleons, and linear molecules consist of nuclei. The locations of these constituents have body-fixed spherical coordinates $r, \theta, \phi$. Vibrations of the nucleus/molecule about an axis perpendicular to the symmetry axis act locally on the constituents. The resulting dislocations are assumed to have small amplitude. We expect that Nambu-Goldstone modes related to the coordinate $r$ have higher frequencies than those due to $\theta$ and $\phi$. We, therefore, confine attention to the latter variables although our approach can be straightforwardly generalized. In addition to these small-amplitude vibrations we also consider global rotations of the entire nucleus/molecule. Our effective theory is universal and applies both to linear molecules and to deformed nuclei. Apart from the magnitude of the low-energy constants, the key difference between both systems is in the symmetry properties of the deformed ground--state wave functions as these define the symmetry properties of the admissible low--lying Nambu-Goldstone modes~\cite{weinbergbook}. We assume that molecular and nuclear ground-state wave functions are axially symmetric about the body-fixed $z'$-axis, invariant under time-reversal, and have positive parity. As a consequence of pairing (superfluidity), even-even nuclei differ from linear molecules in that their intrinsic ground states are also invariant under rotations by $\pi$ about any axis perpendicular to the symmetry axis, i.e., possess positive ${\cal R}$ parity~\cite{bohr1958,bmbook}. Hence, low-energetic intrinsic excitations in nuclei must also have positive ${\cal R}$ parity. \subsection{Dynamical Variables and Power Counting} As done for the case of the ferromagnet~\cite{Leutwyler1994,roman1999,kampfer2005} and in Sect.~\ref{emer}, we consider the Nambu-Goldstone modes as classical fields that are later quantized. We prefer to write these fields in the space-fixed (rather than the body-fixed) coordinate system because here the commutation relations of the three generators $J_x, J_y, J_z$ of infinitesimal rotations about the space-fixed $x, y, z$ axes, respectively, are of standard form. The molecular/nuclear ground state is invariant under $SO(2)$ rotations about the body-fixed $z'$-axis while $SO(3)$ symmetry is broken by the deformation. 
Therefore, the Nambu-Goldstone modes lie in the coset space $SO(3) / SO(2)$~\cite{coleman1969,callan1969,brauner2010}. The modes depend on the angles $\theta, \phi$ defined above and on time $t$, and are generated by a unitary transformation $U$. As in Eq.~(\ref{uni}) we parameterize the matrix $U$ in product form, \begin{eqnarray} \label{1} U &=& g(\alpha, \beta) u(x,y) \ , \nonumber\\ g(\alpha, \beta) &=& \exp \left\{ -i \alpha(t) J_z \right\} \exp\left\{ -i \beta(t) J_y \right\} \ , \nonumber\\ u(x,y) &=& \exp\left\{-i x(\theta, \phi, t) J_x -i y(\theta, \phi, t) J_y \right\} \ . \end{eqnarray} The purely time-dependent variables $\alpha(t)$ and $\beta(t)$ are the zero modes. They describe global rotations of the finite system and are factored out~\cite{Leutwyler1987}. They are not Nambu-Goldstone modes but upon quantization generate rotational bands~\cite{chandrasekharan2008,papenbrock2011}. The fields $x(\theta, \phi, t)$ and $y(\theta, \phi, t)$ with $|x|, |y| \ll 1$ generate small-amplitude vibrations of the constituents. These depend non-trivially on $\theta$ and $\phi$ so that \begin{equation} \label{2} \int {\rm d} \Omega \ x(\theta, \phi, t) = 0 = \int {\rm d} \Omega \ y(\theta, \phi, t) \ . \end{equation} Here ${\rm d} \Omega$ is the surface element of the three-dimensional unit sphere. In an infinite system $x(\theta, \phi, t)$ and $y(\theta, \phi, t)$ would be genuine Nambu-Goldstone modes. Eqs.~(\ref{1}) and (\ref{2}) define the dynamical variables of the system. Eq.~(\ref{1}) may look like a rather special ansatz but actually follows from the most general form of $U$. Further progress hinges on the identification of the energy scales $\xi$, $\omega$, and $\Lambda$ defined above. The ranges of the variables $\alpha$ and $\beta$ being of order unity, the ratios $\dot{\alpha} / \alpha$ and $\dot{\beta} / \beta$ are governed by the energy scale $\xi$ of rotational motion. We have $|x|, |y| \ll 1$, indicating that the amplitudes of the Nambu-Goldstone fields are small. Then $|\dot{x}|\sim \omega |x|$ and $|\dot{y}| \sim \omega|y|$. We are going to show that power counting based upon the inequalities $\xi \ll \omega \ll \Lambda$ together with the symmetry requirements formulated above uniquely determine the leading-order part of the Hamiltonian, except for a small number of constants that have to be determined by fits to data. Our EFT is characterized by two breakdown scales. The first scale is set by $\Lambda \gg \omega$ and marks the appearance of neglected degrees of freedom. In deformed nuclei these are single-particle degrees of freedom or pair-breaking effects and in linear molecules, electronic excitations. The second scale is related to large-amplitude excitations of the Nambu-Goldstone fields. That scale is reached when these excitations are so large that they practically restore the spherical symmetry of the intrinsically deformed object, or when the energy due to the zero-mode velocities $\dot\alpha$ and $\dot\beta$ reaches the vibrational scale $\omega$. This second scale is set by $\omega^2/\xi$. For well-deformed nuclei, that scale considerably exceeds $\Lambda$, while both scales are similar in size for linear molecules. We now discuss both breakdown scales separately. At energies below $\Lambda$ the neglected degrees of freedom cause the appearance of higher-order terms in the effective Lagrangian of the EFT. Such terms involve powers of the leading-order fields and velocities and, possibly, also higher derivatives. 
The latter can be eliminated via perturbative field redefinitions~\cite{damour1991,grosseknetter1994} and are not considered here. At the breakdown scale (where the amplitudes and velocities of the Nambu-Goldstone fields are a factor $(\Lambda/\omega)^{1/2}$ larger than in the low-energy domain), a term in the effective Lagrangian with $\tau$ velocities and $n$ powers of $x$ or $y$ is of order $\Lambda$. At the low-energy scale $\omega$ that term yields a contribution of order $\omega (\omega / \Lambda)^{(n+\tau)/2-1}$. Terms like that give rise to small corrections and, at each order, are finite in number. Similar considerations apply to the spatial derivatives. The scale for the breaking of emergent symmetry is reached when the amplitudes, velocities and higher derivatives are a factor $(\omega / \xi)^{1/2}$ larger than in the low-energy domain. The arguments of the previous paragraph can essentially be repeated by replacing the scale $\Lambda$ by $\omega^2/\xi$, and the ratio $\Lambda / \omega$ by $\omega / \xi$. At the breakdown scale the contribution of a term in the effective Lagrangian which contains $\sigma$ velocities is of order $\omega$. At the low-energy scale $\xi$ that term scales as $\xi (\xi / \omega)^{\sigma/2-1}$. These arguments establish our power counting. As a result, our EFT provides an expansion in the two small parameters $\omega / \Lambda$ and $\xi / \omega$, and there is a finite number of terms for each power of these parameters. \subsection{Effective Lagrangian} The effective Lagrangian is built from invariants. These are constructed from elements $a_\mu^x, a_\mu^y, a_\mu^z$ defined by \begin{eqnarray} \label{3} U^{-1} i \partial_\mu U &=& a_\mu^x J_x + a_\mu^y J_y + a_\mu^z J_z \ . \end{eqnarray} The symbol $\partial_\mu$ with $\mu = 1, 2, 3$ stands for the partial derivatives with respect to the angles $\theta, \phi$ and time $t$. Explicit expressions for these elements are obtained from Eqs.~(\ref{1}) and (\ref{3}) in terms of a series expansion in powers of $x$, $y$, and their partial derivatives where only leading-order terms are kept. We use the Baker--Campbell--Hausdorff expansion and obtain \begin{eqnarray} a^x_t &=& \dot{x} +{y \over 6} (x \dot{y} - y \dot{x}) -\dot{\alpha} \sin \beta - y \dot{\alpha} \cos \beta + \ldots\ , \nonumber \\ a^y_t &=& \dot{y} -{x \over 6} (x \dot{y} - y \dot{x}) + \dot{\beta} + x \dot{\alpha} \cos \beta + \ldots \ , \\ \label{3a} a^z_t &=& -{1 \over 2} (x \dot{y} - y \dot{x}) + \dot{\alpha} \cos \beta - y \dot{\alpha} \sin \beta - x \dot{\beta} + \ldots \ , \nonumber \end{eqnarray} and \begin{eqnarray} a^x_\nu &=& \partial_\nu{x} +{y \over 6} (x \partial_\nu{y} - y \partial_\nu{x}) + \ldots\ , \nonumber \\ a^y_\nu &=& \partial_\nu{y} -{x \over 6} (x \partial_\nu{y} - y \partial_\nu{x}) + \ldots \ , \\ \label{3b} a^z_\nu &=& -{1 \over 2} (x \partial_\nu{y} - y \partial_\nu{x}) + \ldots \ , \nonumber \end{eqnarray} To define the invariants we calculate the changes induced on the variables $\alpha, \beta, x, y$ by infinitesimal rotations $r$ of $U$ about angles $\delta \chi_k$ around the space-fixed $k = x, y, z$ axes. With $h(\gamma) = \exp \{ - i \gamma J_z \}$ we have from Eq.~(\ref{1}) that \begin{equation} \label{rot1} r g(\alpha, \beta) = g(\alpha', \beta') h(\gamma') \ . \end{equation} Here, the primed angles depend on the angles of the rotation $r$ and the angles $\alpha$, $\beta$. The right-hand side of Eq. 
(\ref{rot1}) has the form of a general rotation with Euler angles ($\alpha', \beta',\gamma'$), with explicit expressions given in Ref.~\cite{papenbrock2011}. Thus, \begin{eqnarray} r U &=&g(\alpha', \beta') h(\gamma') u \nonumber\\ &=&g(\alpha', \beta') \ [ h(\gamma') u h^\dag(\gamma') ]\ h(\gamma') \ . \end{eqnarray} As a result we find that the angles $\alpha$ and $\beta$ transform nonlinearly as the azimuthal and polar angle of the two-sphere, respectively, while $x$ and $y$ transform linearly as the $x$ and $y$ components of a vector under rotations around the $z$ axis. In other words, under a rotation, the nucleus as a whole changes its orientation and undergoes a rotation around its symmetry axis. This transformation behavior under rotations confirms that $\alpha$ and $\beta$ describe the global orientation of the axially symmetric nucleus, while $x$ and $y$ are ``intrinsic'' degrees of freedom. Thus any combination of $x$ and $y$ that formally exhibits axial symmetry is indeed fully invariant under rotations. For example, $x^2 + y^2$ is invariant under rotations, and the four quantities $x, y, \dot{x}, \dot{y}$ are transformed into linear combinations of $x', y', \dot{x}', \dot{y}'$. These transformation properties are characteristically different from the ones for an infinite system where in Eq.~(\ref{1}) we would have $g(\alpha, \beta) = 1$. Time-reversal invariance requires that invariants involving time derivatives must contain even powers of $a^x_t, a^y_t, a^z_t$. The lowest-order invariants obtained from Eqs.~(\ref{3a}) are \begin{eqnarray} \label{4} {\cal L}_{1a} &=& \dot{\beta}^2 + \dot{\alpha}^2 \sin^2 \beta \ , \nonumber \\ {\cal L}_{1b} &=& \dot{x}^2 + \dot{y}^2 + 2(x \dot{y} - y \dot{x}) \dot{\alpha} \cos \beta \ , \nonumber \\ {\cal L}_{1c} &=& (x \dot{y} - y \dot{x})^2 \ , \nonumber \\ {\cal L}_{1d} &=& (x^2 +y^2)[\dot{x}^2 + \dot{y}^2 + 2(x \dot{y} - y \dot{x}) \dot{\alpha} \cos \beta] \ . \end{eqnarray} We note that the invariant ${\cal L}_{1a}$ is essentially the Lagrangian of a rotor, and that the Lagrangian density ${\cal L}_{1b}$ couples global rotations to the Nambu-Goldstone modes. The invariant ${\cal L}_{1 c}$ is related to the angular momentum of the Nambu-Goldstone modes, see Eq.~(\ref{K}). The invariant ${\cal L}_{1d}$ is obtained by multiplying ${\cal L}_{1b}$ with the invariant $(x^2 + y^2)$ and is of the same order as ${\cal L}_{1c}$. As for the invariants constructed from $a^x_\nu$, $a^y_\nu$ $a^z_\nu$ with $\nu = \theta$ or $\nu = \phi$, we use that for fixed $\nu$ and $\nu'$, the forms $(a^x_\nu)^2 + (a^y_\nu)^2$, $a^z_\nu$ and $a^z_\nu a^z_{\nu'}$ are invariant. Admissible linear combinations of these expressions are defined by the requirement of axial symmetry. Suppressing terms of higher order than $x^4$ and multiplying with the additional invariant $(x^2 + y^2)$, we find the invariants \begin{eqnarray} \label{5} {\cal L}_{2a} &=& (\vec{\bf L} x)^2 + (\vec{\bf L} y)^2 \ , \nonumber \\ {\cal L}_{2b}&=& ({\bf L}_z x)^2 + ({\bf L}_z y)^2 \ , \nonumber \\ {\cal L}_{2c} &=& (x \vec{\bf L} y - y \vec{\bf L} x)^2 \ , \nonumber \\ {\cal L}_{2d} &=& (x^2 + y^2) \left( (\vec{\bf L} x)^2 + (\vec{\bf L} y)^2 \right) \ . \end{eqnarray} Here $\vec{\bf L}$ (${\bf L}_z)$ is the vector (the $z$-component) operator of orbital angular momentum, respectively, written in terms of $\theta$ and $\phi$~\cite{varshalovich1988}. The occurrence of the term ${\cal L}_{2 b}$ reflects the fact that we impose only axial rather than rotational symmetry on ${\cal L}$. 
The Lagrangian $L$ is given by \begin{eqnarray} \label{6} L &=& L_1 + L_2 \nonumber\\ &=& \sum_{i = a, b, c, d} \int {\rm d} \Omega \ \bigg( \frac{C_i}{2} {\cal L}_{1 i} - \frac{D_i}{2} {\cal L}_{2 i} \bigg) \ . \end{eqnarray} Here $C_i$ and $D_i$ with $i = a, b, c, d$ are constants that are determined by low-energy data. We expand the real variable $x$ as \begin{eqnarray} \label{7} x = \sum_{\lambda = 2}^\infty \sum_{\mu = -\lambda}^\lambda x_{\lambda \mu} Z_{\lambda \mu} \end{eqnarray} and correspondingly for $y$, $\dot{x}$, $\dot{y}$. Aside from normalization constants, the real orthonormal functions $Z_{\lambda \mu}$ are equal to the real part (for $\mu\ge 0$) and imaginary part (for $\mu<0$) of the spherical harmonics $Y_{\lambda \mu}$. The coefficients $x_{\lambda\mu}$ are real. Terms with $\lambda = 0$ and $\lambda = 1$ are excluded since $\lambda = 0$ violates Eq.~(\ref{2}) and describes global rotations while $\lambda = 1$ describes translations in space~\cite{bmbook}. We insert the expansions~(\ref{7}) into Eq.~(\ref{6}) and use the resulting expression for $L$ to define the real canonical momenta \begin{eqnarray} \label{8} p_\beta &=& \frac{\partial L}{\partial \dot{\beta}} \ , \ p_\alpha = \frac{\partial L}{\partial \dot{\alpha}} \ , \nonumber\\ p^x_{\lambda \mu} &=& \frac{\partial L}{\partial \dot{x}_{\lambda \mu}} \ , \ p^y_{\lambda \mu} = \frac{\partial L}{\partial \dot{y}_{\lambda \mu}} \ . \end{eqnarray} From Noether's theorem we obtain explicit expressions for the three components $I_x, I_y, I_z$ of angular momentum. These are \begin{eqnarray} I_x &=& - p_\beta \sin \alpha - p_\alpha \cot \beta \cos \alpha + {\cos \alpha \over \sin \beta} \ K \ , \\ I_y &=& p_\beta \cos \alpha - p_\alpha \cot \beta \sin \alpha + {\sin \alpha \over \sin \beta} \ K \ , \\ I_z &=& p_\alpha \ . \end{eqnarray} Here \begin{eqnarray} \label{K} K = \int {\rm d} \Omega \ (x p_y - y p_x ) \end{eqnarray} is the angular momentum of the two-dimensional oscillators that describe the intrinsic vibrations. The square of the total angular momentum is \begin{equation} \label{9} I^2 = p_\beta^2 +{1 \over \sin^2\beta} \left(p_\alpha^2 -2Kp_ \alpha \cos \beta + K^2 \right) \ . \end{equation} The terms in Eqs.~(\ref{9}) obtained by putting $K = 0$ can be shown to be equal to the square of the total angular momentum of the rigid rotor. \subsection{Effective Hamiltonian and Quantization} We use Eqs.~(\ref{8}) and the standard Legendre transformation to transform the effective Lagrangian $L$ into the effective Hamiltonian $H$. The scales of the coefficients $C_i$ and $D_i$ (and the terms that are kept in $H$) are determined by assuming $C_a {\cal L}_{1a} \sim \xi$, $C_b {\cal L}_{1b} \sim \omega$, $C_c {\cal L}_{1c}, C_d {\cal L}_{1d} \sim |x|^2 \omega$, $p_\beta, p_\alpha \sim 1$, and $p_x, p_y \sim |x|^{-1}$. Relevant parts of $H$ are given below. For the rigid-rotor part of $H$, quantization is achieved by symmetrization with respect to $\alpha, \beta$ and by putting $p_\beta = -i (\sin \beta)^{- 1/2} \partial_\beta (\sin \beta)^{1/2}$, $p_\alpha = -i \partial_\alpha$. This is the usual quantization for a particle on the sphere. For the remaining canonical momenta we have \begin{equation} \label{10} p^x_{\lambda \mu} = - i \frac{\partial}{\partial x_{\lambda \mu}} \ , \ p^y_{\lambda \mu} = - i \frac{\partial}{\partial y_{\lambda \mu}} \ . \end{equation} Substitution of these expressions into Eq.~(\ref{9}) yields the quantized form of the square $\hat{I}^2$ of the operator of total angular momentum. 
The operator $\hat{K}$ is given by $\hat{K} = \sum_{\lambda\mu} \left(x_{\lambda\mu} p^y_{\lambda\mu} - y_{\lambda\mu} p^x_{\lambda \mu} \right)$. The three components $\hat{I}_x$, $\hat{I}_y$, $\hat{I}_z$ of the quantized angular momentum can be shown to obey the standard commutation relations. Moreover, every component commutes with $\hat{K}$ and with the quantized Hamiltonian $\hat{H}$. A complete set of commuting operators is, therefore, $\hat{I}^2, \hat{I}_z, \hat{K}, \hat{H}$. \subsection{Spectra} The leading-order (${\cal O}(\omega)$) contribution to $\hat{H}$ is given by \begin{eqnarray*} \hat{H}_\omega = \sum_{\lambda \mu} \left( {(p^x_{\lambda \mu})^2 + (p^y_{\lambda \mu})^2 \over 2C_b} + {C_b\over 2} \omega^2_{\lambda \mu} \left( x_{\lambda \mu}^2 + y_{\lambda \mu}^2 \right) \right) \end{eqnarray*} and describes a set of uncoupled harmonic oscillators with frequencies $\omega_{\lambda \mu} = [(\lambda ( \lambda + 1) D_a + \mu^2 D_{b}) / C_b]^{1/2}$. We combine the degrees of freedom $x_{\lambda \mu}$ and $y_{\lambda \mu}$ into a two-dimensional $SO(2)$ symmetric harmonic oscillator with quantum numbers $n_{\lambda \mu} = 0, 1, 2, \ldots$ and $k_{\lambda \mu} = 0, \pm 1, \pm 2, \ldots$. In units of $\omega_{\lambda \mu}$ the energies are $2n_{\lambda \mu} + |k_{\lambda \mu}| +1$. The operator $\hat{K}$ has eigenvalues $K = \sum_{\lambda\ge 2} \sum_{\mu = - \lambda}^\lambda k_{\lambda \mu}$. Next-order corrections to the Hamiltonian $\hat{H}_\omega$ are either of order ${\cal O}(\xi)$ or of order ${\cal O}(x^2\omega)$. The former couple rotations to vibrations. The latter add anharmonicities to the harmonic vibrations and thereby lift the degeneracies. We confine ourselves here to the former. The Hamiltonian is \begin{equation} \label{12} \hat{H}_{\omega,\xi}= \hat{H}_{\omega} + {\hat{I}^2 - \hat{K}^2 \over 2C_a} \ , \end{equation} with $\hat{I}^2$ given by the quantized form of Eq.~(\ref{9}). The eigenfunctions of $(\hat{I}^2 - \hat{K}^2)$ are Wigner $D$-functions $D_{M,K}^I(\alpha,\beta,0)$ that depend on total integer spin $I$ and its projections $-I \le M, K \le I$~\cite{varshalovich1988,papenbrock2011}. The eigenvalues of $\hat{I}^2$ are $I (I + 1)$ with $I \ge |K|$. We see that each vibrational state of the leading-order Hamiltonian $\hat{H}_\omega$ becomes a band head with spin $|K|$, and the spectrum exhibits a rotational band on top of each band head. At this order in the EFT, all rotational bands have the same moment of inertia $C_a$. Differences in the moments of inertia are higher-order effects~\cite{Zhang2013}. The ground state has quantum numbers $n_{\lambda \mu} = 0$, $k_{\lambda \mu} = 0$ (this implies $K = 0$) and spin $I = 0$. It has positive parity and, in the case of nuclei, positive ${\cal R}$ parity. For nuclei, this limits the rotational states in the ground-state band to even values of $I$. We turn to excited states. Here nuclei and linear molecules differ. For $D_b > 0$ the lowest single vibrational excitation corresponds to the mode $(x_{2 0}, y_{2 0})$. The fields $x$ and $y$ have positive parity, in keeping with the parity of the axial vectors $J_x$ and $J_y$ in Eqs.~(\ref{1}). The lowest excitation has $|K| = 1$, negative intrinsic ${\cal R}$ parity and, thus, values of $I=1,2,3,\ldots$. For linear molecules, states with $|K| = 1$ are indeed the lowest-lying vibrations~\cite{Herzberg1945}. In contradistinction, in nuclei such states are excluded because paired states have positive ${\cal R}$ parity. 
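To make the structure of the resulting spectrum concrete, the following sketch evaluates the level scheme implied by Eq.~(\ref{12}) for a single excited mode $(\lambda,\mu)=(2,0)$; the numerical values of $C_a$, $C_b$, $D_a$, and $D_b$ below are arbitrary choices made purely for illustration, not fits to data:
\begin{verbatim}
# Sketch: band heads from H_omega plus rotational bands from Eq. (12).
import numpy as np

C_a, C_b, D_a, D_b = 30.0, 1.0, 0.5, 0.2   # illustrative constants
lam, mu = 2, 0
omega = np.sqrt((lam*(lam + 1)*D_a + mu**2*D_b)/C_b)

def energy(n, k, I):
    """Oscillator energy plus rotational term; requires I >= |k|."""
    return omega*(2*n + abs(k) + 1) + (I*(I + 1) - k**2)/(2*C_a)

E0 = energy(0, 0, 0)
for I in (0, 2, 4, 6):      # ground-state band: even I only (nuclei)
    print(f"g.s. band   I={I}:  E - E0 = {energy(0, 0, I) - E0:.3f}")
for I in (1, 2, 3):         # |K| = 1 band head (allowed for molecules)
    print(f"|K|=1 band  I={I}:  E - E0 = {energy(0, 1, I) - E0:.3f}")
\end{verbatim}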
Pair breaking (i.e., generation of states with odd ${\cal R}$ parity) happens only at the breakdown scale $\Lambda$ of our EFT. Thus, pairing excludes low-lying magnetic dipole excitations~\cite{heyde2010,bentz2011} and more generally any vibrational band head with odd spin and positive parity in the low-energy regime. This essential element provides the only difference in the low-energy spectra of molecules and of deformed nuclei. Indeed, there are no low-lying $I^\pi = 1^+$ states in deformed even-even nuclei~\cite{davidson1981, aprahamian2006}. As is well known from data~\cite{davidson1981, aprahamian2006}, the ground states of deformed even-even nuclei consistently have quantum numbers $I = 0 = K$ and positive parity. Low-lying vibrational states have $K = 0$ and $|K| = 2$ and positive parity. In rare-earth nuclei, both band heads have an excitation energy of about $1$ MeV. In the present approach, these states are generated by local rotations around an axis that is perpendicular to the symmetry axis of the ground state. However, the wavefunction corresponding to $K=0$ is symmetric under any $SO(2)$ rotation of $x$ and $y$ and must thus be viewed as an axially symmetric excitation that corresponds to the $\beta$ excitation of the geometrical model~\cite{bmbook}. The $|K|=2$ wave functions exhibit no symmetry under exchange of $x$ and $y$ and thus break the axial symmetry. Thus, they correspond to the $\gamma$ mode of the geometrical model. Finally, we note that the present approach can be extended to EFTs for other cases such as general molecules. In this case, the ground state fully breaks $SO(3)$ to the point group ${\cal P}$ of the molecule, and the coset space is thus $SO(3)/{\cal P}$. This case is technically simpler than the breaking from $SO(3)$ to $SO(2)$ because the coset essentially has a group structure. The coset can be parameterized by three Euler angles, and the zero modes indeed describe the orientation of the molecule in space while the Nambu-Goldstone fields describe the intrinsic vibrations. \section{Summary} In summary, we have shown how to develop effective field theories for finite systems with emergent symmetry breaking. We have applied this approach to two types of systems. (i) Superfluids like infinite Bose-Einstein condensates (BEC) or the paired states of a BCS superconductor that break $U(1)$ phase symmetry. (ii) Systems with non-spherical ground states such as molecules and atomic nuclei that break rotational symmetry. In both cases, symmetry arguments alone yield the universal features of the low-lying excitations. In case (i) these are vibrations. We also relate the spectra of systems with different particle numbers. In case (ii) these are vibrations that are the heads of rotational bands. The moment of inertia is a fit parameter and will, in general, be different for different physical systems (for instance, superfluid and normal systems). Nuclei and molecules differ in that the ground states of even-even nuclei are paired. This accounts for the absence in nuclei of low-lying band heads with odd spin and positive parity. In contrast to phenomenological approaches, and except for a small number of constants, our approach yields an explicit expression for the Hamiltonian in leading order, and a systematic procedure to generate terms of higher order. It may, thus, be a useful starting point for the analysis of spectra and electromagnetic transitions. 
It is textbook knowledge that the traditional collective models of deformed nuclei~\cite{bmbook,eisenberg,iachello} overpredict transitions between the $\beta$ band ($\gamma$ band) and the ground-state band by a factor of about ten (four)~\cite{rowe}. Furthermore, the interpretation of low-lying collective $0^+$ states as $\beta$ band heads has been called into question by the available data on $E2$ transitions~\cite{garrett2001}. This makes it very interesting to study electromagnetic couplings within the EFT approach. \begin{acknowledgments} The authors thank N. Pietralla and A. Richter for discussions. This work has been supported by the U.S. Department of Energy under grant Nos. DE-FG02-96ER40963 (University of Tennessee) and DE-AC05-00OR22725 with UT-Battelle, LLC (Oak Ridge National Laboratory), and by the Alexander-von-Humboldt Foundation. \end{acknowledgments}
\section{Introduction} Even though Demand Response (DR) \cite{dr} technologies have been studied and practiced since the 1960s, their integration in the US wholesale markets has been facilitated by several regulatory rules and measures ever since the state of California was struck by energy crises in 2000 and 2001. Available DR technologies are mainly categorized into the following: Direct Load Control (DLC) strategies \cite{dlc1,dlc2}, where a controller centrally interrupts the jobs of participating appliances, mostly in case of emergencies and to curtail high peak load; Dynamic Pricing programs \cite{Born}, which include several rates and tariffs to manage the demand for electricity in a decentralized manner, e.g., Time of Use (TOU), Critical Peak Pricing (CPP), Real Time Pricing (RTP), and Day Ahead Pricing (DAP) rates; and Demand Bidding programs \cite{dsb}, where a market participant directly makes an offer to the wholesale market (or the retailer) for reducing electricity consumption during peak times on the next day. All the above-mentioned strategies have their own pros and cons. For example, DLC, probably the oldest and safest measure of demand management, unfortunately cannot be invoked frequently and thus offers little flexibility for integrating intermittent renewable resources into the grid. DLC programs are mainly designed for emergencies and cannot easily account for the inconvenience they cause to their customers, i.e., the Quality of Service (QoS) provided. TOU rates are designed months in advance and cannot handle real-time load management in case of emergencies or help integrate intermittent resources into the power grid. RTP may be the most practical and probably the cheapest way of managing electricity demand in the future, but it faces the challenging problem of determining what the price signals should be so as to avoid causing physical and market instabilities while still reflecting the true conditions of the market. In fact, it has been shown that RTP is likely to cause more volatility or even instabilities when customers respond to this new information and form a new feedback loop in the power system control model \cite{certs,box,mitter}. Given dynamic pricing tariffs, the responsibility of managing demand in response to these price signals cannot be left to the consumer and should be mostly automated. Consequently, there is an extensive literature emerging on Home Energy Management Systems (HEMS), e.g., \cite{han,rad}. In these works, researchers look into finding optimal designs for the software and hardware, suited for residential use, that would respond to these price signals in an automated fashion. These HEMS units receive requests from their owners specifying the appliances they plan to use in the near future and their preferences. The software then runs an optimization that plans the use of these appliances, based on their power consumption, job deadlines, and other customer-specified factors, taking into account the dynamic price made available to the unit by its associated retailer/aggregator. As observed in \cite{Kishore2010}, since all the residences are given the same dynamic price, current HEMS units that are individually operated by each residence will simultaneously schedule their loads into the low-price period, and, consequently, a new ``rebound'' peak is presented to the grid.
In this paper, we aim to blur the boundaries between RTP and DLC strategies by proposing an architecture through which HEMS units inside the territory of an aggregator/retailer can cooperate with each other to keep the demand presented by the retailer to the wholesale market balanced with the available generation supply (which might be the day-ahead bid plus locally available renewable resources). Several existing works have considered such a coordinated energy management architecture, though with different goals. For example, in \cite{Kishore2010} a heuristic neighborhood-level energy management algorithm is proposed for scheduling the load of residential units such that the aggregate load meets a maximum power profile specified by the retailer. In \cite{Mohsenian-Rad2010}, a distributed energy management algorithm, based on a game-theoretic approach, was proposed to minimize the cost of the retailer as well as the peak-to-average ratio of the aggregate load. The work in \cite{Gatsis2011} takes user dissatisfaction into account and proposes a distributed energy management algorithm that minimizes the cost of the retailer together with a cost that reflects the degree of user dissatisfaction. Both the works in \cite{Mohsenian-Rad2010} and \cite{Gatsis2011} assumed that the operating times of appliances are known \emph{a priori}, and, moreover, they allowed the HEMS to optimize the load injection of appliances. However, in many cases, the appliances have fixed load profiles that cannot be altered. The operating times of appliances also depend upon the householders' requests, which are usually random. Another issue that is not clearly addressed in the existing works is the incentive for the customers to participate in the energy management coordination. Our intention in this paper is to propose a coordinated HEMS architecture where the HEMS units in the residences collaborate to minimize the cost of the aggregator/retailer in the real-time market. Specifically, in addition to the cost for the day-ahead market, real-time power imbalances will further cost the retailer in the real-time market. Therefore, minimizing the real-time market cost directly achieves the goal of real-time power balancing in the grid. The scenario under consideration is that the retailer informs the customers of the dynamic price, and the HEMS in each individual residence optimizes the electricity cost by scheduling its appliance activities according to that price. To encourage the customers to join the proposed coordinated HEMS program, we assume that the customers will not pay more than the cost they would have optimized using their individual HEMS. Moreover, the degree of comfort of customers (e.g., appliance scheduling deadlines) will be taken into account in the coordinated HEMS architecture. Under such conditions, the retailer will directly benefit from the coordinated HEMS architecture, while the customers will sustain no loss, either financially or in their degree of comfort. Different from \cite{Mohsenian-Rad2010} and \cite{Gatsis2011}, the current work assumes that the times at which the customer may submit a request for an appliance are random. Moreover, given the load profiles of appliances, we optimally defer their operating times so as to minimize the real-time market cost of the retailer. Such deferrable appliances include Plug-in (Hybrid) Electric Vehicles (PHEVs), dishwashers, etc., which usually have a higher impact on grid power balancing.
We show that the HEMS and the proposed coordinated HEMS design problems can be formulated as dynamic programming (DP) problems. The approximate DP approach known as certainty equivalent control (CEC) \cite{BK:Bersekas07} is used to efficiently handle the considered design problems. Furthermore, the convex-optimization-based dual decomposition technique \cite{Boyddecomposition} is applied to develop a distributed implementation algorithm for the proposed coordinated HEMS. Simulation results are presented to demonstrate the effectiveness of the coordinated HEMS architecture. \section{System Model and HEMS} We consider a general wholesale market scenario where the retailer bids to purchase electricity from the market and serves a number of residential units. Each residence runs an energy management program for minimizing its electricity cost. This section presents the residential appliance load model and the mathematical formulation of the HEMS. \subsection{Appliance Load Model} Consider $N$ appliances in each residence that are controllable by the HEMS; for example, the PHEV, dishwasher, washing machine, and clothes dryer, which are flexible in their operating times and allow the HEMS to defer their schedules within the deadlines specified by the customers. The load profiles of the controllable appliances are known and, once the appliances are ON, their operation cannot be interrupted. Our interest in deferrable appliances stems mainly from the fact that their power consumption, especially that of the PHEV, has a higher impact on grid stability. To model the deferrable load, we adopt the signal model presented in \cite{ddls2}, which, as will be seen later, can greatly simplify the appliance scheduling optimization problems encountered in the HEMS and the proposed coordinated HEMS. Let $g_i(\ell)$, $\ell=1,\ldots,G_i$, denote the discrete-time load profile\footnote{In this paper, for simplicity, we will assume only active power and ignore the reactive power of each appliance. If necessary, the reactive power can be easily incorporated into the developed algorithms in the subsequent sections.}\footnote{$g_i(\ell)=0$ for $\ell<1$ and $\ell>G_i$.} of appliance $i$, where $G_i>0$ is the maximum duration of $g_i(\ell)$, for $i=1,\ldots,N$. Assume that the customer sends requests for appliance $i$ at times $t_{i,1}, t_{i,2}, \ldots$ $\in \{1,\ldots,L\}$, where $L>0$ denotes the maximum time horizon. Then, without scheduling, the load injection due to appliance $i$ is given by \begin{align} {D}_i(\ell)&= \sum_{k=1}^{\infty} g_i(\ell- t_{i,k}),~\ell=1,\ldots,L. \end{align} One can describe the requests for appliance $i$ as a request arrival process: \begin{align} a_i(\ell)&=\sum_{k=1}^{\infty} u(\ell- t_{i,k}),~ \ell=1,\ldots,L, \end{align} where $u(t)$ is the unit step function\footnote{$u(t)$ is equal to one for $t\geq 0$ and zero otherwise.}. To model the customer's behavior in using appliance $i$, we assume that the arrival process $a_i(\ell)$ is a non-stationary random process with the average number of new arrivals at time $\ell$ being $\alpha_\ell\in [0,1]$, i.e., $\E\{a_i(\ell)-a_i(\ell-1)\}=\alpha_\ell$. For example, one may model $a_i(\ell)-a_i(\ell-1)$ as a binary random variable with $\alpha_\ell$ being the probability that appliance $i$ will be requested at time $\ell$. The requested tasks of controllable appliances may be queued and scheduled to be ON later, depending on the control of the HEMS.
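As an illustration of this signal model, the following sketch simulates Bernoulli request arrivals and the resulting unscheduled load injection for one appliance; the horizon, the arrival probabilities, and the rectangular load profile are assumptions made purely for illustration:
\begin{verbatim}
# Sketch: request arrival process a_i(l) and unscheduled load D_i(l).
import numpy as np

rng = np.random.default_rng(0)
L = 96                               # quarter-hour slots over one day
alpha = np.full(L, 0.02)             # P{appliance requested at slot l}
g = np.array([1.0, 1.0, 1.0, 1.0])   # rectangular profile, G_i = 4

new_requests = rng.random(L) < alpha # a_i(l) - a_i(l-1) as Bernoulli
a = np.cumsum(new_requests)          # arrival (counting) process a_i(l)

# Without scheduling, every request starts its load profile at once.
D = np.zeros(L)
for t in np.flatnonzero(new_requests):
    seg = g[:L - t]                  # truncate at the horizon
    D[t:t + len(seg)] += seg
\end{verbatim}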
Suppose that $s_{i,1}$, $ s_{i,2}$, $\ldots$ $\in \{1,\ldots,L\}$, are the scheduled operating times of appliance $i$, where $s_{i,k} \geq t_{i,k}$ for all $k$. Then the scheduled load injection of appliance $i$ is given by \begin{align}\label{eq:load injection of appliance i} {S}_i(\ell)&=\sum_{k=1}^{\infty} g_i(\ell- s_{i,k}),~\ell=1,\ldots,L. \end{align} Similarly, the operating times of appliance $i$ can also be described by a task departure (launching) process as \begin{align}\label{eq:departure process} d_i(\ell)&=\sum_{k=1}^{\infty} u(\ell- s_{i,k}),~ \ell=1,\ldots,L. \end{align} The total load injection of a residence is the summation of the controllable load and the uncontrollable load (e.g., lights, stove, etc.), and can be expressed as \begin{align}\label{eq:total load} D_{{{\rm total}}}(\ell) = {U}(\ell) + \sum_{i=1}^NS_{i}(\ell), \end{align}where ${U}(\ell)$ is the load of the uncontrollable appliances. \subsection{HEMS} Given the dynamic electricity prices from the retailer, denoted by $p(\ell)$, $\ell=1,\ldots,L$, the HEMS aims to schedule the controllable appliances such that the average total electricity cost of the residence, i.e., \begin{align}\label{eq:bill} \sum_{\ell=1}^L \E\{p(\ell) D_{{{\rm total}}}(\ell)\} \end{align} is minimized. The scheduling task is usually subject to a constraint that reflects the customer's degree of comfort. Here we assume that the customer preassigns a maximum tolerable delay for each appliance, and the HEMS has to turn on the appliance before the specified deadline. In particular, we denote $\zeta_i\geq 0$ as the maximum delay time of appliance $i$. Then the operating times of appliance $i$ have to satisfy \begin{align}\label{eq:deadline constraint0} t_{i,k} \leq s_{i,k} \leq t_{i,k}+\zeta_i, \end{align} for all $k$, in order to fulfill the degree of comfort of the customer. Mathematically, the HEMS design problem can be formulated as the following multi-stage stochastic optimization problem \begin{subequations}\label{eq:HEMS} \begin{align} \min_{s_{i,1},s_{i,2},\ldots}~&\sum_{\ell=1}^L \E\left\{p(\ell) \left(\sum_{i=1}^N S_{i}(\ell)\right)\right\} \\ \text{subject to (s.t.)}~& {S}_i(\ell)=\sum_{k=1}^{\infty} g_i(\ell- s_{i,k})~\forall~i,\ell, \\ & t_{i,k} \leq s_{i,k} \leq t_{i,k}+\zeta_i~\forall~i,k, \\ & s_{i,k} \leq L~ \forall~i,k, \label{eq:HEMS d}\\ & \sum_{i=1}^N S_{i}(\ell) \leq P_{\max}~\forall~ \ell. \label{eq:HEMS e} \end{align} \end{subequations} where \eqref{eq:HEMS d} implies that all the appliances have to be scheduled before the horizon $L$, and $P_{\max}$ in \eqref{eq:HEMS e} denotes the maximum power flow limit of the residence. As will be detailed later, problem \eqref{eq:HEMS} can be formulated as a dynamic programming (DP) problem and can be efficiently handled by approximate DP techniques \cite{BK:Bersekas07}. While the dynamic prices $\{p(\ell)\}_{\ell=1}^L$ are designed by the retailer such that the customers would move their load to the off-peak period of the power grid, as pointed out in \cite{Kishore2010}, the HEMS individually operated by each residence may create a new ``rebound'' peak in the low-price period that can be even more severe than that without HEMS. As a result, the aggregate load injection from multiple HEMS-based residential units will not necessarily follow the energy supply scheduled by the day-ahead market, and the resultant real-time power balancing would increase the cost of the retailer in the wholesale real-time bidding market.
In the next section, we propose to coordinate the energy management of multiple residences, aiming at minimizing the wholesale real-time market cost of the retailer. The benefits of such a coordinated energy management architecture, which we refer to as \emph{coordinated HEMS}, will be demonstrated via computer simulations. \section{Coordinated HEMS} We first analyze the costs of the retailer and the incentives that would make the customers willing to join the proposed coordinated HEMS program. The mathematical formulation of the proposed coordinated HEMS is presented in the second subsection. The third subsection shows how the coordinated HEMS design problem can be recast as a standard DP and handled by the approximate DP technique known as certainty equivalent control (CEC) \cite{BK:Bersekas07}. \subsection{Cost of Retailer and Incentives to Customers} The cost of the retailer mainly consists of two parts, namely, the wholesale day-ahead market bidding cost and the wholesale real-time market bidding cost. In the day-ahead market, the retailer bids to purchase energy from the generator through the ISO according to the predicted load requirement for the upcoming day. Due to prediction errors, the load actually consumed in real time may deviate from the scheduled energy supply. Under such circumstances, the retailer has to purchase an additional amount of energy in the real-time market or pay the grid for absorbing the extra energy that cannot be consumed, in order to maintain real-time power balance. Let $\pi_{\rm p}(\ell)$ be the price for buying energy from the real-time market and $\pi_{\rm s}(\ell)$ be the price for absorbing extra energy (a value $\pi_{\rm s}(\ell)\leq 0$ implies that the retailer may sell the extra energy back). Let $E(\ell)$ be the energy supply. Moreover, assume that there are $M$ residential units in total, each of which contributes a load injection $D_{\rm total}^{(m)}$ (see \eqref{eq:total load}) to the system. The total real-time market cost of the retailer is given by \begin{align}\label{eq:deviation cost} \text{Cost}_{RT}=&\sum_{\ell=1}^L \left[ \pi_{\rm s}(\ell)\left(E(\ell)-\sum_{m=1}^M D_{\rm total}^{(m)}(\ell)\right)^+ \notag \right.\\ &\left.~~~~~~~~~~~+\pi_{\rm p}(\ell)\left(\sum_{m=1}^M D_{\rm total}^{(m)}(\ell)-E(\ell)\right)^+ \right], \end{align} where $(x)^+=\max\{x,0\}$. The profit of the retailer can be roughly calculated as \begin{align}\label{eq: profit} \text{Profit}= B_{\rm c} - \text{Cost}_{RT} -\text{Cost}_{DA} \end{align} where $B_{\rm c}$ represents the total money paid by the customers for their electricity usage (by \eqref{eq:bill}, each customer will pay $\sum_{\ell=1}^Lp(\ell) D_{{{\rm total}}}(\ell)$), and $\text{Cost}_{DA}$ denotes the cost for the day-ahead market. As discussed in the previous section, the HEMS run in each residential unit will not only reduce the bill $B_{\rm c}$ but also potentially increase the real-time market cost $\text{Cost}_{RT}$ of the retailer. Hence it is desirable for the retailer to coordinate the HEMS of the residential units to reduce $\text{Cost}_{RT}$. As incentives for the customers to participate in the coordinated HEMS program, we propose that 1) the retailer will charge the customers the same amount of money as that optimized by their individual HEMS (i.e., \eqref{eq:HEMS}); 2) the coordinated HEMS will maintain the same scheduling deadline constraints specified by the customers.
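As a concrete illustration of the real-time market cost \eqref{eq:deviation cost} that the coordination aims to reduce, the sketch below evaluates it for given supply and load trajectories; it is a direct transcription of the formula, with $(x)^+=\max\{x,0\}$:
\begin{verbatim}
# Sketch: the retailer's real-time market cost Cost_RT.
import numpy as np

def realtime_cost(E, D_total, pi_s, pi_p):
    """E: supply E(l); D_total: aggregate load sum_m D_total^(m)(l);
    pi_s/pi_p: prices for absorbing surplus / buying shortfall."""
    surplus   = np.maximum(E - D_total, 0.0)   # (E - D)^+
    shortfall = np.maximum(D_total - E, 0.0)   # (D - E)^+
    return float(np.sum(pi_s*surplus + pi_p*shortfall))
\end{verbatim}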
In summary, the customers neither incur any financial loss nor lose any degree of comfort if they join the coordinated HEMS program. Meanwhile, the retailer will directly benefit from the reduction of $\text{Cost}_{RT}$ according to \eqref{eq: profit}. \subsection{Proposed Coordinated HEMS} Following the two conditions above, we propose to coordinate the scheduling tasks of the $M$ residential units, aiming to minimize the real-time market cost in \eqref{eq:deviation cost}. To extend the load models in Section II-A to the $M$ residential units, we use the superscript $(m)$ to denote the $m$th residential unit; for example, $t_{i,k}^{(m)}$ and $s_{i,k}^{(m)}$ represent the request arrival and task operating times of appliance $i$ in the $m$th residence, and $S^{(m)}_{i}(\ell)$ represents the controllable load injection of appliance $i$ in the $m$th residence. The proposed coordinated HEMS design is given by \begin{subequations}\label{eq:COHEMS} \begin{align} \min_{s_{i,1}^{(m)},s_{i,2}^{(m)},\ldots}~&\sum_{\ell=1}^L \E\left[ \pi_{\rm s}(\ell)\left(E(\ell)-\sum_{m=1}^M D_{\rm total}^{(m)}(\ell)\right)^+ \notag \right.\\ &\left.~~~~~~~+\pi_{\rm p}(\ell)\left(\sum_{m=1}^M D_{\rm total}^{(m)}(\ell)-E(\ell)\right)^+ \right] \\ \text{s.t.}~& D_{\rm total}^{(m)}(\ell)=U^{(m)}(\ell) + \sum_{i=1}^N{S}_i^{(m)}(\ell),\label{eq:COHEMS b}\\ & {S}_i^{(m)}(\ell)=\sum_{k=1}^{\infty} g_i^{(m)}(\ell- s_{i,k}^{(m)})~\forall~i,\ell,m, \label{eq:COHEMS c}\\ & t_{i,k}^{(m)} \leq s_{i,k}^{(m)} \leq t_{i,k}^{(m)}+\zeta_i^{(m)}~\forall~i,k,m, \label{eq:COHEMS d}\\ & s_{i,k}^{(m)} \leq L~ \forall~i,k,m, \label{eq:COHEMS e}\\ & \sum_{i=1}^N S_{i}^{(m)}(\ell) \leq P_{\max}~\forall~ \ell,m. \label{eq:COHEMS f} \end{align} \end{subequations} Problem \eqref{eq:COHEMS} minimizes the average real-time market cost of the retailer. Note that problem \eqref{eq:COHEMS} is subject to the same scheduling constraints as problem \eqref{eq:HEMS} for each residential unit, meaning that the degree of comfort of the customers is preserved in the proposed coordinated HEMS. We show here that the coordinated HEMS problem \eqref{eq:COHEMS} can be expressed as a DP. The key observation is that finding the optimal operating times $s_{i,1}^{(m)}$, $s_{i,2}^{(m)}$, $\ldots$ is equivalent to finding the optimal task departure (launching) process $d_i^{(m)}(\ell)$. Specifically, in accordance with \eqref{eq:departure process}, the optimal $s_{i,k}^{(m)}$ is given by $\ell^\star$ if $\ell^\star$ is the minimum number in the set $$ \mathfrak{L}_{i}^{(m)}(k) = \{ \ell \in \{1,\ldots,L\} ~|~ d_i^{(m)}(\ell)=k\}. $$ We should emphasize that this observation can significantly simplify the optimization of \eqref{eq:COHEMS}. By the fact that the load injection in \eqref{eq:load injection of appliance i} is the convolution of the departure process difference $d_i^{(m)}(\ell)-d_i^{(m)}(\ell-1)$ and the load profile $g_i^{(m)}(\ell)$, one can express $S_i^{(m)}(\ell)$ in \eqref{eq:COHEMS c} as \begin{align}\label{eq:departure process1} S_i^{(m)}(\ell) &=\sum_{k=1}^{\infty} [d_i^{(m)}(\ell-k+1)-d_i^{(m)}(\ell-k)]g_i^{(m)}(k) \notag \\ & =\!\!\!\!\!\!\sum_{k=1}^{\min\{\ell,G_i^{(m)}\}} [d_i^{(m)}(\ell-k+1)-d_i^{(m)}(\ell-k)]g_i^{(m)}(k), \end{align}where $d_i^{(m)}(0)=0$, and the second equality is owing to the fact that $g_i^{(m)}(\ell)$ has a maximum duration $G_i^{(m)}$.
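The convolution expression \eqref{eq:departure process1} is easy to verify numerically; the following sketch (with an arbitrary load profile and arbitrary operating times chosen for illustration) checks that the superposition of shifted profiles coincides with the convolution of the departure-process increments with $g_i^{(m)}$:
\begin{verbatim}
# Sketch: scheduled load as a convolution of departure increments.
import numpy as np

L = 24
g = np.array([2.0, 1.5, 0.5])        # load profile, G_i = 3
s_times = [3, 7, 15]                 # scheduled operating times (0-based)

# Direct superposition of shifted load profiles
S_direct = np.zeros(L)
for s in s_times:
    seg = g[:L - s]
    S_direct[s:s + len(seg)] += seg

# Departure process d(l): number of tasks launched up to slot l
d = np.zeros(L)
for s in s_times:
    d[s:] += 1
S_conv = np.convolve(np.diff(d, prepend=0.0), g)[:L]

assert np.allclose(S_direct, S_conv)
\end{verbatim}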
Moreover, since $d_i^{(m)}(\ell)$ is nondecreasing, and according to the scheduling constraints \eqref{eq:COHEMS d} and \eqref{eq:COHEMS e}, $d_i^{(m)}(\ell)$ should satisfy \begin{subequations}\label{eq:departure process constraints} \begin{align} d_i^{(m)}(\ell-1) \leq~ &d_i^{(m)}(\ell) \leq a_i^{(m)}(\ell), \\ a_i^{(m)}(\ell-\zeta_i^{(m)}) \leq~ & d_i^{(m)}(\ell), \label{eq:departure process constraints b}\\ &d_i^{(m)}(L)=a_i^{(m)}(L),\label{eq:departure process constraints c}\\ &d_i^{(m)}(\ell) \in \mathbb{Z}_+, \end{align} \end{subequations} for all $\ell$, $i$ and $m$, where $\mathbb{Z}_+$ denotes the set of nonnegative integers. Specifically, \eqref{eq:departure process constraints b} guarantees that appliance $i$ will be scheduled within the maximum delay $\zeta_i^{(m)}$. By \eqref{eq:departure process1} and \eqref{eq:departure process constraints}, we can reformulate problem \eqref{eq:COHEMS} as the following problem \begin{subequations}\label{eq:COHEMS departure} \begin{align} \!\! \min_{\substack{d_{i}^{(m)}(\ell)\\ \forall \ell,i,m}}&\sum_{\ell=1}^L \E\left[ \pi_{\rm s}(\ell)\left(\tilde{E}(\ell)-\sum_{m=1}^M \sum_{i=1}^N{S}_i^{(m)}(\ell)\right)^+ \notag \right.\\ &\left.~~~~~~~+\pi_{\rm p}(\ell)\left(\sum_{m=1}^M \sum_{i=1}^N{S}_i^{(m)}(\ell)-\tilde{E}(\ell)\right)^+ \right] \label{eq:COHEMS departure a}\\ \text{s.t.}~& \sum_{i=1}^N S_{i}^{(m)}(\ell) \leq P_{\max}~\forall~ \ell,m, \label{eq:COHEMS departure b}\\ & \text{constraints~in~} \eqref{eq:departure process1}-\eqref{eq:departure process constraints}, \end{align} \end{subequations} where $\tilde{E}(\ell)=E(\ell)-\sum_{m=1}^M U^{(m)}(\ell)$ (see \eqref{eq:total load}). Compared with \eqref{eq:COHEMS}, in \eqref{eq:COHEMS departure} the optimal departure processes $\{d_{i}^{(m)}(\ell)\}$ are to be determined instead. Problem \eqref{eq:COHEMS departure} can be solved by the standard DP approach, e.g., using the principle of optimality of DP \cite{BK:Bersekas07}, by which the optimal control policy for $\{d_{i}^{(m)}(\ell)\}$ can be obtained in a backward search manner. This method, however, is not computationally feasible because \eqref{eq:COHEMS departure} involves a state vector of very large dimension. In particular, the state vector corresponding to \eqref{eq:COHEMS departure} at stage $\ell$ is given by \begin{align} \xb_\ell&=[\xb_{1,1}^T(\ell),\ldots,\xb_{1,N}^T(\ell),\xb_{2,1}^T(\ell),\ldots,\xb_{M,N}^T(\ell)]^T, \end{align}where \begin{align} \xb_{m,i}(\ell)&=[d_i^{(m)}(\ell-1),\ldots,d_i^{(m)}(\ell-\min\{\ell,G_i^{(m)}\}),\notag\\ &~~~~~~~~~~~~~~~~~a_i^{(m)}(\ell),a_i^{(m)}(\ell-\zeta_i^{(m)})]^T. \end{align} As seen, the number of possibilities for $\xb_\ell$ increases exponentially with $M$ and $N$. \subsection{Certainty Equivalent Control (CEC)} Certainty Equivalent Control (CEC) is a simple approach to obtaining an approximate solution of a complicated DP problem \cite{BK:Bersekas07}. In CEC, we search for the optimal control in a forward manner and apply, at each time, the control that would be optimal if the uncertain quantities were fixed at typical values, e.g., their mean values. Therefore, by CEC, we can obtain an approximate solution to \eqref{eq:COHEMS departure} in an \emph{on-line} fashion, sequentially from time $1$ to $L$, and each time we only need to deal with a deterministic optimization problem.
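In pseudocode form, the CEC principle can be sketched as follows; here \texttt{solve\_deterministic} and \texttt{advance} are placeholders for the per-stage deterministic problem formalized in Algorithm~1 below and for the system update, respectively, not functions of any library:
\begin{verbatim}
# Sketch of CEC: march forward in time, replace future random arrivals
# by their mean values, solve the deterministic remainder problem, and
# commit only the first decision before moving on.
def cec(L, state, mean_arrivals, solve_deterministic, advance):
    committed = []
    for t in range(1, L + 1):
        plan = solve_deterministic(t, state, mean_arrivals[t:])
        committed.append(plan[0])       # apply only the decision for time t
        state = advance(state, plan[0]) # then observe the true arrivals
    return committed
\end{verbatim}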
Applying CEC to problem \eqref{eq:COHEMS departure}, we obtain the following algorithm: \begin{algorithm}[h] \caption{CEC approach to problem \eqref{eq:COHEMS departure}} \begin{algorithmic}[1] \STATE \text{\bf for}~\text{time}~{$\bar \ell=1,\dots,L-1$}~\text{\bf do} \STATE Given $\xb_{\bar\ell}$, solve the following problem {\small \begin{align}\label{eq:COHEMS departure cec} \!\!\!\!\!\! \min_{\substack{d_{i}^{(m)}(\ell)~\forall i,m\\\ell=\bar\ell,\ldots,L }}&\sum_{\ell=\bar \ell}^L \left[ \pi_{\rm s}(\ell)\left(\tilde{E}(\ell)-\sum_{m=1}^M \sum_{i=1}^N{S}_i^{(m)}(\ell)\right)^+ \notag \right.\\ &\left.~~~~~+\pi_{\rm p}(\ell)\left(\sum_{m=1}^M \sum_{i=1}^N{S}_i^{(m)}(\ell)-\tilde{E}(\ell)\right)^+\!\! \right] \\ \text{s.t.}~& \sum_{i=1}^N S_{i}^{(m)}(\ell) \leq P_{\max}, \notag \\ & d_i^{(m)}(\ell-1) \leq d_i^{(m)}(\ell) \leq a_i^{(m)}(\bar \ell)+\sum_{k=\bar \ell+1}^\ell \alpha_{i}^{(m)}(k) \notag \\ & \Phi_i^{(m)}(\ell) \leq d_i^{(m)}(\ell), \notag\\ &d_i^{(m)}(L)=a_i^{(m)}(\bar \ell)+\sum_{k=\bar \ell+1}^L \alpha_{i}^{(m)}(k), \notag \\ &d_i^{(m)}(\ell) \in \mathbb{Z}_+, ~\forall~ m,i,\ell=\bar \ell,\ldots,L, \notag \end{align} } \!\!and denote $\{\bar{d}_{i}^{(m)}(\ell)\}_{i,m,\ell}$ as the associated optimal solution. \STATE \text{\bf Set} ${d}_{i}^{(m)}(\bar \ell)=\bar{d}_{i}^{(m)}(\bar \ell)$ for all $i,m$, as the approximate solution at time $\bar \ell$. \STATE \text{\bf end~for} \end{algorithmic} \end{algorithm} \noindent In \eqref{eq:COHEMS departure cec}, ${S}_i^{(m)}(\ell)$ is given by \eqref{eq:departure process1}, and $\Phi_i^{(m)}(\ell)$ is defined as \begin{align*} \Phi_i^{(m)}(\ell)=\!\!\left\{\!\!\!\begin{array}{ll} a_i^{(m)}(\ell-\zeta_i^{(m)}),&\ell=\bar \ell,\ldots,\bar\ell+\zeta_i^{{(m)}} \\ a_i^{(m)}(\bar\ell)+\!\!\sum_{k=\bar\ell+1}^{\ell-\zeta_i^{(m)}} \alpha_i^{(m)}(k),& \text{elsewhere.} \end{array}\right. \end{align*} Note that, in \eqref{eq:COHEMS departure cec}, the unknown arrivals $a_i^{(m)}(\ell),~\ell=\bar\ell+1,\ldots,L$, at time $\bar\ell$ are set to their mean values $a_i^{(m)}(\bar \ell)+\sum_{k=\bar \ell+1}^\ell \alpha_{i}^{(m)}(k),~\ell=\bar\ell+1,\ldots,L$, so problem \eqref{eq:COHEMS departure cec} is a deterministic optimization problem for all $\bar\ell=1,\ldots,L$. Problem \eqref{eq:COHEMS departure cec} has a convex objective function and convex constraints, except for the integer constraints $d_i^{(m)}(\ell) \in \mathbb{Z}_+$. Since the integer constraints lead to a discrete optimization problem that is difficult to handle in general, we simply relax the integer constraints to the nonnegative orthant $d_i^{(m)}(\ell)\geq 0$. An approximate solution to \eqref{eq:COHEMS departure cec} can then be obtained by rounding the solution of the relaxed problem to the nearest integers. Next we show that the relaxed counterpart of problem \eqref{eq:COHEMS departure cec} can be recast as a linear program (LP) and thus can be solved efficiently. To illustrate this, let us first express \eqref{eq:COHEMS departure cec} in a compact form.
Define \begin{align*} \!\!\!\!\pib_{\rm p}=&[\pi_{\rm p}(L),\ldots,\pi_{\rm p}(\bar\ell)]^T, \notag \\ \pib_{\rm s}=&[\pi_{\rm s}(L),\ldots,\pi_{\rm s}(\bar\ell)]^T, \\ \tilde{\Eb}=&[\tilde{E}(L),\ldots,\tilde{E}(\bar\ell)]^T, \\ \tilde\db^{(m)}=&[ d_1^{(m)}(L),\ldots,d_1^{(m)}(\bar\ell-\min\{\bar\ell,G_1^{(m)}\}), \notag \\ &~~~~~~~~~~d_2^{(m)}(L),\ldots,d_N^{(m)}(\bar\ell-\min\{\bar\ell,G_N^{(m)}\})]^T, \\ \db^{(m)}=&[ d_1^{(m)}(L),\ldots,d_1^{(m)}(\bar\ell), d_2^{(m)}(L),\ldots,d_N^{(m)}(\bar\ell)]^T, \\ \Psib^{(m)} =&[\Omegab_1^{(m)},\ldots,\Omegab_N^{(m)}]\text{blkdiag}\{\Upsilonb_1^{(m)},\ldots,\Upsilonb_N^{(m)}\}, \end{align*}where $\Omegab_i^{(m)}\in \mathbb{R}^{(L-\bar\ell+1)\times(L-\bar\ell+\min\{\bar\ell,G_i^{(m)}\})}$ is a Toeplitz matrix with the first row given by $[g_i^{(m)}(1),\ldots,g_i^{(m)}(G_i^{(m)})]$ and the first column given by $[g_i^{(m)}(1),0,\ldots,0]^T$, and $\text{blkdiag}\{\Upsilonb_1^{(m)},\ldots,\Upsilonb_N^{(m)}\}$ is a block diagonal matrix in which $\Upsilonb_i^{(m)} \in \mathbb{R}^{(L-\bar\ell+\min\{\bar\ell,G_i^{(m)}\})\times(L-\bar\ell+\min\{\bar\ell,G_i^{(m)}\}+1)}$ is a Toeplitz matrix with the first row being $[1, -1, 0, \ldots, 0]$ and the first column being $[1, 0, \ldots, 0]^T$. Moreover, define \begin{align} &\mathcal{U}^{(m)}=\bigg\{ \db^{(m)}\succeq \zerob |~ \Psib^{(m)} \tilde\db^{(m)} \preceq P_{\max} {\bf 1}, \bigg.\notag\\ &\!\!\!\!\left.\begin{array}{ll} &d_i^{(m)}(\ell-1) \leq d_i^{(m)}(\ell) \leq a_i^{(m)}(\bar \ell)+\sum_{k=\bar \ell+1}^\ell \alpha_{i}^{(m)}(k), \\ &\Phi_i^{(m)}(\ell) \leq d_i^{(m)}(\ell),\\ &d_i^{(m)}(L)=a_i^{(m)}(\bar \ell)+\sum_{k=\bar \ell+1}^L \alpha_{i}^{(m)}(k)~ \forall~ i,\ell=\bar \ell,\ldots,L, \\ \end{array}\!\!\!\!\! \notag \right\}, \end{align} where $\preceq$ and $\succeq$ denote the element-wise inequalities, and ${\bf 1}$ ($\zerob$) is the all-one (all-zero) vector. Then problem \eqref{eq:COHEMS departure cec} can be expressed as \begin{align}\label{eq:COHEMS departure cec2} \!\!\!\!\!\! \min_{\substack{\db^{(m)}\\m=1,\ldots,M }}& \pib_{\rm s}^T\left(\tilde{\Eb}-\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}\right)^+ \notag \\ &~~~~~~~~~~~~+\pib_{\rm p}^T\left(\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}\right)^+\!\!\\ \text{s.t.}~& \db^{(m)} \in \mathcal{U}^{(m)},~m=1,\ldots,M. \end{align} By introducing a slack variable \begin{align}\label{eq:z}\zb=\left(\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}\right)^+,\end{align} one can write \begin{align*} \left(\tilde{\Eb}-\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}\right)^+=\zb- \left(\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}\right). \end{align*} Substituting the above equations into \eqref{eq:COHEMS departure cec2} gives rise to \begin{subequations}\label{eq:COHEMS departure cec LP0} \begin{align} \!\!\!\!\!\! \min_{\substack{d_{i}^{(m)}(\ell)~\forall i,m\\\ell=\bar\ell,\ldots,L \\ \zb \in \mathbb{R}^{L-\bar\ell+1} }}& (\pib_{\rm s}+\pib_{\rm p})^T\zb-\pib_{\rm s}^T\left(\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}\right)\notag\\ \text{s.t.}~& \zb=\left(\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}\right)^+, \label{eq:COHEMS departure cec LP0 a} \\ &\db^{(m)} \in \mathcal{U}^{(m)},~m=1,\ldots,M. \notag \end{align} \end{subequations} Assume the usual case of $\pib_{\rm s}+\pib_{\rm p}\succeq \zerob$. 
Then the constraint \eqref{eq:COHEMS departure cec LP0 a} can be shown to be equivalent to the following two linear constraints: \begin{align}\label{eq:z2} \zb \succeq \zerob,~\zb \succeq \sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}. \end{align} By replacing \eqref{eq:COHEMS departure cec LP0 a} with \eqref{eq:z2}, we end up with the following LP representation of \eqref{eq:COHEMS departure cec}: \begin{subequations}\label{eq:COHEMS departure cec LP} \begin{align} \!\!\!\!\!\! \min_{\substack{d_{i}^{(m)}(\ell)~\forall i,m\\\ell=\bar\ell,\ldots,L \\ \zb \in \mathbb{R}^{L-\bar\ell+1} }}& (\pib_{\rm s}+\pib_{\rm p})^T\zb-\pib_{\rm s}^T\left(\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}\right)\notag\\ \text{s.t.}~& \zb \succeq \zerob, \label{eq:COHEMS departure cec LP a} \\ &\zb \succeq \sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}, \label{eq:COHEMS departure cec LP b} \\ &\db^{(m)} \in \mathcal{U}^{(m)},~m=1,\ldots,M. \notag \end{align} \end{subequations} As a remark, we should mention that the reformulation idea in Section III-B and the CEC method in Section III-C can also be applied to handle the individual HEMS problem in \eqref{eq:HEMS}. \section{Distributed Implementation} In the previous section, we have shown how the proposed coordinated HEMS problem \eqref{eq:COHEMS} can be approximated by CEC, which involves only solving the integer-constraint-relaxed LP problem \eqref{eq:COHEMS departure cec LP}. To solve problem \eqref{eq:COHEMS departure cec LP}, a centralized controller is needed in general. The control center must know not only the appliance profiles in each residential unit, but also the statistical and real-time information of the request arrival processes $\{a_i^{(m)}(\ell)\}$. In view of the fact that the computational complexity of solving \eqref{eq:COHEMS departure cec LP} increases with the number of residences and the number of controllable appliances, a decentralized implementation algorithm that can decompose the original problem into parallel subproblems of smaller size is of great interest. In particular, we are interested in decentralized algorithms that allow each of the residential units to compute its scheduling solution locally using only domestic information, so that the customers' privacy regarding electricity usage can also be preserved. In this section, we present a decentralized implementation method for problem \eqref{eq:COHEMS departure cec LP}, using the convex-optimization-based dual decomposition method \cite{Boyddecomposition}. Combining such a decentralized method with Algorithm 1, a simple distributed coordinated HEMS algorithm is obtained. As its name suggests, dual decomposition solves the problem in the Lagrangian dual domain. Let $\mub \succeq \zerob$ and $\lambdab \succeq \zerob$ be the dual variables associated with the inequality constraints in \eqref{eq:COHEMS departure cec LP a} and \eqref{eq:COHEMS departure cec LP b}, respectively. By definition \cite{BK:Boyd04}, the Lagrangian dual problem of \eqref{eq:COHEMS departure cec LP} can be shown to be \begin{align}\label{eq: dual problem} \max_{\mub \succeq \zerob,\lambdab \succeq \zerob}~\phi(\mub,\lambdab) \end{align} where $\phi(\mub,\lambdab)$ is the dual function given by \begin{align*} \!\!\!\!\!\!
&\min_{\substack{\db^{(m)} \in \mathcal{U}^{(m)}~\forall m,\\ \zb \in \mathbb{R}^{L-\bar\ell+1} }} (\pib_{\rm s}+\pib_{\rm p})^T\zb-\pib_{\rm s}^T\left(\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}\right)\notag\\ &~~~~~~~~~~~~~~~~~~-\mub^T\zb+\lambdab^T\left(\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}-\zb\right) \notag \\ &=\left\{\!\!\!\!\! \begin{array}{ll} &{\displaystyle \min_{\substack{\db^{(m)} \in \mathcal{U}^{(m)}~\forall m }} (\lambdab-\pib_{\rm s})^T\left(\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}\right) }\\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\text{if}~\pib_{\rm p}+\pib_{\rm s}-\lambdab=\mub, \\ &-\infty,~\text{elsewhere}. \end{array} \right. \end{align*} Substituting the above equation into \eqref{eq: dual problem} gives rise to \begin{align}\label{eq: dual problem2} \max_{ \zerob \preceq \lambdab \preceq \pib_{\rm p}+\pib_{\rm s}}~ \!\!\!\!\left\{ \!\!\!\!\!\!\!\!\begin{array}{ll} &{\displaystyle\min_{\substack{\db^{(m)}~\forall m }}~ (\lambdab-\pib_{\rm s})^T\left(\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}-\tilde{\Eb}\right)} \\ &~~~~~~~~~ {\displaystyle \text{s.t.}~\db^{(m)} \in \mathcal{U}^{(m)},~ m=1,\ldots,M.} \end{array}\!\!\!\!\! \right\} \end{align} Since problem \eqref{eq:COHEMS departure cec LP} is convex and satisfies Slater's condition \cite{BK:Boyd04}, the dual problem \eqref{eq: dual problem2} attains the same optimal objective value as \eqref{eq:COHEMS departure cec LP}. One can see from \eqref{eq: dual problem2} that the inner part is decomposable and can be solved in a parallel fashion given $\lambdab$. Therefore, a decentralized implementation can be obtained by solving the dual problem \eqref{eq: dual problem2}. Specifically, we can use the projected subgradient method \cite{Boydsubgradient} to deal with \eqref{eq: dual problem2} in an iterative manner. In iteration $n$, given $\lambdab(n)$, the corresponding inner part of \eqref{eq: dual problem2} can be handled by solving: \begin{align}\label{eq:inner minimization2} \db^{(m)}(n+1)=\arg~\min_{\substack{\db^{(m)}\in \mathcal{U}^{(m)}}}~ (\lambdab(n)-\pib_{\rm s})^T\Psib^{(m)} \tilde\db^{(m)}, \end{align} for $m=1,\ldots,M$. The dual variable $\lambdab$ can be updated using the standard subgradient step \cite{Boydsubgradient}, i.e., \begin{align}\label{eq:dual update} \lambdab(n+1)\!\!= \mathcal{P}\left(\lambdab(n) + c_n \left(\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}(n+1)-\tilde{\Eb}\right)\right), \end{align} where $c_n>0$ denotes the step size, and $\mathcal{P}(\cdot)$ denotes projection onto the set $[\zerob, \pib_{\rm p}+\pib_{\rm s}]$. Equations \eqref{eq:dual update} and \eqref{eq:inner minimization2} are iterated until convergence or until a preset stopping criterion is satisfied. The dual decomposition method for \eqref{eq:COHEMS departure cec LP} is summarized in Algorithm 2. Suppose that the algorithm stops at iteration $n^\star$. Instead of using $\{\db^{(m)}(n^\star)\}$ as the primal solution, we use the \emph{running-averaged} version: \begin{align}\label{eq:running average} &\hat\db^{(m)}=\frac{1}{n^\star+1}\sum_{q=1}^{n^\star+1}\db^{(m)}(q),~m=1,\ldots,M. \end{align} It is shown in \cite{Angelia2009} that this averaged version $\hat\db^{(m)}$ is more numerically stable than $\db^{(m)}(n^\star)$, especially for our problem \eqref{eq:COHEMS departure cec LP}, which is not strictly convex.
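A compact sketch of the resulting iteration is given below; \texttt{solve\_local[m]} stands for the per-residence subproblem \eqref{eq:inner minimization2} and is a placeholder, and the constant step size is an arbitrary illustrative choice (the step size $c_n$ above may vary with $n$):
\begin{verbatim}
# Sketch: dual decomposition with a projected subgradient update and
# running-averaged primal iterates, mirroring Algorithm 2.
import numpy as np

def dual_decomposition(solve_local, Psi, E_tilde, pi_s, pi_p,
                       n_iter=200, step=1e-3):
    M = len(solve_local)
    lmbd = np.zeros_like(E_tilde)
    d_sum = [None]*M                     # accumulators for the average
    for n in range(n_iter):
        d = [solve_local[m](lmbd) for m in range(M)]
        d_sum = [dm if s is None else s + dm for s, dm in zip(d_sum, d)]
        agg = sum(Psi[m] @ d[m] for m in range(M))
        # Projected subgradient step; projection onto [0, pi_p + pi_s]
        lmbd = np.clip(lmbd + step*(agg - E_tilde), 0.0, pi_p + pi_s)
    return [s/n_iter for s in d_sum]     # running-averaged primal point
\end{verbatim}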
In Algorithm 2, we assume that there is a control center which collects $\db^{(m)}(n+1)$ from the residences and uses the information for updating the dual variable $\lambdab$ (Step 7). If there is no control center present, the residences can still perform the subgradient update \eqref{eq:dual update} individually by obtaining the aggregate load profile $\sum_{m=1}^M \Psib^{(m)} \tilde\db^{(m)}(n+1)$ in a fully distributed fashion, e.g., using average consensus algorithms or gossip algorithms \cite{Xiao2003,Boyd2006}. \begin{algorithm}[t] \caption{{Dual decomposition for problem \eqref{eq:COHEMS departure cec LP}}} \begin{algorithmic}[1]\label{alg:decentralized_modified} \STATE {\bf Input} an initial value of $\lambdab(0)$. \STATE Set $n=0$. \REPEAT \FOR{$m=1,\dots, M$} \STATE Residence $m$ solves \eqref{eq:inner minimization2} to obtain the solution $\db^{(m)}(n+1)$, and sends it to the control center. \ENDFOR \STATE Given $\db^{(m)}(n+1)$, $m=1,\ldots,M$, the control center updates the dual variable $\lambdab$ by \eqref{eq:dual update}, and broadcasts it to the residences. \STATE $n=n+1$ \UNTIL the predefined stopping criterion is met. \end{algorithmic} \end{algorithm} \section{Simulation Results} In this section, some simulation results are presented to examine the effectiveness of the proposed coordinated HEMS. We consider a scenario with $60$ residential units ($M=60$) and 3 controllable appliances in each residence ($N=3$). The optimization horizon is set to $96$ ($L=96$), obtained by considering a whole day with 24 hours and 4 quarters per hour (starting from 8 pm of one day to 8 pm of the next). The three appliances are assumed to have rectangular power profiles, with instantaneous energy consumptions uniformly generated between $[0.8,1.9]$ (kWh) (e.g., PHEV), $[0.3,0.5]$ (e.g., dishwasher), and $[0.8,1.2]$ (e.g., clothes dryer), respectively (reference from {http://www.absak.com/library/power-consumption-table}). The simulation settings of $\{G_i^{(m)}\}$ and $\{\zeta_{i}^{(m)}\}$ are detailed in Table 1. We assume that each residence will send a request for Appliance 1 with probability 0.8 at a time uniformly distributed between 8 pm and midnight, and with probability 0.3 at a time uniformly distributed between 8 am and 12 pm. Appliance 2 is requested with probability $0.8$ at three times uniformly distributed between 6 am and 10 am, 12 pm and 2 pm, and 5 pm and 7 pm, respectively. Appliance 3 is requested with probabilities $0.8$ and $1$ at times uniformly distributed between 2 pm and 3 pm, and 8 pm and 10 pm, respectively. \begin{table}[t]\centering \vspace{-0.3cm} \caption{Simulation Setting of Appliances. The notation $U\sim[a,b]$ stands for a uniform distribution in the interval $[a,b]$.}\label{table:ex1margin}\vspace{-0.2cm} \begin{center} \begin{tabular}{>{\rm}c|>{\rm}c|>{\rm}c|>{\rm}c} & Appliance 1 & Appliance 2 & Appliance 3 \\ \hline $g_i^{(m)}(\ell)$ (kWh) & $U\sim[3.25,7.5]$ & $U\sim[1.2,1.5]$ & $U\sim[0.3,0.5]$\\ \hline $G_i^{(m)}$ (quarter) & $U\sim[16,32]$ & $U\sim[2,4]$ & $U\sim[4,12]$ \\ \hline $\zeta_{i}^{(m)}$ (quarter) & $U\sim[4,16]$ & $U\sim[4,12]$ & $U\sim[4,12]$ \\ \hline \end{tabular} \end{center} \vspace{-0.0cm} \end{table} \begin{figure}[!t] \centering \resizebox{0.470\textwidth}{!}{\includegraphics{price.eps}} \vspace{-0.0cm} \caption{Day-ahead price obtained from https://www2.ameren.com/RetailEnergy/realtimeprices.aspx, on Nov. 15, 2001.} \label{fig. price}\vspace{-0.3cm} \end{figure} For the HEMS in \eqref{eq:HEMS}, we use the day-ahead price shown in Figure \ref{fig. price}. For the proposed coordinated HEMS in \eqref{eq:COHEMS}, we consider two examples. In the first example, we set $\pib_{\rm p}=\pib_{\rm s}={\bf 1}$, by which the objective value of \eqref{eq:COHEMS} reduces to $\sum_{\ell=1}^L |E(\ell)-\sum_{m=1}^M D_{\rm total}^{(m)}(\ell)|$. We use this setting to examine the deviation between the scheduled load and the energy supply. In the second example, we set $\pib_{\rm p}={\bf 1}$ and $\pib_{\rm s}=-0.5{\bf 1}$, simulating the scenario in which the retailer is able to sell the extra electricity back to the grid. In the simulations, for simplicity, we assume that, in each residence, the uncontrollable appliances contribute a constant instantaneous energy consumption of 5 kWh, which is assumed to be known by the residence in advance via prediction. \begin{figure}[!t] \centering \resizebox{0.50\textwidth}{!}{\includegraphics{fig3.eps}} \vspace{-0.3cm} \caption{Simulation results for a randomly generated problem instance with $\pib_{\rm p}=\pib_{\rm s}={\bf 1}$.} \label{fig. case1}\vspace{-0.0cm} \end{figure} \begin{figure}[!t] \centering \resizebox{0.50\textwidth}{!}{\includegraphics{fig4.eps}} \vspace{-0.3cm} \caption{Simulation results for a randomly generated problem instance with $\pib_{\rm p}={\bf 1}$ and $\pib_{\rm s}=-0.5{\bf 1}$.} \label{fig. case2}\vspace{-0.0cm} \end{figure} {\bf Example 1:} Figure \ref{fig. case1} shows the simulation results for a randomly generated problem instance with $\pib_{\rm p}=\pib_{\rm s}={\bf 1}$. Firstly, one can see from this figure and Figure \ref{fig. price} that the HEMS (i.e., \eqref{eq:HEMS}) successfully moves the load to the lower-price region, but this causes significant power imbalance. Specifically, the deviation $\sum_{\ell=1}^L |E(\ell)-\sum_{m=1}^M D_{\rm total}^{(m)}(\ell)|$ corresponding to the unscheduled load is 1494, but the deviation corresponding to the individual HEMS increases to 2450. Secondly, we can see from Figure \ref{fig. case1} that the proposed coordinated HEMS can schedule the load such that the aggregate load follows the energy supply. The load deviation of the proposed coordinated HEMS dramatically decreases to 698. Thirdly, we can observe that the distributed coordinated HEMS yields almost the same performance as its centralized counterpart. {\bf Example 2:} Figure \ref{fig. case2} shows the simulation results for another randomly generated problem instance, with $\pib_{\rm p}={\bf 1}$ and $\pib_{\rm s}=-0.5{\bf 1}$. The simulation results are similar to those in Figure \ref{fig. case1}. In this case, the real-time cost in \eqref{eq:deviation cost} corresponding to the unscheduled load is 139 and that corresponding to the individual HEMS is 276. The proposed coordinated HEMS, however, reduces the cost to $-246$. This demonstrates well the efficacy of the proposed coordinated HEMS. \vspace{-0.0cm} \section{Conclusions} In this paper, we have presented a coordinated HEMS architecture that coordinates the home energy scheduling of multiple residential units in order to reduce the real-time market cost of the retailer. We have shown that the coordinated HEMS design problem can be reformulated as a DP that can be efficiently handled by CEC. Moreover, a distributed implementation method based on dual decomposition has also been proposed.
The presented simulation results have shown that the proposed coordinated HEMS can effectively achieve real-time power balancing, in contrast to the individual HEMS, which may cause a rebound peak load in the low-price region. \vspace{-0.0cm} \section{Acknowledgements} This work is supported in part by the US Department of Energy under the Trustworthy Cyber Infrastructure for the Power Grid (TCIPG) program. \vspace{-0.0cm} \vspace{-0.0cm} \footnotesize
\section{Introduction} \label{s_m1} The number of applications in physics, mechanics, and engineering where it is necessary to solve numerically ordinary differential equations (ODEs) with a given initial value is enormous. Since many ordinary differential equations cannot be solved analytically, numerical algorithms\footnote{There also exist symbolic techniques, but they are not considered in this paper, which is dedicated to numerical computations.} are used for finding approximate solutions (see \cite{Brugnano,Butcher,Henrici,Quarteroni}). In this paper, we want to approximate the solution $y(x),\,\, x \in [a,b],$ of the initial value problem (also called the Cauchy problem) for a differential equation \beq y'(x)=f(x,y), \hspace{1cm} y(x_0)=y_0, \hspace{1cm} x_0 = a, \label{ode1} \eeq where $a$ and $b$ are finite numbers and $y(x_0)=y_0$ is called the initial condition. We suppose that $f(x,y)$ is given by a computer procedure. Since in scientific and technical applications it can very often happen that the person who wants to solve (\ref{ode1}) is not the person who has written the code for $f(x,y)$, we suppose that the person solving (\ref{ode1}) does not know the structure of $f(x,y)$, i.e., it is a black box. In the literature, there exist numerous numerical algorithms constructing a sequence $y_1, y_2, y_3, \ldots$ approximating the exact values $y(x_1), y(x_2), y(x_3), \ldots$ that the solution $y(x)$ assumes at the points $x_1, x_2, x_3, \ldots$ (see \cite{Butcher,Hairer,Lambert}). The explicit Euler algorithm is the simplest among explicit methods for the numerical integration of ODEs. It uses the first two terms of the Taylor expansion of $y(x)$, thereby constructing the linear approximation around the point $(x_0,y(x_0))$. The $(n+1)$th step of the Euler algorithm describes how to move from the point $x_n$ to $x_{n+1} = x_{n} + h$, $n \ge 0$, and is executed as follows \beq y_{n+1} = y_n + h f(x_n,y_n). \label{ode2} \eeq Traditional computers work with finite values of $h$, thereby introducing errors at each step of the algorithm. In order to obtain more accurate approximations, it is necessary to decrease the step $h$, thereby increasing the number of steps of the method (the computations become more expensive). In any case, $h$ always remains finite, and its minimal acceptable value is determined by the technical characteristics of each concrete computer the method is implemented on. Obviously, the same effects hold for more sophisticated methods as well (see \cite{Butcher,Hairer,Henrici,Lambert}). Another approach to solving (\ref{ode1}) on a traditional computer is the use of automatic differentiation software that pre-processes (\ref{ode1}) (see \cite{Griewank_Corliss} and the references given therein). In this paper, we introduce a new numerical framework for solving ODEs related to the usage of a new kind of computer -- the Infinity Computer (see \cite{Sergeyev_patent,www,informatica}). It is able to work \textit{numerically} with finite, infinite, and infinitesimal quantities. The Infinity Computer is based on an applied point of view (see \cite{Sergeyev,informatica,Lagrange}) on infinite and infinitesimal numbers. In order to see the place of the new approach in the historical panorama of ideas dealing with the infinite and the infinitesimal, see \cite{Lolli,MM_bijection,Dif_Calculus,first,Sergeyev_Garro}.
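For concreteness, a minimal implementation of the scheme (\ref{ode2}) with a finite step $h$ on a traditional computer looks as follows; the test problem $y'=y$, $y(0)=1$, is our own illustrative choice:
\begin{verbatim}
# Sketch: explicit Euler method with a finite step h; f is a black box.
def euler(f, x0, y0, b, h):
    xs, ys = [x0], [y0]
    x, y = x0, y0
    while x < b - 1e-12:                # guard against round-off
        y = y + h*f(x, y)               # one Euler step, Eq. (2)
        x = x + h
        xs.append(x); ys.append(y)
    return xs, ys

xs, ys = euler(lambda x, y: y, 0.0, 1.0, 1.0, 0.01)
print(ys[-1])   # about 2.7048, versus exp(1) = 2.71828...
\end{verbatim}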
The new methodology has been successfully applied to studying numerical differentiation and optimization (see \cite{DeLeone,Korea,Num_dif,Zilinskas}), fractals (see \cite{chaos,Menger,Biology,DeBartolo}), percolation (see \cite{Iudin,DeBartolo}), Euclidean and hyperbolic geometry (see \cite{Margenstern,Rosinger2}), the first Hilbert problem and Turing machines (see \cite{first,Sergeyev_Garro,Sergeyev_Garro_2}), cellular automata (see \cite{DAlotto}), infinite series (see \cite{Dif_Calculus,Riemann,Zhigljavsky}), functions and their derivatives that can assume infinite and infinitesimal values (see \cite{Dif_Calculus}), etc. With respect to the initial value problem (\ref{ode1}), the possibility of working numerically with infinitesimals allows us to use \textit{numerical infinitesimal values of $h$}. It is proved that, under reasonable conditions, the Infinity Computer is able to calculate \textit{exact} values of the derivatives of $y(x)$ and to reconstruct its Taylor expansion with a desired accuracy by using infinitesimal values of $h$, without finding the respective derivatives analytically (or symbolically) by successive differentiation of (\ref{ode1}), as is usually done when the Taylor method is applied. The rest of the paper is organized as follows. Section~2 briefly presents the new computational methodology. Section~3 introduces the main theoretical results and describes how derivatives of $y(x)$ can be calculated numerically on the Infinity Computer. Section~4 introduces a variety of examples of the usage of infinitesimals for the numerical solution of ODEs. First, it presents two simple iterative methods. Then, it describes a technique that can be used to obtain approximations of the derivatives of the solution $y(x)$ at the point $x_{n+1}$ using infinitesimals and the information obtained at the point $x_{n}$. Finally, a technique for an automatic control of rounding errors that can occur during the evaluation of $f(x,y)$ is introduced. Throughout the paper, theoretical results are illustrated by numerical examples. \section{A fast tour of the new computational methodology} Numerous attempts have been made over the centuries to evolve existing numeral systems\footnote{We recall that a \textit{numeral} is a symbol or group of symbols that represents a \textit{number}. The difference between numerals and numbers is the same as the difference between words and the things they refer to. A \textit{number} is a concept that a \textit{numeral} expresses. The same number can be represented by different numerals. For example, the symbols `7', `seven', and `VII' are different numerals, but they all represent the same number.} in such a way that infinite and infinitesimal numbers could be included in them (see \cite{Benci,Cantor,Conway,Leibniz,Levi-Civita,Newton,Robinson,Wallis}). In particular, in the early history of the calculus, arguments involving infinitesimals played a pivotal role in the calculus developed by Leibniz and Newton (see \cite{Leibniz,Newton}). The notion of an infinitesimal, however, lacked a precise mathematical definition and, in order to provide a more rigorous foundation for the calculus, infinitesimals were gradually replaced by the d'Alembert-Cauchy concept of a limit. Since new numeral systems appear very rarely, in each concrete historical period their importance for Mathematics is very often underestimated (especially by pure mathematicians).
In order to illustrate their importance, let us recall the Roman numeral system, which does not allow one to express zero and negative numbers. In this system, the expression III-X is an indeterminate form. As a result, before the appearance of the positional numeral system and the invention of zero (incidentally, the second event occurred several hundred years after the first one), mathematicians were not able to create theorems involving zero and negative numbers or to execute computations with them. There exist numeral systems that are even weaker than the Roman one. They seriously limit their users in executing computations. Let us recall a study published recently in \textit{Science} (see \cite{Gordon}) that describes a primitive tribe -- Pirah\~{a} -- living in Amazonia. These people use a very simple numeral system for counting: one, two, many. For Pirah\~{a}, all quantities larger than two are just `many' and such operations as 2+2 and 2+1 give the same result, i.e., `many'. Using their weak numeral system, Pirah\~{a} are not able to see, for instance, the numbers 3, 4, 5, and 6, to execute arithmetical operations with them, and, in general, to say anything about these numbers, because in their language there are neither words nor concepts for them. In the context of the present paper, it is very important that the weakness of Pirah\~{a}'s numeral system leads them to such results as \beq \mbox{`many'}+ 1= \mbox{`many'}, \hspace{1cm} \mbox{`many'} + 2 = \mbox{`many'}, \label{piraha1} \eeq which are very familiar to us in the context of views on infinity used in the traditional calculus \beq \infty + 1= \infty, \hspace{1cm} \infty + 2 = \infty. \label{piraha2} \eeq The arithmetic of Pirah\~{a} involving the numeral `many' also has a clear similarity with the arithmetic proposed by Cantor for his Alephs\footnote{This similarity becomes even more pronounced if one considers another Amazonian tribe -- Munduruk\'u (see \cite{Pica}) -- who fail in exact arithmetic with numbers larger than 5 but are able to compare and add large approximate numbers that are far beyond their naming range. Particularly, they use the words `some, not many' and `many, really many' to distinguish two types of large numbers using rules that are very similar to those used by Cantor to operate with $\aleph_0$ and $\aleph_1$, respectively.}: \beq \aleph_0 + 1= \aleph_0, \hspace{1cm} \aleph_0 + 2= \aleph_0, \hspace{1cm}\aleph_1+ 1= \aleph_1, \hspace{1cm} \aleph_1 + 2 = \aleph_1. \label{piraha3} \eeq Thus, modern mathematical numeral systems allow us to distinguish a larger quantity of finite numbers than Pirah\~{a} can, but they give results that are similar to those of Pirah\~{a} when we speak about infinite numbers. This observation leads us to the following idea: \textit{Probably our difficulties in working with infinity are not connected to the nature of infinity itself but are a result of inadequate numeral systems that we use to work with infinity, more precisely, to express infinite numbers.} Let us compare the usage of numeral systems in Mathematics, emphasizing differences that hold when one works, on the one hand, with finite quantities and, on the other hand, with infinities and infinitesimals. In our everyday activities with finite numbers, the \emph{same} finite numerals are used for \emph{different} purposes (e.g., the same numeral 4 can be used to express the number of elements of a set and to indicate the position of an element in a finite sequence).
When we face the necessity to work with infinities or infinitesimals, the situation changes drastically. In fact, in this case \emph{different} symbols are used to work with infinities and infinitesimals in \emph{different} situations: \begin{itemize} \item $\infty$ in standard Analysis; \item $\omega$ for working with ordinals; \item $\aleph_0, \aleph_1, ...$ for dealing with cardinalities; \item non-standard numbers using a generic infinitesimal $h$ in non-standard Analysis, etc. \end{itemize} In particular, since the mainstream of traditional Mathematics very often does not pay attention to the distinction between numbers and numerals (on this occasion it is necessary to recall the constructivists, who studied this issue), many theories dealing with infinite and infinitesimal quantities have a symbolic (not numerical) character. For instance, many versions of non-standard Analysis are symbolic, since they have no numeral systems to express their numbers by a finite number of symbols (the finiteness of the number of symbols is necessary for organizing numerical computations). Namely, if we consider a finite $n$, then it can be taken as $n=5$, or $n=103$, or any other numeral used to express finite quantities and consisting of a finite number of symbols. In contrast, if we consider a non-standard infinite $m$, then it is not clear which numerals can be used to assign a concrete value to $m$. Analogously, in non-standard Analysis, if we consider an infinitesimal $h$, then it is not clear which numerals consisting of a finite number of symbols can be used to assign a value to $h$ and to write $h=...$ In fact, very often in non-standard Analysis texts, a \textit{generic} infinitesimal $h$ is used and is considered as a symbol, i.e., only symbolic computations can be done with it. Approaches of this kind leave unclear such issues as, e.g., whether the infinite $1/h$ is an integer or not, or whether $1/h$ is the number of elements of an infinite set. Another problem is related to the comparison of values. When we work with finite quantities, we can compare $x$ and $y$ if they assume numerical values; e.g., if $x=4$ and $y=6$ then, by using the rules of the numeral system the symbols 4 and 6 belong to, we can compute that $y>x$. If one wishes to consider two infinitesimals $h_1$ and $h_2$, then it is not clear how to compare them, because numeral systems that can express infinitesimals are not provided by non-standard Analysis techniques. The approach developed in \cite{Sergeyev,informatica,Lagrange} proposes a numeral system that uses \textit{the same numerals} for several different purposes for dealing with infinities and infinitesimals: in Analysis for working with functions that can assume different infinite, finite, and infinitesimal values (functions can also have derivatives assuming different infinite or infinitesimal values); for measuring infinite sets; for indicating positions of elements in ordered infinite sequences; in probability theory, etc. It is important to emphasize that the new numeral system avoids situations of the type (\ref{piraha1})--(\ref{piraha3}), providing results ensuring that if $a$ is a numeral written in this system, then for any $a$ (i.e., $a$ can be finite, infinite, or infinitesimal) it follows that $a+1>a$. The new numeral system works as follows. A new infinite unit of measure expressed by the numeral \G1 and called \textit{grossone} is introduced as the number of elements of the set, $\mathbb{N}$, of natural numbers.
Concurrently with the introduction of grossone into the mathematical language, all other symbols (like $\infty$, Cantor's $\omega$, $\aleph_0, \aleph_1, ...$, etc.) traditionally used to deal with infinities and infinitesimals are excluded from the language, because grossone and the other numbers constructed with its help not only can be used instead of all of them but can be used with a higher accuracy. Grossone is introduced by describing its properties postulated by the Infinite Unit Axiom (see \cite{informatica,Lagrange}) added to the axioms for real numbers (similarly, in order to pass from the set, $\mathbb{N}$, of natural numbers to the set, $\mathbb{Z}$, of integers, a new element -- zero, expressed by the numeral 0 -- is introduced by describing its properties). The new numeral \G1 allows us to construct different numerals expressing different infinite and infinitesimal numbers and to execute computations with them. As a result, in Analysis, instead of the usual symbol $\infty$ used in series and integration, different infinite and/or infinitesimal numerals can be used (see \cite{Dif_Calculus,Riemann,Zhigljavsky}). Indeterminate forms are not present and, for example, the following relations hold for $\mbox{\ding{172}}$ and $\mbox{\ding{172}}^{-1}$ (the latter being infinitesimal), as for any other (finite, infinite, or infinitesimal) number expressible in the new numeral system \beq 0 \cdot \mbox{\ding{172}} = \mbox{\ding{172}} \cdot 0 = 0, \hspace{3mm} \mbox{\ding{172}}-\mbox{\ding{172}}= 0,\hspace{3mm} \frac{\mbox{\ding{172}}}{\mbox{\ding{172}}}=1, \hspace{3mm} \mbox{\ding{172}}^0=1, \hspace{3mm} 1^{\mbox{\tiny{\ding{172}}}}=1, \hspace{3mm} 0^{\mbox{\tiny{\ding{172}}}}=0, \label{3.2.1} \eeq \[ 0 \cdot \mbox{\ding{172}}^{-1} = \mbox{\ding{172}}^{-1} \cdot 0 = 0, \hspace{5mm} \mbox{\ding{172}}^{-1} > 0, \hspace{5mm} \mbox{\ding{172}}^{-2} > 0, \hspace{5mm} \mbox{\ding{172}}^{-1}-\mbox{\ding{172}}^{-1}= 0, \] \[ \frac{\mbox{\ding{172}}^{-1}}{\mbox{\ding{172}}^{-1}}=1, \hspace{3mm} \frac{\mbox{\ding{172}}^{-2}}{\mbox{\ding{172}}^{-2}}=1, \hspace{3mm} (\mbox{\ding{172}}^{-1})^0=1, \hspace{5mm} \mbox{\ding{172}} \cdot \mbox{\ding{172}}^{-1} =1, \hspace{5mm} \mbox{\ding{172}} \cdot \mbox{\ding{172}}^{-2} =\mbox{\ding{172}}^{-1}. \] The new approach gives the possibility to develop a new Analysis (see \cite{Dif_Calculus}) where functions assuming not only finite values but also infinite and infinitesimal ones can be studied. For all of them it becomes possible to introduce a new notion of continuity that is closer to our modern physical knowledge. Functions assuming finite and infinite values can be differentiated and integrated. \textbf{Example 1.} \label{e_m1} The function $f(x)=x^2$ has the first derivative $f'(x)= 2x$ and both $f(x)$ and $f'(x)$ can be evaluated at infinite and infinitesimal $x$. Thus, for infinite $x=\mbox{\ding{172}}$ we obtain infinite values \[ f(\mbox{\ding{172}})= \mbox{\ding{172}}^{2}, \hspace{1cm} f'(\mbox{\ding{172}})= 2\mbox{\ding{172}} \] and for infinitesimal $x=\mbox{\ding{172}}^{-1}$ we have infinitesimal values \[ f(\mbox{\ding{172}}^{-1})= \mbox{\ding{172}}^{-2}, \hspace{1cm} f'(\mbox{\ding{172}}^{-1})= 2\mbox{\ding{172}}^{-1}. \] If $x=5\mbox{\ding{172}}-10\mbox{\ding{172}}^{-1}$ then we have \[ f(5\mbox{\ding{172}}-10\mbox{\ding{172}}^{-1})= (5\mbox{\ding{172}}-10\mbox{\ding{172}}^{-1})^2= 25\mbox{\ding{172}}^{2}-100 +100\mbox{\ding{172}}^{-2}, \] \[ f'(5\mbox{\ding{172}}-10\mbox{\ding{172}}^{-1})= 10\mbox{\ding{172}}-20\mbox{\ding{172}}^{-1}.
\] We can also work with functions defined by formulae including infinite and infinitesimal numbers. For example, the function $f(x)=\frac{1}{\mbox{\ding{172}}}x^2+\mbox{\ding{172}}x$ has an infinitesimal coefficient in its quadratic term and an infinite one in its linear term. It has the first derivative $f'(x)= \frac{2}{\mbox{\ding{172}}}x +\mbox{\ding{172}}$. For infinite $x=3\mbox{\ding{172}}$ we obtain infinite values \[ f(3\mbox{\ding{172}})= 3\mbox{\ding{172}}^{2} + 9\mbox{\ding{172}}, \hspace{1cm}f'(3\mbox{\ding{172}})= \mbox{\ding{172}}+6 \] and for infinitesimal $x=\mbox{\ding{172}}^{-1}$ we have \[ \hspace{20mm} f(\mbox{\ding{172}}^{-1})= 1+\mbox{\ding{172}}^{-3}, \hspace{1cm} f'(\mbox{\ding{172}}^{-1})= \mbox{\ding{172}}+ 2\mbox{\ding{172}}^{-2}. \hspace{20mm} \Box \] By using the new numeral system it becomes possible to measure certain infinite sets and to see, e.g., that the sets of even and odd numbers have $\G1/2$ elements each. The set, $\mathbb{Z}$, of integers has $2\G1+1$ elements (\G1 positive elements, \G1 negative elements, and zero). Within the countable sets and sets having cardinality of the continuum (see \cite{Lolli,first,Lagrange}) it becomes possible to distinguish infinite sets having different numbers of elements expressible in the numeral system using grossone and to see that, for instance, \[ \frac{\mbox{\ding{172}}}{2} < \mbox{\ding{172}}-1 < \mbox{\ding{172}} < \mbox{\ding{172}}+1 < 2\mbox{\ding{172}}+1 < 2\mbox{\ding{172}}^2-1 < 2\mbox{\ding{172}}^2 < 2\mbox{\ding{172}}^2+1 < \] \[ 2\mbox{\ding{172}}^2+2 < 2^{\mbox{\ding{172}}}-1 < 2^{\mbox{\ding{172}}} < 2^{\mbox{\ding{172}}}+1 < 10^{\mbox{\ding{172}}} < \mbox{\ding{172}}^{\mbox{\ding{172}}}-1 < \mbox{\ding{172}}^{\mbox{\ding{172}}} < \mbox{\ding{172}}^{\mbox{\ding{172}}}+1. \] The Infinity Computer used in this paper for solving the problem (\ref{ode1}) works with numbers having finite, infinite, and infinitesimal parts. To represent them in the computer memory, records similar to those of traditional positional numeral systems can be used (see \cite{Sergeyev_patent,informatica}). To construct a number $C$ in the new positional numeral system\footnote{ At first glance, the numerals (\ref{3.12}) can remind one of numbers from the Levi-Civita field (see \cite{Levi-Civita}), which is a very interesting and important precedent of algebraic manipulations with infinities and infinitesimals. However, the two mathematical objects have several crucial differences. They have been introduced for different purposes by using two mathematical languages having different accuracies and on the basis of different methodological foundations. In fact, Levi-Civita does not discuss the distinction between numbers and numerals. His numbers have neither cardinal nor ordinal properties; they are built using a generic infinitesimal and only its rational powers are allowed; he uses the symbol $\infty$ in his construction; there is no numeral system that would allow one to assign numerical values to these numbers; it is not explained how it would be possible to pass from a generic infinitesimal~$h$ to a concrete one (see also the discussion above on the distinction between numbers and numerals). In no way should the above be considered as a criticism of the results of Levi-Civita. The above discussion has been introduced in this text just to underline that these are two different mathematical tools that should be used in different mathematical contexts.
} with base \ding{172}, we subdivide $C$ into groups corresponding to powers of \ding{172}: \beq C = c_{p_{m}} \mbox{\ding{172}}^{p_{m}} + \ldots + c_{p_{1}} \mbox{\ding{172}}^{p_{1}} +c_{p_{0}} \mbox{\ding{172}}^{p_{0}} + c_{p_{-1}} \mbox{\ding{172}}^{p_{-1}} + \ldots + c_{p_{-k}} \mbox{\ding{172}}^{p_{-k}}. \label{3.12} \eeq Then, the record \beq C = c_{p_{m}} \mbox{\ding{172}}^{p_{m}} \ldots c_{p_{1}} \mbox{\ding{172}}^{p_{1}} c_{p_{0}} \mbox{\ding{172}}^{p_{0}} c_{p_{-1}} \mbox{\ding{172}}^{p_{-1}} \ldots c_{p_{-k}} \mbox{\ding{172}}^{p_{-k}} \label{3.13} \eeq represents the number $C$, where all numerals $c_i\neq0$ belong to a traditional numeral system and are called \textit{grossdigits}. They express finite positive or negative numbers and show how many corresponding units $\mbox{\ding{172}}^{p_{i}}$ should be added or subtracted in order to form the number $C$. Note that in order to have the possibility to store $C$ in the computer memory, the values $k$ and $m$ should be finite. The numbers $p_i$ in (\ref{3.13}) are sorted in decreasing order, with $ p_0=0$: \[ p_{m} > p_{m-1} > \ldots > p_{1} > p_0 > p_{-1} > \ldots > p_{-(k-1)} > p_{-k}. \] They are called \textit{grosspowers} and they themselves can be written in the form (\ref{3.13}). In the record (\ref{3.13}), we write $\mbox{\ding{172}}^{p_{i}}$ explicitly because in the new positional numeral system the number $i$ in general is not equal to the grosspower $p_{i}$. This gives the possibility to write down numerals without indicating grossdigits equal to zero. The term having $p_0=0$ represents the finite part of $C$ because, due to (\ref{3.2.1}), we have $c_0 \mbox{\ding{172}}^0=c_0$. The terms having finite positive gross\-powers represent the simplest infinite parts of $C$. Analogously, terms having negative finite grosspowers represent the simplest infinitesimal parts of $C$. For instance, the number $\mbox{\ding{172}}^{-1}=\frac{1}{\mbox{\ding{172}}}$ mentioned above is infinitesimal. Note that no infinitesimal is equal to zero. Particularly, $\frac{1}{\mbox{\ding{172}}}>0$ because it is a result of division of two positive numbers. A number represented by a numeral in the form (\ref{3.13}) is called \textit{purely finite} if it has neither infinite nor infinitesimal parts. For instance, 2 is purely finite and $2+3\G1^{-1}$ is not. All grossdigits $c_i$ are supposed to be purely finite. Purely finite numbers are used on traditional computers and for obvious reasons have a special importance for applications. All of the numbers introduced above can be grosspowers as well, thus giving the possibility to have various combinations of quantities and to construct terms having a more complex structure. However, in this paper we consider only purely finite grosspowers. Let us give an example of multiplication of two infinite numbers $A$ and $B$ of this kind (for a comprehensive description see \cite{Sergeyev_patent,informatica}). \textbf{Example 2.} \label{e_m2} Let us consider the numbers $A$ and $B$, where $$ A=14.3\mbox{\ding{172}}^{56.2} 5.4\mbox{\ding{172}}^{0}, \hspace{1cm} B=6.23\mbox{\ding{172}}^{3}1.5\mbox{\ding{172}}^{-4.1}. $$ The number $A$ has an infinite part and a finite one. The number $B$ has an infinite part and an infinitesimal one.
Their product $C$ is equal to \[ \begin{tabular}{cr}\hspace {15mm}$C = B \cdot A = 89.089\mbox{\ding{172}}^{59.2}21.45\mbox{\ding{172}}^{52.1} 33.642\mbox{\ding{172}}^{3}8.1\mbox{\ding{172}}^{-4.1}.$ & \end{tabular} \hspace{10mm} \Box \] We conclude this section by emphasizing that there exist different mathematical languages and numeral systems and, if they have different accuracies, it is not possible to use them together. For instance, the usage of $`many$' from the language of Pirah\~{a} in the record $4+ `many$' makes no sense, because for Pirah\~{a} it is not clear what 4 is, and for people knowing what 4 is the accuracy of the answer `many' is too low. Analogously, records of the type $\G1 + \omega$, $\G1-\aleph_0 $, $\G1/\infty$, etc. have no sense because they belong to languages developed for different purposes and having different accuracies. \section{Numerical reconstruction of the Taylor expansion of the solution on the Infinity Computer} Let us return to the problem (\ref{ode1}). We suppose that a set of elementary functions ($a^{x}, \sin(x), \cos(x), $ etc.) is represented on the Infinity Computer in one of the usual ways used in traditional computers (see, e.g., \cite{Muller}), involving the argument $x$, finite constants, and the four arithmetical operations. Then the following theorem holds (the word \textit{exact} in it means: with the accuracy of the computer programme implementing $f(x,y)$ from (\ref{ode1})). \begin{theorem} \label{t_m1} Let us suppose that for the solution $y(x),$ $ x \in [a,b],$ of (\ref{ode1}) there exists the Taylor expansion (unknown for us) and that at purely finite points $s \in [a,b]$ the function $y(s)$ and all its derivatives assume purely finite values or are equal to zero. Then the Infinity Computer allows one to reconstruct the Taylor expansion for $y(x)$ up to the $k$-th derivative with exact values of $y'(x), y''(x), y^{(3)}(x), \ldots, y^{(k)}(x)$ after $k$ steps of the Euler method with the step $h=\G1^{-1}$. \end{theorem} \textbf{Proof.} Let us start to execute on the Infinity Computer steps of the Euler method following the rule (\ref{ode2}) and using the infinitesimal step $h=\G1^{-1}$. Since the problem (\ref{ode1}) has been stated using the traditional finite mathematics, $x_0$ is purely finite. Without loss of generality let us consider the first $k=4$ steps of the Euler method (the value $k=4$ is sufficient to show the way of reasoning; we shall use the formulae involved in this case later in a numerical illustration). We obtain \beq y_{1} = y_0 + \G1^{-1} f(x_0,y_0), \hspace{10mm} y_{2} = y_1 + \G1^{-1} f(x_1,y_1), \label{ode9} \eeq \beq y_{3} = y_2 + \G1^{-1} f(x_2,y_2),\hspace{10mm} y_{4} = y_3 + \G1^{-1} f(x_3,y_3). \label{ode10} \eeq The derivatives of the solution $y(x)$ can be approximated in different ways and with different orders of accuracy. Let us consider approximations (see, e.g., \cite{Fornberg}) executed by forward differences $\triangle^j_{h}, 1 \le j \le k,$ with the first order of accuracy and take $h=\G1^{-1}$ as follows \beq \triangle^k_{\tiny{\G1^{-1}}}=\sum^{k}_{i=0} (-1)^{i} \left(\hspace{-1mm} \begin{array}{c} k \\ i \end{array}\hspace{-1mm} \right) y_{x_0+(k-i)\tiny{\G1^{-1}}}.
\label{forward} \eeq Then we have \beq y'(x_{0}) \approx \frac{ \triangle^1_{\tiny{\G1^{-1}}} }{\G1^{-1} } + O\left(\G1^{-1} \right) = \frac{y_{1} - y_{0} }{\G1^{-1} } + O\left(\G1^{-1} \right), \label{ode6.0} \eeq \beq y''(x_{0}) \approx \frac{ \triangle^2_{\tiny{\G1^{-1}}} }{\G1^{-2} } + O\left(\G1^{-1} \right) = \frac{y_{0} -2 y_{1} + y_{2} }{\G1^{-2} } + O\left(\G1^{-1} \right), \label{ode6} \eeq \beq y^{(3)}(x_{0}) \approx \frac{ \triangle^3_{\tiny{\G1^{-1}}} }{\G1^{-3} } + O\left(\G1^{-1} \right) = \frac{-y_{0} +3 y_{1} -3 y_{2} + y_{3}}{\G1^{-3} } + O\left(\G1^{-1} \right), \label{ode6.1} \eeq \beq y^{(4)}(x_{0}) \approx \frac{ \triangle^4_{\tiny{\G1^{-1}}} }{\G1^{-4} } + O\left(\G1^{-1} \right) = \frac{y_{0} -4 y_{1} + 6 y_{2} -4 y_{3} + y_{4}}{\G1^{-4} } + O\left(\G1^{-1} \right). \label{ode6.2} \eeq Since due to (\ref{ode1}) we can directly evaluate $y'(x_{0})=f(x_0,y_0)$, let us start by considering the formula (\ref{ode6}) (the cases with values $k > 2$ are studied by complete analogy). Since $x_{0}$ is purely finite, due to our assumptions $y''(x_{0})$ is also purely finite. This means that $y''(x_{0})$ does not contain infinitesimal parts. Formula (\ref{ode6}) states that the error we make when, instead of $y''(x_{0})$, we use its approximation \beq \widetilde{y''(x_{0})} = \frac{ \triangle^2_{\tiny{\G1^{-1}}} }{\G1^{-2} } \label{ode8} \eeq is of the order $\G1^{-1}$. The Infinity Computer works in such a way that it collects different orders of \G1 in separate groups. Thus, $\triangle^2_{\tiny{\G1^{-1}}}$ will be represented in the format (\ref{3.13}) as \beq \triangle^2_{\tiny{\G1^{-1}}} = c_{0} \mbox{\ding{172}}^{0} + c_{-1} \mbox{\ding{172}}^{-1} + c_{-2} \mbox{\ding{172}}^{-2} + \ldots + c_{-m_2} \mbox{\ding{172}}^{-m_2}, \label{ode4} \eeq where $m_2$ is a finite integer whose value depends on the concrete $f(x,y)$ from (\ref{ode1}). Note that (\ref{ode4}) cannot contain fractional grosspowers because the step $h=\G1^{-1}$, having the integer grosspower $-1$, has been chosen in (\ref{ode9}), (\ref{ode10}). It follows from (\ref{ode6}) and the fact that $y''(x_{0})$ is purely finite that $\widetilde{y''(x_{0})}$ contains a purely finite part and can contain infinitesimal parts of the order $\G1^{-1}$ or higher. This means that the grossdigits $c_{0} = c_{-1}= 0$, since otherwise after division by $\G1^{-2}$ the estimate $\widetilde{y''(x_{0})}$ would have infinite parts, which is impossible. Thus $\widetilde{y''(x_{0})}$ has the following structure \beq \widetilde{y''(x_{0})} = c_{-2} \mbox{\ding{172}}^{0} + c_{-3} \mbox{\ding{172}}^{-1} + c_{-4} \mbox{\ding{172}}^{-2} + \ldots + c_{-m_2} \mbox{\ding{172}}^{-m_2+2}. \label{ode7} \eeq It follows from (\ref{ode6}) that $\widetilde{y''(x_{0})}$ can contain an error of the order $\G1^{-1}$ or higher. Since all the derivatives of $y(x)$ are purely finite at $x_0$ and, in particular, $y''(x_{0})$ is purely finite, the fact that the finite part and the infinitesimal parts in (\ref{ode7}) are separated gives us that $c_{-2}=y''(x_{0})$. Thus, in order to have the exact value of $y''(x_{0})$ it is sufficient to calculate $\triangle^2_{\tiny{\G1^{-1}}}$ from (\ref{ode4}) and to take its grossdigit $c_{-2}$, which will be equal to $y''(x_{0})$. By complete analogy the exact values of higher derivatives can be obtained from (\ref{ode6.0}) -- (\ref{ode6.2}) and analogous formulae using forward differences (\ref{forward}) to approximate the $k$-th derivative $y^{(k)}(x_{0})$.
It suffices just to calculate $\triangle^k_{\tiny{\G1^{-1}}}$ and to take the grossdigit $c_{-k}$, which will be equal to the exact value of the derivative $y^{(k)}(x_{0}). \hfill \Box $ Let us consider an illustrative numerical example. We emphasize that the Infinity Computer solves it numerically, not symbolically, i.e., it is not necessary to translate the procedure implementing $f(x,y)$ into a symbolic form. \textbf{Example 3.} \label{e_m3} Let us consider the problem \beq y'(x)=x-y, \hspace{1cm} y(0)=1, \hspace{1cm} \label{ode13} \eeq taken from \cite{Adams}. Its exact solution is \beq y(x)=x-1+2e^{-x}. \label{ode11} \eeq We start by applying formulae (\ref{ode9}) to calculate $y_{1}$ and $y_{2}$: \[ y_{1} = 1 + \G1^{-1} \cdot (0-1) = 1 - \G1^{-1}, \] \[ y_{2} = 1 - \G1^{-1} + \G1^{-1} (\G1^{-1}-1+\G1^{-1})= 1 - 2\G1^{-1} + 2\G1^{-2}. \] We now have the values $y_{0},\,\, y_{1},$ and $y_{2}$. Thus, we can apply formula (\ref{ode6}) and calculate $\triangle^2_{\tiny{\G1^{-1}}}$ as follows \[ \triangle^2_{\tiny{\G1^{-1}}} = y_{0} -2 y_{1} + y_{2} = 1-2 + 2\G1^{-1} + 1 - 2\G1^{-1} + 2\G1^{-2}= 2\G1^{-2}. \] Thus, $c_{-2}=2$. Let us now verify the obtained result and calculate the exact derivative $y''(0)$ using (\ref{ode11}). We then have $y''(x)=2e^{-x}$ and $y''(0)=2$, i.e., $c_{-2}=y''(0)$. Note that in this simple illustrative example $c_{-m}=0, \,\, m>2,$ in (\ref{ode4}). In general, this is not the case and $c_{-m}\neq 0$ can occur. Let us proceed and calculate $y_{3}$ following (\ref{ode10}). We have \[ y_{3} = 1 - 2\G1^{-1} + 2\G1^{-2} + \G1^{-1} (2\G1^{-1}- 1 + 2\G1^{-1} - 2\G1^{-2} )= 1 - 3\G1^{-1} + 6\G1^{-2}-2\G1^{-3}. \] It then follows from (\ref{ode6.1}) that \[ \triangle^3_{\tiny{\G1^{-1}}} = -y_{0} +3 y_{1} -3 y_{2} + y_{3} = \] \[ - 1 + 3 (1 - \G1^{-1}) -3 (1 - 2\G1^{-1} + 2\G1^{-2}) + 1 - 3\G1^{-1} + 6\G1^{-2}-2\G1^{-3} = -2\G1^{-3}. \] We can see that $c_{-3}=-2$. The exact derivative obtained from (\ref{ode11}) is $y^{(3)}(x)=-2e^{-x}$. As a consequence, we have $y^{(3)}(0)=-2$, i.e., $c_{-3}=y^{(3)}(0)$. To calculate $y^{(4)}(0)$ we use (\ref{ode10}) and have \[ y_{4} = 1 - 3\G1^{-1} + 6\G1^{-2}-2\G1^{-3} + \G1^{-1} (3\G1^{-1} -1 + 3\G1^{-1} - 6\G1^{-2}+2\G1^{-3})= \] \[ 1 - 4\G1^{-1} + 12\G1^{-2}- 8\G1^{-3} + 2\G1^{-4}. \] From (\ref{ode6.2}) we obtain \[ \triangle^4_{\tiny{\G1^{-1}}} = y_{0} -4 y_{1} + 6 y_{2} -4 y_{3} + y_{4} = 1 -4(1 - \G1^{-1}) +6(1 - 2\G1^{-1} + 2\G1^{-2})- \] \[ -4(1 - 3\G1^{-1} + 6\G1^{-2}-2\G1^{-3}) +1 - 4\G1^{-1} + 12\G1^{-2}- 8\G1^{-3} + 2\G1^{-4}=2\G1^{-4}. \] Again we obtain that $c_{-4}=y^{(4)}(0)=2$. Thus, four steps of the explicit Euler method with the infinitesimal step $h=\G1^{-1}$ have been executed on the Infinity Computer. As a result, the first five exact terms of the Taylor expansion of $y(x)$ in the neighborhood of $x_0=0$ can be written: \beq y(x)=x-1+2e^{-x} \approx 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12}. \label{ode12} \eeq By complete analogy it is possible to obtain additional terms in the expansion that correspond to higher derivatives of $y(x)$. In Table~\ref{table1}, we present the results of experiments executed (see \cite{Adams}) with the methods of Heun ($2^{nd}$ order) and Runge--Kutta ($4^{th}$ order) solving the problem (\ref{ode13}). Both methods use $h=0.2$.
Then we present the results of the new methods that first execute $k$ infinitesimal steps, with $k$ going from 2 to 8, and then execute one finite step from the point $x_0=0$ to the point $x=1$. The value $n_f$ is the number of evaluations of $f(x,y)$ executed by each method. The column $y_n$ shows the obtained approximation at the point $x=1$ and the column $\varepsilon$ shows the respective error $\varepsilon=y(1)-y_n$, where $n=5$ for the methods of Heun and Runge--Kutta and $n=1$ for the new methods. \hfill $\Box$ \begin{table}[!t] \caption{Comparison of methods solving the problem (\ref{ode13}), where $n_f$ is the number of evaluations of $f(x,y)$ executed by a method to obtain an approximate solution $y_n$ at the point $x=1$} \begin{center}\scriptsize \begin{tabular}{@{\extracolsep{\fill}}|c|c|c|c|}\hline Method & $n_f$ & $y_n$ & $\varepsilon$ \\ \hline Heun, $h=0.2$ &$10$ &0.741480 & -0.005721 \\ Runge--Kutta ($4^{th}$ order), $h=0.2$ &$20$ &0.735770 & -0.0000116 \\ \hline $y(x,0)=1 - x + x^2 $ &$2$ & 1 & -0.264241118 \\ $y(x,0)=1 - x + x^2 - \frac{x^3}{3}$ &$3$ & 0.6666666667 &\, 0.069092216 \\ $y(x,0)= 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12}$ &$4$ &0.75 & -0.014241118 \\ $y(x,0)= 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12}-\frac{x^5}{60}$ &$5$ &0.7333333333 & \, 0.002425549 \\ $y(x,0)= 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12}-\frac{x^5}{60}+\frac{x^6}{360}$ &$6$ &0.7361111111 & -0.000352229 \\ $y(x,0)= 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12}-\frac{x^5}{60}+\frac{x^6}{360}-\frac{x^7}{2520}$ &$7$ & 0.7357142857 &\, 0.000044597 \\ $y(x,0)= 1 - x + x^2 - \frac{x^3}{3} + \frac{x^4}{12}-\frac{x^5}{60}+\frac{x^6}{360}-\frac{x^7}{2520}+\frac{x^8}{20160}$ &$8$ &0.7357638889 & -0.000005007 \\ \hline \end{tabular} \end{center} \label{table1} \end{table} In cases where it is not possible to evaluate $f(x,y)$ at the points $x_n+\G1^{-1}$, $x_n+2\G1^{-1}$,\, $x_n+3\G1^{-1}, \ldots$ (for instance, when we should solve the problem over an interval $[a,b]$ and $x_n=b$) the following corollary can be useful. \begin{corollary}\label{c_1} Under the conditions of Theorem~\ref{t_m1}, the backward differences calculated at the points $x_n-\G1^{-1}$, $x_n-2\G1^{-1}$,\, $x_n-3\G1^{-1}, \ldots , x_n-k\G1^{-1}$ can be used to calculate the derivatives of $y(x)$ at the point $x_n$. \end{corollary} \textbf{Proof.} The backward difference (see, e.g., \cite{Fornberg}) of the order $k$ with $h=\G1^{-1}$ is calculated as follows \[ \nabla^k_{\tiny{\G1^{-1}}}=\sum^{k}_{i=0} (-1)^{i} \left(\hspace{-1mm} \begin{array}{c} k \\ i \end{array}\hspace{-1mm} \right) y_{x_0-i\tiny{\G1^{-1}}}. \] The rest of the proof is completely analogous to the proof of the theorem and is therefore omitted. \hfill $\Box$ Thus, if the region of interest $[a,b]$ from (\ref{ode1}) belongs to the region of convergence of the Taylor expansion for the solution $y(x)$ around the point $x_0$, then it is not necessary to construct iterative procedures involving several steps with finite values of $h$, and it becomes possible to calculate approximations of the desired order by executing only one finite step. \section{Examples of the usage of infinitesimals in the new\\ computational framework} The approach introduced in the previous section gives the possibility to construct a variety of new numerical methods for the Infinity Computer by using both infinitesimal and finite values of $h$.
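Before passing to concrete methods, we note that the computations of Theorem~\ref{t_m1} and Example~3 involve only integer grosspowers $\le 0$, so they can be emulated on a traditional computer by storing a number of the form (\ref{3.13}) as a truncated list of its grossdigits. The following Python sketch (a simplified emulation for illustration only, not the Infinity Computer's actual arithmetic, and assuming the black-box right-hand side $f(x,y)=x-y$ of Example~3) reproduces the exact derivatives $y''(0)=2$, $y^{(3)}(0)=-2$, and $y^{(4)}(0)=2$:

\begin{verbatim}
# Emulation of the infinitesimal Euler steps of Example 3. A number
# with integer grosspowers 0, -1, ..., -K is stored as the list of its
# grossdigits [c_0, c_{-1}, ..., c_{-K}] (coefficients of G^0 ... G^-K,
# G denoting grossone).
from math import comb

K = 5
def add(a, b): return [p + q for p, q in zip(a, b)]
def sub(a, b): return [p - q for p, q in zip(a, b)]
def tiny(a):   return [0.0] + a[:-1]       # multiplication by h = G^(-1)

def f(x, y):   return sub(x, y)            # black-box r.h.s. of y' = x - y

x = [0.0] * (K + 1)                        # x_0 = 0
y = [1.0] + [0.0] * K                      # y_0 = 1
one = [1.0] + [0.0] * K
ys = [y]
for _ in range(4):                         # four steps with h = G^(-1)
    y = add(y, tiny(f(x, y)))              # y_{n+1} = y_n + G^(-1) f(x_n, y_n)
    x = add(x, tiny(one))                  # x_{n+1} = x_n + G^(-1)
    ys.append(y)

# The grossdigit of G^(-k) in the forward difference Delta^k equals
# the exact derivative y^(k)(0), as shown in the proof of Theorem 1.
for k in (2, 3, 4):
    delta = [0.0] * (K + 1)
    for i in range(k + 1):
        delta = add(delta, [(-1)**i * comb(k, i) * c for c in ys[k - i]])
    print(k, delta[k])                     # 2.0, -2.0, 2.0
\end{verbatim}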
The general step $n$ of a method of this kind for solving (\ref{ode1}) can be described as follows: \bd \item (i) take the point $(x_n,y_n)$, choose a value $k_n$, and execute $k_n$ steps of the Euler method starting from $x_n$ by using $h=\G1^{-1}$; \item (ii) calculate the exact values of $y'(x),$ $ y''(x),$ $y^{(3)}(x), \ldots ,$ $y^{(k_n)}(x)$ at the point $(x_n,y_n)$ following the rules described in Theorem~\ref{t_m1}; \item (iii) construct the truncated Taylor expansion of the order $k_n$; \item (iv) execute a single step from the point $x_n$ to $x_{n+1} = x_{n} + h_n$ using the constructed Taylor expansion and a finite value of~$h_n$ (steps of the kind $h_n-\G1^{-1}$ or $h_n+\G1^{-1}$ can also be used). \ed The general step described above allows one to construct numerical algorithms for solving (\ref{ode1}) by executing several iterations of this kind. Many numerical methods (see \cite{Butcher,Henrici,Quarteroni}) can be used as a basis for such developments. Since exact higher derivatives can easily be calculated at the points $(x_n,y_n)$, methods that use higher derivatives are of the main interest. The fact that, in order to increase the accuracy, it is necessary just to execute one additional infinitesimal step without performing additional finite steps (i.e., the whole work executed at a lower level of accuracy is used entirely at a higher level of accuracy) is an additional advantage and suggests constructing adaptive methods (for instance, if one wishes to change the finite step from $h_1$ to $h_2 > h_1$ or $h_3 < h_1$, then the same Taylor expansion can be used in all the cases). A study of such methods will be done in a separate paper. Hereinafter, since the usage of numerical infinitesimals is a new topic, we give a number of examples showing how the new computational framework can be used in the context of the numerical solution of ODEs. The rest of this section is organized as follows. In the first subsection, we present two simple iterative methods using low-order derivatives (a lower order of derivatives is used for expository reasons). In the second subsection, we present a technique that can be used to obtain additional information with respect to approximations of derivatives of the solution. In the last subsection, we discuss how an automatic control of rounding errors can be executed during the evaluation of $f(x,y)$ at the points $(x_n,y_n)$. \subsection{A simple method and possibilities of its improvements} We start by introducing \textit{Method 1}, which uses only the first and the second derivatives at each iteration to construct the Taylor expansion by applying formulae (\ref{ode9}), (\ref{ode6.0}), and (\ref{ode6}). Thus, at the current point $x_n$ this method executes the Euler step twice with $h=\G1^{-1}$ and then makes a step with a finite value of $h$ by using the obtained Taylor expansion. Therefore, during these three steps (two infinitesimal steps and one finite step) the function $f(x,y)$ is evaluated twice, and only during the infinitesimal steps. Let us use the step number $n$ to count the executed finite steps and denote by $y(x,z)$ the Taylor expansion of the solution $y(x)$ calculated by the method during the infinitesimal steps in the neighborhood of the point $z=x_{n-1}$. Then, we can calculate $y_n$ as $y_n = y(h,x_{n-1})$ with a finite value of $h$. \textbf{Example 4.} \label{e_m4} We test this method on the problem (\ref{ode13}) with the finite step $h=0.2$ (see Table~\ref{table2}).
By applying the procedure described above with six digits after the decimal point, we have \beq y(x,0) = 1 - x + x^2, \hspace{1cm} y_1 = y(0.2,0) = 0.84, \label{ode14} \eeq \[ y(x,0.2) = 0.84 - 0.64 x + 0.82 x^2, \hspace{1cm} y_2 = y(0.2,0.2) = 0.7448, \] \[ y(x,0.4) = 0.7448 - 0.3448 x + 0.6724 x^2, \hspace{1cm} y_3 = y(0.2,0.4) = 0.702736, \] \[ y(x,0.6) = 0.702736 - 0.102736 x + 0.551368 x^2, \hspace{3mm} y_4 = y(0.2,0.6) = 0.704244, \] \[ y(x,0.8) = 0.704244 + 0.095756 x + 0.452122 x^2, \hspace{3mm} y_5 = y(0.2,0.8) = 0.741480. \] \begin{table}[!t] \caption{\small{Two versions of Method 1 constructing the Taylor expansion using the values of the first and second derivatives calculated during two infinitesimal Euler steps at the points $(x_n,y_n)$}} \begin{center}\scriptsize \begin{tabular}{@{\extracolsep{\fill}}|c|c|c|c|c|c|c|}\hline & & \multicolumn{2}{c|}{Method $1.0$} & \multicolumn{3}{c|}{Method $1.1$} \\ \cline{3-7}$n$&$x_n$ &$y_n$ &$\varepsilon_n$ & $y^{c}_n$ & $c_n$ &$\varepsilon_n$ \\ \hline 0 &$0.0$ &1.000000 & \,\,0.000000 & 1.000000 & 0.000000 & \,0.000000 \\ 1 &$0.2$ &0.840000 & -0.002538 & 0.839200 & 0.000800 & -0.001738 \\ \hline 2 &$0.4$ &0.744800 &-0.004160 & 0.743344 & 0.001456 &-0.002704 \\ 3 &$0.6$ &0.702736 & -0.005113 & 0.700742 & 0.001994 &-0.003119 \\ \hline 4 &$0.8$ &0.704244 & -0.005586 & 0.701808 & 0.002436 & -0.003150 \\ 5 &$1.0$ &0.741480 & -0.005721 & 0.738682 & 0.002798 & -0.002923 \\ \hline \end{tabular} \end{center} \label{table2} \end{table} It can be seen from Table \ref{table2} that the results obtained by the new method (see the column Method~1.0) for the values $y_n$ coincide (see \cite{Adams}) with the results obtained by applying the modified Euler method (also called Heun's method), which evaluates $f(x,y)$ twice at each iteration, as the new method does. As can be seen from the formulae above, Method 1 at each point $x_n$ provides us not only with the value $y_n$ but also with the first and the second derivatives of $y(x)$ at the point $(x_n,y_n)$. We shall denote them by $y'_n(x_{n})$ and $y''_n(x_{n})$, where \[ y'_n(x_{n})= y'(0,x_n), \hspace{1cm} y''_n(x_{n})=y''(0,x_n). \] For instance, we have \[ y'(x,0.4) = - 0.3448 + 1.3448 x, \hspace{1cm} y'_2(0.4)=- 0.3448, \] \[ \hspace{25mm} y''(x,0.4) = 1.3448, \hspace{1cm} y''_2(0.4)=1.3448. \hspace{20mm} \Box \] Note that the values of the derivatives calculated at the points $(x_n,y_n)$ are exact in the sense that they provide the values of the derivatives for the solution with the initial condition $(x_n,y_n)$. They can be used to estimate the derivatives of $y(x)$ at the point $(x_n,y(x_n))$. \begin{figure}[t] \begin{center} \epsfig{ figure = figure_11.eps, width = 4.9in, height = 3in, silent = yes } \caption{The graph of $y(x,0)$ is represented by the top (black) solid line and the graph of $y(x)$ is represented by the lower (grey) solid line (both $y(x,0)$ and $y(x)$ start at the point $(0,1)$); the graph of the function $\bar{y}(x)$ is shown by the top dashed line; the graph of the function $r_1(x)$ is shown by the lower dashed line.} \label{figura_1} \end{center} \end{figure} The possibility to improve the accuracy of a solution by going forward and backward has been the main idea of many numerical methods for ODEs (see, e.g., \cite{Cash_Considine,Iavernaro_Mazzia}). The results for Method 1.0 presented in Table \ref{table2} can be improved using similar ideas in the new framework, by using the values of the derivatives that the method provides during its functioning.
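Before turning to this improvement, we remark that the iteration of Method~1.0 in Example~4 is easy to reproduce. The following Python sketch (again a plain emulation, assuming the black-box right-hand side $f(x,y)=x-y$ of (\ref{ode13}); the grossdigit lists are truncated at $\G1^{-2}$, which is all that Method~1.0 needs) regenerates the column $y_n$ of Method~1.0 in Table~\ref{table2}:

\begin{verbatim}
# Emulation of Method 1.0 for y' = x - y, y(0) = 1, finite step h = 0.2.
# Numbers are grossdigit lists [c_0, c_{-1}, c_{-2}] in powers of G^(-1).
def add(a, b): return [p + q for p, q in zip(a, b)]
def sub(a, b): return [p - q for p, q in zip(a, b)]
def tiny(a):   return [0.0, a[0], a[1]]    # multiplication by h = G^(-1)

f = sub                                    # black-box r.h.s. f(x, y) = x - y
h, x, y = 0.2, 0.0, 1.0
for n in range(5):
    X, Y = [x, 0.0, 0.0], [y, 0.0, 0.0]
    ys = [Y]
    for _ in range(2):                     # two infinitesimal Euler steps
        Y = add(Y, tiny(f(X, Y)))
        X = add(X, tiny([1.0, 0.0, 0.0]))
        ys.append(Y)
    yp  = ys[1][1] - ys[0][1]               # y'_n : grossdigit of G^(-1) in Delta^1
    ypp = ys[0][2] - 2*ys[1][2] + ys[2][2]  # y''_n: grossdigit of G^(-2) in Delta^2
    x, y = x + h, y + yp*h + ypp*h*h/2     # one finite Taylor step of order 2
    print(n + 1, x, y)                     # y_n: 0.84, 0.7448, 0.702736, ...
\end{verbatim}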
Let us consider the exact solution $y(x)$ from (\ref{ode11}) and its approximation $y(x,0)$ from (\ref{ode14}) on the interval $[0,0.5]$ (we choose this interval, which is larger than the step $h=0.2$ used in Table~\ref{table2}, in order to be able to show the idea more clearly graphically). In Fig.~\ref{figura_1}, the graph of $y(x,0)$ is represented by the top (black) solid line and the graph of $y(x)$ is represented by the lower (grey) solid line. Naturally, both of them start at the point $(0,1)$ and we then have that \[ y(0.5)=0.713061319, \hspace{1cm} y(0.5,0)=0.75. \] Thus, the error $\varepsilon_1$ at the point $x_1=0.5$ is \beq \varepsilon_1 = y(0.5)- y(0.5,0) = -0.036938681. \label{ode16.0} \eeq By using our equation (\ref{ode13}) and executing two infinitesimal steps from the point $x_1=0.5$ to the points $0.5+\G1^{-1}$ and $0.5+2\G1^{-1}$ we obtain \beq y(x,0.5) = 0.75-0.25 x + 0.625 x^2. \label{ode16} \eeq In Method 1.0, this formula would be used to go forward from the point $x_1=0.5$ by executing a new finite step. Instead of this, let us use this formula to go backward from the point $x_1=0.5$ to the starting point $x_0=0$. Since in (\ref{ode16}) $x$ is intended to be a displacement from the point $x_1=0.5$, we need to take this fact into account. The graph of the obtained function \[ \bar{y}(x) = 0.75+0.25 (0.5-x)+ 0.625 (0.5-x)^2 = 1.03125-0.875x +0.625x^2 \] is shown in Fig.~\ref{figura_1} by the top dashed line. It can be seen that $y(x,0)$ from (\ref{ode14}) does not coincide with the obtained function $\bar{y}(x)$. Let us construct a new quadratic approximation $r_1(x)$ oriented towards a better result at the point $x_1=0.5$. The function $r_1(x)$ is built using coefficients from both $y(x,0)$ and $\bar{y}(x)$ by taking their average with the weights $\frac{1}{2}$ and $\frac{1}{2}$ (the general way to mix them is, obviously, with weights $\tau$ and $1 - \tau,\, 0 <\tau<1$) as follows: \[ r_1(x)= y(0,0)+ \frac{1}{2}(y(0,0)-\bar{y}(0)) +\frac{1}{2}(y'(0,0)+\bar{y}'(0))x+\frac{1}{4}(y''(0,0)+\bar{y}''(0))x^2= \] \[ 1+\frac{1}{2}(1-1.03125)+\frac{1}{2}(-1-0.875)x+\frac{1}{2}(1+0.625)x^2= \] \beq 0.984375-0.9375 x + 0.8125 x^2. \label{ode17} \eeq In Fig.~\ref{figura_1}, the graph of the function $r_1(x)$ is shown by the lower dashed line. The function $r_1(x)$ provides us with the value $r_1(0.5)=0.718750$ and the respective error \beq \overline{\varepsilon}_1 = y(0.5)- r_1(0.5) = -0.005688681, \label{ode18} \eeq which is better than the error $\varepsilon_1 = -0.036938681$ from (\ref{ode16.0}) obtained by calculating $y(0.5,0)$. The possibility to calculate corrections to approximations opens the door to various modifications. For instance, it is possible to execute two additional infinitesimal steps at the point $x_1=0.5$ using the value $r_1(0.5)$ instead of $y(0.5,0)$. In general, this means that instead of setting $y_n=y(x_{n},x_{n-1})$, as is done by Method 1.0, we put $y_n=r_n(x_n)$. Obviously, this means that it is necessary to evaluate $f(x,y)$ twice as many times as in Method 1.0. Otherwise, it is also possible to use the corrected value $r_n(x_n)$ with the derivatives that have been calculated for $y(x_{n},x_{n-1})$.
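The correction constructed above is straightforward to verify numerically. The short Python fragment below (plain floating-point arithmetic, using the exact solution (\ref{ode11}) for comparison) reproduces $\bar{y}(x)$, $r_1(x)$ from (\ref{ode17}), and the errors (\ref{ode16.0}) and (\ref{ode18}):

\begin{verbatim}
# Check of the forward/backward correction for y' = x - y, y(0) = 1 on
# [0, 0.5]; quadratics are stored as coefficient triples [a0, a1, a2].
from math import exp

a = [1.0, -1.0, 1.0]                   # y(x,0) = 1 - x + x^2
y1 = a[0] + a[1]*0.5 + a[2]*0.25       # y(0.5,0) = 0.75

yp, ypp = 0.5 - y1, 1.0 - (0.5 - y1)   # y' = x - y, y'' = 1 - y' at (0.5, 0.75)
b = [y1, yp, ypp/2]                    # y(x,0.5) = 0.75 - 0.25x + 0.625x^2

# the same expansion taken backward and re-expanded around x = 0:
ybar = [b[0] - 0.5*b[1] + 0.25*b[2], b[1] - b[2], b[2]]  # [1.03125, -0.875, 0.625]

r = [a[0] + 0.5*(a[0] - ybar[0]),      # r_1(x), averaged as in the text:
     0.5*(a[1] + ybar[1]),             # 0.984375 - 0.9375x + 0.8125x^2
     0.5*(a[2] + ybar[2])]

exact = 0.5 - 1.0 + 2.0*exp(-0.5)              # y(0.5) from the exact solution
print(exact - y1)                              # -0.036938... (error of y(0.5,0))
print(exact - (r[0] + r[1]*0.5 + r[2]*0.25))   # -0.005688... (error of r_1)
\end{verbatim}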
Another possibility would be to use the functions $y(x,x_n)$ at each point~$x_n$, i.e., to put $y_n=y(x_{n},x_{n-1})$, and to calculate the global correction following the rule \beq c_n=c(x_n)= c(x_{n-1}) + r_n(x_n)-y(x_{n},x_{n-1}), \label{ode19} \eeq starting from the first correction (in our example $c(x_1)=c(0.5)= 0.031250$) \[ c(x_1)= r_1(x_1)-y(x_{1},x_{0}). \] In this way we can approximate the exact solution $y(x_n)$ by the corrected value \[ y^{c}_n = y(x_{n},x_{n-1})+c(x_n). \] In Table~\ref{table2}, results for this algorithm are presented in the column \emph{Method~1.1}, where the error $\varepsilon_n$ is calculated as \[ \varepsilon_n=y(x_n)-y^{c}_n. \] Notice that the correction obtained at the final point has been calculated using Corollary~1. We conclude this subsection with a reminder that Theorem~\ref{t_m1} gives us the possibility to easily construct higher-order methods. The two methods described above just show examples of the usage of infinitesimals for building algorithms for solving ODEs. \subsection{Approximating derivatives of the solution} In this subsection, we show how approximations of derivatives at the point $x_n$ can be obtained using the information calculated at the point $x_{n-1}$. For this purpose, instead of the usage of a finite step $h$, the steps $h-\G1^{-1}$ or $h+\G1^{-1}$ can be used. To introduce this technique we need to recall the following theorem from \cite{Num_dif}. \begin{theorem} \label{t_m2} Suppose that: (i) for a function $s(x)$ calculated by a procedure implemented on the Infinity Computer there exists an unknown Taylor expansion in a finite neighborhood $\delta(z)$ of a purely finite point $z$; (ii) $s(x),$ $s'(x), s''(x), \ldots s^{(k)}(x)$ assume purely finite values or are equal to zero at purely finite $x \in \delta(z)$; (iii) $s(x)$ has been evaluated at a point $z+\mbox{\ding{172}}^{-1} \in \delta(z)$. Then the Infinity Computer returns the result of this evaluation in the positional numeral system with the infinite radix~\ding{172} in the following form \beq s(z+\mbox{\ding{172}}^{-1}) = c_{0} \mbox{\ding{172}}^{0} c_{-1} \mbox{\ding{172}}^{-1} c_{-2} \mbox{\ding{172}}^{-2} \ldots c_{-(k-1)} \mbox{\ding{172}}^{-(k-1)} c_{-k} \mbox{\ding{172}}^{-k}, \label{m1} \eeq where \beq s(z) = c_{0}, \,\, s'(z) = c_{-1}, \,\, s''(z)= 2! c_{-2}, \,\, \ldots \,\, s^{(k)}(z)=k! c_{-k}. \label{m2} \eeq \end{theorem} The theorem tells us that if we take a purely finite point $z$ and evaluate $s(x)$ on the Infinity Computer at the point $z+\G1^{-1}$, then from the computed $s(z+\G1^{-1})$ we can easily extract $s(z)$, $s'(z)$, $s''(z)$, etc. To apply this theorem to our situation we can take as $s(x)$ the Taylor expansion for $y(x)$ constructed up to the $k$th derivative using the infinitesimal steps $\G1^{-1}$, $2\G1^{-1}$, $ \ldots, k\G1^{-1}$. Then, if we take as $z$ a purely finite step $h$ and evaluate $s(x)$ on the Infinity Computer at the point $h+\G1^{-1}$, we obtain $s(h)$, $s'(h)$, $s''(h)$, etc. For instance, let us take $s(x)=s_2(x)=y(x,0)$, where $y(x,0)$ is from (\ref{ode14}) and $s_2(x)$ indicates that we use two derivatives in the Taylor expansion. Then, we have \[ s_2(0.2+\G1^{-1}) = 1 - (0.2+\G1^{-1})+ (0.2+\G1^{-1})^2 = 0.84 - 0.6\G1^{-1}+\G1^{-2}. \] Due to (\ref{m2}), we have the exact (where the word ``exact'' again means: with the accuracy of the implementation of $s(x)$) values $s(0.2)=0.84$, $s'(0.2)=-0.6$, $s''(0.2)=2$ for the function $s(x)$.
These values can be used to approximate the respective values $y(0.2)$, $y'(0.2)$, $y''(0.2)$ we are interested in. Moreover, we can adaptively obtain information on the accuracy of our approximations by consecutive improvements. If we now calculate $y^{(3)}(0)$ from (\ref{ode6.1}), then we can improve our approximation by setting $s(x)=s_3(x)$, where \[ s_3(0.2+\G1^{-1})= s_2(0.2+\G1^{-1}) - \frac{1}{3}(0.2+\G1^{-1})^3 = \] \[ s_2(0.2+\G1^{-1}) - 0.002667 - 0.04 \G1^{-1}-0.2\G1^{-2} -\frac{1}{3}\G1^{-3} = \] \beq 0.837333 - 0.64 \G1^{-1}+0.8\G1^{-2} -\frac{1}{3}\G1^{-3}. \label{ode15} \eeq Note that, to obtain this information, we have calculated only the additional part of $s_3(0.2)$, taking the rest from the already calculated value $s_2(0.2)$. Analogously, if we now calculate $y^{(4)}(0)$ from (\ref{ode6.2}), then we can improve our approximation again by setting $s(x)=s_4(x)$, where \[ s_4(0.2+\G1^{-1})= s_3(0.2+\G1^{-1}) + \frac{1}{12}(0.2+\G1^{-1})^4 = \] \[ s_3(0.2+\G1^{-1}) + 0.000133 + 0.002667 \G1^{-1}+0.02\G1^{-2} +0.066667\G1^{-3}+\frac{1}{12}\G1^{-4} = \] \[ 0.837466 - 0.637333 \G1^{-1}+0.82 \G1^{-2} -0.266667\G1^{-3}+\frac{1}{12}\G1^{-4}. \] Since we have used the convergent Taylor expansion of the fourth order, the errors in calculating $y(0.2)$, $y'(0.2)$, and $y''(0.2)$ are of the orders 5, 4, and 3, respectively. \subsection{An automatic control of rounding errors} In the previous sections, we have supposed that the evaluation of the derivatives of $y(x)$ was done exactly, i.e., the procedure for evaluating $f(x,y)$ was sufficiently precise and it was possible to neglect rounding errors. The numerical examples presented above satisfied this assumption. Let us now study what the Infinity Computer can give us when $f(x,y)$ is calculated with errors. Hereinafter we suppose again that for the solution $y(x),$ $ x \in [a,b],$ of (\ref{ode1}) there exists the Taylor expansion (unknown for us) and that at purely finite points $s \in [a,b]$ the function $y(s)$ and all its derivatives assume purely finite values or are equal to zero. In addition, we assume that the same conditions hold for all approximations of $y(x)$ the method will deal with. Let us consider formulae (\ref{ode9}) for calculating $y_{1}$ and $y_{2}$ together with formulae (\ref{ode6.0}) and (\ref{ode6}) used for approximating $ y'(x_{0})$ and $ y''(x_{0})$. Suppose that $f(x_0,y_0)$ is calculated with an unknown error $\epsilon_1$. Then, instead of the derivative $y'(x_{0})$ and the point $y_1$, we have \beq \tilde{y}'(x_{0})= f(x_0,y_0)- \epsilon_1 = y'(x_{0})- \epsilon_1, \label{ode21} \eeq \[ \tilde{y}_{1} = y_0 + \G1^{-1} (f(x_0,y_0)- \epsilon_1)= y_1 - \epsilon_1\G1^{-1}. \] Analogously, the calculation of the point $y_{2}$ will give us $\tilde{y}_{2}$, and $\triangle^2_{\tiny{\G1^{-1}}}$ will be calculated with errors as well, giving us $ \tilde{\triangle}^2_{\tiny{\G1^{-1}}}$. Let us study the structure of this forward difference \beq \tilde{\triangle}^2_{\tiny{\G1^{-1}}}= y_{0} -2 \tilde{y}_{1} + \tilde{y}_{2}= \tilde{y}_{2}- \tilde{y}_{1} -( \tilde{y}_{1} -y_{0})= \tilde{y}_{2}- \tilde{y}_{1} -( y_{1} -y_{0}) + \epsilon_1 \G1^{-1}.
\label{ode22} \eeq By applying argumentation analogous to that used in Theorem~\ref{t_m1}, together with our assumptions on the pure finiteness of all derivatives of the approximations of $y(x)$, we have that \beq \tilde{y}_{2}- \tilde{y}_{1} -( y_{1} -y_{0}) = \widetilde{c}_{-2} \mbox{\ding{172}}^{-2} + \ldots + \widetilde{c}_{-m_2} \mbox{\ding{172}}^{-m_2}, \label{ode23} \eeq where the coefficients $\widetilde{c}_{-2}, \ldots , \widetilde{c}_{-m_2}$ are affected by rounding errors and by the errors incorporated in $\tilde{y}_{1}$ and $\tilde{y}_{2}$. Thus, instead of the exact second derivative that has been obtained in Theorem~\ref{t_m1} from the coefficient $c_{-2}$ of $\G1^{-2}$, the coefficient $\widetilde{c}_{-2}$ gives us an approximation $\tilde{y}''(x_{0})$ of $ y'' (x_{0})$, namely, \[ \widetilde{c}_{-2}=\tilde{y}''(x_{0}) = y''(x_{0})-\epsilon_2, \] where $\epsilon_2$ is the error we have got during the calculation of $y''(x_{0})$. Let us now rewrite (\ref{ode22}) in decreasing orders of the powers of grossone using the representation (\ref{ode23}), i.e., as the Infinity Computer does it. We have \beq \tilde{\triangle}^2_{\tiny{\G1^{-1}}}= \epsilon_1 \G1^{-1} + \widetilde{c}_{-2} \mbox{\ding{172}}^{-2} + \ldots + \widetilde{c}_{-m_2} \mbox{\ding{172}}^{-m_2}. \label{ode24} \eeq This means that by calculating $\tilde{\triangle}^2_{\tiny{\G1^{-1}}}$ we have also obtained the error $\epsilon_1$ that we got at the previous infinitesimal step (see (\ref{ode21})). We are now able to re-establish the exact value of the first derivative $y'(x_{0})$ using the approximate value $\tilde{y}'(x_{0})$ calculated in (\ref{ode21}) and the grossdigit corresponding to $\G1^{-1}$, taking it from $\tilde{\triangle}^2_{\tiny{\G1^{-1}}}$ in (\ref{ode24}), i.e., we have \[ y'(x_{0}) = \tilde{y}'(x_{0})+ \epsilon_1. \] By complete analogy we can continue and calculate \beq \tilde{\triangle}^3_{\tiny{\G1^{-1}}}= -\epsilon_1 \G1^{-1} + \epsilon_2 \G1^{-2} + \widetilde{c}_{-3} \mbox{\ding{172}}^{-3} + \ldots + \widetilde{c}_{-m_3} \mbox{\ding{172}}^{-m_3}, \label{ode26} \eeq \[ y''(x_{0}) = \tilde{y}''(x_{0})+ \epsilon_2. \] Note that in (\ref{ode26}) $\epsilon_1$ (which can be either positive or negative) appears with an alternated sign, following the formulae of the forward differences. In fact, in $\tilde{\triangle}^3_{\tiny{\G1^{-1}}}$ we have $ y_{1} -y_{0}$, whereas in $\tilde{\triangle}^2_{\tiny{\G1^{-1}}}$ we have $-( y_{1} -y_{0})$. Analogously, the same alternation happens for higher derivatives. In general, in order to calculate the $(k-1)$th derivative $y^{(k-1)}(x_{0})$ it is necessary to calculate the approximation $\tilde{y}^{(k-1)}(x_{0})= \widetilde{c}_{-(k-1)}$ and then to extract the error $\epsilon_{k-1}$ (which can be negative or positive) from \[ \tilde{\triangle}^k_{\tiny{\G1^{-1}}}= (-1)^{k}\epsilon_1 \G1^{-1} + \ldots + (-1)^{k-i-1}\epsilon_i \G1^{-i} + \ldots \] \[ - \epsilon_{k-2} \G1^{-(k-2)} + \epsilon_{k-1} \G1^{-(k-1)} + \widetilde{c}_{-k} \mbox{\ding{172}}^{-k} + \ldots + \widetilde{c}_{-m_k} \mbox{\ding{172}}^{-m_k}, \] \beq y^{(k-1)}(x_{0}) = \tilde{y}^{(k-1)}(x_{0})+ \epsilon_{k-1}.
\label{ode25} \eeq If there exists an index $j, \,\, 1 \le j < k,$ such that $\epsilon_{1}=\ldots=\epsilon_{j}=0$, then $y^{(k-1)}(x_{0})$ is calculated again by the formula (\ref{ode25}), but it follows that \[ \tilde{\triangle}^k_{\tiny{\G1^{-1}}}= (-1)^{k-j-1}\epsilon_{j+1} \G1^{-(j+1)} + \ldots \] \beq + \epsilon_{k-1} \G1^{-(k-1)} + \widetilde{c}_{-k} \mbox{\ding{172}}^{-k} + \ldots + \widetilde{c}_{-m_k} \mbox{\ding{172}}^{-m_k}. \label{ode28} \eeq Thus, whether $f(x,y)$ is evaluated exactly or rounding errors are present, the Infinity Computer is able to calculate the derivatives of the solution exactly. Let us illustrate the theoretical results presented above by a numerical example. \textbf{Example 5.} \label{e_m5} Let us consider the following test problem\footnote{The author thanks Prof. H. P. Langtangen for drawing the author's attention to this nice example.} taken from \cite{Langtangen}. The ODE \beq y'(x) = -\frac{x-c}{s^2} (y-1) \label{ode20} \eeq has the exact solution given by the following Gaussian function \beq u(x) = 1 + e^{-\frac{1}{2}\left(\frac{x-c}{s}\right)^2} \label{ode27} \eeq centered around $x=c$ and with characteristic width (standard deviation) $s$. The initial condition is taken as the exact value $y(0)=u(0)$ and the parameters are taken as $c=3, s=0.5$. \begin{table}[!t] \caption{Calculating approximations for the derivatives $y^{(i)}(0), 1 \le i \le 12,$ for the problem (\ref{ode20}) at the point $(0,y(0))$ by applying the automatic control of the rounding errors $\varepsilon_i$, thus providing the final accuracy $\delta_i$ } \begin{center}\scriptsize \begin{tabular}{@{\extracolsep{\fill}}|c|c|c|c|c|}\hline $i$ & $\tilde{y}^{(i)}(0)$ & $\varepsilon_i$ & $y^{(i)}(0)$ & $\delta_i$ \\ \hline 1 & $0.182759757\cdot 10^{-6}$ & \hspace{-8mm}0.0000000000 & $0.182759757\cdot 10^{-6}$ & $-0.60449198\cdot 10^{-28} $ \\ 2 & $0.207127725\cdot 10^{-5}$ & $0.609199190\cdot 10^{-7}$ & $0.213219716\cdot 10^{-5}$ & $-0.69190731\cdot 10^{-27}$ \\ 3 & $0.233932489\cdot 10^{-4}$ & $0.731039028\cdot 10^{-6}$ & $0.241242879\cdot 10^{-4}$ & $-0.78192941\cdot 10^{-26}$ \\ 4 & $0.254888941\cdot 10^{-3}$ & $0.901614801\cdot 10^{-5}$ & $0.263905089\cdot 10^{-3}$ & $-0.83928642\cdot 10^{-25}$ \\ 5 & $0.266975453\cdot 10^{-2}$ & $0.111117932\cdot 10^{-3}$ & $0.278087246\cdot 10^{-2}$ & $-0.85899500\cdot 10^{-24}$ \\ 6 & $0.267228880\cdot 10^{-1}$ & $0.136947978\cdot 10^{-2}$ & $0.280923676\cdot 10^{-1}$ & $-0.82216871\cdot 10^{-23}$ \\ 7 & \hspace{-2mm}$0.253489245\cdot 10^{0}$ & $0.168782291\cdot 10^{-1}$ & \hspace{-2mm}$0.270367474\cdot 10^{0}$ & $-0.71132365\cdot 10^{-22}$ \\ 8 & \hspace{-2mm}$0.224980672\cdot 10^{1}$ & \hspace{-2mm}$0.208016667\cdot 10^{0}$ & \hspace{-2mm}$0.245782339\cdot 10^{1}$ & $-0.50790463\cdot 10^{-21}$ \\ 9 & \hspace{-2mm}$0.182784086\cdot 10^{2}$ & \hspace{-2mm}$0.256371293\cdot 10^{1}$ & \hspace{-2mm}$0.208421215\cdot 10^{2}$ & $-0.19195037\cdot 10^{-20}$ \\ 10 & \hspace{-2mm}$0.13002719\cdot 10^{3}$ & \hspace{-2mm}$0.315966218\cdot 10^{2}$ & \hspace{-2mm}$0.161623816\cdot 10^{3}$ & $ \hspace{2mm}0.25832403\cdot 10^{-19}$ \\ 11 & \hspace{-2mm}$0.71638662\cdot 10^{3}$ & \hspace{-2mm}$0.389414314\cdot 10^{3}$ & \hspace{-2mm}$0.110580093\cdot 10^{4}$ & $ \hspace{2mm}0.86669850\cdot 10^{-18}$ \\ 12 & \hspace{-2mm}$0.13588050\cdot 10^{4}$ & \hspace{-2mm}$0.479935826\cdot 10^{4}$ & \hspace{-2mm}$0.615816329\cdot 10^{4}$ & $ \hspace{2mm}0.16535675\cdot 10^{-16}$ \\ \hline \end{tabular} \end{center} \label{table3} \end{table} In Table \ref{table3}, we calculate the derivatives
$y^{(i)}(0), 1 \le i \le 12,$ using (\ref{ode25}) with 30 digits in the mantissa, in order to be able to catch the final accuracy $\delta_i$ presented in the last column. It shows the final error obtained by subtracting the calculated derivatives $y^{(i)}(0), 1 \le i \le 12,$ from the derivatives computed using the explicit solution (\ref{ode27}), i.e., \[ \delta_i = u^{(i)}(0)- y^{(i)}(0), \hspace{8mm} 1 \le i \le 12. \] Let us make a few remarks regarding Table~\ref{table3}. First, at $x=0$ with $c=3$ and $s=0.5$ it follows from (\ref{ode20}) that $-\frac{x-c}{s^2} = 12$. In order to illustrate the situation (\ref{ode28}), we have calculated $y_1$ using (instead of the original expression $12(y_0-1)$ from (\ref{ode20}), which leads to $\varepsilon_1 \neq 0$) the expression $12 y_0-12$, which provides $\varepsilon_1 = 0$ when it is used in $ y_1= y_0 + \G1^{-1}(12 y_0-12)$. Then, it is worthwhile to notice that almost throughout the whole of Table~\ref{table3} (and in spite of the large values of the higher derivatives) the relative error has the constant order equal to $10^{-22}$ (it can easily be seen from Table~\ref{table3} that $m-n=22$, where $m$ is the exponent of $y^{(i)}$ and $n$ is the exponent of $\delta_i$). Notice also that in the last line of the Table the error $\varepsilon_{12}$ is even larger than the approximation $\tilde{y}^{(12)}(0)$. \hfill $\Box$ \section{Conclusion} In this paper, a new framework for solving ODEs has been introduced. The new approach allows us to work numerically not only with the usual finite numbers but also with different infinitesimal and infinite values, on a new kind of computational device called the Infinity Computer (it has been patented and its working prototype exists). The structure of the numbers we work with on the new computer is more complex and, as a result, we face new computational possibilities. In particular, the presence of different numerical infinitesimals makes it possible to use infinitesimal steps for solving ODEs. The following results have been established in the new framework. i. It has been shown that (under the assumption that the person solving the ODE does not know the structure of $f(x,y)$, i.e., it is a ``black box'' for him/her) the Infinity Computer is able to calculate numerical values of the derivatives of $y(x)$ of the desired order without the necessity of an analytical (or symbolic) computation of the respective derivatives by successive differentiation of the ODE, as is usually done when the Taylor method is applied. ii. If the region of our interest $[a,b]$ belongs to the region of convergence of the Taylor expansion for the solution $y(x)$ in the neighborhood of the point $x_0$, then it is not necessary to construct iterative procedures involving several steps with finite values of $h$. It becomes possible to calculate approximations of the desired order $k$ by executing $k$ infinitesimal steps and only one finite step. iii. Approximations of derivatives of $y(x)$ at the point $x_n$ can be obtained using the information calculated at the point $x_{n-1}$. For this purpose, instead of the usage of a finite step $h$, the steps $h-\G1^{-1}$ or $h+\G1^{-1}$ can be used. Methods going forward and backward and working with approximations of derivatives can be proposed in the new framework. iv.
\section{Conclusion} In this paper, a new framework for solving ODEs has been introduced. The new approach allows us to work numerically not only with the usual finite numbers but also with different infinitesimal and infinite values on a new kind of computational device called the Infinity Computer (it has been patented and its working prototype exists). The structure of the numbers we work with on the new computer is richer and, as a result, we face new computational possibilities. In particular, the presence of different numerical infinitesimals makes it possible to use infinitesimal steps for solving ODEs. The following results have been established in the new framework. i. It has been shown that (under the assumption that the person solving the ODE does not know the structure of $f(x,y)$, i.e., it is a ``black box'' for him/her) the Infinity Computer is able to calculate numerical values of the derivatives of $y(x)$ of the desired order without the necessity of an analytical (or symbolic) computation of the respective derivatives by the successive derivation of the ODE, as is usually done when the Taylor method is applied. ii. If the region of our interest $[a,b]$ belongs to the region of convergence of the Taylor expansion for the solution $y(x)$ in the neighborhood of the point $x_0$, then it is not necessary to construct iterative procedures involving several steps with finite values of $h$. It becomes possible to calculate approximations of the desired order $k$ by executing $k$ infinitesimal steps and only one finite step. iii. Approximations of the derivatives of $y(x)$ at the point $x_n$ can be obtained using the information calculated at the point $x_{n-1}$. For this purpose, instead of a finite step $h$ alone, the steps $h-\G1^{-1}$ or $h+\G1^{-1}$ can be used. Methods going forward and backward and working with approximations of derivatives can be proposed in the new framework. iv. The last subsection of the manuscript shows that, whether $f(x,y)$ is evaluated exactly or rounding errors are present, the Infinity Computer is able to perform, by means of a smart usage of infinitesimals, an automatic control of the accuracy of computation of the derivatives of the solution. v. The theoretical results have been illustrated by a number of numerical examples.
\section{Introduction} The ejection of matter at moderate to high velocities is a common and perhaps universal phenomenon of Quasi-Stellar Objects (QSOs). One of the main manifestations of QSO outflows is the blueshifted UV Broad Absorption Lines (BALs) seen in $\sim 10$\% of optically selected QSOs, the BAL~QSOs (e.g., Weymann 1997). X-ray spectroscopy of BAL~QSOs is potentially important for studying their outflows and nuclear geometries, but the study of BAL~QSOs in the X-ray regime has not yet matured, largely due to low X-ray fluxes (e.g., Green \& Mathur 1996; Gallagher et~al. 1999). Only $\approx 9$ BAL~QSOs have been detected in X-rays at present. The current data suggest that the X-ray emission from BAL~QSOs suffers from significant intrinsic absorption, with many BAL~QSOs having absorption column densities $\lower.5ex\hbox{\gtsima}$~(1--5)$\times 10^{23}$~cm$^{-2}$. Optical brightness is {\it not\/} a good predictor of X-ray brightness for BAL~QSOs; some optically faint BAL~QSOs have been clearly detected (e.g., PHL~5200; $V=18.1$) while some of the optically brightest (e.g., PG~$1700+518$; $V=15.1$) remain undetected in deep 0.1--10~keV observations. In the limited data available at present, however, there is a suggestion that the BAL~QSOs with high ($\lower.5ex\hbox{\gtsima} 2$\%) optical continuum polarization {\it may\/} be the X-ray brighter members of the class (see \S4 of Gallagher et~al. 1999). A polarization/X-ray flux connection, if indeed present, would provide a clue about the geometry of matter in BAL~QSO nuclei (see \S3). To improve understanding of the X-ray properties of BAL~QSOs and examine the possible polarization/X-ray flux connection, we have started a program to observe highly polarized BAL~QSOs in X-rays. An excellent target for this program was the Case Stellar Object 755 (CSO~755; $z=2.88$; Sanduleak \& Pesch 1989), which has $V=17.1$ (e.g., Barlow 1993) and is a representative, `bona-fide' BAL~QSO in terms of its luminosity and UV absorption properties (e.g., Glenn et~al. 1994). Its continuum polarization is high ($\approx$~3.8--4.7\%; only 8/53 BAL~QSOs studied by Schmidt \& Hines 1999 had $>2$\%) and rises to the blue. We adopt $H_0=70$~km~s$^{-1}$ Mpc$^{-1}$ and $q_0=\frac{1}{2}$. The Galactic neutral hydrogen column density towards CSO~755\ is $(1.6\pm 0.4)\times 10^{20}$ cm$^{-2}$ (Stark et~al. 1992). \section{Observations, Analysis and Results} We observed CSO~755\ with {\it BeppoSAX\/}\ (Boella et~al. 1997) on 1999~Feb~2. We will focus on the results from the Medium-Energy Concentrator Spectrometers (MECS; 1.8--10~keV; 35.2~ks exposure) and the Low-Energy Concentrator Spectrometer (LECS; 0.1--4~keV; 12.7~ks exposure), since the data from the other instruments are not useful for such a faint source. Our energy coverage corresponds to 0.4--39~keV in the rest frame. The observation went smoothly, and the resulting data were processed with Version~1.2 of the {\it BeppoSAX\/}\ Science Data Center (BSDC) pipeline. We have adopted the standard reduction methods recommended by the BSDC (Fiore, Guainazzi \& Grandi 1999), and we do not observe any irregular background variability. The screened events resulting from the above reduction were analyzed using {\sc xselect}. We made full-band images for each of the detectors as well as combined MECS2+MECS3 images.
An X-ray source consistent with the precise optical position of CSO~755\ is detected with high statistical significance in our MECS2, MECS3 and MECS2+MECS3 images (e.g., Figure~1), but it is not detected by the LECS. Given the observed flux (see below), the probability of a confusing source is $\lower.5ex\hbox{\ltsima} 5\times 10^{-3}$, and no particularly suspicious sources are found in the Palomar Observatory Sky Survey or the {\sc ned/simbad} catalogs. To determine MECS count rates, we have used a $3^\prime$-radius circular source cell centered on the X-ray centroid. For background subtraction, we use five $3^\prime$-radius circular cells near CSO~755\ (we have not used an annulus because a weak nearby source would fall inside the annulus). We have corrected for energy-dependent vignetting of the background following \S3.1.5 of Fiore et~al. (1999). In the MECS2+MECS3 full-band (1.8--10~keV) image, we detect $54.3\pm 14.3$ counts from CSO~755\ for a MECS2+MECS3 count rate of $(1.5\pm 0.4)\times 10^{-3}$~count~s$^{-1}$. The LECS $3\sigma$ upper limit on the 0.1--1.8~keV count rate is $<1.7\times 10^{-3}$~count~s$^{-1}$ (computed using a circular source cell with a $5^\prime$ radius). While we do not have enough photons for spectral fitting, we have analyzed MECS2+MECS3 images in three observed-frame energy bands to place crude constraints on spectral shape: 1.8--3~keV (band~1; channels 40--66), 3--5.5~keV (band~2; channels 67--120), and 5.5--10~keV (band~3; channels 121--218). CSO~755\ is detected in all bands, although with varying degrees of statistical significance. We give the corresponding count rates in Table~1, and the Poisson probabilities of false detections in bands~1, 2 and 3 are $6.8\times 10^{-5}$, $4.8\times 10^{-3}$ and $2.8\times 10^{-2}$, respectively. The detection in band~3 (21--39~keV in the rest frame) is notable. To compare the observed spectral shape with spectral models, we have employed a band-fraction diagram similar to those used in studies of the diffuse soft X-ray background (e.g., see \S5 of Burstein et~al. 1977). We first consider a simple power-law model with photon index $\Gamma=$~1.7--1.9 (a typical, representative range for radio-quiet QSOs; e.g. Reeves et~al. 1997), and neutral absorption at $z=2.88$. For this model, Figure~2 shows that column densities less than $\approx 7\times 10^{23}$~cm$^{-2}$ are most consistent with our data. Alternatively, for small column densities, values of $\Gamma$ down to $\approx 0.8$ are most consistent with our data (i.e. the spectrum could be as flat as that for a `reflection-dominated' source). Incorporating the LECS upper limit into similar analyses does not significantly tighten our constraints. If we consider a $\Gamma=1.9$ power-law model with the Galactic column density, we calculate an observed-frame 2--10~keV flux of $1.3\times 10^{-13}$~erg~cm$^{-2}$~s$^{-1}$, corresponding to a rest-frame 7.8--39~keV luminosity of $4.0\times 10^{45}$~erg~s$^{-1}$. These two quantities are relatively insensitive to the intrinsic column density for $N_{\rm H}<5\times 10^{23}$~cm$^{-2}$. If we extrapolate this model into the rest-frame 2--10~keV band, the luminosity is $3.4\times 10^{45}$~erg~s$^{-1}$. We have also calculated {$\alpha_{\rm ox}$}\ (the slope of a hypothetical power law between 3000~\AA\ and 2~keV in the rest frame), since this parameter can be used as a statistical predictor of the presence of X-ray absorption (e.g., Brandt, Laor \& Wills 1999).
We calculate the rest-frame 3000~\AA\ flux density using the observed-frame 7500~\AA\ flux density of Glenn et~al. (1994) and a continuum spectral index of $\alpha=0.5$. The rest-frame flux density at 2~keV is more difficult to calculate since we do not have strong constraints on X-ray spectral shape or a {\it BeppoSAX\/}\ detection at ${2~{\rm keV}\over (1+z)}=0.52$~keV (although see our discussion of the {\it ROSAT\/}\ data below). If we normalize a $\Gamma=1.9$ power-law model with Galactic absorption to the rest-frame 7--39~keV count rate (corresponding to 1.8--10~keV in the observed frame), we calculate $\alpha_{\rm ox}=1.58$. Of course, this $\alpha_{\rm ox}$ value is really telling us about the rest-frame 7--39~keV emission rather than a directly measured flux density at 2~keV. We have searched for any {\it Einstein\/}, {\it ROSAT\/}\ or {\it ASCA\/}\ pointings that serendipitously contain CSO~755, but unfortunately there are none. We have also analyzed the data from the {\it ROSAT\/}\ All-Sky Survey (RASS). CSO~755\ was observed for 939~s during the RASS between 1990~Dec~31 and 1991~Jan~4 (a relatively long RASS exposure; see Figure~2 of Voges et~al. 1999). There appears to be an $\approx 7$-photon enhancement over the average background at the position of CSO~755. Comparative studies of RASS and pointed data show that $\approx$~90\% of such 7-photon RASS sources are real X-ray sources rather than statistical fluctuations, and CSO~755\ is included in the Max-Planck-Institut f\"ur Extraterrestrische Physik RASS faint source catalog (Voges et~al., in preparation) with a likelihood of 11 (see Cruddace, Hasinger \& Schmitt 1988). However, to be appropriately cautious we shall treat the probable RASS detection as tentative. The probable RASS detection corresponds to a vignetting-corrected flux in the observed 0.1--2.4~keV band of $\approx~1.1\times 10^{-13}$~erg~cm$^{-2}$~s$^{-1}$ (for a power-law model with $\Gamma=1.9$ and the Galactic absorption column). Given the relative effective areas and imaging capabilities of the {\it ROSAT\/}\ PSPC and {\it BeppoSAX\/}\ LECS, a RASS detection is consistent with the LECS upper limit given in Table~1 (see Figure~2 of Parmar et~al. 1999). Provided there is not substantial intrinsic X-ray absorption below the MECS band, the relative RASS and MECS fluxes are entirely plausible. If we use the {\it ROSAT\/}\ flux to normalize a $\Gamma=1.9$ power law with Galactic absorption, we calculate $\alpha_{\rm ox}=1.62$. If {\it ROSAT\/}\ has indeed detected CSO~755, the {\it ROSAT\/}\ band has the advantage that it directly constrains the rest-frame 2~keV flux density.
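The two numerical conversions used in this section (the flux-to-luminosity conversion for our adopted cosmology and the {$\alpha_{\rm ox}$}\ slope) are easily reproduced. The sketch below is ours, for bookkeeping only; the flux densities entering the {$\alpha_{\rm ox}$}\ part are placeholder values of the right order of magnitude, not the measured photometry of CSO~755.
\begin{verbatim}
# Hedged sketch: the arithmetic behind the quoted luminosity and alpha_ox.
import math

# Luminosity distance for a matter-dominated q0 = 1/2 cosmology (Mattig):
# d_L = (2c/H0) * (1 + z - sqrt(1 + z)).
c_km_s, H0, z = 2.998e5, 70.0, 2.88
d_L_cm = 2.0*c_km_s/H0 * (1.0 + z - math.sqrt(1.0 + z)) * 3.086e24

F_2_10 = 1.3e-13   # observed-frame 2-10 keV flux, erg cm^-2 s^-1
L = 4.0*math.pi*d_L_cm**2*F_2_10   # rest-frame 7.8-39 keV (same photons)
print(f"L ~ {L:.1e} erg/s")        # ~4e45, consistent with the quoted value

# alpha_ox: f_nu ~ nu^(-alpha_ox) between rest-frame 3000 A and 2 keV.
nu_3000 = 2.998e18/3000.0          # Hz
nu_2keV = 2000.0*2.418e14          # Hz (1 eV = 2.418e14 Hz)
f_3000, f_2keV = 1.0e-26, 5.1e-31  # erg cm^-2 s^-1 Hz^-1 (hypothetical)
alpha_ox = -math.log10(f_2keV/f_3000)/math.log10(nu_2keV/nu_3000)
print(f"alpha_ox = {alpha_ox:.2f}")  # ~1.6 for these placeholder densities
\end{verbatim}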
\section{Discussion and Conclusions} Our {\it BeppoSAX\/}\ and probable {\it ROSAT\/}\ detections of CSO~755\ make it the highest redshift as well as the most optically luminous BAL~QSO detected in X-rays. It was selected for study not because of high optical flux but rather because of its high (observed-frame) optical continuum polarization (3.8--4.7\%; hereafter OCP), and it is X-ray brighter than several other BAL~QSOs that have $\approx$~4--6 times its $V$-band flux (compare with Gallagher et~al. 1999). While its higher X-ray flux could partially result from the higher redshift providing access to more penetrating X-rays (i.e. a `negative $K$-correction'), there is also suggestive evidence that the BAL~QSOs with high OCP may be the X-ray brighter members of the class. We have investigated the OCP percentages of the 10 BAL~QSOs (including CSO~755) with reliable X-ray detections using the data from Berriman et~al. (1990), Hutsem\'ekers, Lamy \& Remy (1998), Ogle (1998) and Schmidt \& Hines (1999). The OCP percentages have a mean of $2.28\pm 0.28$, a standard deviation of 0.88, and a median of 2.24. These values indeed place the X-ray detected BAL~QSOs toward the high end of the BAL~QSO OCP distribution function (compare with \S2 of Schmidt \& Hines 1999). At present, however, our nonparametric testing is unable to prove that the X-ray detected BAL~QSOs have higher OCPs than those that are undetected in sensitive X-ray observations. This is due to small sample sizes as well as concerns about possible secondary correlations and observational biases. Many of the BAL~QSOs with high-quality X-ray data have been observed because they have exceptional properties (e.g., low-ionization absorption, extreme Fe~{\sc ii} emission), and thus the currently available sample is not necessarily representative of the population as a whole. In addition, the current X-ray and polarization observations of BAL~QSOs span a wide range of rest-frame energy/wavelength bands due to redshift and instrumentation differences (redshifts for the 10 X-ray detected BAL~QSOs run from $z=$~0.042--2.88). At higher redshifts one samples harder X-rays that are less susceptible to absorption. Also at higher redshifts, observed-frame OCP measurements tend to sample shorter wavelengths, and many QSOs show polarization that rises towards the blue. Systematic X-ray and polarimetric observations of uniform, well-defined BAL~QSO samples are needed to examine this issue better. A polarization/X-ray flux connection could be physically understood if the direct lines of sight into the X-ray nuclei of BAL~QSOs were usually blocked by extremely thick matter ($\gg 10^{24}$~cm$^{-2}$). In this case, we would only see X-rays when there is a substantial amount of electron scattering in the nuclear environment by a `mirror' of moderate Thomson depth.\footnote{Ogle (1998) suggests that there is a large range of mirror optical depths among the BAL~QSO population.} The scattering would provide a periscopic, indirect view into the compact X-ray emitting region while also polarizing some of the more extended optical continuum emission (see Figure~3). Measured X-ray column densities would then provide information only about the gas along the {\it indirect\/} line of sight. For CSO~755, the X-ray scattering medium would need to be located on fairly small scales ($\lower.5ex\hbox{\ltsima}$ a few light weeks) to satisfy the spectropolarimetric constraints of Glenn et~al. (1994) and Ogle (1998). These show that the material scattering the optical light is located at smaller radii than both the Broad Line Region (BLR) and the BAL region. Our calculations in \S2 give an {$\alpha_{\rm ox}$}\ value of $\approx$~1.6, although our only direct constraint on the rest-frame 2~keV flux density is via the probable {\it ROSAT\/}\ detection. Our {$\alpha_{\rm ox}$}\ value is entirely consistent with those of typical radio-quiet QSOs (compare with Figure~1 of Brandt, Laor \& Wills 1999), and it is smaller than those of many BAL~QSOs (e.g. Green \& Mathur 1996). A `normal' {$\alpha_{\rm ox}$}\ value would appear somewhat surprising in the context of the scattering model of the previous paragraph, since one would expect the X-ray flux level to be reduced if the direct line of sight is blocked.
However, there is enough dispersion in the {$\alpha_{\rm ox}$}\ distribution that the observed value of {$\alpha_{\rm ox}$}\ does not cause a serious problem, provided the scattering is efficient. The scattering mirror would need to subtend a fairly large solid angle (as seen by the compact X-ray source) and have a moderate Thomson depth (say $\tau_{\rm T}\sim 0.3$). In addition, there may be `attenuation' at 3000~\AA\ (in the sense of \S2 of Goodrich 1997) that helps to flatten {$\alpha_{\rm ox}$}. Finally, we note that CSO~755\ has a high enough X-ray flux to allow moderate quality X-ray spectroscopy and variability studies with {\it XMM\/}. It is currently scheduled for a 5~ks {\it XMM\/}\ observation, but this is an inadequate exposure time for such work. A longer {\it XMM\/}\ exposure would allow a study of any iron~K spectral features, and the high redshift of CSO~755\ moves the iron~K complex right to the peak of the {\it XMM\/}\ EPIC spectral response. If we are viewing a large amount of scattered X-ray flux in CSO~755\ and other high polarization BAL~QSOs, then narrow iron~K lines with large equivalent widths may be produced via fluorescence and resonant scattering (as for the much less luminous Seyfert~2 galaxies; e.g., Krolik \& Kallman 1987). Such lines could allow direct detection of the X-ray scattering medium, and line energies and blueshifts/redshifts would constrain the ionization state and dynamics of the mirror. We would also not expect rapid ($\lower.5ex\hbox{\ltsima} 1$~day) and large-amplitude X-ray variability if most of the X-ray flux is scattered. \acknowledgments We thank J. Halpern, J. Nousek, W. Voges and B. Wills for helpful discussions, and we thank H. Ebeling for the use of his {\sc idl} software. We acknowledge the support of NASA LTSA grant NAG5-8107 (WNB), Italian Space Agency contract ASI-ARS-98-119 and MURST grant Cofin-98-02-32 (AC), NASA grant NAG5-4826 and the Pennsylvania Space Grant Consortium (SCG), and the fund for the promotion of research at the Technion (AL).
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\bfseries}}
\makeatother
\newcommand{\sect}[1]{Section~\ref{#1}}
\newcommand{\eq}[1]{(\ref{#1})}
\newcommand{\fig}[1]{Figure~\ref{#1}}
\newcommand{\minPlusOne}[1]{ \min\!\left(#1\right) + 1 }
\newcommand{\nPlusOne}[1]{ #1+1 }
\newtheorem{theorem}{Theorem}
\newtheorem{conj}{Conjecture}
\theoremstyle{remark}
\newtheorem{rem}{Remark}
\renewcommand{\Re}{\operatorname{Re}}
\renewcommand{\Im}{\operatorname{Im}}
\begin{document} \begin{titlepage} \begin{center} {\LARGE\bf \mbox{Complex plane representations and stationary}\vspace{3mm}\\ states in cubic and quintic resonant systems}\\ \vskip 15mm {\large Anxo Biasi,$^{a}$ Piotr Bizo\'n$^{\,b}$ and Oleg Evnin$^{c,d}$} \vskip 7mm {\em $^a$ Departamento de F\'\i sica de Part\'\i culas, Universidade de Santiago de Compostela and Instituto Galego de F\'\i sica de Altas Enerx\'\i as (IGFAE), Santiago de Compostela, Spain} \vskip 3mm {\em $^b$ Institute of Physics, Jagiellonian University, Krak\'ow, Poland} \vskip 3mm {\em $^c$ Department of Physics, Faculty of Science, Chulalongkorn University, Bangkok, Thailand} \vskip 3mm {\em $^d$ Theoretische Natuurkunde, Vrije Universiteit Brussel and\\ The International Solvay Institutes, Brussels, Belgium} \vskip 7mm {\small\noindent {\tt [email protected], [email protected], [email protected]}} \vskip 10mm \end{center} \vspace{1cm} \begin{center} {\bf ABSTRACT}\vspace{3mm} \end{center} Weakly nonlinear energy transfer between normal modes of strongly resonant PDEs is captured by the corresponding effective resonant systems. In a previous article, we have constructed a large class of such resonant systems (with specific representatives related to the physics of Bose-Einstein condensates and Anti-de Sitter spacetime) that admit special analytic solutions and an extra conserved quantity. Here, we develop and explore a complex plane representation for these systems modelled on the related cubic Szeg\H o and LLL equations. To demonstrate the power of this representation, we use it to give simple closed form expressions for families of stationary states bifurcating from all individual modes.
The conservation laws, the complex plane representation and the stationary states admit furthermore a natural generalization from cubic to quintic nonlinearity. We demonstrate how two concrete quintic PDEs of mathematical physics fit into this framework, and thus directly benefit from the analytic structures we present: the quintic nonlinear Schr\"odinger equation in a one-dimensional harmonic trap, studied in relation to Bose-Einstein condensates, and the quintic conformally invariant wave equation on a two-sphere, which is of interest for the AdS/CFT correspondence. \vfill \end{titlepage} \section{Introduction} Resonant Hamiltonian systems of the form \begin{equation} i\,\frac{d\alpha_n}{dt}=\hspace{-2mm}\sum_{n+m=k+l} \hspace{-2mm}C_{nmkl}\bar\alpha_m\alpha_k\alpha_l \label{ressyst} \end{equation} emerge in weakly nonlinear analysis of strongly resonant PDEs, with many applications in mathematical physics. Here, $\alpha_n(t)$ with an integer $n\ge 0$ are complex-valued dynamical variables, the bar denotes complex conjugation, and $C_{nmkl}$ are numbers known as the mode couplings or the interaction coefficients. Note that the summation is restricted by the \emph{resonant constraint} $n+m=k+l$. To quickly demonstrate how equations of this sort originate in a physically motivated example, consider the simplest possible setting afforded by the one-dimensional cubic nonlinear Schr\"odinger equation on the real line in a harmonic potential \begin{equation} i\,\frac{\partial \Psi}{\partial t}=\frac12\left(-\frac{\partial^2}{\partial x^2}+x^2\right)\Psi +g|\Psi|^2\Psi. \label{NLS1d} \end{equation} At $g=0$, one has the linear Schr\"odinger equation of a harmonic oscillator, whose general solution is \begin{equation} \Psi=\sum_{n=0}^\infty \alpha_n \psi_n(x) e^{-iE_n t}, \label{NLS1dlin} \end{equation} with constant $\alpha_n$, where \begin{equation} E_n=n+\frac12 \end{equation} are the eigenstate energies, and $\psi_n$ are the corresponding normalized wavefunctions satisfying \begin{equation} \frac12\left(-\frac{\partial^2}{\partial x^2}+x^2\right)\psi_n=E_n\psi_n. \end{equation} At a small nonzero coupling $g$, $\alpha_n$ are no longer constant and evolve slowly over time. This evolution can be extracted by substituting \eqref{NLS1dlin} into \eqref{NLS1d} and projecting onto $\psi_n(x)$, which gives \begin{equation} i\,\frac{d\alpha_n (t) }{dt}= g\sum_{k,l,m=0}^\infty C_{nmkl} \,\bar \alpha_m (t)\alpha_k (t) \alpha_l (t)\,e^{i(E_n+E_m-E_k-E_l)t}, \label{NLSpreres} \end{equation} with $C_{nmkl}=\int dx \,\psi_n \psi_m \psi_k \psi_l$. At $g\ll 1$, one makes use of the \emph{resonant approximation}, which consists in dropping all oscillatory terms on the right-hand side, keeping only those satisfying $E_n+E_m-E_k-E_l=0$, which is the same as $n+m=k+l$. This is known to result in an accurate approximation of the full equation at small $g$ on very long time scales of order $1/g$. This approximation, known by a number of different names, goes back to the foundational work of Bogoliubov and Krylov \cite{BK} on `time-averaging.' Pedagogical introductions can be found in \cite{BM,murdock, KM}, while a variety of applications in contemporary literature can be sampled from \cite{bambusi,ZhP,ZhPmore,KSS,FGH,GHT,GT}. Once the resonant approximation has been implemented, and $g$ has been absorbed in a redefinition of time, one ends up with (\ref{ressyst}).
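Everything entering this example is explicitly computable. As an illustration (ours, not taken from any of the referenced analyses; the function names are hypothetical), the interaction coefficients $C_{nmkl}=\int dx \,\psi_n \psi_m \psi_k \psi_l$ can be evaluated by Gauss--Hermite quadrature:
\begin{verbatim}
# Interaction coefficients C_nmkl = \int dx psi_n psi_m psi_k psi_l of the
# 1D harmonic oscillator, via Gauss-Hermite quadrature (illustrative sketch).
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval
from math import factorial, pi, sqrt

def psi(n, x):
    """Normalized eigenfunction psi_n(x) = N_n H_n(x) exp(-x^2/2)."""
    coef = np.zeros(n + 1)
    coef[n] = 1.0
    return hermval(x, coef)*np.exp(-x**2/2)/sqrt(2.0**n*factorial(n)*sqrt(pi))

def C(n, m, k, l, npts=80):
    # Gauss-Hermite: \int e^{-x^2} g(x) dx ~ sum w_i g(x_i); the four
    # wavefunctions carry exp(-2x^2), so g = exp(x^2) psi_n psi_m psi_k psi_l.
    x, w = hermgauss(npts)
    return np.sum(w*np.exp(x**2)*psi(n, x)*psi(m, x)*psi(k, x)*psi(l, x))

# A resonant quartet (n+m = k+l); analytically C_0000 = 1/sqrt(2 pi) = 0.39894...
print(C(0, 0, 0, 0), C(2, 0, 1, 1))
\end{verbatim}
Only the quartets with $n+m=k+l$ survive in (\ref{ressyst}); the remaining coefficients are discarded together with the oscillatory terms in (\ref{NLSpreres}).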
While we have chosen the simple equation (\ref{NLS1d}) to illustrate the underlying idea, the resonant approximation leads to equations of the form (\ref{ressyst}) in a number of other interesting cases, of course, with different assignments of the interaction coefficients $C_{nmkl}$. We mention here the applications to higher-dimensional nonlinear Schr\"odinger equations in harmonic traps \cite{GHT,GT,BMP,BBCE,GGT}, studies of nonlinear wave equations on spheres and in Anti-de Sitter spacetimes \cite{CF,BHP,BEL} as well as studies of weakly nonlinear gravitational dynamics in Anti-de Sitter spacetimes \cite{FPU,CEV,BMR,islands,AdS4,BEF} in relation to the nonlinear instability of AdS \cite{BR,rev2}. A resonant equation of the form (\ref{ressyst}) has recently been derived as well for an asymptotically Anti-de Sitter wormhole spacetime \cite{resscalar1,resscalar2}, while a general algorithm exists \cite{KG} for constructing spacetimes with resonant structures underlying (\ref{ressyst}). The particular case $C_{nmkl}=1$, known as the cubic Szeg\H o equation, is Lax-integrable and has been studied as an integrable model of turbulence in the mathematical literature \cite{GG}, with powerful results. (We note in passing that perhaps the most familiar spatially confined setting, in which a translationally invariant PDE is compactified on a torus, does not possess the type of resonant spectrum of linearized perturbation frequencies that underlies our studies.) Many of the cases listed above generate resonant systems possessing a rich algebraic structure, including special analytic solutions in the fully nonlinear regime and extra conserved quantities \cite{GHT,BBCE,CF,BHP,BEL, BEF}. This led us to formulate conditions on the interaction coefficients $C_{nmkl}$ that guarantee such special properties, resulting in the large class of partially solvable resonant systems presented in \cite{AO}. Our purpose in this paper is to further explore the properties of this class of systems, and to construct its generalizations. To this end, we start by devising a complex plane representation for the partially solvable resonant systems of \cite{AO}, formulated as an integro-differential equation for the generating function of $\alpha_n$. Such representations have proven very effective, for example, for the cubic Szeg\H o equation \cite{GG} and the LLL equation \cite{GHT,GT}, the latter equation being a member of our partially solvable class.\footnote{We owe some inspiration to Patrick G\'erard, who showed us that the complex plane representation of the conformal flow developed in \cite{CF} can be used to give a lightning-speed proof of the known fact that all Blaschke product functions give rise to stationary states of the conformal flow (which is, again, a particular representative of the partially solvable class of \cite{AO}). Similarly, powerful results for the asymptotics of stationary states at spatial infinity and the distribution of their zeros are derived for the LLL equation in \cite{GGT} relying on a complex plane representation.} After reviewing the material of \cite{AO} and constructing the complex plane representation in section 2, we use this representation, in section 3, to prove a simple closed-form formula for stationary solutions bifurcating from individual modes. In section 4, we present a quintic generalization of the partially solvable resonant systems of \cite{AO}, and of the new material of sections 2 and 3. The reason to look for such a generalization is twofold.
First, it is interesting to see which of the partially solvable features seen for the cubic equation (\ref{ressyst}) are structurally stable and survive a quintic generalization (for instance, there is no known quintic generalization \cite{gerard_quintic} of the solvable structures of the cubic Szeg\H o equation, a Lax-integrable resonant system outside the partially solvable class of \cite{AO}). Second, in physically motivated situations, one often has a field value reflection symmetry that only allows odd powers of the field in the equations of motion. Under such circumstances, if the cubic term is tuned down to zero, the quintic one becomes the dominant nonlinearity that governs the weakly nonlinear dynamics. This is literally possible in trapped Bose-Einstein condensates, where the cubic term can be switched off using Feshbach resonances, leading to the emergence of a quintic nonlinear Schr\"odinger equation \cite{NLSquint}. Finally, in section 5, we present two concrete quintic PDEs of mathematical physics whose resonant systems fit into our setup: first, the quintic nonlinear Schr\"odinger equation in a one-dimensional harmonic trap (recently treated in \cite{fennell}), and second, the quintic conformally invariant wave equation on a two-sphere (which generalizes the cubic considerations on a three-sphere in \cite{CF}). \section{Complex plane representation for cubic resonant systems}\label{sec:cpl} We start by briefly reviewing the main points of \cite{AO}. One defines a large class of resonant systems of the form (\ref{ressyst}) by specifying conditions on the interaction coefficients $C_{nmkl}$ that result in far-reaching analytic properties. To this end, we first introduce a positive real number $G$ (to be specified for each individual resonant system in our class) and define \begin{equation} f_n=\sqrt{\frac{(G)_n}{n!}}, \label{fdef} \end{equation} where $(G)_n\equiv G(G+1)\cdots(G+n-1)$ is the Pochhammer symbol. We furthermore define \begin{equation} \beta_n=\frac{\alpha_n}{f_n},\qquad S_{nmkl}=f_nf_mf_kf_lC_{nmkl}. \label{betaSdef} \end{equation} The key condition of \cite{AO} we shall impose on $C_{nmkl}$ is conveniently stated in terms of $S_{nmkl}$ as \begin{equation} \left(n -1+ G\right)S_{n-1,mkl} + \left(m - 1 + G\right)S_{n,m-1,kl} - (k+1)S_{nm,k+1,l} - (l+1)S_{nmk,l+1} =0. \label{AOdef} \end{equation} If this condition is fulfilled\footnote{We adopt the convention, here and for the rest of the paper, that if any of the indices is negative, the corresponding value of $S$ is zero.} for some specific number $G$ and for all mode number quartets $(n,m,k,l)$ satisfying $n+m-1=k+l$, the following properties of the corresponding resonant system (\ref{ressyst}) are ensured: \begin{itemize} \item The resonant system (\ref{ressyst}) respects the conservation of \begin{equation} Z = \sum_{n=0}^{\infty}\sqrt{(n+1)(n + G)}\,\bar{\alpha}_{n+1} \alpha_n. \label{Zdef} \end{equation} \item There is an invariant manifold of the evolution defined by (\ref{ressyst}) given by \begin{equation} \beta_n = \big( b(t)+n\, a(t)\big) (p(t))^n, \end{equation} where $a(t),b(t),p(t)$ are complex-valued functions of time. \item Within this invariant manifold, the dynamics is Liouville-integrable, and the spectrum $|\beta_n(t)|^2$ is exactly periodic in time. \end{itemize} Such features were first seen in concrete resonant systems arising from PDEs of mathematical physics \cite{CF,BEL,BBCE}, which led to the formulation of the general condition (\ref{AOdef}) from which they follow.
We note that there is a special case that can be seen as a $G\to\infty$ limit, explicitly described in \cite{AO}. This special case amounts to typographically replacing all expressions of the form ``(integer+$G$)'' in the above formulas by 1, and correspondingly replacing $(G)_n$ in (\ref{fdef}) by 1. The resulting structure remains valid, and is in fact physically realized in the Landau level truncations of the resonant approximation to the Gross-Pitaevskii equation \cite{BBCE}. The details can be found in \cite{AO}. For brevity, we shall not be treating this special case here, and will concentrate on generic values of $G$. The finite difference equation (\ref{AOdef}) can be resolved in terms of the generating function for $S_{nmkl}$ and, as shown in \cite{AO}, is equivalent to stating that \begin{equation}\label{S_sol} S(y,z,v,w)=\sum_{n,m,k,l} S_{nmkl}\, y^nz^mv^kw^l=\frac{{\cal F}\left(\ln\left[\frac{(1-vy)(1-wz)}{(1-vz)(1-wy)}\right]\right)}{\left[(1-vy)(1-vz)(1-wy)(1-wz)\right]^{G/2}}, \end{equation} where $\cal F$ is an arbitrary even function, ${\cal F}(x)={\cal F}(-x)$. One can set ${\cal F}(0)=1$ by rescaling the time variable. We note (and this shall be used below) that since the generating function depends only on $vy$, $vz$, $wy$ and $wz$, its power series expansion contains only terms with $n+m=k+l$. Thus summing over all $n,m,k,l$ in the definition of the generating function is completely equivalent to only summing over the resonant quartets $n+m=k+l$. The resonant system (\ref{ressyst}) can be expressed through $\beta_n$ defined by (\ref{betaSdef}) as \begin{equation} i\frac{(G)_n}{n!}\frac{d\beta_n}{dt}=\hspace{-2mm}\sum_{n+m=k+l} \hspace{-2mm}S_{nmkl}\bar\beta_m\beta_k\beta_l. \label{betares} \end{equation} In order to take advantage of (\ref{S_sol}), we introduce the following generating functions for $\beta_n$: \begin{equation} u(t,z)=\sum_{n=0}^\infty \beta_n z^n,\qquad \tilde u(t,z)=\sum_{n=0}^\infty\frac{\bar\beta_n}{z^n}, \label{udef} \end{equation} so that \begin{equation} \beta_n(t)=\frac1{2\pi i}\oint \frac{dz }{z^{n+1}}u(z,t),\qquad \bar\beta_n=\frac1{2\pi i}\oint dz \,z^{n-1}\,\tilde u(z,t) . \label{betaint} \end{equation} Note that the integration contour for $\beta_n$ must not enclose any singularities of $u$, while the integration contour for $\bar\beta_n$ must enclose all singularities of $\tilde u$. The tilde-conjugation can be understood as taking complex conjugates of the values of $u$ on the unit circle, and then analytically continuing away from the unit circle to obtain $\tilde u$. One can also write \begin{equation} \tilde u(z)=\overline{u\left(1/\bar z\right)}. \label{tildeconj} \end{equation} Substituting (\ref{betaint}) into (\ref{betares}), multiplying by $z^n$ and summing over $n$, one gets \begin{equation} \frac{i}{\Gamma(G)}\partial_t\partial_z^{G-1}(z^{G-1} u(t,z))=\frac1{(2\pi i)^3}\oint\frac{ds}s \oint\frac{dv}v \oint\frac{dw}w S(z,s,1/v,1/w)\tilde u(t,s) u(t,v) u(t,w), \label{complexeq} \end{equation} where we have used the fact that the constraint $n+m=k+l$ in the summation can be ignored since the coefficients $S_{nmkl}$ extracted from the generating function $S$ automatically vanish unless this constraint is satisfied. The integration contours for $v$ and $w$ are outside the unit circle but do not enclose any singularities of $u$, while the integration contour for $s$ is inside the unit circle and encloses all singularities of $\tilde u$.
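As a quick consistency check of (\ref{S_sol}), one can expand the generating function for a sample value of $G$ (here with the simplest admissible choice ${\cal F}=1$), extract the coefficients $S_{nmkl}$, and verify the finite difference condition (\ref{AOdef}) quartet by quartet. The following small script is our illustration; the truncation order and the value $G=3/2$ are arbitrary choices.
\begin{verbatim}
# Spot-check: coefficients of the generating function (S_sol) with F = 1
# satisfy the finite difference identity (AOdef) on quartets with n+m-1 = k+l.
import sympy as sp
from itertools import product

y, z, v, w = sp.symbols('y z v w')
G = sp.Rational(3, 2)   # arbitrary sample value of G
S = ((1 - v*y)*(1 - v*z)*(1 - w*y)*(1 - w*z))**(-G/2)

def Sc(n, m, k, l):
    """Taylor coefficient S_nmkl of S; zero by convention for negative indices."""
    if min(n, m, k, l) < 0:
        return sp.Integer(0)
    d = sp.diff(S, y, n, z, m, v, k, w, l).subs({y: 0, z: 0, v: 0, w: 0})
    return d/(sp.factorial(n)*sp.factorial(m)*sp.factorial(k)*sp.factorial(l))

for n, m, k, l in product(range(3), repeat=4):
    if n + m - 1 != k + l:
        continue
    lhs = ((n - 1 + G)*Sc(n - 1, m, k, l) + (m - 1 + G)*Sc(n, m - 1, k, l)
           - (k + 1)*Sc(n, m, k + 1, l) - (l + 1)*Sc(n, m, k, l + 1))
    assert sp.simplify(lhs) == 0
print("(AOdef) holds on all sampled quartets")
\end{verbatim}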
Note that to ensure convergence in the resummation of $S$ according to (\ref{S_sol}), $|v|$ and $|w|$ must be greater than both $|z|$ and $|s|$. The fractional derivative $\partial_z^{G-1}$ is defined by its action on powers of $z$ \begin{equation} \partial_z^a z^b=\frac{\Gamma(b+1)}{\Gamma(b-a+1)}z^{b-a}, \label{fracdef} \end{equation} with $\Gamma$ being the usual Euler $\Gamma$-function. This simple-minded definition goes back to the very origins of fractional calculus \cite{frcalc1,frcalc2}. In general, acting with fractional derivatives on integer powers of $z$ produces fractional powers, which are not single-valued; this is a source of ambiguities in defining fractional calculus. Note, however, that applying $\partial_z^a z^a$ to an analytic function, which is the only operation we need for (\ref{complexeq}), always results in an analytic function, hence no subtleties occur in this case. The way (\ref{fracdef}) enters the complex plane representation is through the relation $\Gamma(n+G)/\Gamma(n+1)=\Gamma(G)\,\,(G)_n/n!$. We note that the differentiation defined by (\ref{fracdef}) can be equally well implemented via a Cauchy-like complex contour formula for the operator $\partial_z^a z^a$ featured in (\ref{complexeq}) acting on a holomorphic function $f(z)$: \begin{equation} \partial_z^a (z^a f(z))=\frac{\Gamma(a+1)}{2\pi i}\oint \frac{s^a\, f(s)\,ds}{(s-z)^{a+1}}. \label{fracCauchy} \end{equation} Indeed, expanding $f(s)$ in terms of integer powers of $s$, we see that (\ref{fracCauchy}) acts on the individual powers in accordance with (\ref{fracdef}), as can be verified by evaluating the residue at $s=\infty$. (Note that, for noninteger $a$, there is a cut in the complex plane, but it only extends from $s=0$ to $s=z$, without affecting the evaluation of the residue at $s=\infty$. The integration contour is defined to lie outside this cut.) A representation of the form (\ref{complexeq}) has been previously obtained for the conformal flow \cite{CF}, which corresponds in our present language to $G=2$ (while the right-hand side integral can be further simplified due to particular factorization properties of $S$ in the case of the conformal flow). Our present derivation has established this representation for the entire class of partially solvable resonant systems of \cite{AO}, of which the conformal flow is a representative. One can substitute (\ref{S_sol}) into (\ref{complexeq}) to obtain explicitly \begin{equation} \frac{i}{\Gamma(G)}\partial_t\partial_z^{G-1}(z^{G-1} u(t,z))=\frac1{(2\pi i)^3}\oint\frac{ds\,dv\,dw}{s\,(vw)^{1-G}} \frac{\tilde u(t,s) u(t,v) u(t,w) \,\,{\cal F}\left(\ln\left[\frac{(v-s)(w-z)}{(v-z)(w-s)}\right]\right)}{\left[(v-s)(v-z)(w-s)(w-z)\right]^{G/2}}. \label{complexeq2} \end{equation} Note the emergence of the cross-ratio $(v-s)(w-z)/(v-z)(w-s)$, which is a conformally invariant combination of the coordinates of four points $(z,s,v,w)$ on the complex plane \cite{appconf}. \section{Stationary states}\label{sec:stst} We shall consider stationary states of the form \begin{equation} \alpha_n= e^{-i\lambda t} A_n \end{equation} that solve (\ref{ressyst}), where $\lambda$ is a real number and $A_n$ are constants. Evidently, because of the resonant structure of (\ref{ressyst}), the single-mode configurations \begin{equation} A_n=0\qquad\mbox{if}\quad n\ne k, \label{singlemode} \end{equation} provide such solutions for any given $k$.
These single-mode solutions supported by mode number $k$ are present in any resonant system of the form (\ref{ressyst}), irrespective of the values of the interaction coefficients $C_{nmkl}$. For the partially solvable resonant systems of \cite{AO} that possess the structures outlined in the previous section, it is possible to go further and give a simple explicit formula for infinite families of stationary solutions bifurcating from each single-mode solution. This section is dedicated to developing these formulas. One particular purpose is to demonstrate the power of the complex plane representation (\ref{complexeq2}) through the way it facilitates the analysis of these stationary states. The existence of the families bifurcating from the solutions (\ref{singlemode}) may be anticipated from the presence of the symmetry generated by the conserved quantity (\ref{Zdef}), which may act on the states (\ref{singlemode}) to produce new solutions. We are not aware, however, of a straightforward way to integrate the infinitesimal transformations generated by (\ref{Zdef}) to finite transformations on the infinite-dimensional configuration space parametrized by $\alpha_n$, and resort to direct analysis of the equations of motion in the form (\ref{complexeq2}). Below, we start by presenting the stationary state bifurcating from mode 0, for which the analysis is particularly simple and allows for a transparent demonstration of the underlying idea. We then proceed with the general consideration for stationary states bifurcating from higher modes. \subsection{Stationary states bifurcating from the lowest mode} We claim that \begin{equation} u(t,z)=\frac{e^{-i\lambda t}}{1-pz} \label{mode0fam} \end{equation} solves (\ref{complexeq}) for any complex value of $p$ (and some $p$-dependent value of $\lambda$). Note that if one sends $p$ to 0, one obtains $u(t,z)=e^{-i\lambda t}$, which does not depend on $z$ and corresponds to $\alpha_{n\ge1}=0$, i.e., to the single-mode stationary states supported by mode 0. Thus, our family (\ref{mode0fam}) bifurcates from mode 0, as anticipated in the title of this subsection. The l.h.s.\ of (\ref{complexeq2}) becomes simply \begin{equation} \frac{\lambda e^{-i\lambda t}}{(1-pz)^G}, \label{mode0lhs} \end{equation} as one can easily see by applying (\ref{fracCauchy}) and evaluating the residue at $1/p$. For the r.h.s.\ of (\ref{complexeq2}), we first note that \begin{equation} \tilde u(t,z)=\frac{e^{i\lambda t}}{1-\bar p/z}. \label{mode0famtilde} \end{equation} Hence, the r.h.s.\ of (\ref{complexeq2}) may be written as \begin{equation} \frac{e^{-i\lambda t}}{(2\pi i)^3}\oint\frac{ds\,dv\,dw}{s\,(vw)^{1-G}} \frac{{\cal F}\left(\ln\left[\frac{(v-s)(w-z)}{(v-z)(w-s)}\right]\right)}{\left[(v-s)(v-z)(w-s)(w-z)\right]^{G/2}}\frac{1}{1-\bar p/s} \frac{1}{1-pv} \frac{1}{1-pw}. \end{equation} Consider first the integral over $v$. There are branch cuts connecting $v=0$, $v=z$ and $v=s$, but all of these branch cuts are inside the integration contour, while the simple pole at $v=1/p$ is outside the contour. Thus, the integral can be evaluated as the residue at $v=1/p$. The same argument applies to the integral over $w$. Implementing these two operations, one gets: \begin{equation} \frac1{(1-pz)^G}\frac{e^{-i\lambda t}}{2\pi i}\oint \frac{ds}{(s-\bar p)(1-sp)^G}. \end{equation} Note that at $v=w=1/p$ the argument of ${\cal F}$ has turned into 0, while we have assumed ${\cal F}(0)=1$ by a choice of the time scale. The function ${\cal F}$ has thus dropped out from our expression at this stage.
As far as the remaining integral over $s$ is concerned, once again, there is a branch cut outside the integration contour, but inside there is only a simple pole, so one can express the result through the residue, obtaining \begin{equation} \frac{1}{(1-|p|^2)^G}\frac{e^{-i\lambda t}}{(1-pz)^G}. \end{equation} This expression for the r.h.s.\ of (\ref{complexeq2}) manifestly matches the l.h.s.\ given by (\ref{mode0lhs}). The equations of motion are thus satisfied by our family of stationary states (\ref{mode0fam}) provided that \begin{equation} \lambda=\frac{1}{(1-|p|^2)^G}. \end{equation} Note that this holds for every value of $G$, and irrespective of the form of the arbitrary function $\cal F$ contained in the definition (\ref{S_sol}) of our class of resonant systems. \subsection{Stationary states bifurcating from higher modes} We now proceed with the stationary solutions bifurcating from mode number $N$. We claim that the relevant generating function $u(t,z)$ satisfies \begin{equation} u(t,z)=e^{-i\lambda t}u(z),\qquad\partial_z^{G-1}(z^{G-1} u(z))=\frac{(\bar p -z)^N}{(1-pz)^{N+G}}. \label{uNder} \end{equation} Indeed, if $p=0$, $\partial_z^{G-1}(z^{G-1} u)$ is proportional to $z^N$, and hence $u$ itself is proportional to $z^N$, i.e., the only nonvanishing $\alpha_n$ is $\alpha_N$. (The above formula, as well as (\ref{mode0fam}), originated as a guess based on numerical experimentation, before being given the analytic proof that we are about to present.) We have specified $u(z)$ through the result of acting on it with $\partial_z^{G-1}z^{G-1}$. What about $u(z)$ itself? We can say that it is of the form \begin{equation} u(z)=\sum_{k=0}^{N} \frac{c_k}{(1-pz)^{k+1}}, \label{formuN} \end{equation} though we are not aware of simple explicit expressions for $c_k$. Differentiation of the individual terms in (\ref{formuN}) follows the rule \begin{equation} \partial_z^{G-1}\frac{z^{G-1} }{(1-pz)^{k+1}}=\frac{\Gamma(G)}{2\pi i}\oint \frac{ds}{(s-z)^G}\frac{s^{G-1} }{(1-ps)^{k+1}}=\frac{\Gamma(G)}{(-p)^{k+1}}\,\partial^{k}_s\frac{s^{G-1} }{(s-z)^G}\Bigg|_{s=1/p}. \end{equation} The last expression is evidently a linear combination of terms of the form $1/(1-pz)^G$, $1/(1-pz)^{G+1}$, ..., $1/(1-pz)^{G+k}$. Hence, $\partial_z^{G-1}(z^{G-1} u(t,z))$ is a linear combination of terms of the form $1/(1-pz)^G$, $1/(1-pz)^{G+1}$, ..., $1/(1-pz)^{G+N}$ with coefficients that are themselves linear combinations of $c_0$, $c_1$, ..., $c_{N}$. By tuning these $N+1$ coefficients, we can make $\partial_z^{G-1}(z^{G-1} u(t,z))$ equal $1/(1-pz)^{G+N}$ times an arbitrary polynomial of degree $N$ in $z$, and in particular, we can make it equal (\ref{uNder}). With these preliminaries, the l.h.s.\ of (\ref{complexeq}) is by construction \begin{equation} \frac{\lambda\,e^{-i\lambda t}}{\Gamma(G)}\frac{(\bar p -z)^N}{(1-pz)^{N+G}}, \label{lhs} \end{equation} and we have to prove that the r.h.s.\ of (\ref{complexeq}), given by \begin{equation} \frac{e^{-i\lambda t}}{(2\pi i)^3}\oint\frac{ds\,dv\,dw}{s\,(vw)^{1-G}} \frac{\tilde u(s) u(v) u(w) \,\,{\cal F}\left(\ln\left[\frac{(v-s)(w-z)}{(v-z)(w-s)}\right]\right)}{\left[(v-s)(v-z)(w-s)(w-z)\right]^{G/2}}, \label{rhsuuu} \end{equation} matches this form. Our proof will proceed in two steps (which can be thought of as lemmas). In step 1, we shall show that (\ref{rhsuuu}) must be of the form \begin{equation} \frac{Q_N(z)\,e^{-i\lambda t}}{(1-pz)^{N+G}}, \label{rhsQ} \end{equation} where $Q_N(z)$ is a polynomial of degree $N$ in $z$.
In step 2, we shall show that (\ref{rhsuuu}) and its first $N-1$ $z$-derivatives must vanish at $z=\bar p$, which means that (\ref{rhsuuu}) has a degree $N$ zero at that point. Combined, these two facts imply that (\ref{rhsuuu}) is proportional to \begin{equation} e^{-i\lambda t}\frac{(\bar p -z)^N}{(1-pz)^{N+G}}. \label{rhsfnl} \end{equation} The coefficient of proportionality simply fixes $\lambda$, and in view of (\ref{lhs}), the equation of motion (\ref{complexeq2}) is satisfied. We now proceed to fill in the details of steps 1 and 2 required to complete the proof. In handling (\ref{rhsuuu}) below, we shall suppress the factor $e^{-i\lambda t}$ which is common to the entire expression (\ref{rhsuuu}) and already matches (\ref{lhs}).\vspace{2mm} \noindent{\bf Step 1.} With (\ref{formuN}) in mind, the integrals over $v$ and $w$ in (\ref{rhsuuu}) can be evaluated in terms of residues as a linear combination of terms of the form \begin{equation} \frac1{2\pi i}\oint\frac{ds}s \tilde u(s)\,\,\partial_v^k\partial_w^l\Bigg(\frac{{\cal F}\left(\ln\left[\frac{(v-s)(w-z)}{(v-z)(w-s)}\right]\right)}{(vw)^{1-G}\left[(v-s)(v-z)(w-s)(w-z)\right]^{G/2}}\Bigg)\Bigg|_{v,w=1/p} \label{rhskl} \end{equation} with \begin{equation} 0\le k,l\le N. \label{klN} \end{equation} Now, for any $\cal G$ that depends on the indicated argument, \begin{equation} \partial_v{\cal G}\left(\ln\left[\textstyle\frac{(v-s)(w-z)}{(v-z)(w-s)}\right]\right)=\left(\frac{1}{v-s}-\frac{1}{v-z}\right){\cal G\,}'\left(\ln\left[\textstyle\frac{(v-s)(w-z)}{(v-z)(w-s)}\right]\right). \label{diffG} \end{equation} Applying such differentiations recursively, each $v$-derivative may produce either one extra factor of $1/(v-z)$ if it acts on the numerator in (\ref{rhskl}) or on the factor $1/(v-z)^{G/2}$, or it may produce one extra factor of $1/(v-s)$ in a similar manner, or it may simply produce extra factors of $v$ if it acts on $1/v^{1-G}$ (such factors will be expressed through $p$ only after one substitutes $v=1/p$ at the end, and are irrelevant for our present argument). Furthermore, in the process of differentiation, $\cal F$ may change into another function of the same argument. The situation with $w$-differentiations is, of course, directly parallel. We denote the number of extra factors of $1/(v-z)$ generated through such differentiations as $k_1$, the number of extra factors $1/(v-s)$ as $k_2$, the number of extra factors of $1/(w-z)$ as $l_1$, and the number of factors $1/(w-s)$ as $l_2$. Evidently, from the above description of differentiations, \begin{equation} k_1+k_2\le k\quad\mbox{and}\quad l_1+l_2\le l. \label{k1k2k} \end{equation} One concludes that $\partial_v^k\partial_w^l(\ldots)$ in (\ref{rhskl}) consists of terms of the form \begin{equation} \frac{{\cal G}\left(\ln\left[\textstyle\frac{(v-s)(w-z)}{(v-z)(w-s)}\right]\right)}{(v-z)^{k_1+G/2}(v-s)^{k_2+G/2}(w-z)^{l_1+G/2}(w-s)^{l_2+G/2}}, \end{equation} where we have omitted the $(s,z)$-independent prefactor, and $\cal G$ is some function expressed through $\cal F$ and its derivatives. Once $v=w=1/p$ have been substituted, the argument of $\cal G$ turns into 0, so it is just a number. At the end, once again ignoring $(s,z)$-independent factors, (\ref{rhskl}) is written as a linear combination of terms of the form \begin{equation} \frac{1}{(1-pz)^{G+k_1+l_1}}\frac1{2\pi i}\oint \frac{ \tilde u(s)\,ds}{s\,(1-ps)^{G+k_2+l_2}}.
\label{k1k2l1l2} \end{equation} Now, with $\sigma=1/s$, \begin{equation} \frac1{2\pi i}\oint ds \frac{\tilde u(s)}{s\,(1-ps)^{G+k_2+l_2}}=\frac1{2\pi i}\oint d\sigma\frac{\sigma^{G+k_2+l_2-1} \tilde u(1/\sigma)}{(\sigma-p)^{G+k_2+l_2}}. \end{equation} But $\sigma^{k_2+l_2}/(\sigma-p)^{k_2+l_2}$ can be written as a linear combination of terms of the form $ 1/(\sigma-p)^{m}$ with $0\le m\le k_2+l_2$. Hence, (\ref{k1k2l1l2}) can be written as a linear combination of \begin{equation} \frac1{2\pi i}\oint d\sigma\frac{\sigma^{G-1} \tilde u(1/\sigma)}{(\sigma-p)^{G+m}}=\frac1{\Gamma(G+m)}\partial_\sigma^{G+m-1}\left(\sigma^{G-1} \tilde u(1/\sigma)\right)\Big|_{\sigma=p}. \label{usigmares} \end{equation} By (\ref{tildeconj}), $\tilde u(1/\sigma)$ is exactly the same as $u(z)$ with $z$ replaced by $\sigma$ and $p$ replaced by $\bar p$. Therefore, from (\ref{uNder}), \begin{equation} \partial_\sigma^{G-1}\left(\sigma^{G-1}\tilde u(1/\sigma)\right)=\frac{(p -\sigma)^N}{(1-\bar p\sigma)^{N+G}}, \end{equation} which evidently has a degree $N$ zero at $\sigma=p$. Hence, at least $N$ further differentiations ($m \ge N$) must be applied in (\ref{usigmares}) in order for the result to be nonzero. So (\ref{k1k2l1l2}) can only be nonvanishing if $k_2+l_2\ge N$. But by (\ref{klN}) and (\ref{k1k2k}) this implies $k_1+l_1\le N$. Thus, after evaluating the $s$-integral in (\ref{k1k2l1l2}), the result is a linear combination of $1/(1-pz)^{G+n}$ with $0\le n\le N$. Since all contributions to (\ref{rhsuuu}) are in the form (\ref{k1k2l1l2}), this property is inherited by (\ref{rhsuuu}), and hence the latter must then be expressible as (\ref{rhsQ}).\vspace{2mm} \noindent{\bf Step 2.} To pin down $Q_N(z)$ in (\ref{rhsQ}), we shall compute the $k$th $z$-derivative of (\ref{rhsuuu}) at $z=\bar p$, and we shall start by performing the $s$-integral in (\ref{rhsuuu}). The structure of the argument is rather similar to step 1, but with the roles of $(z,s)$ and $(v,w)$ interchanged. The $s$-integral is evaluated through the residues of $\tilde u$. The following expression for $\tilde u$ follows from (\ref{formuN}): \begin{equation} \tilde u(s)=\sum_{l=0}^{N} \frac{\bar c_l}{(1-\bar p/s)^{l+1}}=s\sum_{l=0}^{N} \frac{d_l}{(s-\bar p)^{l+1}}, \label{formutild} \end{equation} where $d_l$ are certain linear combinations of $\bar c_l$ with $\bar p$-dependent coefficients.\footnote{The explicit expression for $d_l$ is irrelevant for the purposes of our argument. For any fixed $N$, it is easily derived by multiplying the numerator and denominator of ${\bar c_l}/(1-\bar p/s)^{l+1}$ by $s^{l+1}$ and then expanding ${s^l}/(s-\bar p)^{l+1}$ in terms of $1/(s-\bar p)$, $1/(s-\bar p)^2$,... , $1/(s-\bar p)^{l+1}$. For instance, at $N=1$, $d_0=\bar c_0+\bar c_1$ and $d_1=\bar p\bar c_1$.} Then the $k$th $z$-derivative of (\ref{rhsuuu}) at $z=\bar p$ consists of terms of the form \begin{equation} \frac1{(2\pi i)^2}\oint\frac{dv\,dw}{(vw)^{1-G}} \,u(v) u(w)\partial_z^k\partial_s^l\Bigg( \frac{{\cal F}\left(\ln\left[\frac{(v-s)(w-z)}{(v-z)(w-s)}\right]\right)}{\left[(v-s)(v-z)(w-s)(w-z)\right]^{G/2}}\Bigg)\Bigg|_{z,s=\bar p} . \end{equation} The $z$- and $s$-differentiations are performed in a manner directly parallel to the $v$- and $w$-differentiations in step 1.
One gets a collection of terms of the form \begin{equation} \frac1{(2\pi i)^2}\oint\frac{dv}v \oint\frac{dw}w \frac{u(v) u(w)}{v^{k_1+l_1}w^{k_2+l_2}(1-\bar p/v)^{G+k_1+l_1}(1-\bar p/w)^{G+k_2+l_2}}, \end{equation} with $k_1+k_2=k$ and $l_1+l_2=l$, which can be rewritten as \begin{equation} \partial_v^{G-1+k_1+l_1}(v^{G-1}u(v))\, \partial_w^{G-1+k_2+l_2}(w^{G-1}u(w))\Big|_{v,w=\bar p}. \end{equation} But by construction $\partial_z^{G-1}(z^{G-1}u(z))$ has a degree $N$ zero at $z=\bar p$. Therefore, at least $N$ extra differentiations must be applied in both factors above in order to make the result nonvanishing, i.e., $k_1+l_1\ge N$ and $k_2+l_2\ge N$, while $l_1+l_2=l\le N$ from (\ref{formutild}), and hence $k=k_1+k_2\ge N$. In other words, if we apply fewer than $N$ $z$-derivatives to (\ref{rhsuuu}) and evaluate the result at $z=\bar p$, all the terms identically vanish. That means that (\ref{rhsuuu}) has a degree $N$ zero at $z=\bar p$. Since we already know that it is of the form (\ref{rhsQ}), it must be proportional to (\ref{rhsfnl}), completing our proof that (\ref{uNder}) satisfies the equations of motion (\ref{complexeq2}). Note that the specific form of the arbitrary function $\cal F$ defining our resonant system does not affect the form of the stationary solutions (\ref{uNder}), though it may affect the relation between $\lambda$ and $p$. \section{A quintic generalization} In this section, we address the question of which of the properties of cubic resonant systems we have previously described generalize to the quintic analog of (\ref{ressyst}) given by \begin{equation} i\,\frac{d\alpha_n}{dt}=\hspace{-4mm}\sum_{n+n_2+n_3=k_1+k_2+k_3} \hspace{-4mm}C_{nn_2n_3k_1k_2k_3}\bar\alpha_{n_2}\bar\alpha_{n_3}\alpha_{k_1}\alpha_{k_2}\alpha_{k_3}. \label{res5} \end{equation} We shall see that most (but not all) of the properties discussed above have a natural generalization, provided that a suitable constraint is imposed on $C$. This is in contrast, for example, to the cubic Szeg\H o equation, whose integrability properties have no known quintic generalization \cite{gerard_quintic}. We start by asking whether the conservation of $Z$ given by (\ref{Zdef}) can be ensured for some value of $G$. By analogy with (\ref{betaSdef}), define \begin{equation} S_{n_1n_2n_3k_1k_2k_3}=f_{n_1}f_{n_2}f_{n_3}f_{k_1}f_{k_2}f_{k_3}C_{n_1n_2n_3k_1k_2k_3} \label{Squintdef} \end{equation} with $f_n$ given by (\ref{fdef}). Note that $C$, and hence $S$, are symmetric under any permutations of $(n_1,n_2,n_3)$, any permutations of $(k_1,k_2,k_3)$ and under interchange of these two groups of indices. One can then check that (\ref{Zdef}) is indeed conserved by (\ref{res5}) provided that \begin{align}\label{quintid} &(n-1+G)S_{(n-1)miklj} +(m-1+G)S_{n(m-1)iklj} + (i-1+G)S_{nm(i-1)klj}\\ &\hspace{3cm}\nonumber - (k+1)S_{nmi(k+1)lj} - (l+1)S_{nmik(l+1)j} - (j+1)S_{nmikl(j+1)}=0 \end{align} for all $(n,m,i,j,k,l)$ satisfying $n+m+i-k-l-j = 1$. The proof is an immediate generalization of the corresponding derivation for the cubic case given in section 3 of \cite{AO}. One simply differentiates (\ref{Zdef}) with respect to $t$ and applies (\ref{res5}). We note that while the conservation of (\ref{Zdef}) generalizes to the quintic case, we are not aware of a similar generalization of the invariant manifolds and the corresponding analytic solutions mentioned for the cubic case under (\ref{Zdef}).
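The identity (\ref{quintid}) can be spot-checked numerically in the same way as its cubic counterpart once a concrete solution is at hand. Anticipating the ${\cal F}=1$ case of the generating function (\ref{genquint}) constructed in the next subsection, the following sketch of ours (with an arbitrary sample $G$ and a low truncation; the computation is slow but straightforward) verifies (\ref{quintid}) on low-lying index sextuples:
\begin{verbatim}
# Spot-check of (quintid) for S = [prod_{i,j} (1 - z_i s_j)]^(-G/3),
# the F = 1 case of the quintic generating function derived below.
import sympy as sp
from itertools import product

zs = sp.symbols('z1 z2 z3')
ss = sp.symbols('s1 s2 s3')
allv = list(zs) + list(ss)
G = sp.Rational(3, 2)   # arbitrary sample value of G
S = sp.Mul(*[(1 - zi*sj) for zi in zs for sj in ss])**(-G/3)

def Sc(idx):
    """Taylor coefficient S_{n1 n2 n3 k1 k2 k3}; zero for negative indices."""
    if min(idx) < 0:
        return sp.Integer(0)
    e = S
    for var, order in zip(allv, idx):
        e = sp.diff(e, var, order)
    e = e.subs({var: 0 for var in allv})
    return e/sp.Mul(*[sp.factorial(o) for o in idx])

for n, m, i, k, l, j in product(range(2), repeat=6):
    if n + m + i - k - l - j != 1:
        continue
    lhs = ((n - 1 + G)*Sc((n - 1, m, i, k, l, j))
           + (m - 1 + G)*Sc((n, m - 1, i, k, l, j))
           + (i - 1 + G)*Sc((n, m, i - 1, k, l, j))
           - (k + 1)*Sc((n, m, i, k + 1, l, j))
           - (l + 1)*Sc((n, m, i, k, l + 1, j))
           - (j + 1)*Sc((n, m, i, k, l, j + 1)))
    assert sp.simplify(lhs) == 0
print("(quintid) holds on all sampled sextuples")
\end{verbatim}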
On the other hand, the generating function (\ref{S_sol}) and the stationary solutions (\ref{mode0fam}) and (\ref{uNder}) do have natural quintic counterparts, as we shall proceed to demonstrate. \subsection{The generating function} We would like to convert the condition (\ref{quintid}) into an explicit solution for the generating function \begin{equation} S(z_1,z_2,z_3,s_1,s_2,s_3) = \sum_{n_i,k_j=0}^{\infty} S_{n_1n_2n_3k_1k_2k_3} z_1^{n_1} z_2^{n_2}z_3^{n_3}s_1^{k_1}s_2^{k_2}s_3^{k_3}. \label{quintgendef} \end{equation} From (\ref{quintid}), one must have \begin{equation} \sum_{i=1}^3\left[ z_i^2\partial_{z_i}+Gz_i-\partial_{s_i}\right]S=0. \label{Seq1} \end{equation} By the symmetries of $S_{n_1n_2n_3k_1k_2k_3}$, this also implies \begin{equation} \sum_{i=1}^3\left[ s_i^2\partial_{s_i}+Gs_i-\partial_{z_i}\right]S=0. \label{Seq2} \end{equation} These two equations are only compatible if the commutator of the two operators in square brackets annihilates $S$ as well, giving \begin{equation} \sum_{i=1}^3\left[ z_i\partial_{z_i}-s_i\partial_{s_i}\right] S=0. \label{Seq3} \end{equation} Equations (\ref{Seq1}-\ref{Seq3}) can be solved using the method of characteristics, as in \cite{AO}, to yield three alternative representations for $S$: \begin{eqnarray} S & = & \frac{A\left(s_1 - \frac{1}{z_1},s_1 - \frac{1}{z_2},s_1 - \frac{1}{z_3},s_1 - s_2, s_1 - s_3\right)}{(z_1z_2z_3)^G}\\ S & = & \frac{B\left(z_1 - z_2, z_1 - z_3, z_1 - \frac{1}{s_1}, z_1 - \frac{1}{s_2}, z_1 - \frac{1}{s_3}\right)}{(s_1s_2s_3)^G}\\ S & = & C(z_1s_1,z_1s_2,z_1s_3,z_2s_1,z_2s_2,z_2s_3,z_3s_1,z_3s_2,z_3s_3), \end{eqnarray} where $A$, $B$ and $C$ are arbitrary functions. These can be rewritten as \begin{eqnarray} S & = & \frac{\tilde A\left(s_1 - \frac{1}{z_1},s_1 - \frac{1}{z_2},s_1 - \frac{1}{z_3},s_1 - s_2, s_1 - s_3\right)}{\Big[\prod_{i,j=1}^3 (1-z_is_j)\Big]^{G/3}}\label{Stilde1}\\ S & = & \frac{\tilde B\left(z_1 - z_2, z_1 - z_3, z_1 - \frac{1}{s_1}, z_1 - \frac{1}{s_2}, z_1 - \frac{1}{s_3}\right)}{\Big[\prod_{i,j=1}^3 (1-z_is_j)\Big]^{G/3}}\\ S & = &\frac{\tilde C(z_1s_1,z_1s_2,z_1s_3,z_2s_1,z_2s_2,z_2s_3,z_3s_1,z_3s_2,z_3s_3)}{\Big[\prod_{i,j=1}^3 (1-z_is_j)\Big]^{G/3}}.\label{Stilde3} \end{eqnarray} One can easily verify that the ratio of $A$ and $\tilde A$ is expressible through the arguments of $A$ (which are the same as the arguments of $\tilde A$), and similarly for $B$ and $C$. From (\ref{Stilde1}-\ref{Stilde3}), $\tilde A$, $\tilde B$ and $\tilde C$ are the same function, which should be expressible in three different ways through the indicated arguments. Furthermore, $\tilde A=\tilde B=\tilde C$ must have a regular power series expansion around $z_i=s_i=0$ in order to provide a legitimate generating function of the form (\ref{quintgendef}). While the denominator of (\ref{Stilde1}-\ref{Stilde3}) has such an expansion, and for $\tilde C$ one simply needs regularity around the zeros of its arguments, the presence of $1/z_i$ and $1/s_i$ in the arguments of $\tilde A$ and $\tilde B$ requires imposing further constraints on these functions in order for the power series expansion of $S$ to exist. Consider $\tilde A$ first. By writing $s_1-1/z_1-(s_1-s_2)=s_2-1/z_1$ and so on, we may express $\tilde A$ as a function of nine arguments $s_i-1/z_k$ for all possible values of $i$ and $k$. Of course, only five of these new arguments are independent, but the advantage is that the expression becomes more symmetric with respect to permutations of $z_i$ and $s_i$. 
Now, we need ${\tilde A}(s_i-1/z_k)$ to have a regular power series expansion near $z_i=s_i=0$, which corresponds to infinite values of all of its nine arguments. It is hence more convenient to express $\tilde A$ through $-1/(s_i-1/z_k)=z_k/(1-z_ks_i)$, so that $z_i=s_i=0$ corresponds to zero values of the new arguments. Since we know from (\ref{Stilde3}) that $\tilde A$ must be expressible through $z_is_j$ only, it must depend on the ratios of $z_k/(1-z_ks_i)$ and $z_k/(1-z_ks_j)$, in which $z_k$ cancels out. Thus, $\tilde A$ is a function of the ratios $(1-z_ks_i)/(1-z_ks_j)$. By a similar argument, $\tilde B$ is a function of the ratios $(1-z_is_k)/(1-z_js_k)$. The combinations of $z_i$ and $s_i$ that are expressible through both the arguments of $\tilde A$ and those of $\tilde B$ are \begin{equation} u_{ijkl}=\frac{(1-z_is_k)(1-z_js_l)}{(1-z_is_l)(1-z_js_k)}. \label{crs} \end{equation} These can be recognized as (a subset of) the conformally invariant cross-ratios \cite{appconf} of the six points $(z_1,z_2,z_3,1/s_1,1/s_2,1/s_3)$ on the complex plane. The generating function takes the form \begin{equation} S(z_i,s_i)=\frac{{\cal F}(\ln (u_{ijkl}))}{\Big[\prod_{n,m=1}^3 (1-z_ns_m)\Big]^{G/3}}, \label{genquint} \end{equation} where $\cal F$ is an arbitrary function, and the logarithm is introduced in the definition to make it easier to account for the symmetries of $S$, as permutations of $z_i$ and $s_i$ act particularly straightforwardly on the logarithms of the cross-ratios (\ref{crs}). The symmetries of $S$, of course, translate into a set of symmetries of $\cal F$, but we will not need to characterize them explicitly for our present purposes. \subsection{Complex plane representation and stationary states} With the generating function (\ref{genquint}), the considerations of sections \ref{sec:cpl} and \ref{sec:stst} directly generalize to the quintic case. One first introduces $\beta_n$ as in (\ref{betaSdef}) and the corresponding generating functions $u$ and $\tilde u$ as in (\ref{udef}-\ref{betaint}). Then, (\ref{res5}) is converted to the corresponding equation in terms of the generating functions, analogous to (\ref{complexeq}): \begin{align} &\frac{i}{\Gamma(G)}\partial_t\partial_z^{G-1}(z^{G-1} u(t,z))= \frac1{(2\pi i)^5}\oint\frac{dz_2}{z_2} \oint\frac{dz_3}{z_3}\oint\frac{ds_1}{s_1} \oint\frac{ds_2}{s_2} \oint\frac{ds_3}{s_3} \label{complexeq5} \\ &\hspace{3cm}\times S(z,z_2,z_3,1/s_1,1/s_2,1/s_3)\,\, \tilde u(t,z_2)\tilde u(t,z_3) u(t,s_1) u(t,s_2) u(t,s_3),\nonumber \end{align} with $S$ given by (\ref{genquint}). As the generating function (\ref{genquint}) is very similar to (\ref{S_sol}), the structure of equation (\ref{complexeq5}) is directly parallel to (\ref{complexeq}) and (\ref{complexeq2}). In particular, there are solutions in the form of stationary states \begin{equation} u(t,z)=e^{-i\lambda t}u(z),\qquad\partial_z^{G-1}(z^{G-1} u(z))=\frac{(\bar p -z)^N}{(1-pz)^{N+G}} \label{ststquint} \end{equation} for any nonnegative integer $N$ and any complex number $p$. The proof simply retraces the steps of section \ref{sec:stst}. Note that because $\cal F$ in (\ref{genquint}) only depends on the conformal cross-ratios, there will be a direct analog of the differentiation formula (\ref{diffG}), which plays a key role in the derivation. \subsection{$G\to\infty$ limit} We have already remarked in section \ref{sec:cpl} that there is a special $G\to\infty$ limit of our construction for the cubic case, which has been discussed in detail in \cite{AO}.
A similar limit exists for the quintic generalization of the current section. Namely, instead of (\ref{fdef}), one defines \begin{equation} f^\infty_n=\frac1{\sqrt{n!}}, \end{equation} which is used instead of $f_n$ to define $\beta_n$ as in (\ref{betaSdef}) and $S$ as in (\ref{Squintdef}), \begin{equation} S_{n_1n_2n_3k_1k_2k_3}=\frac{C_{n_1n_2n_3k_1k_2k_3}}{\sqrt{n_1!\,n_2!\,n_3!\,k_1!\,k_2!\,k_3!}}. \label{Squintinf} \end{equation} Then, instead of (\ref{quintid}), one imposes \begin{align}\label{idinf} &S_{(n-1)miklj} +S_{n(m-1)iklj} + S_{nm(i-1)klj}\\ &\hspace{3cm}\nonumber - (k+1)S_{nmi(k+1)lj} - (l+1)S_{nmik(l+1)j} - (j+1)S_{nmikl(j+1)}=0 \end{align} for all sets of indices satisfying $n+m+i=k+l+j+1$, which guarantees the conservation of \begin{equation} Z = \sum_{n=0}^{\infty}\sqrt{n+1}\,\bar{\alpha}_{n+1} \alpha_n, \label{Zinf} \end{equation} and the presence of associated symmetries. One could pursue a construction of stationary states bifurcating from individual modes for this case, as in section \ref{sec:stst}, but there is a shortcut available that makes it unnecessary. For the special $G\to\infty$ limit, unlike for general values of $G$, the finite form of symmetry transformations generated by (\ref{Zinf}) is known explicitly. Such transformations have appeared under the name of magnetic translations in the literature on the Lowest Landau Level approximation for Bose-Einstein condensates \cite{GHT}. In terms of the generating function \begin{equation} u(t,z)=\sum_{n=0}^\infty \frac{\alpha_n(t)\,z^n}{\sqrt{n!}}, \end{equation} applying the transformation \begin{equation} u(t,z)\mapsto u(t,z-\bar p)\,e^{pz-|p|^2/2} \label{inftrans} \end{equation} maps solutions of (\ref{res5}) with the interaction coefficients satisfying (\ref{Squintinf}-\ref{idinf}) into solutions. Applying these transformations to the generating functions of single-mode solutions, $u\sim z^N$, gives the $G\to\infty$ analogs of the stationary states (\ref{ststquint}). \section{Quintic PDEs with partially solvable resonant systems} We shall now present two examples of quintic PDEs of mathematical physics whose resonant systems fall in the partially solvable class we have outlined and benefit from the structures developed in our derivations. The first example is the quintic one-dimensional nonlinear Schr\"odinger equation in a harmonic trap. The resonant system for this equation has been previously studied in mathematical literature \cite{fennell}, while motivations to consider quintic nonlinearities from the standpoint of Bose-Einstein condensate physics are given in \cite{NLSquint}. Our purpose is to clarify how this case fits in our framework. The second example, which is the resonant system of the quintic conformally invariant wave equation on a two-sphere, is novel. (Additionally, in the appendix, we give a few extra quintic resonant systems that fall into our special class, obtained by directly generalizing known cubic examples, rather than deriving them from concrete quintic PDEs.) \subsection{Quintic nonlinear Schr\"odinger equation on R$^1$} Mathematically rigorous considerations of the resonant approximation to the quintic one-dimensional nonlinear Schr\"odinger equation in a harmonic trap are given in \cite{fennell}. In a nutshell, one starts with the quintic analog of the nonlinear Schr\"odinger equation (\ref{NLS1d}) given by \begin{equation} i\,\frac{\partial \Psi}{\partial t}=\frac12\left(-\frac{\partial^2}{\partial x^2}+x^2\right)\Psi +g|\Psi|^4\Psi
\label{NLS1d5} \end{equation} and performs manipulations identical to (\ref{NLS1dlin}-\ref{NLSpreres}) to obtain a resonant system of the form (\ref{res5}) with the interaction coefficients given by \begin{equation} C_{nmiklj} = \frac{1}{2^{n+m+i}\sqrt{n!m!i!k!l!j!}}\int_{-\infty}^{\infty} dx\, e^{-3 x^2} H_n H_m H_i H_k H_l H_j. \label{intcoeffNLS1} \end{equation} Here, $H_n$ are the Hermite polynomials. These interaction coefficients satisfy (\ref{Squintinf}-\ref{idinf}), and thus the resonant system belongs to the special $G\to\infty$ limit of our class. To prove that (\ref{intcoeffNLS1}) satisfies (\ref{Squintinf}-\ref{idinf}), recall the following identities for the Hermite polynomials \begin{align} & H_{n+1} = 2x H_n - \partial_x H_n, \label{eq:_id_1_H}\\ &\partial_x H_n = 2n H_{n-1}, \label{eq:_id_2_H}\\ &H_{n+1} = 2x H_n - 2n H_{n-1}. \label{eq:_id_3_H} \end{align} Omitting the irrelevant numerical prefactor and remembering that $n+m+i=k+l+j+1$, we obtain for the left-hand side of (\ref{idinf}) \begin{equation*} \int_{-\infty}^{\infty} dx\, e^{-3 x^2} [ 2nH_{n-1}H_m H_i H_k H_l H_j + 2m H_{n}H_{m-1} H_i H_k H_l H_j + 2i H_{n}H_m H_{i-1} H_k H_l H_j - \end{equation*} \begin{equation*} - H_n H_mH_i H_{k+1}H_lH_j - H_n H_mH_i H_{k}H_{l+1}H_j - H_n H_mH_i H_{k}H_lH_{j+1}] \end{equation*} \begin{equation*} =\int_{-\infty}^{\infty} dx\, e^{-3 x^2} [ \partial_x \left(H_nH_mH_i\right) H_kH_lH_j - 6x H_n H_mH_iH_kH_lH_j + H_n H_mH_i \partial_x\left(H_{k}H_lH_{j}\right)] \end{equation*} \begin{equation*} = \int_{-\infty}^{\infty} dx\, e^{-3 x^2} [ \partial_x \left(H_nH_mH_iH_kH_lH_j\right) - 6x H_n H_mH_iH_kH_lH_j ] \end{equation*} \begin{equation*} =\int_{-\infty}^{\infty} dx\, \partial_x[e^{-3 x^2} H_nH_mH_iH_kH_lH_j ] = e^{-3 x^2} H_n(x)H_m(x)H_i(x)H_k(x)H_l(x)H_j(x) \Big{|}_{-\infty}^{\infty} = 0. \end{equation*} Thus, (\ref{idinf}) is satisfied and (\ref{Zinf}) is conserved. Stationary solutions can be straightforwardly constructed using the transformation (\ref{inftrans}). \subsection{Conformally invariant quintic wave equation on S$^2$} We now turn to the conformally invariant quintic wave equation on a two-sphere, where the structures we display are novel. A cubic precursor of these structures is the conformal flow originating from the cubic wave equation on a three-sphere and studied in \cite{CF,BHP}. One of the questions that has triggered our present study of quintic nonlinearities is whether the properties of the conformal flow generalize to the quintic case. The existence of such a generalization is by no means guaranteed. Consider the (2+1)-dimensional Einstein cylinder $\mathbb{R}\times \mathbb{S}^2$ with the metric \begin{equation}\label{metric} g= -dt^2 + d\vartheta^2+\sin^2{\vartheta} d\varphi^2 \end{equation} and put on it a real scalar field $\phi$ satisfying the conformally invariant quintic wave equation\footnote{Note that solutions to (\ref{eq-conf}) satisfying $\phi=0$ at the equator of the two-sphere can be mapped by a standard conformal transformation to solutions for a conformally invariant quintic wave equation on the three-dimensional Anti-de Sitter spacetime with the Dirichlet boundary conditions at infinity. This map connects our equation to topics of the much-studied AdS$_3$/CFT$_2$-correspondence.} \begin{equation}\label{eq-conf} \left(\square_g -\frac{1}{8} R(g)\right) \phi -\phi^5 =0\,, \end{equation} where $\square_g:=g^{\mu\nu}\nabla_{\mu}\nabla_{\nu}$ and $R(g)$ are the wave operator and the Ricci scalar associated with $g$.
We introduce $x=\cos{\vartheta}$ and impose rotational symmetry by assuming that $\phi=f(t,x)$. Then, equation \eqref{eq-conf} reduces to \begin{equation}\label{eq} f_{tt}-\partial_x \left((1-x^2) f_x\right) + \frac{1}{4} f + f^5 =0\,. \end{equation} Decomposing small (weakly nonlinear) solutions with amplitudes of order $\varepsilon$ into Legendre polynomials satisfying the standard normalization condition $\int_{-1}^1 P_n\,P_m\,dx=2\delta_{nm}/(2n+1)$ as \begin{equation} f(t,x)=\varepsilon \sum\limits_{n=0}^{\infty} c_n(t) P_n(x), \end{equation} we obtain from \eqref{eq} an infinite system of coupled oscillators \begin{equation}\label{fourier} \frac{d^2 c_n}{dt^2} + \omega_n^2 c_n = -\omega_n\varepsilon^4\,\sum\limits_{jklmi} C_{njklmi}\, c_j c_k c_l c_m c_i, \end{equation} where $\omega_n=n+\frac{1}{2}$ and \begin{equation}\label{C} C_{njklmi}=\int_{-1}^{1} P_n(x) P_j(x) P_k(x) P_l(x) P_m(x) P_i(x) \,dx. \end{equation} We now develop a resonant approximation to (\ref{fourier}) at small $\varepsilon$, based on the standard time-averaging techniques \cite{murdock}. One first introduces the complex normal mode amplitudes $\alpha_n$, which will be our new dynamical variables \begin{equation} c_n(t)=\alpha_n(t) e^{-i\omega_n t}+ \bar\alpha_n(t) e^{i\omega_n t},\qquad \frac{d c_n(t)}{dt}=-i\omega_n\left(\alpha_n(t) e^{-i\omega_n t}- \bar\alpha_n(t) e^{i\omega_n t}\right). \end{equation} Substituting this into (\ref{fourier}), one obtains schematically \begin{equation} i\omega_n \frac{d\alpha_n}{dt}=\omega_n\varepsilon^4\sum C_{njklmi}\, \overset{\sssty ?}{\alpha}_j \overset{\sssty ?}{\alpha}_k \overset{\sssty ?}{\alpha}_l \overset{\sssty ?}{\alpha}_m \overset{\sssty ?}{\alpha}_i\, e^{i\Omega_{njklmi}t}, \label{CF5preres} \end{equation} where $\overset{\sssty ?}{\alpha}_n$ may be either $\alpha_n$ or $\bar\alpha_n$ (all such choices must be summed over) and \begin{equation} \Omega_{njklmi}=\omega_n\pm \omega_j\pm \omega_k\pm \omega_l\pm \omega_m\pm \omega_i, \label{Enjklmi} \end{equation} where the plus signs are chosen if the corresponding $\overset{\sssty ?}{\alpha}_n$ occurs as $\bar\alpha_n$, and the minus signs if it occurs as $\alpha_n$. By the standard lore of time-averaging \cite{murdock}, at small $\varepsilon$, nonresonant interactions corresponding to nonzero $\Omega_{njklmi}$ may be dropped from (\ref{CF5preres}). Keeping only the resonant couplings satisfying $\Omega_{njklmi}=0$ results in an accurate approximation on time scales $1/\varepsilon^4$ for small $\varepsilon$, and amounts to implementing the resonant approximation. Before we state the final form of our quintic resonant system, it is important to point out an additional simplification that occurs for our specific case. There are many possible choices of signs in (\ref{Enjklmi}), but it turns out that the interaction coefficients (\ref{C}) vanish unless there are exactly three plus signs and three minus signs in total. This is analogous to the selection rules that have been extensively studied in the context of resonant systems in Anti-de Sitter spacetime \cite{CEV,Yang,EN}. We now sketch a proof of this claim. Consider first the case where (\ref{Enjklmi}) has one plus sign and five minus signs. Then, the resonant condition is $\Omega_{njklmi}=\omega_n-\omega_j- \omega_k- \omega_l- \omega_m- \omega_i=0$, which means that $n=j+k+l+m+i+2$.
But then the degree of $P_n$ in (\ref{C}) is higher than the total degree of the polynomial $P_j P_k P_lP_mP_i$, and hence the corresponding $C$ vanishes by orthogonality of the Legendre polynomials and does not contribute to (\ref{CF5preres}). Consider now the case where (\ref{Enjklmi}) has two plus signs and four minus signs. By the index permutation symmetry of $C$, the two plus signs can be associated with the indices $n$ and $j$, so that the resonant condition is $\Omega_{njklmi}=\omega_n+\omega_j- \omega_k- \omega_l- \omega_m- \omega_i=0$, which means $n+j=k+l+m+i+1$. But the Legendre polynomials satisfy $P_n(-x)=(-1)^nP_n(x)$. Hence, $P_nP_j P_k P_lP_mP_i$ is reflection-odd, and its integral from $-1$ to $1$ in (\ref{C}) is zero. The cases with four and five plus signs reduce to the previous two cases after renaming the indices. Thus, the only resonant condition that may contribute to the resonant approximation to (\ref{CF5preres}) is (\ref{Enjklmi}) with three plus and three minus signs (which corresponds to two $\bar\alpha$'s and three $\alpha$'s), as claimed. Putting everything together and absorbing $\varepsilon^4$, together with any numerical factors, into a redefinition of time, we arrive at the following resonant approximation to (\ref{fourier}), which we call the {\em quintic conformal flow}: \begin{equation} \label{flow} i \frac{d\alpha_n}{dt}=\hspace{-3mm} \sum\limits_{n+n_2+n_3=n_4+n_5+n_6}\hspace{-3mm} C_{n n_2 n_3 n_4 n_5 n_6} \bar\alpha_{n_2} \bar \alpha_{n_3} \alpha_{n_4} \alpha_{n_5} \alpha_{n_6}. \end{equation} The interaction coefficients can be expressed as \begin{equation} C_{n_1 n_2 n_3 n_4 n_5 n_6} = \frac{1}{1+N}\,\sum_{j_1=0}^{n_1} \sum_{j_2=0}^{n_2} \sum_{j_3=0}^{n_3} \sum_{j_4=0}^{n_4} \sum_{j_5=0}^{n_5} \sum_{j_6=0}^{n_6} (-1)^J \frac{\prod_{k=1}^6 {\binom{n_k}{j_k}}^2}{\binom{N}{J}}, \end{equation} where $N=\sum_{k=1}^6 n_k$ and $J=\sum_{k=1}^6 j_k$, though we shall not use this explicit formula. The corresponding analysis together with a combinatorial interpretation of the integrals (\ref{C}) can be found in \cite{GJZ}. We shall now prove that the interaction coefficients (\ref{C}) satisfy (\ref{quintid}) with $G = 1$, and thus (\ref{flow}) belongs to the class of resonant systems analyzed in our paper. We first notice the identities \begin{align} &n P_{n-1} = (2n+1) x P_n - (n+1) P_{n+1}, \label{eq:_id_1}\\ &(x^2-1)\partial_x P_n = n x P_n - n P_{n-1}, \label{eq:_id_2}\\ &(x^2-1)\partial_x P_n + (n+1) x P_n = (n+1)P_{n+1}. \label{eq:_id_3} \end{align} From (\ref{eq:_id_1}), \begin{equation*} n P_{n-1}P_m P_i P_k P_l P_j + m P_nP_{m-1} P_i P_k P_l P_j + i P_nP_m P_{i-1} P_k P_l P_j = (2(n+m+i) + 3) x P_n P_m P_i P_k P_l P_j \end{equation*} \begin{equation*} - (n+1) P_{n+1}P_m P_i P_k P_l P_j - (m+1)P_nP_{m+1} P_i P_k P_l P_j - (i+1) P_n P_m P_{i+1} P_k P_l P_j . \end{equation*} Hence, the left-hand side of (\ref{quintid}) becomes \begin{equation*} \int_{-1}^{1}dx\ [ (2(n+m+i) + 3) x P_n P_m P_i P_k P_l P_j - (n+1) P_{n+1}P_m P_i P_k P_l P_j - (m+1)P_nP_{m+1} P_i P_k P_l P_j \end{equation*} \begin{equation*} - (i+1) P_n P_m P_{i+1} P_k P_l P_j - (k+1)P_n P_m P_i P_{k+1} P_l P_j - (l+1)P_n P_m P_i P_k P_{l+1} P_j - (j+1)P_n P_m P_i P_k P_l P_{j+1}]. \end{equation*} Using (\ref{eq:_id_3}) on the terms with minus signs, this can be written as \begin{align*} &\int_{-1}^{1}dx\ [ (2(n+m+i) + 3) x P_n P_m P_i P_k P_l P_j - (x^2-1)\partial_x \left(P_nP_mP_iP_kP_lP_j\right) \\ & \hspace{5cm}- (n+m+i+k+l+j + 6) x P_nP_mP_iP_kP_lP_j].
\end{align*} Then, remembering that $n+m+i = k + l+j + 1$, one gets \begin{equation*} \int_{-1}^{1}dx\left(-2x - (x^2-1)\partial_x \right)P_n P_m P_i P_k P_l P_j =-\int_{-1}^{1}dx\, \partial_x \left((x^2-1) P_n P_m P_i P_k P_l P_j \right) = \end{equation*} \begin{equation*} = -(x^2-1) P_n (x)P_m(x)P_i(x) P_k(x) P_l(x) P_j(x) \Big{|}_{-1}^{1} = 0. \end{equation*} Hence, (\ref{quintid}) is satisfied, and the resonant system (\ref{flow}) benefits from the conserved quantity (\ref{Zdef}) and the stationary solutions (\ref{ststquint}) with $G=1$. \section{Outlook} We have explored properties of cubic resonant systems of the form (\ref{ressyst}) with the interaction coefficients satisfying (\ref{AOdef}), and quintic resonant systems of the form (\ref{res5}) with the interaction coefficients satisfying (\ref{quintid}). By constructing complex plane representations (\ref{complexeq}), (\ref{complexeq2}) and (\ref{complexeq5}), we were able to establish families of stationary states (\ref{uNder}) bifurcating from every individual mode. Our results have direct implications for a number of equations of mathematical physics whose resonant systems fall in our class, in relation to Bose-Einstein condensates \cite{BBCE}, the Schr\"odinger-Newton system in a harmonic potential \cite{BEF}, and relativistic wave equations in highly symmetric spacetimes \cite{CF,BEL}. Examples with quintic nonlinearities include the nonlinear Schr\"odinger equation in a one-dimensional harmonic trap, previously treated from a mathematical perspective in \cite{fennell}, and the conformally invariant quintic wave equation on a two-sphere, brought forth in our present study. We have also presented in the appendix a few extra quintic resonant systems that benefit from the analytic structures we have formulated, which have been obtained by a generalization of explicitly known cubic resonant systems. Our formalism with its defining conditions (\ref{AOdef}) and (\ref{quintid}), as well as the examples from the appendix, furthermore admit a natural extension to higher order nonlinearities (which we do not explicitly pursue). The class of resonant systems we have studied includes known representatives originating as resonant approximations \cite{CF,BEL,BEF} to nonlinear dynamics in AdS spacetimes. The latter topic is of appreciable significance in the area of AdS/CFT correspondence, where such studies connect to the physics of thermalization in the dual field theories \cite{therm,relax}. While the dynamical equations in \cite{CF,BEL,BEF} do not always include gravitational backreaction on the AdS metric, such nonlinear probe fields have also been studied in the context of AdS/CFT correspondence, see, e.g., \cite{Das}. We believe that the complex plane representations for our resonant systems hold much more power than we have explicitly displayed in our treatment. As we mentioned in the introduction, the complex plane representation for the conformal flow \cite{CF} can be used to give an elegant proof \cite{gerard_quintic} of the rather nontrivial fact that there are stationary states with generating functions in the form of arbitrary Blaschke products \begin{equation} u(t,z)=e^{-i\lambda t}\,\frac{(\bar p_1-z)\cdots(\bar p_k-z)}{(1-p_1z)\cdots (1-p_kz)} \end{equation} for any set of complex numbers $(p_1,\ldots,p_k)$ and any $k$.
Similarly, a complex plane representation for the LLL equation has been used in \cite{GGT} to obtain powerful results on classification of stationary states and properties of their zeros (the latter subject being of pivotal importance in the physics of Bose-Einstein condensates). It remains to be seen what further conclusions may emerge from the analysis of our complex plane representations, both in general, and in application to specific physically motivated representatives in our classes of resonant systems. \section*{Acknowledgments} This research has been supported by FPA2014-52218-P and FPA2017-84436-P from Ministerio de Economia y Competitividad, by Xunta de Galicia ED431C 2017/07, by European Regional Development Fund (FEDER), by Grant Mar\'ia de Maeztu Unit of Excellence MDM-2016-0692, by Polish National Science Centre grant number 2017/26/A/ST2/00530 and by CUniverse research promotion project by Chulalongkorn University (grant CUAASC). A.B. thanks the Spanish program ``ayudas para contratos predoctorales para la formaci\'on de doctores 2015'' and its mobility program for his stay at Jagiellonian University, where part of this project was developed. \section*{Appendix: Additional examples of quintic partially solvable\\\rule{2.7cm}{0cm} resonant systems} We list a few quintic systems satisfying (\ref{quintid}) or (\ref{idinf}) obtained by a direct generalization of known cubic systems satisfying (\ref{AOdef}); a numerical check of the identity (\ref{quintid}) for the first example below is sketched after the list: \begin{itemize} \item The interaction coefficients \begin{equation} S_{nmiklj} = \frac{1}{(n+m+i +1)(n+m+i + 2)} \end{equation} satisfy (\ref{quintid}) with $G = 1$. This is a generalization of the maximally rotating cubic resonant system on a three-sphere from \cite{BEL}. \item The interaction coefficients \begin{equation} S_{nmiklj} = \frac{\Gamma(n+\delta)\Gamma(m+\delta)\Gamma(i+\delta)\Gamma(k+\delta)\Gamma(l+\delta)\Gamma(j+\delta)\Gamma(n+m+i+1)}{\Gamma(n+1)\Gamma(m+1)\Gamma(i+1)\Gamma(k+1)\Gamma(l+1)\Gamma(j+1)\Gamma(n+m+i+3\delta)} \end{equation} satisfy (\ref{quintid}) with $G = \delta$, which can be an arbitrary positive real number. This is a generalization of the maximally rotating cubic resonant systems on Anti-de Sitter spacetimes from \cite{BEL}. \item The interaction coefficients \begin{equation} S_{nmiklj} = \frac{8}{\pi}\int_{0}^{\pi} \frac{dx}{\sin^2 x}\sin(n+1)x\sin(m+1)x\sin(i+1)x\sin(k+1)x\sin(l+1)x\sin(j+1)x \end{equation} satisfy (\ref{quintid}) with $G = 2$. The cubic prototype is \begin{equation} S_{nmkl} = \frac{2}{\pi}\int_{0}^{\pi} \frac{dx}{\sin^2 x}\sin(n+1)x\sin(m+1)x\sin(k+1)x\sin(l+1)x = \text{min}(n,m,k,l)+1, \end{equation} which is the conformal flow \cite{CF}. \item The interaction coefficients \begin{equation} S_{nmiklj} = \frac{1}{3^{n+m+i}} \frac{(n+m+i)!}{n!m!i!k!l!j!} \end{equation} satisfy (\ref{idinf}) and thus correspond to the $G\to\infty$ limit in our class of systems. The cubic prototype is the LLL equation \cite{GHT,GT,BBCE,GGT}. \end{itemize}
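As a quick numerical sanity check of the six-term identity (\ref{quintid}) with $G=1$ (this snippet is our own illustration and is not part of the derivations above), the following Python code verifies the identity both for the first example of the list and for the Legendre interaction coefficients (\ref{C}) of the quintic conformal flow, on randomly drawn resonant index sets:
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre as leg

def S_rational(n, m, i, k, l, j):
    # first example of the list: S = 1/((n+m+i+1)(n+m+i+2));
    # modes with negative index do not exist, so S is set to zero there
    if min(n, m, i, k, l, j) < 0:
        return 0.0
    s = n + m + i
    return 1.0/((s + 1)*(s + 2))

def C_legendre(n, m, i, k, l, j):
    # integral of P_n P_m P_i P_k P_l P_j over [-1,1], via exact
    # Gauss-Legendre quadrature (the integrand is a polynomial)
    if min(n, m, i, k, l, j) < 0:
        return 0.0
    deg = n + m + i + k + l + j
    x, w = leg.leggauss(deg//2 + 2)
    vals = np.ones_like(x)
    for q in (n, m, i, k, l, j):
        c = np.zeros(q + 1); c[q] = 1.0
        vals *= leg.legval(x, c)
    return float(w @ vals)

def lhs(S, n, m, i, k, l, j, G=1):
    # left-hand side of the six-term identity guaranteeing conservation of Z
    return ((n - 1 + G)*S(n - 1, m, i, k, l, j)
            + (m - 1 + G)*S(n, m - 1, i, k, l, j)
            + (i - 1 + G)*S(n, m, i - 1, k, l, j)
            - (k + 1)*S(n, m, i, k + 1, l, j)
            - (l + 1)*S(n, m, i, k, l + 1, j)
            - (j + 1)*S(n, m, i, k, l, j + 1))

rng = np.random.default_rng(0)
for _ in range(50):
    n, m, k, l, j = rng.integers(0, 6, 5)
    i = int(k + l + j + 1 - n - m)   # enforce the constraint n+m+i-k-l-j = 1
    if i < 0:
        continue
    assert abs(lhs(S_rational, n, m, i, k, l, j)) < 1e-12
    assert abs(lhs(C_legendre, n, m, i, k, l, j)) < 1e-10
print("identity checks passed")
\end{verbatim}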
\section{Introduction} Inflationary cosmology \cite{Guth,Linde} is now the dominant perspective in explaining the early universe's physics, solving the flatness, homogeneity and unwanted relics problems, and providing a mechanism to interpret the inhomogeneities in the Cosmic Microwave Background Radiation (CMBR). In the standard slow-roll cold inflation models, the universe experiences an exponential expansion, during which density perturbations are created by quantum fluctuations of the inflaton field, followed by the reheating stage, where a temporally localized mechanism must rapidly convert sufficient vacuum energy into radiation. Fang and Berera \cite{BereraFang} realized that combining the exponentially accelerating expansion phase with the reheating one could resolve difficulties that each faces separately. In \cite{Berera}, Berera proposes a warm inflationary model in which thermal equilibrium is maintained during the inflationary phase, and radiation production takes place throughout it, i.e., relativistic particles are created during the inflationary period. Many inflationary models inspired by particle physics, string theory, and quantum gravity have been studied within the context of warm inflation. Visinelli \cite{Visinelli} derived and analysed the experimental bounds on warm inflation with a monomial potential, whereas Kamali in \cite{Kamali} investigated the warm scenario with non-minimal coupling (NMC) to gravity with a Higgs-like potential. The authors of \cite{Chamoun_universe} treated the warm scenario with NMC to modified gravity with a special potential motivated by variation of constants. In \cite{Amaek}, warm inflationary models in the context of a general scalar-tensor theory of gravity were investigated within only the strong limit of dissipation. Natural inflation (NI), proposed by Freese, Frieman, and Olinto \cite{Freese} with a cosine potential, is a popular model due to its shift symmetry and flat potential, which prevent significant radiative corrections from being introduced and give NI the ability to address theoretical challenges inherent in slow-roll inflation models. However, NI is disfavored at greater than $95 \% $ confidence level by current observational constraints from $Planck 2018$ on the tensor-to-scalar ratio $r$ and spectral index $n_s$ \cite{Planck18, Stein}. Moreover, a more recent analysis of BICEP/Keck XIII in 2018 (BK18) \cite{BK18} has put more stringent bounds on $r$. In \cite{Chamoun_JCAP}, it was shown that NMC to gravity within an $f(R)$ setting was enough to bring ``cold'' NI to within the $95 \% $ confidence levels of the current observational constraints represented by $Planck 2018$ (TT, EE, TE), BK18 and other experiments (lowE, lensing), separately or combined. The aim of this work is to study NI with NMC to gravity within the warm paradigm, in both the strong and weak limits of the dissipation term characterizing the warm scenario. We study two forms for the NMC to gravity term, which is generally produced at one-loop order in the interacting theory for a scalar field, even if it is absent at the tree level \cite{Freedman:1974gs}. Actually, in general all terms of the form $(R^i \phi^j, R^{\mu\nu} \partial_\mu\phi \partial_\nu\phi, \ldots)$ are allowed in the action. However, omitting the derivative terms and taking a finite number of loop graphs enforce a polynomial form of the NMC term, and if one imposes CP symmetry on the action the term should include even powers of the inflaton field $\phi$.
For simplicity, we include only the quadratic monomial ($\xi \phi^2 R$) of dim-4. However, and since some microscopic theories may suggest the emergence of an NMC similar in form to the original potential \cite{Salvio}, we also consider an NMC of a periodic form respecting the shift symmetry of the NI potential, so as to be of the form ($\lambda \left(1+\cos(\frac{\phi}{f})\right)$). We find that NI with NMC to gravity within the warm paradigm is able to accommodate the $(n_s, r)$ observable constraints, but at the price of a small value for the e-folding number, $N_e \approx 30$, needed to solve the horizon and flatness problems. However, one can bring $N_e$ to an acceptable value ($\geq 40$), but only with $n_s \approx 0.98$, just outside the admissibility contours. The paper is organized as follows. In section 2, we present the setup of the warm paradigm for general potentials, whereas in section 3 we specialize the study to NI. In section 4 (5), we study the strong (weak) limit ($Q \equiv \frac{\Gamma}{3 H} \gg (\ll) 1$) for both quadratic and periodic NMC, and we end with a summary and conclusions in section 6. \section{Warm Inflation Setup} \label{warm_section} \subsection{Arena} We consider the general local action for a scalar field coupled with radiation and gravity within the Jordan frame, \begin{equation} \label{eq:GAction} S=\int d^4 x \sqrt{-g}\bigg\{\frac{1}{2} \Omega^2(\phi) R + \mathcal{L}_\phi + \mathcal{L}_{\gamma}+\mathcal{L}_{Int} \bigg\}, \end{equation} where $g$ is the determinant of the metric $g_{\mu \nu}$, $ \mathcal{L}_{\gamma}$ is the Lagrangian density of the radiation field and $\mathcal{L}_{Int}$ describes the interaction between the latter and the inflaton $\phi$, whose Lagrangian density, considered as that of a canonical scalar field, is given by \begin{eqnarray} \mathcal{L}_\phi&=& -\frac{1}{2}g^{\mu \nu }\nabla_\mu \phi \nabla_\nu \phi-V(\phi), \end{eqnarray} where $V(\phi)$ is the inflaton potential, whereas $\Omega^2(\phi)$ indicates the NMC between the scalar field $\phi$ and gravity. One can take the usual electromagnetic Lagrangian for $\mathcal{L}_{\gamma}$, while we leave aside, for now, the `unknown' interaction density $\mathcal{L}_{Int}$. Varying the action with respect to the metric and approximating the energy-momentum tensor for both the inflaton and the radiation fields by perfect fluids characterized by energy density $\rho$ and pressure $p$, we get the following equation of motion: \begin{eqnarray} \label{conservation_eq} \dot{\rho}^{\phi} + 3H \left(\rho^\phi + p^\phi\right) + \frac{1}{2} (\Omega^2)^\prime_\phi \dot{\phi} R + \dot{\rho}^\gamma + 4 H \rho^\gamma &=& 0, \end{eqnarray} with $(\Omega^2)^\prime_\phi$ denoting the derivative with respect to $\phi$. Some remarks are in order here. First, the terms involving the Hubble parameter $H$, which is related to the `total energy' including both the radiation and inflaton contributions, represent a `direct' coupling between the inflaton and radiation, in contrast to the `indirect' one via gravity, which couples to all fields. Second, $\mathcal{L}_{Int}$ contributes an additional `direct' coupling. However, we still assume that its contribution to the total energy density is negligible, such that \begin{eqnarray} \rho^{tot} &=& \rho^\phi + \rho^\gamma \end{eqnarray} There are in the literature some microscopic models for $\mathcal{L}_{Int}$ (see, e.g.,
\cite{Amaek}); we shall not dwell on them here, but rather assume that the effect of $\mathcal{L}_{Int}$ is described phenomenologically by a term $\Gamma \dot{\phi}^2$, which can be motivated/justified in a field theory approach specific to the considered microscopic model, such that: \begin{eqnarray} \label{friedman_rad_Jordan} \dot{\rho}^\gamma + 4 H \rho^\gamma &=& \Gamma \dot{\phi}^2 \end{eqnarray} whence from Eq. (\ref{conservation_eq}) we have \begin{eqnarray} \label{friedman_inflaton1_Jordan} \dot{\rho}^{\phi} + 3H \left(\rho^\phi + p^\phi\right) + \frac{1}{2} (\Omega^2)^\prime_\phi \dot{\phi} R &=& - \Gamma \dot{\phi}^2 \end{eqnarray} Using \begin{eqnarray} \rho^\phi = \frac{1}{2} \dot{\phi}^2 + V &,& p^\phi = \frac{1}{2} \dot{\phi}^2 - V \end{eqnarray} we get \begin{eqnarray} \label{friedman_inflaton2_Jordan} \ddot{\phi}+ 3H \dot{\phi} + V^\prime_\phi + \frac{1}{2} (\Omega^2)^\prime_\phi R &=& - \Gamma \dot{\phi} \end{eqnarray} We see that Eq. (\ref{conservation_eq}) expresses the conservation of total energy, in which the contribution of $\mathcal{L}_{Int}$ is neglected, while the latter, through Eqs. (\ref{friedman_rad_Jordan}) and (\ref{friedman_inflaton1_Jordan}), affects individually both $\rho^\gamma$ and $\rho^\phi$. There are many possibilities for the dissipative term, but we shall study in this article the case where it depends linearly on temperature ($\Gamma = \Gamma_0 T$). During warm inflation, we have $T \gg H$, and due to the inflaton interactions with the matter/radiation, a bath of particles is continuously produced during the slow roll period, which carries the universe into a radiation-dominated phase through a smooth transition, thus eliminating the need for a reheating stage. Thermal fluctuations dominate over quantum fluctuations, even though $\rho^\gamma$ is neglected versus $\rho^\phi$, which is reflected through the factor \begin{eqnarray} \label{Q} Q&=& \frac{\Gamma}{3H} = \frac{\Gamma_0 T}{3 H} \;\;, \end{eqnarray} so that the inflation is said to be in the strong (weak) limit regime when $Q \gg 1$ ($Q \ll 1$). It is convenient to go from the Jordan frame to the Einstein frame, in which the gravitational sector of the action takes the form of the Einstein-Hilbert action and the NMC to gravity disappears. Consequently, in the Einstein frame, one is able to use the usual equations of general relativity, the inflationary solutions, and the standard slow-roll analysis. The conformal transformation is defined as: \begin{eqnarray} \label{conf_trans} \tilde{g}_{\mu\nu} = \Omega^2(\phi) {g}_{\mu\nu} &\Rightarrow& \sqrt{-\tilde{g}} = \Omega^4 \sqrt{-g} \end{eqnarray} leading to the action expressed in the Einstein frame by \begin{eqnarray} \label{action-einstein} S &=&\int d^4 x \sqrt{-\tilde{g}}\bigg\{\frac{1}{2} \tilde{R} -\frac{1}{2} \frac{1+6 ({\Omega}^\prime)^2}{\Omega^2} \tilde{g}^{\mu\nu}\tilde{\nabla}_\mu \phi \tilde{\nabla}_\nu \phi - \frac{V(\phi)}{\Omega^4} \bigg\}+ \int d^4 x \sqrt{-\tilde{g}} \tilde{\mathcal{L}}_\gamma +S_{Int} . \nonumber \\ \end{eqnarray} For the radiation field, and since the corresponding integrand in the action is invariant under rescaling, then by Eq.
(\ref{conf_trans}) we find that the Lagrangian density (energy-momentum tensor) is divided by $\Omega^4$ ($\Omega^2$), as: \begin{eqnarray} T_{\mu \nu}^\gamma = \frac{-2} {\sqrt{-g}} \frac{\delta\left(\sqrt{-g} \mathcal{L}_\gamma\right)}{\delta g^{\mu \nu}}&\rightarrow& \tilde{T}_{\mu \nu}^\gamma = \frac{-2}{\sqrt{-\tilde{g}}} \frac{\delta\left(\sqrt{-\tilde{g}} \tilde{\mathcal{L}}_\gamma\right)}{\delta \tilde{g}^{\mu \nu}}=\frac{T_{\mu\nu}^\gamma}{\Omega^2} ,\end{eqnarray} and thus we conclude that the perfect fluid assumption for the radiation field remains valid in the Einstein frame, with energy density $\rho_\gamma = \frac{\rho^\gamma}{\Omega^4}$ and pressure $p_\gamma = \frac{p^\gamma}{\Omega^4}$. Defining the temperature in each frame through \begin{eqnarray} \label{tempe} \rho^{\gamma} = C_\gamma \left(T^{\gamma}\right)^4\,,\quad \rho_{\gamma} = C_\gamma T_{\gamma}^4 &:& C_\gamma=\frac{\pi^2 g_*}{30}, \end{eqnarray} with $g_*$ denoting the number of created massless modes, we see that the temperature scales by $1/\Omega$ going from the Jordan to the Einstein frame: \begin{eqnarray} \label{T-transform} T \xrightarrow{\mbox{Jordan}\rightarrow \mbox{Einstein}} T/\Omega \end{eqnarray} As to the inflaton and gravity sector, we see that in the Einstein frame there is a `pure' GR gravity part, whereas we have a non-canonical kinetic term for the inflaton scalar field, which can be put in a canonical form by defining a new field $\chi$, related to $\phi$ by: \begin{equation} \label{Z_metric} \bigg(\frac{d\phi}{d \chi} \bigg)^2 \equiv \frac{1}{Z^2} = \frac{\Omega^2}{1+6 ({\Omega}^\prime_\phi)^2} = \frac{2 \Omega^4}{2 \Omega^2 + 3 (({\Omega^2})^\prime_\phi)^2}, \end{equation} so as to get (from now on we drop the tildes, keeping in mind that all calculations are carried out in the Einstein frame): \begin{eqnarray} \label{action-einstein-canonical} S &=&\int d^4 x \sqrt{-g}\bigg\{\frac{1}{2} R -\frac{1}{2} g^{\mu\nu}\nabla_\mu \chi \nabla_\nu \chi - U(\chi) \bigg\}+ S_\gamma +S_{Int}, \end{eqnarray} where \begin{eqnarray} U(\chi) &=& \frac{V(\phi(\chi))}{\Omega^4} \end{eqnarray} A spatially flat Friedmann-Robertson-Walker (FRW) Universe gives the energy density $\rho_\chi$ and the pressure $ p_\chi$ of the inflaton field as, \begin{eqnarray} \label{rho-p_chi} \rho_\chi = \frac{1}{2} \dot{\chi}^2 + U(\chi) &,& p_\chi = \frac{1}{2} \dot{\chi}^2 - U(\chi) \end{eqnarray} with the Friedmann equation given by, \begin{equation} \label{friendmn_1} H^2 = \frac{1}{3} \rho_{tot} = \frac{1}{3}(\rho_\chi+\rho_\gamma). \end{equation} For the interaction Lagrangian $\mathcal{L}_{Int}$, and lacking a model-independent Lagrangian term leading to the RHS of (\ref{friedman_inflaton2_Jordan}), we shall argue by comparison to the cold inflation scenario in order to find the corresponding equation in the Einstein frame. Note that, unlike standard studies (\cite{Kamali}) where the damping term is introduced in the Einstein frame, we espouse the viewpoint that the field-theory models justifying the form of the damping term are to be defined in the original Jordan frame. However, we shall show that, under an approximation which we shall adopt, the form is similar in the two frames. Actually, the Hubble parameter transformation has an inhomogeneous term \cite{fujii}\footnote{Note however that the ``measurable'' Hubble parameter in the Einstein frame will be the one corresponding to dropping the inhomogeneous term \cite{Yamaguchi}.
}: \begin{eqnarray} \label{H-transform} H \xrightarrow{\mbox{Jordan}\rightarrow \mbox{Einstein}} \frac{H}{\Omega} + \frac{\dot{\Omega}}{\Omega^{2}} \end{eqnarray} then, looking at Eq. (\ref{T-transform}) and dropping/neglecting the inhomogeneous term involving the logarithmic variation of $\Omega$, we see that $\frac{T}{H}$ is conformally invariant, and likewise the factor $Q$ (Eq. \ref{Q}) is also invariant. In the Einstein frame, the field will undergo slow rolling generating inflation, during which one assumes approximate constancy of both $H$ and $T$ \cite{sciencedirect}, so one can consider $Q$ as constant in the Einstein, and thus also in the Jordan, frame. We know that in cold inflation including NMC to gravity, the Jordan-frame Euler-Lagrange equation for the inflaton field: \begin{eqnarray} \label{EL_NMC_Jordan} \ddot{\psi}_J+ 3H_J \dot{\psi}_J + (V_J)^\prime_{\psi_J} + \frac{1}{2} (\Omega^2)^\prime_{\psi_J} R &=& 0 \end{eqnarray} would lead in the Einstein frame to a standard GR inflationary equation: \begin{eqnarray} \label{friedman_Einstein} \ddot{\psi}_E+ 3H_E \dot{\psi}_E + (V_E)^\prime_{\psi_E} &=& 0 \end{eqnarray} We see now that using Eq. (\ref{Q}) in Eq. (\ref{friedman_inflaton2_Jordan}), we get an equation similar to Eq. (\ref{EL_NMC_Jordan}), but with $(3 H_J)$ replaced by $(3(1+Q)H_J)$, where $Q$ is approximately constant. We then conclude that in the Einstein frame we get an equation similar to Eq. (\ref{friedman_Einstein}), with $H_E$ replaced by $H_E (1+Q)$. Rewriting $Q$ in the Einstein frame, we get (dropping the subscript $E$): \begin{eqnarray} \label{chi-einstein} \ddot{\chi}+3 H \dot{\chi} +U^\prime_\chi &=& - \Gamma \dot{\chi} = -\Gamma_0 T \dot{\chi} \end{eqnarray} So, the upshot here is that we can use, under some approximation and for a damping factor linearly proportional to temperature, the above standard form, albeit starting from a free parameter $\Gamma_0$ defined originally in the Jordan frame. By conservation of energy, we get: \begin{eqnarray} \label{gamma-einstein} \dot{\rho}_\gamma + 3 H (\rho_\gamma + p_\gamma) = \dot{\rho}_\gamma + 4 H \rho_\gamma &=& + \Gamma \dot{\chi}^2 \end{eqnarray} The fundamental equations for warm inflation within the slow roll approximation ($\dot{\chi}^2 \ll U,\ \ddot{\chi} \ll H\dot{\chi},\ \dot{\rho}_\gamma \ll H\rho_\gamma$) are: \begin{eqnarray} \label{warm_inflation_equations} H^2 \approx U/3 &,& \dot{H} \approx -\frac{1}{2} (1+Q) \dot{\chi}^2 , \\ \rho_\gamma \approx \frac{3}{4} Q \dot{\chi}^2 &,& \dot{\chi} \approx -\frac{U^\prime_\chi}{3 H (1+Q)}. \end{eqnarray} Using Eq. (\ref{tempe}), we get \begin{eqnarray} \label{temperature} T&=& \left(\frac{1}{4 C_\gamma} \frac{Q}{(1+Q)^2} \frac{(U^\prime_\chi)^2}{U}\right)^{\frac{1}{4}} \end{eqnarray} \subsection{Power spectrum} We define the following slow roll parameters: \begin{eqnarray} \label{slowroll parameters} \epsilon &=& \frac{1}{2}\left(\frac{U^\prime_\chi}{U}\right)^2 = Z^2 \epsilon^\phi, \\ \eta &=& \frac{U^{\prime\prime}_{\chi\chi}}{U} = Z^2 \eta^\phi + Z Z^\prime_\phi \sqrt{2 \epsilon^\phi}, \\ \beta &=& \frac{\Gamma^\prime_\chi U^\prime_\chi}{\Gamma U} = Z^2 \beta^\phi , \end{eqnarray} where $\epsilon^\phi, \eta^\phi, \beta^\phi$ correspond to the same definitions with the derivatives carried out with respect to the field $\phi$.
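To make this chain of definitions concrete, the following minimal \texttt{sympy} sketch (our own illustration, using the natural-inflation potential and the quadratic NMC that will be introduced below; it is not the code used for the scans reported later) builds $U$, $Z^2$ and the $\phi$-based slow roll quantities symbolically:
\begin{verbatim}
import sympy as sp

phi = sp.symbols('phi', real=True)
xi, f, V0 = sp.symbols('xi f V_0', positive=True)

V   = V0*(1 + sp.cos(phi/f))   # natural-inflation potential (introduced below)
Om2 = 1 + xi*phi**2            # quadratic NMC, Omega^2(phi)

U  = V/Om2**2                                      # Einstein-frame potential
Z2 = (2*Om2 + 3*sp.diff(Om2, phi)**2)/(2*Om2**2)   # Z^2, inverting Eq. (Z_metric)

eps_phi = sp.Rational(1, 2)*(sp.diff(U, phi)/U)**2   # epsilon^phi
eta_phi = sp.diff(U, phi, 2)/U                       # eta^phi
# beta^phi needs Gamma(phi) = Gamma_0*T(phi), with T(phi) from the
# temperature equation above; it follows the same pattern once substituted

# sanity check: xi -> 0 recovers minimal coupling, where Z^2 -> 1
print(sp.simplify(Z2.subs(xi, 0)))   # prints 1
\end{verbatim}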
One can show that the slow roll regime is met provided we have \begin{eqnarray} \label{slow roll condition} \epsilon, \eta, \beta &\ll& 1+Q \end{eqnarray} The spectrum of the adiabatic density perturbations generated during inflation is given by \cite{Kamali} (a star $*$ denotes evaluation at horizon crossing): \begin{eqnarray} \label{scalar_power_spectrum} \Delta_R(k/k_*) &=& P_0(k/k_*) \mathcal{F}(k/k_*) : \\ P_0(k/k_*) = \left( \frac{H_*^2}{2\pi \dot{\chi}_k}\right)^2 &,& \mathcal{F}(k/k_*) = \left( 1+2 \nu_k + \omega_k \right) G(Q_*): \\ \nu_k = \frac{1}{e^{\frac{H}{T}}-1} &,& \omega_k = \frac{T}{H} \frac{2\sqrt{3} \pi Q_k}{\sqrt{3+4\pi Q_k}} \label{generalomega}, \end{eqnarray} and where the modification function $G$, which is due to the coupling between the inflaton field and radiation fluctuations, is given numerically by: \begin{eqnarray} \label{G} G(Q) &=& 1+0.335 Q^{1.364} + 0.0815 Q^{2.315}. \end{eqnarray} We see that cold inflation is restored when $\nu_k$, the Bose-Einstein distribution in a radiation bath of temperature $T$, and $\omega_k$, due to thermal effects, both go to zero: \begin{eqnarray} \left( \nu_k \rightarrow 0 \,\,\, \& \,\,\, \omega_k \rightarrow 0 \right) &\Rightarrow& \mbox{cold inflation} \end{eqnarray} The observable spectral index is given by: \begin{eqnarray} \label{n_s} n_s-1 &=& \left.\frac{d \log \Delta_R(k)}{d \log k}\right|_{k=k_*} = \frac{1}{H}\frac{d \log \Delta_R(k)}{dt} = \frac{1}{H \Delta_R}\frac{d\Delta_R}{dt} \end{eqnarray} whereas the observable $r$, the tensor-to-scalar ratio, is given by: \begin{eqnarray} \label{r} r = \frac{\Delta_T(k)}{\Delta_R} = \frac{2 H^2}{\pi^2 \Delta_R} = \frac{16 \epsilon}{(1+Q)^2} \mathcal{F}^{-1} \end{eqnarray} We distinguish two limit regimes. \begin{itemize} \item {\bf Strong Limit $Q \gg 1$}: Here, using Eq. (\ref{temperature}), one can show that \begin{eqnarray} \label{strongT} T&=& \left(\frac{Z^2 (U^\prime_\phi)^2}{4 H C_\gamma \Gamma_0}\right)^{1/5} \end{eqnarray} We have via Eq. (\ref{generalomega}): \begin{eqnarray} \label{strongomega} \omega_k &=& T \sqrt{\frac{\pi \Gamma}{H^3}} = \frac{T}{H}\sqrt{3 \pi Q} \end{eqnarray} Thus, $1+\nu_k \approx \frac{T}{H} \ll \omega_k$, and one gets: \begin{eqnarray} \label{strong_delta_R} \Delta_R = \Delta R_s \; G&:& \Delta R_s=\frac{\sqrt{3}TH}{8\sqrt{\pi}} \epsilon^2 Q^{\frac{5}{2}} \end{eqnarray} Thus we get \begin{eqnarray} n_s-1 &=&\frac{1}{H\Delta R_s} \frac{d\Delta R_s}{dt} + \frac{\dot{Q}}{H} \frac{G^\prime_Q}{G} \end{eqnarray} The first term gives, after lengthy calculations (see \cite{Visinelli}): \begin{eqnarray} \frac{1}{H\Delta R_s} \frac{d\Delta R_s}{dt} &=& \frac{1}{Q} \left( -\frac{9}{4} \epsilon + \frac{3}{2} \eta - \frac{9}{4} \beta \right) \end{eqnarray} whereas we get for the second term: \begin{eqnarray} \frac{\dot{Q}}{H} \frac{G^\prime_Q}{G} &=& \frac{2.315}{Q} (\epsilon-\beta) \end{eqnarray} where we have used the identity: \begin{eqnarray} \label{identity} \frac{\dot{Q}}{HQ} = \frac{\dot{\Gamma}}{H \Gamma} - \frac{\dot{H}}{H^2} &=& \frac{-1}{1+Q} (\beta -\epsilon) \end{eqnarray} Thus we get: \begin{eqnarray} \label{Strong_ns} n_s-1 &=& \frac{1}{Q} \left( -\frac{9}{4} \epsilon + \frac{3}{2} \eta - \frac{9}{4} \beta +2.3 \epsilon - 2.3 \beta\right) \end{eqnarray} Note that $n_s$ involves the temperature $T$ through the expression of $Q=\frac{\Gamma_0 T}{3 H}$.
Also, the temperature $T$ plays a role in determining the ``end of inflation'' field $\phi_f$, defined as the field value at which the first of the slow roll parameters ($\epsilon, \eta, \beta$) reaches $1+Q=1+\frac{\Gamma_0 T}{3H}$. Determining $\phi_f$ allows one to compute the e-folding number by: \begin{eqnarray} \label{efolding} N_e \equiv \log \frac{a_{\mbox{end}}}{a_k} = \int_t^{t_f} H dt = \int_{\chi_k}^{\chi_f} H \frac{d\chi}{\dot{\chi}} \approx \int^{\chi_k}_{\chi_f} \frac{U}{U^\prime_\chi} (1+Q) d\chi = \int^{\phi_k}_{\phi_f} \frac{U}{U^\prime_\phi} (1+Q) Z^2 d\phi \end{eqnarray} The initial time of inflation is taken to correspond to horizon crossing, when the dominant quantum fluctuations freeze and transform into classical perturbations with the observed power spectrum. As for the tensor-to-scalar ratio, we get \begin{eqnarray} \label{Strong-r} r &=& \frac{H}{T}\frac{16 \epsilon}{Q^{5/2}} G^{-1} = \frac{H}{T}\frac{16 \epsilon}{0.0185 Q^{4.815}} \end{eqnarray} \item{\bf Weak Limit $Q \ll 1$}: Using Eq. (\ref{temperature}), one can show that \begin{eqnarray} \label{weakT} T&=& \left(\frac{Z^2 (U^\prime_\phi)^2 \Gamma_0}{36 C_\gamma H^3}\right)^{1/3} \end{eqnarray} From Eq. (\ref{generalomega}), we have \begin{eqnarray} \label{weakomega} \omega_k &=& \frac{2 \pi \Gamma T}{3 H^2} = \frac{2 \pi T Q}{H} \end{eqnarray} Thus, $1+\nu_k \approx \frac{T}{H} \gg \omega_k$, and one gets: \begin{eqnarray} \label{weak_delta_R} \Delta_R = \Delta R_w \; G&:& \Delta R_w=\frac{4 T H}{\pi} \epsilon^2 \end{eqnarray} Thus we get \begin{eqnarray} n_s-1 &=&\frac{1}{H\Delta R_w} \frac{d\Delta R_w}{dt} + \frac{\dot{Q}}{H} \frac{G^\prime_Q}{G} \end{eqnarray} The first term gives, after lengthy calculations (see \cite{Visinelli}): \begin{eqnarray} \frac{1}{H\Delta R_w} \frac{d\Delta R_w}{dt} &=& -6 \epsilon + 2 \eta + \frac{\omega_k}{1+\omega_k} \left( \frac{15 \epsilon -2 \eta -9 \beta}{4}\right) \end{eqnarray} which gives, under the condition: \begin{eqnarray} \label{condition} \omega_k=\frac{2 \pi T Q}{H} &\ll& 1, \end{eqnarray} the result \begin{eqnarray} \frac{1}{H\Delta R_w} \frac{d\Delta R_w}{dt} &=& -6 \epsilon + 2 \eta + \frac{2 \pi \Gamma_0 T^2}{3H^2} \left( \frac{15 \epsilon -2 \eta -9 \beta}{4}\right) \end{eqnarray} As to the second term, we get using Eq. (\ref{identity}) \begin{eqnarray} \frac{\dot{Q}}{H} \frac{G^\prime_Q}{G} &=& 0.456 Q^{1.364} (\epsilon-\beta) \end{eqnarray} Thus we get: \begin{eqnarray} \label{weak_ns} n_s-1 &=& -6 \epsilon + 2 \eta + \frac{2 \pi \Gamma_0 T^2}{12 H^2} (15 \epsilon -2 \eta -9 \beta) + 0.456 Q^{1.364} (\epsilon-\beta) \end{eqnarray} As for the tensor-to-scalar ratio we get, using $G \approx 1$, the following \begin{eqnarray} \label{Weak-r} r &=& \frac{16 \epsilon}{(1+Q)^2} \mathcal{F}^{-1} = \frac{8 H \epsilon}{T}= \frac{8 H Z^2 \epsilon^\phi}{T} \end{eqnarray} \end{itemize} \section{Natural Inflation} The NI potential is periodic, of the form \begin{eqnarray} \label{NI-potential} V &=& V_0 \left( 1+ \cos(\frac{\phi}{f})\right) \end{eqnarray} where $V_0$ is the scale of an effective field theory generating this potential, and $f$ is a symmetry breaking scale. As mentioned in the introduction, we shall consider two well-motivated forms of NMC to gravity: \begin{itemize} \item{Quadratic NMC}: \begin{eqnarray} \label{NMC-quadratic} \Omega^2(\phi) &=& 1+ \xi \phi^2 \end{eqnarray} which is considered as the leading order of the terms allowed in the action generated by loops in the interacting theory.
$\xi$ is the free coupling constant characterizing the strength of the NMC to gravity. \item{Periodic NMC} \begin{eqnarray} \label{NMC-periodic} \Omega^2(\phi) &=& 1+ \lambda \left(1+\cos(\frac{\phi}{f})\right) \end{eqnarray} which is similar in form to the original potential, allowing it to be justified in some microscopic models. \end{itemize} It is well known that cold natural inflation with NMC is not enough to accommodate the data. In \cite{Chamoun_JCAP}, we showed that cold natural inflation with NMC and $F(R)$-modified gravity was viable. Here we are trying to dispense with the modified-gravity ingredient, assuming instead the warm scenario. We shall see that two constraints out of three can be met for warm NI with NMC. The strategy amounts to carrying out an exhaustive scan of the free parameter space (that of $\phi_*, \Gamma_0, f, V_0, \xi \mbox{ or } \lambda$) and computing for each `benchmark' the corresponding $n_s$, $r$ and $\phi_f$, the latter making one of the slow roll parameters equal to $(1+Q)$; this allows us to compute the e-folding number $N_e$, which together with ($n_s, r$) constitutes the observational constraints to be accommodated. As to the number of relativistic degrees of freedom of radiation, we use $g_*(T) = 228.75$, i.e. $C_\gamma=57.2557$, corresponding to the number of relativistic degrees of freedom in the minimal supersymmetric standard model at temperatures greater than the electroweak phase transition. \section{Comparison to Data: Strong case } We carried out an extensive scan over the free parameter space, and for each point we computed $(n_s, r)$ and $N_e$. We could not find benchmarks meeting the constraints on ($n_s, r$) at the $95\%$ confidence levels according to the 2018 Planck (TT, EE, TE), BK18 and other experiments (lowE, lensing), separately or combined, which would also allow for an acceptable $N_e \geq 40$ in order to solve the flatness and horizon problems. Accommodating $(N_e=40, r)$ was possible, but at the expense of an $n_s$ that is a bit too large. \vspace{0.3cm} \begin{itemize} \item {Quadratic NMC} \begin{figure}[H] \includegraphics[width=11.5cm]{NaturalWarmStrongQuadraticNMC.jpg} \caption{ Predictions of warm natural inflation with quadratic NMC to gravity in the strong limit. We took the values in units where the Planck mass is unity ($\Gamma_0=7000, f=5, V_0=5 \times 10^{-6}$). For the black (red) dots, we have $\xi=-20 (-40), \phi_* \in [3\times 10^{-4},0.0015]$ ($\in[3 \times 10^{-4}, 0.0029]$) corresponding to $N_e \in [14.8,30.4] (\in [9.3,26.4])$. $Q$ in both cases is of order $10^3$. \label{fig1}} \end{figure} Fig. (\ref{fig1}) shows the results of scanning the parameter space in the case of strong-limit warm NI with quadratic NMC to gravity. One could accommodate ($n_s, r$) but with too small an $N_e$. In the figure, the dots of the two colors correspond to two choices of the coupling, $\xi=-20, -40$. Looking to meet the e-folds constraint, we imposed ($N_e=40$) with ($\xi=-20$), fixed the values of ($\Gamma_0, f, V_0$) as before, and scanned over $\phi_*$. We found the `benchmark' ($\phi_*=0.0029$) giving the required e-folds with $r=1.03 \times 10^{-14}$ and $Q$ of order $1.3 \times 10^3$. However, the scalar spectral index $n_s$ was large ($n_s = 0.98$), outside the acceptable contours. \vspace{0.3cm} \item{Periodic NMC} \begin{figure}[H] \includegraphics[width=11.5cm]{NaturalWarmStrongPeriodicNMC_update.jpg} \caption{Predictions of warm natural inflation with periodic NMC to gravity in the strong limit.
We took the values in units where the Planck mass is unity ($\Gamma_0=1000, f=100, V_0=1 \times 10^{-6}$). For the red (black, pink) dots, we have $\lambda=5 \times 10^6 (6 \times 10^6, 8 \times 10^6), \phi_* \in [1\times 10^{-4},1.5 \times 10^{-4}]$ ($\in [1\times 10^{-4},1.8 \times 10^{-4}], \in [1\times 10^{-4},2 \times 10^{-4}] $) corresponding to $N_e \in [24.5,29.3] (\in [22.6,29.3], \in [19.9, 21.6])$. $Q$ in all cases is of order $6 \times 10^3$. \label{fig2}} \end{figure} Fig. (\ref{fig2}) shows the results of scanning the parameter space in the case of strong-limit warm NI with periodic NMC to gravity. As in the case of quadratic NMC, one could accommodate ($n_s, r$) but with too small an $N_e$. In the figure, the dots of the three colors correspond to three choices of the coupling, $\lambda (\times 10^{-6}) =5, 6, 8$. Again, one could meet the acceptable value ($N_e=40$) with ($\lambda = 5 \times 10^{6}$) and the values of ($\Gamma_0, f, V_0$) as before, through scanning over $\phi_*$ and finding a `benchmark' ($\phi_*=5 \times 10^{-4}$) giving the required e-folds ($N_e=40.14$) with $r=4.2 \times 10^{-20}$ and $Q$ of order $1.3 \times 10^4$. However, the scalar spectral index $n_s$ was again large ($n_s = 0.98$), outside the acceptable contours. \end{itemize} \section{Comparison to Data: Weak case } As in the strong-limit case, we performed an exhaustive scan over the free parameters, and for each point we computed $(n_s, r)$ and $N_e$. Again, the search was negative for benchmarks meeting the constraints on ($n_s, r$) at the $95\%$ confidence levels of the Planck 2018 data with an acceptable $N_e \geq 40$. Unlike the strong limit, we could not accommodate $(N_e=40)$ even with out-of-range ($n_s, r$). \begin{figure}[H] \includegraphics[width=11.5cm]{NaturalWarmWeakNMC_update.jpg} \caption{ Predictions of warm natural inflation with NMC to gravity in the weak limit. For the quadratic (periodic) NMC in red (black) dots, we took (the values are given in units where the Planck mass is unity): $\Gamma_0=7.14 \times 10^{-7}, f=2, V_0=2.25 \times 10^{-15}$. We fixed $\phi_* = 2 (6.9)$ and scanned over $\xi (\lambda)$ $\in [1.99, 2.00] ([1.04, 1.06])$. We found $n_s \in [0.95, 0.97] ([0.95, 0.97])$, $r \in [0.015, 0.016] ([0.0385, 0.0386])$, and we got $N_e \approx 0.96 (0.27)$. In both cases of NMC we had $Q$ of order $10^{-4}$. \label{fig3}} \end{figure} Fig. {\ref{fig3}} shows some results of our scan. In both cases of quadratic and periodic NMC to gravity we took $\Gamma_0=7.14 \times 10^{-7}, f=2, V_0=2.25 \times 10^{-15}$. The dots correspond to fixing the horizon crossing field and scanning over the NMC coupling ($\xi, \lambda$). As the figure shows, even though one could accommodate the observables ($n_s, r$), the e-folds number was always too small to be acceptable, which means the ``warm scenario'' ingredient was not enough to solve the problems of NI with NMC. \section{Summary and Conclusion} We discussed in this paper the scenario of warm NI with NMC to gravity. It is well known that NI with NMC and modified gravity is viable considering the Planck 2018 data. We kept the GR Einstein-Hilbert action and examined whether assuming the `warm' paradigm could make NI with NMC viable. Within the warm paradigm, we introduced the `phenomenological' damping factor in the Jordan frame, and examined the approximation which would put it in the same form in the Einstein frame.
We restricted our study to the case where the damping constant is linearly proportional to temperature. We found that in the strong limit, the model is able to accommodate the spectral observables ($n_s, r$), but with a small e-fold number reaching $N_e \sim 30$. However, the points allowing for larger $N_e \geq 40$ would lead to spectral observables slightly out of range. In the weak limit, the allowed parameter space for ($n_s, r$) is far narrower than in the strong limit, but the corresponding $N_e$ is too small ($N_e \leq 1$) to be remedied even at the price of pushing ($n_s, r$) considerably out of range. We conclude that the `warm' ingredient is not enough to solve the problems of NI. A combination of the `warm' paradigm with other mechanisms may be necessary if one wants to make warm NI with NMC viable. \vspace{6pt} {\bf Acknowledgments:} N. Chamoun acknowledges support from the ICTP-Associate program, from the Alexander von Humboldt Foundation, and from the PIFI program at the Chinese Academy of Sciences.
\section{Introduction} \indent We present a linear implicit \emph{m}-step method LIL (Local Iterative Linearization) and prove its convergence applied to the following initial value problem \begin{equation} \overset{.}{x}=f\,(t,x),\quad x(t_{0})=x_{0}, \label{1.1} \end{equation} \noindent where \ $f\,:\,[t_{0},T]\times \mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$,\ $T>0,\,t_{0}\in \mathbb{R}_{+},$\ is a $C^{\,m}$ smooth Lipschitz function\footnote{The Lipschitz condition is necessary for the stability proof.}. Although the classical linear multi-step algorithms are well known and widely utilized, the LIL characteristics (convergence properties, time stability and application results) show that this numerical method can be considered an interesting alternative to the widely used formulas. The backward approximation of derivatives implies null coefficients for the odd-order derivatives, which represents a major advantage for the propagation of errors. As a comparative test, two simple ODEs with known analytical solutions and a chaotic continuous-time dynamical system, first studied by Fabrikant and Rabinovich [6] and recently re-examined numerically by Danca and Chen [3], were integrated using the LIL algorithm and some of the most known algorithms. The complex dynamics of this special model represented a real challenge for almost all of these methods, as shown in Sect. 5. Since the method is implicit, an extrapolation is used as the predictor phase. Like in all $m$-step algorithms, the previous \ $m$ \ points (besides the first \ $m$ \ starting points) must be available at every step. To study the convergence we use the unified approach of stability and consistency developed by Germund Dahlquist in 1956 [2] (see also [7-8]). Thus, the LIL method applied to the initial value problem (\ref{1.1}) is considered convergent if and only if it is stable and consistent. The content of this paper is as follows: In Sect. 2 the LIL method is deduced. The convergence is proved in Sect. 3. In Sect. 4, the time stability with the corresponding time stability domains is presented. In Sect. 5 three examples are presented. All computer tests were realized using a Turbo Pascal code written by the author. In the Appendix, the coefficients of the LIL method are presented. \section{Deduction of the LIL method} \indent Let us consider the uniform grid \begin{equation*} \Delta =(t_{0}<t_{1}<...<t_{n}=T\,),\quad n\in \mathbb{N}^{\ast }, \end{equation*} \noindent with the step-size \ \begin{equation*} h=\frac{T-t_{0}}{n}=2\,\delta t\,, \end{equation*} \strut where \ $\delta t$ \ stands for the radius of the neighborhood \ $V_{k}=\left( t_{k}-\delta t\,,\,t_{k}+\delta t\right) \,,$ \ $k=1,2,...,n-1.$ \noindent We assume that all infinite Taylor series converge, although this is not strictly necessary, since one truncates at a sufficiently large but finite number of terms. \noindent We introduce the following notations \begin{eqnarray*} x_{k-j} &:&=x(t_{k-j})=x\left[ t_{0}+\left( k-j\right) h \right] , \\ x_{k}^{(i)} &:&=x^{(i)}(t_{k}),\quad x_{k}^{(0)}:=x(t_{k}),\quad \qquad j=1,2,...,m. \end{eqnarray*} In the following \ $k$ \thinspace is supposed to take the values \ $k=1,2,...,n-1.$ \noindent If we consider \ $x_{k-j}$ \ as a function of the variable $h$ \ defined in \ $V_{k}$, then the first \ $m$ \ terms of the Taylor approximation of \ $x_{k-j}$ \ read\footnote{The choice of \ $m$ \ and \ $h$ \ is supposed to be such that the Taylor approximation can be used.
The link between \ $h$ \ and \ $m$ \ is analyzed in Section 3.1.} \begin{equation} x_{k-j}\thickapprox x_{k}-\frac{j\,h}{1!}x_{k}^{^{\prime }}+\frac{\left( j\,h\right) ^{2}}{2!}x_{k}^{^{\prime \prime }}-...+\left( -1\right) ^{m}\frac{\left( j\,h\right) ^{m}}{m!}x_{k}^{\left( m\right) }, \label{2.1} \end{equation} where \ $j=1,2,...,m.$ \ The relations \ (\ref{2.1}) represent a linear (Cramer) system with the unknowns \ $x_{k}^{\left( i\right) },\ i=1,2,...,m$: \begin{equation} \begin{array}[t]{l} x_{k-1}-x_{k}\thickapprox -\frac{h}{1!}x_{k}^{^{\prime }}+\frac{\,h^{2}}{2!}x_{k}^{^{\prime \prime }}-...+\left( -1\right) ^{m}\frac{\,h^{m}}{m!}x_{k}^{\left( m\right) }, \\ x_{k-2}-x_{k}\thickapprox -\frac{2\,h}{1!}x_{k}^{^{\prime }}+\frac{\left( 2\,h\right) ^{2}}{2!}x_{k}^{^{\prime \prime }}-...+\left( -1\right) ^{m}\frac{\left( 2\,h\right) ^{m}}{m!}x_{k}^{\left( m\right) }, \\ ... \\ x_{k-m}-x_{k}\thickapprox -\frac{m\,h}{1!}x_{k}^{^{\prime }}+\frac{\left( m\,h\right) ^{2}}{2!}x_{k}^{^{\prime \prime }}-...+\left( -1\right) ^{m}\frac{\left( m\,h\right) ^{m}}{m!}x_{k}^{\left( m\right) }. \end{array} \label{2.2} \end{equation} Up to sign, the determinant of the system (\ref{2.2}) is \begin{equation*} \Delta =\frac{h^{^{m\left( m+1\right) /2}}}{1!2!...m!}\left| \begin{array}{lllll} 1 & 1 & 1 & ... & 1 \\ 2 & 2^{2} & 2^{3} & ... & 2^{m} \\ ... & ... & ... & ... & ... \\ m & m^{2} & m^{3} & ... & m^{m} \end{array} \right| =h^{^{^{m\left( m+1\right) /2}}}\,. \end{equation*} Because for \ $m\geq 2$\ $\,$we have $\,\Delta \neq 0$,\ \ there exists a unique solution \begin{eqnarray*} x_{k}^{\left( i\right) } &=&\frac{1}{h^{i}}\sum_{j=0}^{m}\delta _{i\,j}x_{k-j},\quad i=1,2,...,m,\quad \text{for \ \ }m\geq 2, \label{2.3} \\ x_{k}^{^{\prime }} &=&\frac{x_{k}-x_{k-1}}{h}\quad \text{for \ }m=1, \notag \end{eqnarray*} the coefficients \ $\delta _{i\,j}$ \ being given in Table 7 of the Appendix. \noindent Thus we obtained a backward approximation of the derivatives, which represents the key of the LIL method. The Taylor approximation of the solution \ $x$, considered now as a function of \ $t$ \ in the neighborhood \ $V_{k},$ \ is \begin{equation} x(t)\thickapprox x(t_{k})+\frac{t-t_{k}}{1!}x^{^{\prime }}(t_{k})+\frac{\left( t-t_{k}\right) ^{2}}{2!}x^{^{\prime \prime }}(t_{k})+...+\frac{\left( t-t_{k}\right) ^{m}}{m!}x^{\left( m\right) }(t_{k}). \label{2.4} \end{equation} Next, integrating (\ref{2.4}) in $\ V_{k}$ \ we get \begin{center} \begin{equation} \begin{array}{l} \int\limits_{t_{k}-\delta \,t}^{t_{k}+\delta \,t}x(t)\,dt=\int\limits_{-\,\delta \,t}^{\delta \,t}x(t+t_{k})\,dt\thickapprox \int\limits_{-\,\delta \,t}^{\delta \,t}\left( \sum\limits_{i\,=0}^{m}\frac{x_{k}^{\left( i\right) }}{i!}\,t^{i}\right) dt= \\ =\sum\limits_{\substack{ i\,=0,2,4,... \\ i\leq \,m}}^{m}\frac{1}{2^{i}\left( i+1\right) !}h^{i+1}x_{k}^{\left( i\right) }=h\,x_{k}+h^{3}\frac{1}{24}x_{k}^{^{\prime \prime }}+h^{5}\frac{1}{1920}x_{k}^{\left( 4\right) }+... \end{array} \label{2.5} \end{equation} \end{center} \begin{remark} The zero coefficients of the derivatives \ $x_{k}^{\left( 2i+1\right) }$ \ for $\,i=0,1,2,...$ in\emph{\ (\ref{2.5})} represent a major advantage for the propagation of errors and computation time. \end{remark}
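Let us note that, besides Table 7, the coefficients $\delta_{ij}$ can be generated directly by solving the linear system (\ref{2.2}) numerically. The following Python sketch (ours, added for illustration; the helper name \texttt{backward\_coeffs} is not from the original Turbo Pascal code) does this for arbitrary $m$:

\begin{verbatim}
import numpy as np
from math import factorial

def backward_coeffs(m):
    """Solve system (2.2) for the backward-difference coefficients delta_{ij}.

    Row j of B holds the Taylor coefficients of x_{k-j} - x_k in the
    unknowns h^i x_k^{(i)}, i = 1..m:  B[j-1, i-1] = (-1)^i j^i / i!.
    """
    B = np.array([[(-1.0)**i * j**i / factorial(i)
                   for i in range(1, m + 1)]
                  for j in range(1, m + 1)])
    # Rows of B^{-1} express h^i x^{(i)} as combinations of x_{k-j} - x_k,
    # i.e. delta_{ij} for j = 1..m; delta_{i0} = -sum_j delta_{ij}.
    D = np.linalg.inv(B)          # D[i-1, j-1] = delta_{ij}, j >= 1
    d0 = -D.sum(axis=1)           # delta_{i0}
    return np.column_stack([d0, D])

# Example: for m = 2 the first row reproduces the familiar one-sided
# stencil x' ~ (3x_k - 4x_{k-1} + x_{k-2})/(2h), the second row the
# standard stencil x'' ~ (x_k - 2x_{k-1} + x_{k-2})/h^2.
print(backward_coeffs(2))
\end{verbatim}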
\noindent Using in (\ref{2.5}) the derivative expressions (2.3), we have \begin{equation} \int\limits_{t_{k}-\delta \,t}^{t_{k}+\delta \,t}x(t)\,dt\thickapprox h\sum_{i=0}^{m}\sigma _{0\,i}x_{k-i}\,, \label{2.6} \end{equation} the coefficients \ $\sigma _{0\,i}$ \ being given in Table 8(a) of the Appendix. In the same way one can approximate the integral of \ $x^{\prime }$ \ on \ $V_{k}$ \begin{equation} \int\limits_{t_{k}-\delta \,t}^{t_{k}+\delta \,t}x^{\prime }(t)\,dt\thickapprox \sum_{i=0}^{m}\sigma _{1\,i}x_{k-i}\,, \label{2.7} \end{equation} the coefficients \ $\sigma _{1i}$ \ being given in Table 8(b) of the Appendix. \noindent To overcome the difficulty of the Taylor approximation of the composite function \ $f$, \ we found, empirically, that the relations (\ref{2.6}) provide a simple way to approximate the integral of \ $f$ \ without altering the convergence of the method. Thus \begin{equation} \int\limits_{t_{k}-\delta \,t}^{t_{k}+\delta \,t}f\,(t,x(t))\,dt\thickapprox h\sum_{i\,=\,0}^{m}\sigma _{0\,i}\,\,f_{k-i}, \label{2.8} \end{equation} where \ $f_{k-i}:=f\,(t_{k-i},x(t_{k-i}))$. \noindent Using (\ref{2.7}) and (\ref{2.8}) we can integrate (\ref{1.1}) in \ $V_{k}\,$ \begin{equation*} \sum_{i\,=\,0}^{m}\sigma _{1i}x_{k-i}=h\,\sum_{i\,=\,0}^{m}\sigma _{0\,i}\,\,f_{k-i}\,. \end{equation*} \noindent Because \ $\sigma _{10}\neq 0$ \ for every \ $m\,\ $(see Table 8 of the Appendix), \ the approximation of the solution in \ $V_{k}$ \ is \begin{equation} x_{k}=\frac{\,h}{\sigma _{10}}\sum\limits_{i\,=\,0}^{m}\sigma _{0\,i}\,\,f_{k-i}-\frac{1}{\sigma _{10}}\sum\limits_{i\,=\,1}^{m}\sigma _{1\,i}\,x_{k-i}. \label{2.9} \end{equation} \noindent If we denote \begin{equation*} u_{k}:=\frac{\,1}{\sigma _{10}}\sum\limits_{i\,=\,0}^{m}\sigma _{0\,i}\,\,f_{k-i},\quad v_{k}:=-\,\frac{1}{\sigma _{10}}\sum\limits_{i\,=\,1}^{m}\sigma _{1\,i}\,x_{k-i}, \end{equation*} the relations (\ref{2.9}) become \begin{equation} x_{k}=v_{k}+h\,u_{k}\,,\quad k=1,2,...n-1\,. \label{2.10} \end{equation} Formula (\ref{2.10}) represents the $m$-step LIL method. In Table 1 the formulae for \ $m\in \{1,2,3,4,5\}$ \ are presented. \label{Table 1} \begin{table} \begin{center} \begin{tabular}{||l|l||} \hline\hline $m$ & \\ \hline & \\ $1$ & $x_{k}=x_{k-1}+h\,f_{k},$ \\ & \\ \hline & \\ $2$ & $x_{k}=\frac{4}{3}x_{k-1}-\frac{1}{3}x_{k-2}+\frac{h}{36}\left( 25\,f_{k}-2\,f_{k-1}+f_{k-2}\right) ,$ \\ & \\ \hline & \\ $3$ & $x_{k}=\frac{5}{3}x_{k-1}-\frac{13}{15}x_{k-2}+\frac{1}{5}x_{k-3}+\frac{h}{45}\left( 26\,f_{k}-5\,f_{k-1}+4\,f_{k-2}-f_{k-3}\right) $ \\ & \\ \hline & \\ $4$ & $x_{k}=2x_{k-1}-\frac{8}{5}x_{k-2}+\frac{26}{35}x_{k-3}-\frac{1}{7}x_{k-4}+\frac{h}{12600}(6463\,f_{k}-2092f_{k-1}$ \\ & \\ & $\,\,\,\,\,\,\,\,\,\,\,+2298f_{k-2}-1132f_{k-3}+223f_{k-4}),$ \\ & \\ \hline & \\ $5$ & $x_{k}=\frac{7}{3}x_{k-1}-\frac{38}{15}x_{k-2}+\frac{62}{35}x_{k-3}-\frac{43}{63}x_{{k-4}}+\frac{1}{9}x_{k-5}+\frac{h}{14175}(6669\,f_{k} $ \\ & \\ & $\,\,\,\,\,\,\,\,\,\,\,-3122\,f_{k-1}+4358\,f_{k-2}-3192\,\,f_{k-3}+1253\,\,f_{k-4}-206\,f_{k-5}).$ \\ & \\ \hline\hline \end{tabular} \caption{LIL algorithms.} \end{center} \end{table} The study was carried out up to \ $m=8,$ \ but in this paper, for the sake of simplicity, we consider only \ $m\in \{1,2,3,4,5\}$. For \ $m=1$ the LIL method is equivalent to the backward Euler method. The LIL method is an implicit method due to the presence of the term $f_{k}$ on the right-hand side, which depends on $x_{k}$. 
Therefore additional computations are necessary in order to calculate $f_{k}$. For this purpose we approximate $x_{k-1}$ in $V_{k}$ (see (\ref{2.1})) \begin{equation*} x_{k-1}\thickapprox x_{k}-\frac{h}{1!}x_{k}^{\prime }+\frac{h^{2}}{2!}x_{k}^{\prime \prime }-...+\left( -1\right) ^{m}\frac{h^{m}}{m!}x_{k}^{\left( m\right) }. \end{equation*} Using relations (2.3) for the derivatives, one obtains \begin{equation*} x_{k-1}\thickapprox x_{k}-\frac{1}{1!}\sum\limits_{i=0}^{m}\delta _{1\,i}x_{k-i}+\frac{1}{2!}\sum\limits_{i=0}^{m}\delta _{2\,i}x_{k-i}-...+\frac{\left( -1\right) ^{m}}{m!}\sum\limits_{i=0}^{m}\delta _{m\,i}x_{k-i}\,, \end{equation*} \noindent whence \begin{equation} x_{k}\thickapprox \sum\limits_{i=1}^{m}\varepsilon _{m\,i}x_{k-i},\quad m>1.\, \label{2.11} \end{equation} The coefficients \ $\varepsilon _{m\,i}$ \ are given in Table 9 of the Appendix. \noindent Using (\ref{2.11}), \ $f_{k}$ \ becomes \begin{equation*} f_{k}=f\,\left( t_{k},\sum\limits_{i=1}^{m}\varepsilon _{m\,i}x_{k-i}\right) . \end{equation*} The relation (\ref{2.11}) represents an extrapolation formula (the predictor phase) for \ $x_{k}$; it can be used by itself to approximate the solution, but not with acceptable accuracy, while (\ref{2.10}) is the corrector phase. \noindent Because (\ref{2.10}) is a multi-step relation, a starting method (for example the standard Runge-Kutta method) is necessary in order to calculate the first \ $m$ \ starting values: \ $x_{-1},x_{-2},...,x_{-m}$. \section{The convergence} \indent The convergence is analyzed using the Dahlquist theory, which states that a numerical method is convergent\footnote{``Convergence'' means here ``uniform convergence'' on an interval for any \thinspace $C^{m}\,\,$smooth function $\,f$\thinspace .} if it is consistent and stable (see [2], [4] or [7-8]). To this end let us consider the LIL method (\ref{2.10}) in the usual form \begin{equation} \sigma _{10}x_{k}+\sigma _{11}x_{k-1}+...+\sigma _{1m}x_{k-m}=h\left( \sigma _{00}\,f_{k}+\sigma _{01}\,f_{k-1}+...+\sigma _{0m}\,f_{k-m}\right) , \label{3.1} \end{equation} \noindent with the characteristic polynomials \begin{equation} \alpha _{m}(s)=\sum\limits_{i=0}^{m}\sigma _{1\,i}\,s^{m-i},\quad \beta _{m}\left( s\right) =\sum\limits_{i=0}^{m}\sigma _{0\,i}\,s^{m-i},\,\,m\in \{1,2,3,4,5\}. \label{3.2} \end{equation} \subsection{Consistency and errors} \indent Following the Dahlquist theory, the LIL method is consistent because its characteristic polynomials (\ref{3.2}) satisfy \ $\alpha _{m}(1)=0$ \ and \ $\alpha _{m}^{\prime }(1)=\beta _{m}\left( 1\right) $ \ for \ $m\in \{1,2,3,4,5\}.$ \ As is known, a linear multi-step method has order $r$ \ if, and only if, the first \ $r$ \ of the following coefficients \begin{equation*} C_{j}=\sum\limits_{i=0}^{m}\sigma _{1\,i}\,\,i\,^{j}+j\sum\limits_{i=0}^{m}\sigma _{0\,i}i^{\,j-1},\quad j=1,2,...,r, \end{equation*} \noindent vanish. \noindent Note that above the convention \ $0^{0}=1$ \ was used. The values of \ $C$ \ for the LIL method are given in Table 2. 
\label{Table 2} \begin{table} \begin{center} \begin{tabular}[t]{||c|c|c|c|c|c|c|c|c||} \hline\hline $m$ & $C_{1}$ & $C_{2}$ & $C_{3}$ & $C_{4}$ & $C_{5}$ & $C_{6}$ & $C_{7}$ & $\epsilon _{t}$ \\ \hline {\small 1} & {\small 0} & {\small 0} & {\small -0.5} & & & & & $O(h^{2})$ \\ \hline {\small 2} & {\small 0} & {\small 0} & {\small 0} & {\small -0.04} & & & & $O(h^{3})$ \\ \hline {\small 3} & {\small 0} & {\small 0} & {\small 0} & {\small 0} & {\small -0.313} & & & $O(h^{4})$ \\ \hline {\small 4} & {\small 0} & {\small 0} & {\small 0} & {\small 0} & {\small 0} & {\small -1.37} & & $O(h^{5})$ \\ \hline {\small 5} & {\small 0} & {\small 0} & {\small 0} & {\small 0} & {\small 0} & {\small 0} & {\small -177.184} & $O(h^{6})$ \\ \hline\hline \end{tabular} \caption{C coefficients.} \end{center} \end{table} \noindent From Table 2 one can deduce that the order of the LIL method (the largest \ $r$ \ for which \ $C_{1},...,C_{r}$ \ all vanish) is \ $m+1.$ The local truncation error \ $\epsilon _{t}$ \ is, for a given \ $m,$ \ of order \ $m+1\,$ (see e.g. [7]). For comparison, the local truncation error of the standard (4th-order) Runge-Kutta algorithm is of order 4, while for the multi-step Adams-Moulton and Gear algorithms it is of order \ $m+1$, the same as for the LIL algorithm. The global truncation error (the accumulation of the local truncation errors) per unit time is \ $\overline{\epsilon _{t}}=\epsilon _{t}/h$. Hence the global truncation error per unit time is of order \ $m$. \subsection{Stability} \indent LIL is stable if all solutions of the associated homogeneous difference equation are bounded, i.e. if the characteristic equations \begin{equation} \alpha _{m}(s)=0,\,\,m\in \{1,2,3,4,5\}, \label{3.3} \end{equation} \noindent satisfy the root condition. A\ necessary and sufficient condition for stability is that all zeros \ $s_{k}$\thinspace $,k=1,2,...,m,$\ \ of \ $\alpha _{m}$ \ satisfy \ $\left| \,s_{k}\right| \leq 1$ \ and that the zeros with \ $\left| \,s_{k}\right| =1$ \ be simple. \ It is easy to see that $\,\alpha _{1}(s)=s-1$ \ and\ for \ $m\geq 2,$\ $\,\,\alpha _{m}(s)=(s-1)\gamma _{m-1}(s)$ \ (Table 3), with the zeros, numerically found for \ $m=3,4,5$, given in Table 4. \label{Table 3} \begin{table} \begin{center} \begin{tabular}{||l|l||} \hline $m=2$ & $\gamma _{1}(s)=(3\,s-1),$ \\ \hline $m=3$ & $\gamma _{2}(s)=(15\,s^{2}-10\,s+3),$ \\ \hline $m=4$ & $\gamma _{3}(s)=(35\,s^{3}-35\,s^{2}+21\,s-5),$ \\ \hline $m=5$ & $\gamma _{4}(s)=(315\,s^{4}-420\,s^{3}+378\,s^{2}-180\,s+35).$ \\ \hline\hline \end{tabular} \caption{The polynomials $\gamma _{m-1}$.} \end{center} \end{table} \label{Table 4} \begin{table} \begin{center} \begin{tabular}{||c|c|c|c|c|c||} \hline\hline & $s_{1}$ & $s_{2}$ & $s_{3}$ & $s_{4}$ & $s_{5}$ \\ \hline $m=2$ & $1$ & $0.33$ & {\small -} & {\small -} & {\small -} \\ \hline $m=3$ & $1$ & $0.33+i\,0.30$ & $0.33-i\,0.30$ & {\small -} & {\small -} \\ \hline $m=4$ & $1$ & $0.40$ & $0.30+i\,0.52$ & $0.30-i\,0.52$ & {\small -} \\ \hline $m=5$ & $1$ & $0.40+i\,0.17$ & $0.40-i\,0.17$ & $0.26+i\,0.72$ & $0.26-i\,0.72$ \\ \hline\hline \end{tabular} \caption{The zeros of the characteristic equation $(3.3).$} \end{center} \end{table} \noindent Hence the LIL method is stable and therefore we have the following result \begin{theorem} The LIL method applied to the initial value problem \emph{(\ref{1.1})} is convergent for all \ $m\in \{1,2,3,4,5\}.$ \begin{proof} Because LIL is consistent and stable, following the Dahlquist theory, it is convergent. \end{proof} \end{theorem}
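The root condition used in the proof can be double-checked numerically from the polynomials $\gamma_{m-1}$ of Table 3; a small Python/NumPy sketch of this verification (ours, added for illustration):

\begin{verbatim}
import numpy as np

# Coefficients of gamma_{m-1}(s) from Table 3, highest degree first.
gammas = {
    2: [3, -1],
    3: [15, -10, 3],
    4: [35, -35, 21, -5],
    5: [315, -420, 378, -180, 35],
}

for m, coeffs in gammas.items():
    # alpha_m(s) = (s - 1) * gamma_{m-1}(s); the extra root s = 1 is simple.
    roots = np.roots(coeffs)
    assert np.all(np.abs(roots) < 1), f"root condition violated for m={m}"
    print(m, np.round(roots, 2))  # reproduces the entries of Table 4
\end{verbatim}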
\section{The regions of time stability} \indent An integration method may have low round-off error and low truncation error, but be totally worthless because it is time unstable. The standard way of testing the time (numerical) stability is to apply the integration method to the first-order linear test equation \begin{equation} \overset{.}{x}=\lambda x,\quad x(0)=x_{0}, \label{4.1} \end{equation} \noindent where \ $x,$ $x_{0},\lambda $ \ may be complex. A method is \emph{time (numerically) stable} for specified values \ $\left( \lambda ,h\right) $ if it produces a bounded sequence \ $\{x_{n}\}$ \ when applied to the test problem (\ref{4.1}) [7]. The set of the complex values $\ z=\lambda h$ \ for which \ $\{x_{n}\}$ \ is bounded is called the \emph{stability region} of the method. When an integration method is applied to the system (\ref{4.1}), the result is a linear, discrete-time system with a fixed point at the origin, and the stability of this fixed point determines the time stability of the integration method. Note that the exact solution of (\ref{4.1}) is bounded precisely when \ $Re(z)\leq 0$, so a desirable stability region contains the half-plane \ $Re(z)\leq 0.$ \noindent Although this stability criterion guarantees stability only when integrating a linear system, and not for nonlinear systems, it is the usual way to compare the numerical performances of different algorithms. By the theorem which states that a linear multi-step method is time stable for a particular \ $z$ \ if and only if the equation \ $\alpha _{m}(\xi )=z\,\beta _{m}(\xi )$ \ has the following properties: all roots satisfy \ $|\,\xi \,|\leq 1,$ \ and all roots with \ $|\,\xi \,|=1$ \ are simple (see e.g. [8]), the time stability of the LIL method at $z=0$ follows from the convergence study. In order to draw the stability regions let us define \begin{equation*} P_{m}(\xi ):=\alpha _{m}(\xi )-z\,\beta _{m}(\xi ). \end{equation*} Then a linear multi-step method has the stability region \ $S$, \ the set of all points \ $z\in \mathbb{C}$ \ such that all the roots of \ $P_{m}(\xi )=0$ \ lie inside or on the unit circle and those on the unit circle are simple. Hence we obtain the equation \begin{equation} z=\frac{\alpha _{m}(\xi )}{\beta _{m}(\xi )}, \label{4.2} \end{equation} \noindent which would have to be solved for \ $\xi$ \ at any given \ $z\in \mathbb{C}$. \ But instead of solving (\ref{4.2}) for given \ $z$\thinspace, we can set\ $\xi =e^{i\,\theta }$ \ with \ $\left| \,\xi \right| =1$ and plot \begin{equation} z=\frac{\alpha _{m}(e^{i\,\theta })}{\beta _{m}(e^{i\,\theta })}, \label{4.3} \end{equation} \noindent for \ $\theta \in \left[ 0,\,2\pi \right] .$ \ The set thus mapped must contain \ $\partial \,S$. The stability region of a numerically stable algorithm has to contain the origin on its boundary. \noindent In Figure 1 the stability regions of the LIL algorithm for \thinspace $m\in \{1,2,3,4,5\}\,$ are drawn. One can observe that the LIL algorithm has, for all \ $m,$ large (even unbounded) regions of stability, including the entire left-half complex plane, as is typical for implicit algorithms. \ The time stability of the LIL method is better than that of other known algorithms and is comparable with the time stability of Gear's algorithm (see e.g. [5] where the stability regions were drawn for several known algorithms). 
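The boundary locus (\ref{4.3}) is straightforward to trace numerically. As an illustration (our sketch, not the author's Turbo Pascal code), for $m=2$ Table 1 gives, after multiplication by 3, $3x_k-4x_{k-1}+x_{k-2}=\frac{h}{12}\left(25f_k-2f_{k-1}+f_{k-2}\right)$, so $\alpha_2(\xi)=3\xi^2-4\xi+1$ and $\beta_2(\xi)=(25\xi^2-2\xi+1)/12$:

\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# m = 2 LIL scheme, normalized form of the Table 1 entry.
theta = np.linspace(0.0, 2.0 * np.pi, 2000)
xi = np.exp(1j * theta)
alpha = 3 * xi**2 - 4 * xi + 1
beta = (25 * xi**2 - 2 * xi + 1) / 12.0
z = alpha / beta     # boundary locus of the stability region, Eq. (4.3)

plt.plot(z.real, z.imag)
plt.axhline(0, lw=0.5); plt.axvline(0, lw=0.5)
plt.xlabel(r"$Re(z)$"); plt.ylabel(r"$Im(z)$")
plt.title("Boundary locus of the 2-step LIL method")
plt.show()
\end{verbatim}

Note that the zeros of $\beta_2$ lie well inside the unit circle, so the locus is finite for all $\theta$.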
\begin{figure}[ht] \centering \subfigure[]{ \includegraphics[scale=0.53]{FIG1A.png}} \subfigure[]{ \includegraphics[scale=0.5]{FIG1B.png}} \centering \subfigure[]{ \includegraphics[scale=0.51]{FIG1C.png}} \subfigure[]{ \includegraphics[scale=0.52]{FIG1D.png}} \begin{center} \includegraphics[scale=0.5] {FIG1E.png}\label{fig1} \caption{Stability regions of the $(m+1)$th-order LIL method: a) $m=1$; b) $m=2$; c) $m=3;$ d) $m=4$; e) $m=5$.} \end{center} \end{figure} \noindent Taking into account the fact that higher order does not always mean higher accuracy, an acceptable compromise between accuracy, time stability and computational time proved to be \ $m=3$. \section{Applications} \subsection{LIL versus standard methods} The goal of this section is to compare the characteristics of a few well-known standard algorithms (the 4th-order Runge-Kutta, Gear and Adams-Moulton methods, the 3rd-order Adams-Bashforth method and the Milne method) and the 4th-order LIL method. For this purpose we integrated two simple examples with known analytical solutions: the Bernoulli equation \begin{equation*} 2\,t^{2}\overset{.}{x}(t)-4\,t\,x(t)-x^{2}(t)=0,\, \end{equation*} \noindent and \begin{equation*} \overset{.}{x\,}(t)=\cos (t).\, \end{equation*} \noindent The following values were calculated: - the relative error \ $\varepsilon _{r}=\sum \left| x_{a}-x\right| /\sum x_{a}\,,\ \ $where $x_{a}\,$\ is the analytical solution; the sum is taken over the integration interval; - the maximum absolute error \ $\Delta =\underset{k}{\max }\left| x_{a,k}-x_{k}\right| ,\,$\thinspace where $x_{a,k}$ \thinspace is the exact solution at \thinspace $t_{k}$; - the computation time $t$\footnote{$t$ is here only a relative value since it depends on the used code (Turbo Pascal, using 64 bits) and on the computer processor (500 MHz).}. \noindent The results are presented in Tables 5 and 6. 
\begin{table}[ht] \begin{center} \begin{tabular}{||c|c|c|c|c|c|c||} \hline\hline (a) & R-K & Gear & A-M & A-B & Milne & LIL \\ \hline $\varepsilon _{r}$ & {\small 1.9}$\cdot 10^{-4}$ & {\small 1.9}$\cdot 10^{-4} $ & {\small 1.1}$\cdot 10^{-7}$ & {\small 1.1}$\cdot 10^{-7}$ & {\small 1.4}$\cdot 10^{-7}$ & {\small 1.4}$\cdot 10^{-7}$ \\ \hline $\Delta $ & {\small 1.9}$\cdot 10^{-2}$ & {\small 2.0}$\cdot 10^{-2}$ & {\small 1.2}$\cdot 10^{-5}$ & {\small 1.2}$\cdot 10^{-5}$ & {\small 1.8}$\cdot 10^{-5}$ & {\small 1.5}$\cdot 10^{-5}$ \\ \hline \emph{t}[s] & {\small 0.16} & {\small 0.16} & {\small 0.16} & {\small 0.10} & {\small 0.10} & {\small 0.16} \\ \hline\hline \end{tabular} \begin{tabular}{||c|c|c|c|c|c|c||} \hline\hline (b) & R-K & Gear & A-M & A-B & Milne & LIL \\ \hline $\varepsilon _{r}$ & {\small 3.8}$\cdot 10^{-5}$ & {\small 3.8}$\cdot 10^{-5} $ & {\small 2.3}$\cdot 10^{-10}$ & {\small 2.3}$\cdot 10^{-10}$ & {\small 2.8}$\cdot 10^{-10}$ & {\small 2.8}$\cdot 10^{-10}$ \\ \hline $\Delta $ & {\small 1.9}$\cdot 10^{-3}$ & {\small 2.0}$\cdot 10^{-3}$ & {\small 1.2}$\cdot 10^{-8}$ & {\small 1.8}$\cdot 10^{-8}$ & {\small 1.8}$\cdot 10^{-8}$ & {\small 1.5}$\cdot 10^{-8}$ \\ \hline \emph{t}[s] & {\small 0.82} & {\small 0.71} & {\small 0.82} & {\small 0.43} & {\small 0.71} & {\small 0.87} \\ \hline\hline \end{tabular} \caption{ Bernoulli equation integrated with: a) $h=0.01,~$\thinspace $t\in \lbrack 1,100]$; b) $h=0.001$, ~$t\in [1,50].$} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{||c|c|c|c|c|c|c||} \hline\hline $(a)$ & R-K & Gear & A-M & A-B & Milne & LIL \\ \hline $\varepsilon _{r}$ & {\small 2.4}$\cdot 10^{-2}$ & {\small 4.9}$\cdot 10^{-2} $ & {\small 7.7}$\cdot 10^{-4}$ & {\small 4.0}$\cdot 10^{-3}$ & {\small 5.0}$\cdot 10^{-3}$ & {\small 5.0}$\cdot 10^{-3}$ \\ \hline $\Delta $ & {\small 3.7}$\cdot 10^{-2}$ & {\small 5.3}$\cdot 10^{-2}$ & {\small 4.9}$\cdot 10^{-4}$ & {\small 2.6}$\cdot 10^{-3}$ & {\small 3.9}$\cdot 10^{-3}$ & {\small 3.3}$\cdot 10^{-3}$ \\ \hline $\emph{t}$[s] & $0.0$ & {\small 0.0} & {\small 0.0} & {\small 0.0} & {\small 0.0} & {\small 0.0} \\ \hline\hline \end{tabular} \begin{tabular}{||c|c|c|c|c|c|c||} \hline\hline $(b)$ & R-K & Gear & A-M & A-B & Milne & LIL \\ \hline $\varepsilon _{r}$ & $4.9\cdot 10^{-4}$ & $9.9\cdot 10^{-4}$ & $3.9\cdot 10^{-7}$ & {\small 1.5}$\cdot 10^{-6}$ & {\small 1.9}$\cdot 10^{-6}$ & $1.9\cdot 10^{-6}$ \\ \hline $\Delta $ & $7.5\cdot 10^{-4}$ & $1.0\cdot 10^{-3}$ & $2.4\cdot 10^{-7}$ & {\small 2.7}$\cdot 10^{-7}$ & {\small 1.5}$\cdot 10^{-6}$ & $1.2\cdot 10^{-6} $ \\ \hline $\emph{t}$[s] & $0.16$ & {\small 0.16} & {\small 0.16} & {\small 0.16} & {\small 0.16} & {\small 0.16} \\ \hline\hline \end{tabular} \caption{ $\dot{x}(t)=\cos (t)$, $t\in [0,2\pi]$ integrated with: a) $h=0.05$; b) $h=0.001$.} \end{center} \end{table} Comparing the results in Tables 5 and 6, one can deduce that the performance of LIL on these two examples is comparable to that of well-established methods like Gear, Adams-Moulton and Adams-Bashforth. \subsection{Rabinovich-Fabrikant system} \indent The hardest test was the integration of the Rabinovich-Fabrikant system. Rabinovich and Fabrikant [6] studied the following dynamical system (named the R-F model hereafter) \begin{equation} \begin{array}{l} \overset{.}{x_{1}}=x_{2}(x_{3}-1+x_{1}^{2})+ax_{1}, \\[3pt] \overset{.}{x_{2}}=x_{1}(3x_{3}+1-x_{1}^{2})+ax_{2}, \\[3pt] \overset{.}{x_{3}}=-2x_{3}(b+x_{1}x_{2}), \end{array} \quad \qquad a,b\in \mathbb{R}. 
\label{5.1} \end{equation} \begin{figure}[b] \centering \subfigure[]{ \includegraphics[clip,width=0.6\textwidth]{FIGURE2A.png}} \end{figure} \begin{figure} \begin{center} \includegraphics[clip,width=0.6\textwidth] {FIGURE2B.png} \caption{Two chaotic trajectories of the R-F system: a) Three-dimensional phase portrait for $a=0.1$,~ $b=0.2876$; b) Plane phase portraits and time series for $a=-1$, ~ $b=-0.1$.} \end{center} \end{figure} \begin{figure} \centering \subfigure[]{ \includegraphics[clip,width=0.62\textwidth]{FIGURE3A.png}} \end{figure} \begin{figure} \begin{center} \includegraphics[clip,width=0.62\textwidth] {FIGURE3B.png} \caption{Two different sizes of the same attractor obtained with different step-sizes: a) for \ $h=5\times 10^{-3},$ \ $x_{3\max }=35$ \ while b) for \ $h=5\times 10^{-4},$ \ $x_{3\max }=350.$} \end{center} \end{figure} \begin{figure} \centering \subfigure[]{ \includegraphics[clip,width=0.55\textwidth]{FIGURE4A.png}} \end{figure} \begin{figure} \begin{center} \includegraphics[clip,width=0.55\textwidth] {FIGURE4B.png} \caption{Two different attractors (plotted here by points), with the same initial conditions and parameter values ($a=0.12,$ \ $b=0.05)$, but with different step-sizes: \ a) $h=0.05$ \ and \ b) $h=0.005.$} \end{center} \end{figure} \begin{figure} \centering \subfigure[]{ \includegraphics[clip,width=0.7\textwidth]{FIGURE5A.png}} \centering \subfigure[]{ \includegraphics[clip,width=0.7\textwidth]{FIGURE5B.png}} \end{figure} \begin{figure} \centering \subfigure[]{ \includegraphics[clip,width=0.6\textwidth]{FIGURE5C.png}} \end{figure} \begin{figure} \begin{center} \includegraphics[clip,width=0.6\textwidth] {FIGURE5D.png} \caption{The case \ $a=0.3$ \ and \ $b=0.1$ \ integrated with: a) the 4th-order Adams-Moulton algorithm; b) the 4th-order Gear algorithm; c) the 3rd-order Adams-Bashforth algorithm; d) the 4th-order LIL algorithm.} \end{center} \end{figure} \noindent This system models the stochasticity arising from the modulation instability in a non-equilibrium dissipative medium. Some qualitative analysis and numerical dynamics were reported in [6], and a careful re-examination, together with many new and rich complex dynamics of the model that were mostly not reported before, is presented in [3]. The chaotic R-F model proved to be a great challenge to the classical numerical methods, most of them being unsuccessful in studying the complex dynamics of this special model. \noindent All computer test results and graphical plots in Figures 2-5 were obtained with a special Turbo Pascal code which plots phase diagrams and time series. The code for the LIL method may be obtained directly from the author. \noindent For $a<b$, the system is characterized by the appearance of chaotic attractors in the phase space (see e.g. Figure 2). \noindent It is well known that, because of the sensitive dependence on initial data, a chaotic system tends to amplify, often exponentially, tiny initial errors. These kinds of errors can be amplified so much that it is almost impossible to draw mathematically rigorous conclusions based on numerical simulations. A typical case can be seen in Figure 3, wherefrom one deduces that the attractor's size along the $x_{3}$-axis increases significantly as the step-size decreases. This problem has been noticed for a long time and has promoted a useful theory called ``shadowing,'' namely, the existence of a true orbit near a numerically computed approximate orbit [1]. 
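To make the predictor-corrector mechanism concrete, here is a minimal Python sketch (ours, not the author's code) of the $m=2$ LIL scheme of Table 1 applied to the R-F system; since the predictor coefficients $\varepsilon_{2i}$ of Table 9 are not reproduced here, a simple linear extrapolation is assumed for the predictor phase, followed by fixed-point corrector iterations, and two plain Euler steps stand in for a Runge-Kutta starter just to keep the sketch self-contained:

\begin{verbatim}
import numpy as np

def rf(t, x, a=0.1, b=0.2876):
    """Right-hand side of the Rabinovich-Fabrikant system (5.1)."""
    x1, x2, x3 = x
    return np.array([
        x2 * (x3 - 1 + x1**2) + a * x1,
        x1 * (3 * x3 + 1 - x1**2) + a * x2,
        -2 * x3 * (b + x1 * x2),
    ])

def lil2_step(f, t, x_prev1, x_prev2, h, n_corr=2):
    """One step of the m = 2 LIL scheme of Table 1:
    x_k = 4/3 x_{k-1} - 1/3 x_{k-2} + (h/36)(25 f_k - 2 f_{k-1} + f_{k-2}),
    with f_k made explicit by a predictor (linear extrapolation, an
    assumption standing in for Table 9) plus fixed-point corrections."""
    f1, f2 = f(t - h, x_prev1), f(t - 2 * h, x_prev2)
    x_new = 2 * x_prev1 - x_prev2          # predictor for x_k
    for _ in range(n_corr):                # corrector iterations
        x_new = (4 * x_prev1 - x_prev2) / 3 \
                + h / 36 * (25 * f(t, x_new) - 2 * f1 + f2)
    return x_new

h, x = 1e-3, np.array([0.1, -0.1, 0.1])
xs = [x, x + h * rf(0.0, x)]               # crude starting values
for k in range(2, 10000):
    xs.append(lil2_step(rf, k * h, xs[-1], xs[-2], h))
print(xs[-1])
\end{verbatim}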
\noindent We have also found that the strong dependence on the step-size for the R-F system, for certain values of $\ b$ and with the same initial conditions, can produce totally different attractors (see Figure 4). There are a few special cases which proved to be a real challenge for the numerical methods. For example, in the case $\ a=0.3$ \ and \ $b=0.1$ (shown in Figure 5), the 4th-order Runge-Kutta and Milne methods failed, while only the Gear and Adams-Moulton methods seem to give results comparable to those obtained with the LIL method; the attractors obtained with the 3rd-order Adams-Bashforth method are different from those obtained with the Gear, Adams-Moulton and LIL methods (Figure 5). \section{Concluding remarks} \indent In this paper we presented a linear implicit multi-step method, LIL, for ODEs and proved its convergence. The method can be considered an acceptable alternative to the classical algorithms for ODEs and can be successfully used in practical applications. One of the advantages is that in (\ref{2.5}) only the even-order derivatives appear, which reduces the truncation error and the computational time. The algorithm seems to be stiffly stable, since it can integrate, efficiently and with sufficient accuracy, dynamical systems like R-F which present stiff characteristics. The implementation of an adaptive step-size represents a task for future work; the basic approach would be directly applicable to variable step-sizes. \section{Acknowledgments} The author acknowledges Professor T. Colo\c{s}i, the promoter of this method, for his continuous encouragement and discussions on this work.
\section*{Acknowledgments} This research was supported in part by the National Natural Science Foundation of China and the postdoctoral foundation of China. S.H. Zhu gratefully acknowledges the support of the K.C. Wong Education Foundation, Hong Kong.
\section{Introduction} Fabrication of novel nanostructured spintronics devices and the related experimental studies of spin-dependent electron transport stimulate new theoretical approaches to the physical properties of nanosystems where quantum coherence effects can have a decisive role, in contrast to the mostly quasiclassical framework of traditional electronics. One important class of such systems concerns spin valves \cite{dieny} formed by two ferromagnetic (FM) layers separated by a thin non-magnetic (NM) spacer. The magnetization of one of the FM layers (called the pinned layer) is fixed by the bias from an underlying antiferromagnetic (AFM) layer, while the magnetization of the other FM layer (the free layer) easily rotates when a small magnetic field is applied. This significantly affects the in-plane conductance, leading to relatively high magnetoresistance (MR) values, typical for giant magnetoresistance (GMR) \cite{baibich}, but the technology still demands further improvements. One of them consists in the introduction of nano-oxide layers (NOL's) just above the free layer and inside the pinned layer (so that the pinning is not disrupted) \cite{NOL}. Such an NOL-equipped device, the so-called specular spin valve (SSV, Fig. \ref{spec1}b), can more than double the GMR ratio of simpler stacks (Fig. \ref{spec1}a). The increase of MR is believed to arise from the specular reflection of electrons at the FM/NOL interfaces. But, besides the evident effect of carrier confinement, the reduced normal-to-plane scale $d$ of the magnetic layers (a few nm in thickness, controlled within a 1 \AA~precision) might allow for a pronounced quantization of the normal component of quasi-momentum, as already indicated by the recent data on spin-resolved electronic reflection from magnetic nanolayers \cite{zdyb}, \cite{graf}. Furthermore, it is expected that the relevant modes at the Fermi level for each polarization are dramatically restructured when the mutual polarization of the magnetic layers is changed. All this can qualitatively change the kinetics of spin-dependent transport, compared to the usual diffusive scenario for a quasi-continuous spectrum \cite{camley}. However, the microscopic understanding of electron specular interface reflection is still far from complete, in particular regarding its role in size quantization and the coherence of Fermi states. Here we propose a theoretical description of these effects, through a properly modified Boltzmann kinetic equation, taking into account the formation of transverse-quantized electronic subbands and spin-dependent specular reflection at the interfaces within the simplest tight-binding model, simple enough to be carried through to numerical calculations of the MR behavior. \begin{figure} \includegraphics[width=8.5 cm]{spec1.EPS}\\ \caption{Schematics of spin valve structures: a) common and b) specular.}\label{spec1} \end{figure} \section{Model} Let us begin with a single metal layer, made of $n$ atomic planes with simple cubic lattice coordination and hopping integral $t$ between nearest neighbors at distance $a$. The respective electronic spectrum for given spin polarization $\sigma = \uparrow,\downarrow$ and planar quasimomentum ${\bf k}$ consists of $n$ subbands of the form $\varepsilon_{\alpha,{\bf k},\sigma}=\varepsilon_{{\bf k}}+\Delta_{\sigma}+\delta_{\alpha}$ (Fig. \ref{spec2}). 
Here $\varepsilon_{{\bf k}} = 2 t \left(2-\cos ak_{x}-\cos ak_{y}\right)$ is the 2D dispersion law for a single plane, and in a ferromagnetic metal it is accompanied by the Stoner energy shift $\Delta_{\sigma} = \pm \Delta$ for minority and majority spins, respectively. The spatial quantization is accounted for by the subband shifts $\delta_{\alpha}$ ($\alpha = 1,\dots,n$) which are the eigenvalues of the $n\times n$ secular equation \begin{equation} \left|\begin{array}{ccccc} \delta & t & 0 & \dots & 0\\ t & \delta & t & \dots & 0\\ \dots & \dots & \dots & \dots & \dots\\ 0 & \dots & t & \delta & t\\ 0 & \dots & 0 & t & \delta\end{array}\right|=0 \label{eq:1} \end{equation} \noindent with exact values $\delta_\alpha = 2t \cos \left[\pi\alpha/(n+1)\right]$. The wave function for the $\alpha,{\bf k},\sigma$ state, at the planar position ${\bf r}$ in the $j$-th plane, is $\psi_{\alpha, {\bf k},\sigma} ({\bf r},j ) = A_{j}^{(\alpha)}{\rm e}^{i {\bf k} \cdot {\bf r}} \chi_{\sigma}$, where the components of the $n$-dimensional eigenvector $A^{(\alpha)}$ related to the eigenvalue $\delta_{\alpha}$ are explicitly given by \begin{equation} A_j^{(\alpha)}=\sqrt{\frac 2{n+1}}\sin\frac{\pi\alpha j}{n+1}, \label{eq:2} \end{equation} \noindent and $\chi_{\sigma}$ is the spin function. \begin{figure} \includegraphics[width=7.5cm]{spec2.eps} \caption{\label{spec2}Sketch of the dispersion laws (along the diagonal $k_x = k_y = k_\perp$ of the 2D Brillouin zone) in spin-split and spatially quantized subbands of a magnetic nanolayer. The circles indicate Fermi momenta for particular (minority spin) subbands and the related Fermi velocities $v_\alpha$ correspond to the slopes of the dispersion laws.} \end{figure} Next the model is extended to include the hopping $t^{\prime}$ between neighboring FM and NM layers, hybridizing the subbands of the free FM layer (composed of $n_{f}$ atomic planes) $\varepsilon_{\alpha,\mathbf{k},\sigma}^{f}$, of the NM spacer (composed of $n_{s}$ planes) $\varepsilon_{\mathbf{\alpha,k}}^{s}$ (with $\Delta=0$), and of the pinned FM layer ($n_{p}$ planes) $\varepsilon_{\alpha,\mathbf{k},\sigma}^{p}$. We shall denote the respective eigenvectors (for the uncoupled layers) by $F^{(\alpha)},S^{(\alpha)}$, and $P^{(\alpha)}$, with the components given again by Eq. \ref{eq:2} for $n = n_f, n_s, n_p$, respectively, while the notation $M^{(\alpha)}$ is adopted for the eigenvectors of the coupled system. The specularity effect in this approach is modeled by zero coupling of the FM layers to their outer neighbors. The resulting spectrum totals up to $n_t=2(n_{f}+n_s+n_{p})$ spin-resolved modes with energies $\varepsilon_{\alpha,{\bf k}}$ and wave functions $\Psi_{\alpha,{\bf k}}({\bf r},j) = M_j^{(\alpha)}{\rm e}^{i {\bf k} \cdot {\bf r}}\chi_{\sigma(\alpha)}$, where $\alpha = 1,\dots,n_t$ and $\sigma(\alpha)$ is the implicit polarization of the $\alpha$-th mode (Fig. \ref{spec3}). We emphasize that, of the total of $n_t$ modes, only a smaller number $n_{r}$ of modes, namely those present at the Fermi level, is relevant for conductance. Thus, for the characteristic case of FM Co layers, only the minority spin subbands should take part in the transport (as suggested by the bulk Co band structure \cite{band}). Moreover, we have to take into account the sizeable differences in the corresponding Fermi velocities $v_\alpha$ (practically coincident with those in the uncoupled layers, Fig. \ref{spec2}). 
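As a quick numerical cross-check of Eqs. (\ref{eq:1})-(\ref{eq:2}) (our own sketch, not part of the original calculation), one can diagonalize the $n\times n$ tridiagonal hopping matrix directly and compare with the closed-form subband shifts and amplitudes:

\begin{verbatim}
import numpy as np

n, t = 4, 0.25  # number of atomic planes and hopping integral (eV)

# Tridiagonal matrix of inter-plane hopping; its eigenvalues are the
# subband shifts delta_alpha and its eigenvectors the amplitudes A_j.
H = t * (np.eye(n, k=1) + np.eye(n, k=-1))
evals, evecs = np.linalg.eigh(H)

alpha = np.arange(1, n + 1)
delta_exact = 2 * t * np.cos(np.pi * alpha / (n + 1))
assert np.allclose(np.sort(evals), np.sort(delta_exact))

# Amplitudes of Eq. (2) for alpha = 1 (the largest shift):
j = np.arange(1, n + 1)
A = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * 1 * j / (n + 1))
print(np.abs(evecs[:, np.argmax(evals)]))  # agrees with |A| up to phase
print(np.abs(A))
\end{verbatim}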
The most essential effect of hybridization is on the amplitudes $M_j^{(\alpha)}$, which are generally weighted combinations of all the $F,P,S$ modes, and the crucial point is that the weights of the $F,P$ components in the relevant modes depend strongly on the mutual polarization of the FM layers (see below). \begin{figure} \includegraphics[width=7.5cm]{spec3.eps} \caption{\label{spec3}Energy band structure in the trilayered system. All the modes are doubly degenerate and the relevant modes at the Fermi level are marked with circles. Inset: spatial composition of atomic planes forming the sets $J_{f,s,p}$ in \emph{f}-, \emph{s}-, and \emph{p}-layers.} \end{figure} Then the kinetics of the composite system is described by the set of $n_{r}$ distribution functions $f_{\alpha,{\bf k}} = f_{\alpha,{\bf k}}^{(0)} + g_{\alpha,{\bf k}}$ where $f_{\alpha,{\bf k}}^{(0)} = \left[{ \rm e}^{\beta \left(\varepsilon_{\alpha,{\bf k}}-\varepsilon_{\rm F}\right)} + 1 \right]^{ -1}$ is the usual equilibrium Fermi function with $\beta = 1/k_{\rm{B}}T$ and $g_{\alpha,{\bf k}}$ is the non-equilibrium part due to the external electric field $\mathbf{E}$. The current density is given by the sum \begin{equation} \mathbf{j}=\frac e {n a}{\sum_{\alpha}}^\prime\int\frac{d\mathbf{k}} {\left(2\pi\right)^{2}} \mathbf{v}_{\alpha,{\bf k}} g_{\alpha,{\bf k}}, \label{eq:3} \end{equation} \noindent where $\sum^\prime$ means summation over the $n_r$ relevant modes, $\mathbf{v}_{\alpha,{\bf k}} = \hbar^{-1}\partial \varepsilon_{\alpha,{\bf k}}/\partial{\bf k}$ is the electron velocity, and the components of the non-equilibrium distribution are determined from the system of Boltzmann equations: \begin{equation} \frac{e\mathbf{E}}{\hbar}\cdot\frac{\partial f_{\alpha,{\bf k}}^{(0)}}{\partial{\bf k}} + {\sum_{\beta}}^{\prime\prime} \int \frac{a^2 d{\bf k}^\prime}{(2\pi)^2} \omega_{\alpha,{\bf k}}^{\beta,{\bf k}^\prime} \left(g_{\beta,{\bf k}^\prime} - g_{\alpha,{\bf k}}\right)= 0. \label{eq:4} \end{equation} \begin{figure} \includegraphics[width=8cm]{spec4.eps} \caption{\label{spec4}Configurations of Fermi lines for spatially quantized subbands of minority electrons in the Brillouin zone. The characteristic points along the high symmetry directions $\Gamma M$ and $MX$ were used to approximate the averages of $v_\alpha^{-1}$ and $v_\alpha^2$.} \end{figure} \noindent Here $\sum^{\prime\prime}$ means summation over the relevant modes with conserved spin, $\sigma(\alpha)=\sigma(\beta)$, and $\omega_{\alpha,{\bf k}}^{\beta,{\bf k}^\prime}$ is the transition rate due to scattering from the ${\bf k}$ state of the $\alpha$-th subband to the ${\bf k}^\prime$ state of the $\beta$-th subband. We consider transitions only due to elastic scattering by random point-like impurities with potential $V$ and concentration $c \ll 1$ (per unit cell). Then the Fermi Golden Rule transition rates are $\omega_{\alpha,{\bf k}}^{\beta,{\bf k}^\prime}= \Omega_{\alpha,\beta} \delta\left(\varepsilon_{\alpha,{\bf k}} -\varepsilon_{\beta, {\bf k}^\prime} \right)$ with the scattering factors (averaged over impurity positions) \begin{equation} \Omega_{\alpha,\beta}=\frac{2\pi cV^{2}}{\hbar n}\sum_{j}\left|M_j^{(\alpha)} M_{j}^{\left(\beta\right)}\right|^{2}. \label{eq:5} \end{equation} \noindent In this simple model, the first term in the collision integral of Eq. 
\ref{eq:4} turns out to be proportional to $\int d{\bf k} g_{\beta, {\bf k}}\delta\left(\varepsilon_{\rm F} - \varepsilon_{ \beta,{\bf k}} \right)$, that is, to the average of the non-equilibrium distribution over the Fermi surface, and so it should vanish. Then the solution takes the common form $g_{\alpha,{\bf k}} = \hbar^{-1}\tau_\alpha e\mathbf{E} \cdot \partial f_{\alpha,{\bf k}}^{(0)}/\partial{\bf k}$ where the relaxation time for the $\alpha$-th mode is defined by \begin{equation} \tau_\alpha^{-1} = {\sum_{\beta}}^{\prime\prime} \rho_{\beta}\Omega_{\alpha,\beta}, \label{eq:6} \end{equation} \noindent including the Fermi density of states $\rho_{\beta} = (a/2 \pi)^2 \int d{\bf k} \delta\left(\varepsilon_{\beta,{\bf k}} - \varepsilon_{ \rm{F}} \right)$ for each $\beta$-th mode. Then the total conductivity is found from Eq. \ref{eq:3} as a sum of partial contributions: \begin{equation} \sigma_{tot} = {\sum_\alpha}^\prime \sigma_\alpha, \quad \sigma_\alpha = \frac{e^2 \tau_\alpha \rho_{\alpha} \left\langle v_{\alpha}^2\right\rangle}{n a^3}, \end{equation} \noindent where $\left\langle v_{\alpha}^2\right\rangle \approx \rho_{\alpha}^{-1}\left(a/2\pi\right)^2\int d{\bf k} v_{\alpha,{\bf k}}^2 \delta\left(\varepsilon_{\alpha,{\bf k}} - \varepsilon_{\rm{F}} \right)$ is the average of the respective squared Fermi velocity. In fact, this is a particular case of the general Landauer formula \cite{land}, written for the present system of $n_r$ coherent quantum channels. The system, Eqs. \ref{eq:1}-\ref{eq:6}, can be routinely treated by numerical methods at any relative orientation of the magnetizations in the $f$- and $p$-layers, from parallel ($\uparrow\ua$) to antiparallel ($\uparrow\downarrow$), resulting in the principal quantity of interest, the magnetoresistance \begin{equation} \frac{\Delta R}R = \frac{\sigma_{tot}^{\uparrow\ua}}{\sigma_{tot}^{{\uparrow\downarrow}}}-1 = \frac{{\sum_\alpha}^\prime\rho_\alpha\left\langle v_\alpha^2\right\rangle\tau_\alpha^{\uparrow\ua}} {{\sum_\alpha}^\prime\rho_\alpha\left\langle v_\alpha^2\right\rangle\tau_\alpha^{\uparrow\downarrow}}-1. \label{eq:7} \end{equation} But some qualitative conclusions about the specularity effect on MR in a nanolayered device can already be drawn from simple inspection of the discrete structure of the amplitudes $M_j^{(\alpha)}$, according to the following remarks. First of all, we suppose that in the absence of hybridization the majority and minority subbands are well separated from each other and from the spacer subbands (as in bulk Co and Cu). Then we notice that the $j$-configurations of the above amplitudes are essentially different in the $\uparrow\ua$ and $\uparrow\downarrow$ cases and hence consider them separately. Finally, an important factor for the very existence of GMR (in this quantum coherent conductance regime) is the presence of certain ``resonances'' between relevant modes at the Fermi level. Namely, a resonance appears between two (unhybridized) modes $\varepsilon_{\alpha,{\bf k},\sigma}^f$ and $\varepsilon_{\beta,{\bf k},\sigma}^p$ if their energy separation near the Fermi level is less than the effective coupling $\sim {t^\prime}^2/\varepsilon_s $ (mediated by the spacer modes at a typical energy distance $\varepsilon_s$, see Fig. \ref{spec3}). Moreover, for the sake of clarity, we shall restrict the following consideration to the simplest situation of identical \emph{f}- and \emph{p}-layers, where all $n_f = n_p$ modes are relevant and can form resonant $fp$-pairs. 
Thus, in the $\uparrow\ua$ configuration, there appears a strong hybridization in each $F^\alpha$, $P^\alpha$ pair, forming two collective modes as their bonding and anti-bonding combinations (neglecting the small contributions $\sim {t^\prime}^2/\left(\varepsilon_s\Delta\right) \ll 1$ of the rest of the modes): \begin{equation} M_j^{\alpha,\pm} \approx \frac 1 {\sqrt 2} \left\{ \begin{array}{c} F_j^\alpha,\quad{\rm for}\quad j \in J_f, \\ 0,\quad{\rm for}\quad j\in J_s, \\ \pm P_j^\alpha,\quad{\rm for}\quad j \in J_p, \end{array}\right. \label{eq:8} \end{equation} \noindent where $J_{f,s,p}$ are the sets of atomic planes entering the \emph{f}-, \emph{s}-, and \emph{p}-layers (see inset in Fig. \ref{spec3}). The respective relaxation times are given by \begin{eqnarray} &&\left(\tau_{\alpha,\pm}^{\uparrow\ua}\right)^{-1} \approx \frac{\pi c V^2}{2\hbar n} {\sum_\beta}^{\prime\prime}\rho_\beta\nonumber\\ &&\quad \times \left(\sum_{j \in n_f}\left|F_j^\alpha F_j^\beta\right|^2 + \sum_{j \in n_p}\left|P_j^\alpha P_j^\beta\right|^2\right). \label{eq:9} \end{eqnarray} \noindent Then we can use the exact sum rule for the amplitudes, Eq. \ref{eq:2}: \begin{equation} \sum_{j=1}^n \left( A_j^\alpha A_j^{\beta} \right)^2 =\frac 1 {n+1} \left(1 + \frac{\delta_{\alpha,\beta} + \delta_{\alpha,n+1-\beta} } 2\right), \label{eq:10} \end{equation} \noindent to present the relaxation times, Eq. \ref{eq:9}, as \begin{equation} \tau_{\alpha,\pm}^{\uparrow\ua} \approx \frac{\hbar n}{2 \pi c V^2 {\sum_\beta}^{\prime\prime}\rho_\beta}. \label{eq:11} \end{equation} \noindent Contrariwise, in the $\uparrow\downarrow$ configuration, all the relevant modes remain almost unhybridized, taking nearly ``local'' forms: \begin{eqnarray} M_j^{\alpha,f} & \approx & \left\{ \begin{array}{c} F_j^\alpha,\quad\quad{\rm for} \quad j\in J_f, \\ 0,\quad{\rm for}\quad j \in J_s \cup J_p, \end{array}\right.\nonumber\\ M_j^{\alpha,p} & \approx & \left\{ \begin{array}{c} 0,\quad{\rm for} \quad j\in J_f\cup J_s, \\ P_j^\alpha,\quad\quad{\rm for}\quad j \in J_p, \end{array}\right. \label{eq:12} \end{eqnarray} \noindent and in this approximation we obtain for the relaxation times $\tau_{\alpha,i}^{\uparrow\downarrow}$ half the value of Eq. \ref{eq:11}. Then the magnetoresistance, Eq. \ref{eq:7}, is readily estimated as $\Delta R/R \approx 100\%$. We notice that this result is practically independent of the parameters of interlayer coupling and impurity scattering; in particular, it does not even require the lifetimes of majority and minority carriers to be different (as is necessary in quasiclassical regimes). The main MR effect in the considered limit is due to the variation of coherent quantum states, induced by the relative rotation of the magnetization of the FM layers. 
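The sum rule (\ref{eq:10}) behind this estimate is easy to verify numerically; a short sketch (ours, for illustration):

\begin{verbatim}
import numpy as np

def A(n):
    """Amplitudes of Eq. (2): A[alpha-1, j-1] for alpha, j = 1..n."""
    a, j = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1),
                       indexing="ij")
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * a * j / (n + 1))

n = 6
Amp = A(n)
for alpha in range(1, n + 1):
    for beta in range(1, n + 1):
        lhs = np.sum((Amp[alpha - 1] * Amp[beta - 1]) ** 2)
        rhs = (1 + ((alpha == beta) + (alpha == n + 1 - beta)) / 2) / (n + 1)
        assert np.isclose(lhs, rhs)
print("sum rule (10) verified for n =", n)
\end{verbatim}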
\section{Numerical calculations} \begin{table} \begin{center} \begin{tabular}{lcccc} \hline \hline & & $10^{-7}\times\Omega_{\alpha,\beta}$ (W) \\\hline & $\beta = 1$ & $\beta = 2$ & $\beta = 3$ & $\beta = 4$ \\ $\alpha = 1$ $\uparrow\uparrow$ & $0.5175$ & $0.3431$ & $0.3417$ & $0.5151$ \\ \qquad $\;$\, $\uparrow\downarrow$ & $1.0347$ & $0.6857$ & $0.6824$ & $1.0287$ \\ \\ $\alpha = 2$ $\uparrow\uparrow$ & $0.3431$ & $0.5118$ & $0.5097$ & $0.3415$ \\ \qquad $\;$\, $\uparrow\downarrow$ & $0.6857$ & $1.0224$ & $1.0175$ & $0.6818$ \\ \\ $\alpha = 3$ $\uparrow\uparrow$ & $0.3417$ & $0.5097$ & $0.5077$ & $0.3402$ \\ \qquad $\;$\, $\uparrow\downarrow$ & $0.6824$ & $1.0175$ & $1.0127$ & $0.6786$ \\ \\ $\alpha = 4$ $\uparrow\uparrow$ & $0.5151$ & $0.3415$ & $0.3402$ & $0.5127$ \\ \qquad $\;$\, $\uparrow\downarrow$ & $1.0287$ & $0.6818$ & $0.6786$ & $1.0228$ \\ \hline \hline \end{tabular} \end{center} \caption{Scattering factors ($\Omega_{\alpha,\beta}$) for the Fermi modes ($\alpha, \beta = 1$, $2$, $3$ and $4$) in the parallel ($\uparrow\uparrow$) and antiparallel ($\uparrow\downarrow$) configurations.} \label{table1} \end{table} \begin{table} \begin{center} \begin{tabular}{lcccc} \hline \hline $\alpha$ & 1 & 2 & 3 & 4 \\\hline $10^{19}\rho_{\alpha}(\texttt{J}^{-1})$ & $0.209$ & $0.2439$ & $0.3238$ & $0.5517$ \\ $10^{10}\langle v_{\alpha}^{2}\rangle (\texttt{m}^{2}/\texttt{s}^{2})$ & $0.7431$ & $2.2595$ & $2.9334$ & $2.6008$ \\ $10^{-12}\tau_{\alpha}^{\uparrow\uparrow}(\texttt{s})$ & $1.7018$ & $1.8157$ & $1.823$ & $1.7094$ \\ $10^{-12}\tau_{\alpha}^{\uparrow\downarrow}(\texttt{s})$ & $0.8515$ & $0.9091$ & $0.9133$ & $0.8563$ \\ \hline \hline \end{tabular} \end{center} \caption{The Fermi density of states $\rho_{\alpha}$, averages of squared Fermi velocities $\langle v_{\alpha}^{2}\rangle$ and relaxation times $\tau_\alpha$ for the Fermi modes ($\alpha = 1$, $2$, $3$ and $4$) in the parallel ($\uparrow\uparrow$) and antiparallel ($\uparrow\downarrow$) configurations.}\label{table2} \end{table} To confirm the above qualitative considerations, a detailed numerical calculation was done for the particular choice of parameters: $t = t^\prime = 0.25$ eV, $\Delta = 0.5$ eV, $\varepsilon_s = 2$ eV (a single-band model for the real \emph{d}-bands of Co and Cu), $n_f = n_p = 4$, $n_s = 3$ (a simple discrete structure of layers), $V = 0.5$ eV, and $c = 0.01$ (typical impurity parameters). The Fermi velocities (and their inverse values) for two characteristic directions in the Brillouin zone were used to approximate the partial densities of states \[\rho_\alpha \approx \frac{a^2L_\alpha}{8\pi^2\hbar}\left( \frac 1{v_{\alpha,\Gamma M}} + \frac 1 {v_{ \alpha, MX}}\right),\] \noindent ($L_\alpha$ being the length of the respective Fermi line, Fig. \ref{spec4}), and then $\langle v_\alpha^2\rangle \approx v_{\alpha,\Gamma M}v_{ \alpha, MX}$. The obtained numerical results for $\Omega_{\alpha,\beta}$, $\rho_\alpha$, $\tau_\alpha$, and $\langle v_\alpha^2 \rangle$ are presented in Tables \ref{table1} and \ref{table2}, for both the $\uparrow\ua$ and $\uparrow\downarrow$ configurations. These numerical values lead to a magnetoresistance of $\approx 99.65$\%, which is quite close to the maximum possible MR of 100\% in the coherent regime. 
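The quoted MR value can be reproduced directly from Table \ref{table2} via Eq. (\ref{eq:7}); a sketch (ours, for illustration):

\begin{verbatim}
import numpy as np

# Values from Table 2 (common powers of ten cancel in the MR ratio).
rho    = np.array([0.209, 0.2439, 0.3238, 0.5517])    # 1e19 / J
v2     = np.array([0.7431, 2.2595, 2.9334, 2.6008])   # 1e10 m^2/s^2
tau_pp = np.array([1.7018, 1.8157, 1.8230, 1.7094])   # 1e-12 s, parallel
tau_ap = np.array([0.8515, 0.9091, 0.9133, 0.8563])   # 1e-12 s, antiparallel

# Eq. (7): Delta R / R = sum(rho v^2 tau_pp) / sum(rho v^2 tau_ap) - 1
mr = np.sum(rho * v2 * tau_pp) / np.sum(rho * v2 * tau_ap) - 1
print(f"MR = {100 * mr:.2f}%")   # ~99.7%, matching the quoted ~99.65%
\end{verbatim}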
By comparison, for purely incoherent currents there would be no MR at all in such a two-layer system, so that a finite effect only appears from their partial mixing due to scattering at the interfaces \cite{camley} and is estimated phenomenologically as $\sim\exp\left(-d/\ell\right)$ of the above maximum value. Actually, the experimental MR values in specular spin valves \cite{gehanno,sousa} are clearly lower than the above model estimates. This can be due to a number of important factors not included in the present simple model (which should therefore be considered a certain idealized reference case). First of all, the postulated ideal specularity condition (supposing the wave function to be fully confined within the $n$-plane system) cannot be exact in reality, and a considerable part of the electronic density can ``escape'' through the NOL barriers to adjacent non-magnetic (or AFM) layers. This part would act as a parallel conduction channel, mostly unchanged under reorientation of the FM electrodes and hence restricting the magnetoresistive effect. Also, the adopted model of rigid Stoner shifts of spin subbands in the FM electrodes of course overestimates the sharpness of the spin-dependent energy barrier between these electrodes, where in fact the band structures are not uniform on scales of a few atomic layers. Other restrictive factors are the temperature effects (by phonons and magnons), the roughness of the FM/NM interfaces, and the presence of defects such as grain boundaries, dislocations and distortions in the crystalline structure, which all reduce the coherence of the relevant quantum states and so the validity of the Landauer formula. Finally, the single-band model may be oversimplified compared to the real hybridized \emph{s-d} band structures of transition metals used in various numerical studies of spin valves \cite{tsymbal1,Chen,Blaas}, but it would be rather problematic to keep the analytic description at such a detailed level. Nevertheless, a further development within the present model can be done by varying the number $n_r$ of relevant modes and admitting the presence of both spin polarizations among these modes. \section{Conclusions} A simple single-band tight-binding model was developed to estimate theoretically the maximum possible enhancement of GMR in a system of quantum coherent FM nanolayers, using a specific set of Boltzmann equations for spatially quantized and spin-resolved subbands and a Landauer-type formula for the spin-dependent conductance. It is shown that a limiting GMR value close to 100\% can be reached for a fully coherent (and fully specular) SSV nanostructure, and the factors reducing this value in real SSV systems are discussed. \begin{acknowledgments} This work was supported in part by FEDER-POCTI/0155, POCTI/CTM/45252/02 and POCTI/CTM/59318/2004 from Portuguese FCT and IST-2001-37334 NEXT MRAM projects. JMT and JV are thankful for FCT grants (SFRN/BD/24012/2005 and SFRN/BPD/2163/2005). \end{acknowledgments}
\section{Introduction} The Large Area Telescope (LAT) onboard the \textit{Fermi Gamma-ray Space Telescope} (\textit{Fermi}) is a pair-conversion telescope with imaging and spectroscopic capabilities, a large field of view of $\sim 2$ sr, and sensitivity to $\gamma$-rays in the energy range from $\sim 20$ MeV to $\sim 1$ TeV\footnote{\url{https://www.slac.stanford.edu/exp/glast/groups/canda/lat_Performance.htm}} \citep{atwood2009large}. The \textit{Fermi}-LAT performs an all-sky survey every $\sim 3$ hours and, since its launch in 2008, has collected more than 13 years of data. Once the events detected by the \textit{Fermi}-LAT are classified, the collected $\gamma$-ray data are released to the public on the Fermi Science Support Center (FSSC) data server\footnote{\url{https://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi}}. The users of these data can rely on \texttt{Fermitools}\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/}} and \texttt{Fermipy}\footnote{\url{https://fermipy.readthedocs.io/en/latest/}} \citep{wood2017fermipy} to perform their analyses, both tools relying on several online tutorials\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/}, \url{https://fermipy.readthedocs.io/en/latest/quickstart.html}}. \texttt{Fermipy}, in particular, is a high-level \texttt{Python} package that facilitates the analysis of \textit{Fermi}-LAT data in the framework of the \texttt{Fermitools}. In this work we present \texttt{easyFermi}, an open-source graphical interface suited to perform \textit{Fermi}-LAT data analyses of point-like and extended $\gamma$-ray sources. \texttt{easyFermi} facilitates the experience and can save a substantial amount of time for those doing $\gamma$-ray astronomy, especially for scientists just starting in the field of high-energy astrophysics. Here we provide an introduction to what can be done with \texttt{easyFermi} and how to use its graphical interface. Further tutorials can be found online on GitHub\footnote{\url{https://github.com/ranieremenezes/easyFermi}} and YouTube\footnote{\url{https://www.youtube.com/channel/UCeLCfEoWasUKky6CPNN_opQ}}. This paper is organized as follows. We describe the installation and setup processes of \texttt{easyFermi} in \S \ref{sec:installation} and detail what is happening behind the scenes, especially the dependency of \texttt{easyFermi} on \texttt{Fermipy}, in \S \ref{sec:behind}. In \S \ref{sec:results} we show some of the main data products of \texttt{easyFermi} and, finally, we conclude in \S \ref{sec:conclusions}. \section{Installation and setup} \label{sec:installation} The current (and upcoming) release of \texttt{easyFermi} (i.e. V1.0.7) is available on the Python Package Index (PyPI) server\footnote{\url{https://pypi.org/project/easyFermi/}} and GitHub, together with detailed instructions for installation and usage. The installation of \texttt{easyFermi} (V1.0.7) requires an existing installation of the \texttt{Fermitools} V2.0.8 and \texttt{Fermipy} V1.0.1 (Python 3 version) and works on Linux and Mac operating systems. The graphical interface will be maintained such that its installation will be compatible with the upcoming releases of the \texttt{Fermitools} and \texttt{Fermipy}. 
In summary, once the \texttt{Fermitools} and \texttt{Fermipy} are installed (see the online tutorial on the \texttt{easyFermi} GitHub webpage), one can use the terminal to open the \texttt{fermi} environment with \texttt{conda} by typing: \texttt{\$ conda activate fermi} And then simply type: \texttt{\$ pip install easyFermi} To test if the installation is properly working, the user can type: \texttt{\$ python} \texttt{>> import easyFermi} \begin{figure*} \centering \includegraphics[width=\linewidth]{easyFermi.png} \caption{The main window of \texttt{easyFermi}.} \label{fig:easyFermi} \end{figure*} {\noindent The graphical interface of \texttt{easyFermi} should appear at this point (see Figure \ref{fig:easyFermi}) and the user can enter the desired configurations.} To exemplify the usage of the software, we will use the blazar PG 1553+113 as a test target. The first step is to download the spacecraft and photon data files from the FSSC server\footnote{\url{https://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi}}, as well as the Galactic (\texttt{gll\_iem\_v07}) and isotropic (\texttt{iso\_P8R3\_SOURCE\_V3\_v1}) diffuse emission models\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}}. The parameters that we adopt to download the \textit{Fermi}-LAT data for PG 1553+113 are: \begin{itemize} \item Coordinates: 238.92935, 11.19010 ($^{\circ}$) \item Search radius: 10 ($^{\circ}$) \item Observation dates: 2008-08-04 15:43:36, 2009-08-04 00:00:00 (Gregorian) \item Energy range: 1000, 500000 (MeV) \end{itemize} For a standard point-source analysis of PG 1553+113, the user only needs to feed the empty boxes of \texttt{easyFermi} with the coordinates (i.e. 238.92935, 11.19010), the desired energy range (1000, 500000), the directory where the spacecraft, photon and background files were downloaded, and the adopted time interval (from 2008-08-04 15:43:36 to 2009-08-04 00:00:00). For users with some experience in \textit{Fermi}-LAT data analysis, there is a set of advanced configurations that can also be controlled, but all of them are optional. Since PG 1553+113 is listed in 4FGL, the configuration step is finished. For targets not listed in 4FGL (or 3FGL), the user has to change the ``Target cataloged'' entry to ``No'' and insert a nickname for the target in the box ``Target name/tag''. The user can then simply choose the desired outputs by checking the Light curve, Spectral Energy Distribution (SED), Extension, Re-localize and test statistic (TS) map boxes. The results will then be saved as ``.txt'', ``.npy'', ``.fits'', and ``.pdf'' (or ``.png'') files in the selected output directory. If no output directory is chosen, \texttt{easyFermi} will save all files in the current working directory. The analysis will start after pressing ``Go!''; however, the user can still make modifications on the fly, like activating or deactivating one of the boxes in the \texttt{Science} panel. For more complex analyses, we refer the user to the online tutorials. \section{Behind the scenes} \label{sec:behind} The graphical interface and data analysis provided by \texttt{easyFermi} strongly depend on the \texttt{Python} packages \texttt{PyQt5} and \texttt{Fermipy}, while requiring minimum maintenance. Once the user sets the desired configurations and starts the process, \texttt{easyFermi} automatically organizes all this information with \texttt{PyQt5} and communicates it to \texttt{Fermipy}, thereafter starting a binned likelihood analysis. 
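For orientation, the sketch below shows roughly what such a \texttt{Fermipy}-driven binned analysis looks like when written by hand. The configuration values mirror the PG 1553+113 example above, but the file names are placeholders and the exact dictionary that \texttt{easyFermi} assembles internally may differ, so this should be read as an illustrative sketch rather than the tool's actual code:

\begin{verbatim}
from fermipy.gtanalysis import GTAnalysis

# Illustrative configuration; 'events.txt' and 'spacecraft.fits' are
# placeholders for the files downloaded from the FSSC server.
config = {
    'data': {'evfile': 'events.txt', 'scfile': 'spacecraft.fits'},
    'selection': {'ra': 238.92935, 'dec': 11.19010,
                  'emin': 1000, 'emax': 500000,
                  'zmax': 105, 'evclass': 128, 'evtype': 3,
                  'tmin': 239557417, 'tmax': 271036802},  # approx. MET (s)
    'binning': {'roiwidth': 10.0, 'binsperdec': 8},
    'gtlike': {'edisp': True, 'irfs': 'P8R3_SOURCE_V3'},
    'model': {'galdiff': 'gll_iem_v07.fits',
              'isodiff': 'iso_P8R3_SOURCE_V3_v1.txt',
              'catalogs': ['4FGL']},
}

gta = GTAnalysis(config)
gta.setup()                    # event selection, livetime cube, source maps
gta.optimize()                 # bring parameters near their maxima
gta.find_sources()             # look for uncataloged sources
fit_result = gta.fit()         # MINUIT likelihood fit of the ROI model
gta.sed('4FGL J1555.7+1111')   # PG 1553+113 in the 4FGL catalog
gta.write_roi('Results')       # save the .fits/.npy results
\end{verbatim}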
This binned analysis generates several data subproducts, such as a list of selected events, a counts cube, an exposure map, a livetime cube, and a source map\footnote{More details on these data subproducts here: \url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/binned_likelihood_tutorial.html}}. Before fitting the model to the data in the region of interest (ROI), we first optimize the ROI model with the \texttt{Fermipy} function \texttt{optimize()} to ensure that all parameters are close to their global likelihood maxima, and then look for uncataloged sources with the function \texttt{find\_sources()} (this step can be disabled by the user). The model is then fitted to the data using the minimizer \texttt{MINUIT}, and the main results for the target are saved in the file ``Target\_results.txt''. For the full set of results, including all sources in the ROI, the user can access the files ``Results.fits'' or ``Results.npy''. \subsection{Current spectral models} In \texttt{easyFermi} we use the spectral models from the \textit{Fermi}-LAT third and fourth source catalogs \citep[3FGL and 4FGL, respectively;][]{acero2015_3FGL,abdollahi2020_4FGL}, namely i) the power-law model, defined by $$ \frac{dN}{dE} = N_0 \left( \frac{E}{E_0} \right)^{\gamma}, $$ where $N_0$ is the normalization (in units of cm$^{-2}$ s$^{-1}$ MeV$^{-1}$), $E$ is the photon energy, $E_0$ is the pivot energy, and $\gamma$ is the photon index; ii) the log-parabolic model \citep{massaro2004log}, defined by $$ \frac{dN}{dE} = N_0 \left( \frac{E}{E_0} \right)^{-\alpha -\beta\log(E/E_0)},$$ where $\alpha$ and $\beta$ are indexes describing the hardness and curvature of the spectrum; and iii) the power law with a super-exponential cutoff $$ \frac{dN}{dE} = N_0 \left( \frac{E}{E_0} \right)^{\gamma}\exp (-aE^b),$$ where $a$ and $b$ are the exponential factor and the index describing the shape of the spectral cutoff. If the target is listed in one of the \textit{Fermi}-LAT catalogs, \texttt{easyFermi} automatically uses the cataloged spectral model in the analysis; otherwise a new source with a power-law spectrum is added to the ROI model. The user can always change the spectral model of the target under the checkbox ``Change model''. Depending on the feedback from the users of \texttt{easyFermi}, we may add more spectral models in the next software releases. \subsection{Automatic configuration} For those using the \texttt{Standard} mode of \texttt{easyFermi}, the following set of configurations applies. \begin{itemize} \item The ROI is defined as an $L \times L$ square with size depending on the adopted starting energy ($E_{min}$), i.e. $L = 15^{\circ}, 12^{\circ}$ or $10^{\circ}$ for $E_{min} < 500$ MeV, 500 MeV $\leq E_{min} < 1000$ MeV and $E_{min} \geq 1000$ MeV, respectively. \item For the same energy ranges defined in the previous item, we set the maximum zenith angle to $z_{max} = 90^{\circ}, 100^{\circ}$ or $105^{\circ}$. \item The classification and point spread function type for each event are filtered with \texttt{evclass} $= 128$ and \texttt{evtype} $= 3$, the adopted instrument response function is \texttt{P8R3\_SOURCE\_V3}, and the dataset is divided into 8 bins per energy decade. \item The radius from the ROI center in which the parameters of the sources are allowed to vary during the fit is defined as $R_{free} = L/2$, where $L$ is the size of the ROI. 
\end{itemize} The experienced user can change all of these configurations by selecting the ``custom'' button and passing a customized configuration file to \texttt{easyFermi}, as well as by modifying the entries in the ``Advanced configurations'' box (see Figure \ref{fig:easyFermi}). \subsection{Recovering the state of the analysis} A very useful feature of \texttt{easyFermi} is that it allows the user to recover the latest state of the analysis, so that one can quit \texttt{easyFermi} and resume the exact same analysis later. Once the analysis is done, the state of the graphical interface is automatically saved in the file ``GUI\_status.npy'' in the output directory, and can easily be recovered by clicking on ``Load GUI state'' in the ``Menu'' button of the toolbar. \section{Main data products} \label{sec:results} The main results from \texttt{easyFermi} are the measurements of the flux and spectral shape for all sources in the ROI, which are saved in the files ``Target\_results.txt'', ``Results.npy'', and ``Results.fits''. Furthermore, the user can relocalize the target (the ROI is then updated with the new location of the target), compute a light curve and a $\gamma$-ray spectrum, look for extended emission, and compute a TS map. In Figures \ref{fig:SED}, \ref{fig:LC}, and \ref{fig:eLC} we show the $\gamma$-ray spectrum and light curves for PG 1553+113 built with the dataset described in \S \ref{sec:installation}. Upper limits are displayed whenever an energy or time bin has TS $\leq 9$. For the extension, since PG 1553+113 is a point-like source, the peak in the delta log-likelihood shown in Figure \ref{fig:extension} is centered at zero. All of these results are saved in files named ``SOURCENAME\_task.fits'' and ``SOURCENAME\_task.npy'', where ``task'' here can be ``loc'', ``lightcurve'', ``SED'', or ``extension''. These files can easily be accessed with \texttt{numpy} \citep{harris2020numpy}, \texttt{astropy} \citep{2013astropy1,2018astropy2} and \texttt{TopCat} \citep{2005topcat}. Furthermore, \texttt{easyFermi} automatically plots the results as \texttt{.pdf} or \texttt{.png} figures labeled as, e.g. ``Quickplot\_task.pdf''. \begin{figure} \centering \includegraphics[width=\linewidth]{Quickplot_SED.pdf} \caption{The $\gamma$-ray spectrum for PG 1553+113 fitted with a log-parabolic model.} \label{fig:SED} \end{figure} \begin{figure*} \centering \includegraphics[width=\linewidth]{Quickplot_LC.pdf} \caption{Photon flux light curve for PG 1553+113.} \label{fig:LC} \end{figure*} \begin{figure*} \centering \includegraphics[width=\linewidth]{Quickplot_eLC.pdf} \caption{Energy flux light curve for PG 1553+113.} \label{fig:eLC} \end{figure*} \begin{figure} \centering \includegraphics[width=\linewidth]{Quickplot_extension.pdf} \caption{The extension of PG 1553+113 is compatible with zero, meaning that this source is point-like.} \label{fig:extension} \end{figure} \subsection{Goodness of fit} The detection, flux determination and spectral modeling of \textit{Fermi}-LAT sources with \texttt{easyFermi} are accomplished by means of a binned likelihood optimization technique \citep{abdo2009fermi_likelihood}, using \texttt{MINUIT} as the minimizer. The multi-dimensional minimization of parameters, however, is not an easy task and is prone to failure.
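When working at the \texttt{Fermipy} level, the quality code reported by \texttt{MINUIT} (the same code that \texttt{easyFermi} prints in its \textit{Log} box, see below) can also be checked programmatically. The following is a minimal sketch, assuming a configured \texttt{GTAnalysis} instance \texttt{gta} and that the dictionary returned by \texttt{fit()} contains a \texttt{fit\_quality} entry, as in recent \texttt{Fermipy} releases:

\begin{verbatim}
# Refit with fewer free parameters if MINUIT reports a
# poor-quality covariance matrix (quality codes listed below).
results = gta.fit(optimizer='MINUIT')
if results['fit_quality'] < 3:
    gta.free_sources(free=False)          # freeze all sources ...
    gta.free_source('4FGL J1555.7+1111')  # ... except the target
    results = gta.fit()
\end{verbatim}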
To verify that the fit has properly converged, the user can follow the information in the \textit{Log} box in the graphical interface (see Figure \ref{fig:easyFermi}), where the following information is displayed depending on the success of the fit: \begin{itemize} \item Fit quality: 3. Excellent fit. Full accurate covariance matrix. \item Fit quality: 2. Reasonable fit. Full matrix, but forced positive-definite (i.e. not accurate). \item Fit quality: 1. Poor fit. Diagonal approximation only, not accurate. \item Fit quality: 0. Bad fit. Error matrix not calculated. \end{itemize} For any fit quality value other than 3, we recommend that the user rerun the analysis after changing the configurations in the panel \textit{Free source radius}, in the ``Advanced configurations'' box. In this panel, the user can select the radius (from the ROI center) defining the circular region in which the parameters of the $\gamma$-ray sources are left free to vary, can fix the spectral shape of the target or of all sources in the ROI, and can freeze the Galactic and isotropic diffuse emission models. For a step-by-step guide on the quality/goodness of fit, the user is invited to follow the tutorial on YouTube\footnote{\url{https://www.youtube.com/channel/UCeLCfEoWasUKky6CPNN_opQ}}. \section{Validation, performance, and maintenance} The results obtained with \texttt{easyFermi} are exactly the same as those obtained with \texttt{Fermipy}, since \texttt{easyFermi} is actually running \texttt{Fermipy} in the background. We tested this by performing several analyses (e.g., the light curves, SEDs and TS maps for ROIs centered on PG 1553+113, 3C 273, 3C 279, Omega Centauri, and the extended lobes of Centaurus A) with \texttt{easyFermi} and comparing the results with pure \texttt{Fermipy} analyses. In terms of performance, an analysis done with \texttt{easyFermi} requires essentially the same CPU time and RAM as an analysis done directly with \texttt{Fermipy} in the terminal; \texttt{easyFermi} is, however, slightly faster and uses less RAM than an analysis performed with \texttt{Fermipy} in web-based interactive computing platforms. To test whether \texttt{easyFermi} is working properly, we recommend that the user follow the tutorial on GitHub/YouTube\footnote{\url{https://github.com/ranieremenezes/easyFermi}}. Furthermore, the user can check whether both \texttt{Fermipy} and \texttt{easyFermi} are working smoothly by simply typing \texttt{\$ python} \texttt{>> from fermipy.gtanalysis import GTAnalysis} \texttt{>> import easyFermi} If both modules are successfully imported, the installation is fine. \texttt{easyFermi} will be maintained such that users can report issues and request fixes via GitHub, and in a way that guarantees its compatibility with upcoming releases of the \texttt{Fermitools} and \texttt{Fermipy}. \section{Conclusions} \label{sec:conclusions} The \texttt{easyFermi} graphical interface is a user-friendly tool that allows astronomers from all fields to use \textit{Fermi}-LAT $\gamma$-ray data. This tool is especially suited for scientists just starting in the field of high-energy astrophysics, and can be used for several purposes, such as building a light curve, computing a $\gamma$-ray spectrum, or looking for extended $\gamma$-ray emission. Furthermore, \texttt{easyFermi} is meant to be simple. For more complex types of analyses, we refer the user to \texttt{Fermipy} and \texttt{Fermitools}.
The tutorials and source code for \texttt{easyFermi} can be found online\footnote{\url{https://github.com/ranieremenezes/easyFermi}} and allow the user to learn how to use \textit{Fermi}-LAT data in just a few minutes. Bug reports and proposals for new functionality should be made through the GitHub issue tracker. Although \texttt{easyFermi} requires minimal maintenance, we aim to keep it updated and in synergy with \texttt{Fermipy}; hence, new versions of the tool can be released in the future. We also plan to develop a similar graphical interface for the Cherenkov Telescope Array \citep{bernlohr2013_CTA} and possibly for other Imaging Atmospheric Cherenkov Telescopes in the near future. \section*{Acknowledgements} I would like to thank the anonymous referees for their suggestions and comments, as well as Alessandra Azzollini, Clodomir Vianna, Douglas Carlos, Fabio Cafardo, Kaori Nakashima, Lucas Costa Campos, Lucas Siconato, Ra\'i Menezes, Rodrigo Lang, and Romana Grossova for installing, testing, and helping me with the development of \texttt{easyFermi}. Part of this project was also supported by the European Research Council through the ERC Starting Grant MessMapp, under contract no. 949555. \section*{Data Availability} All data used to exemplify the usage of \texttt{easyFermi} in this work can be found online in the \textit{Fermi}-LAT data server\footnote{\url{https://fermi.gsfc.nasa.gov/cgi-bin/ssc/LAT/LATDataQuery.cgi}}.
\section{Introduction} Open-Domain Question Answering (ODQA) is a longstanding task in natural language processing that aims to answer questions about a wide range of world knowledge with no context given \citep{voorhees1999trec, huang2020recent,zhu2021retrieving,zhang2022survey}. It is challenging even for humans without access to a large external knowledge corpus. The most common and de facto approach for ODQA is now the Retriever-Reader pipeline \citep{chen-etal-2017-reading}: first retrieving the documents most relevant to the question, then applying a reader model to extract or generate the final answer conditioned on these documents \citep{karpukhin-etal-2020-dense, lewis2020retrieval, izacard-grave-2021-leveraging}. Although these methods perform well, they usually need to index the whole of Wikipedia, leading to a huge storage cost, and the pipeline is also quite complicated. With the emergence of Large Language Models (LLMs) like GPT3 \citep{brown2020language}, FLAN \citep{wei2022finetuned}, OPT \citep{zhang2022opt}, and InstructGPT \citep{ouyang2022training}, some researchers have started to use them for ODQA tasks. Through large-scale unsupervised pre-training, LLMs have stored sufficient knowledge in their parameters to answer most open-domain questions and can also recall it precisely given a simple query in natural language. Theoretically, they are capable of generating correct answers without any training data or external corpus, but in practice there still exists a clear gap between LLMs and fully fine-tuned models. To unlock more of their potential, several attempts have been made in previous works, such as encouraging models to generate a rationale called a \textit{chain-of-thought} before the final answer \cite{wei2022chain, kojima2022large}, or asking the model to first generate a contextual document and then answer the question based on it in a second forward pass \cite{yu2022generate}. However, these methods use only a small portion of the capabilities of LLMs and do not fully utilize many other skills that may be helpful for ODQA. In this paper, we focus on ODQA with no training data and no external corpus, and we propose \textbf{Self-Prompting} the LLM to explicitly activate various capabilities of LLMs and combine these capabilities to further explore the upper bound of performance. In the preparation stage, the LLM is required to generate a pseudo QA dataset in the following steps: write a short Wikipedia-style passage, extract named entities in this passage as answers, raise a corresponding question for each answer, and explain each generated QA pair in a short sentence based on the passage. Relying on the strong instruction-understanding ability of LLMs, all these sub-tasks can be perfectly conducted with simple natural language prompts. We can automatically build a pseudo dataset by repeating these procedures, where each item is a high-quality QA pair with a related context passage and a short explanatory sentence. During inference, we propose a novel clustering-based retrieval method to select both similar and diverse examples from this pseudo dataset as the in-context demonstrations for each test sample. These selected QA pairs, along with the passages and explanations, are concatenated with the test question in a specific order to form the final input sequence, which is then fed into the LLM to get the final answer.
We evaluate our method on three ODQA benchmarks, including WebQ \citep{berant-etal-2013-semantic}, NQ \citep{kwiatkowski-etal-2019-natural} and TriviaQA \citep{joshi-etal-2017-triviaqa}. Experimental results show that our method significantly surpasses the plain zero-shot baseline (+16 EM on average), as well as the previous SOTA method \textit{GENREAD} \citep{yu2022generate} (+9 EM on average). We also conduct extensive ablation studies, case studies and analyses to discuss the effects of the different generated components, the formats for placing them into the input sequence, the ways of selecting demonstrations from the pseudo dataset, the quality of the generated QAs, and many other aspects of our framework. In general, our contributions can be summarized as follows: \begin{itemize} \item We propose Self-Prompting to comprehensively combine multiple capabilities of LLMs for zero-shot ODQA. It is able to automatically build a pseudo yet high-quality, annotated ODQA dataset in advance, and use it in an in-context learning manner. \item We propose a clustering-based retrieval method to effectively utilize the built pseudo dataset by selecting both semantically similar and diverse examples for each test sample. \item We conduct extensive experiments to show the effectiveness of Self-Prompting on three ODQA tasks. It outperforms the previous SOTA under the zero-shot setting, and is comparable to several fine-tuned Retriever-Reader models. \end{itemize} \begin{figure*} \centering \includegraphics[height=0.45\textwidth]{imgs/overall_framework.pdf} \caption{The overall framework for Self-Prompting on ODQA. In the self-generation steps, \lightblue{blue} refers to contents generated in previous steps or manually designed topics, and \lightgreen{green} marks the newly generated text. In inference, \blueback{question}, \greenback{answer}, and \redback{explanation} mark the question, answer, and explanation for both the demonstrations and the test sample.} \label{fig:overall} \end{figure*} \section{Related Works} \paragraph{Retriever-Reader Models for ODQA} The mainstream method to tackle ODQA tasks is the Retriever-Reader architecture. It first leverages a retriever over a large knowledge corpus like Wikipedia to select several related documents that may contain the answer; then a reader is used to process the retrieved documents and predict the final answer. Conventional models use sparse retrieval methods such as TF-IDF or BM25, while recent works choose dense retrieval based on representation vectors encoded by pre-trained language models \cite{karpukhin-etal-2020-dense, lewis2020retrieval, guu2020retrieval, izacard-grave-2021-leveraging}. As for the reader, there are also two different choices: extractive readers like BERT \cite{devlin-etal-2019-bert} or generative ones like T5 \cite{raffel2020exploring}. A similar work in this branch is PAQ \cite{lewis-etal-2021-paq}, which first generated 65 million probably-asked questions based on the complete dump of Wikipedia, and directly retrieves these questions instead of documents. \paragraph{LLM and In-context Learning} Generally, Large Language Models (LLMs) refer to pre-trained models with tens or hundreds of billions of parameters. Some prominent examples are GPT3, FLAN, PaLM, OPT, and InstructGPT \cite{brown2020language, wei2022finetuned, chowdhery2022palm, zhang2022opt, ouyang2022training}. These models are trained with large-scale unsupervised learning, and are able to perform NLP tasks by converting inputs into natural language queries without further training.
Since the cost of fine-tuning these models is extremely high, the usual way of using them is in-context learning, i.e., placing some input-output pairs as demonstrations in front of the test sample. Some previous works have investigated the calibration \cite{zhao2021calibrate}, example selection \cite{liu-etal-2022-makes,rubin-etal-2022-learning} and ordering \cite{lu-etal-2022-fantastically} of in-context learning, while to the best of our knowledge we are the first to use the LLM itself to generate the examples used for in-context learning. \paragraph{Enhancing Models with LLM generation} A recent line of research aims to use the outputs generated by LLMs to facilitate the training of small models. For example, \citet{ye2022zerogen} use GPT2 \cite{radford2019language} to generate pseudo data to train tiny language models, and \citet{wang2022elaboration} distill the knowledge of GPT3 into GPT2 for commonsense QA. Another line of work tries directly using contents generated by the LLM itself. Some works use LLMs to first generate relevant contexts or background documents and then provide them as additional input when answering questions \cite{liu-etal-2022-generated, yu2022generate}, while others focus on eliciting a series of intermediate reasoning steps referred to as a \textit{chain-of-thought} for arithmetic problems \cite{wei2022chain, kojima2022large, zhang2022automatic}. \section{Approach} We present the details of Self-Prompting in this section. It can be divided into two stages: preparation and inference. In the first stage, we require the LLM to automatically build a pseudo ODQA dataset by prompting it to generate QA pairs with context passages and explanations. In the second stage, we dynamically select several examples from the pool through a clustering-based retrieval method as in-context demonstrations to help the LLM understand and answer the given question. The overall framework is shown in Figure \ref{fig:overall}. \subsection{QA Pool Generation} As preparation in advance, we first ask the LLM to automatically generate a QA pool as a pseudo dataset. This proceeds in the following steps: \paragraph{Passage Generation} \label{para:pg} To ensure the diversity of the generated passages, we first manually design some topics (like \textit{countries, books, tourist attractions} etc.) that are likely to appear in ODQA, referring to the dataset statistics in TriviaQA \citep{joshi-etal-2017-triviaqa}. For each topic, the LLM is asked to list some examples with instructions like \texttt{"List some \{topic\}:"}, and this step is repeated until we have collected a certain number of different examples for the topic. Through this, we obtain a large number of examples covering different categories, and they are leveraged to generate short Wiki-style passages with the following prompt: \texttt{"This is a passage from Wikipedia about the \{topic\}, \{example\}:"}. \paragraph{Named Entity Recognition} We extract the named entities in these generated passages as the candidate answers. Usually, this NER step is conducted by a small fine-tuned model. In our framework, it is also done by the LLM itself. For a given generated passage, the prompt might be \texttt{"Here is a passage: \{passage\} Extract the named entities in it:"}, and we can get the entities in this passage. \paragraph{Question Generation} Named entities (like dates, names, and locations) extracted in the previous step are used as the candidate answers.
We then ask the LLM to output a proper question for each answer based on the given passage, with prompts like \texttt{"\{passage\} \{entity\}\ is the answer to the question:"}. To ensure the correctness of the QA pair, we ask the LLM to do a double-check, i.e., to reanswer the question based on the passage and see whether it can recover the entity. In practice, we observe that conflicts between the new predictions and the raw entities are often caused by a failure to generate a related question, whereas the new predictions usually match the question well. We therefore keep the new predictions as the final answers. \paragraph{Explain the QA pair} For each QA pair, we ask the LLM to return a one-sentence explanation for it based on the passage. The prompt is like \texttt{"Passage: \{passage\} Question: \{question\} Answer: \{answer\} You can refer to the passage and write a short explanation to this Question-Answer pair:"}. In this step, we try to elicit the summarization and induction skills of the LLM to provide a fine-grained annotation for the generated QA pairs. \subsection{Dynamic In-context Demonstrations Selection for Inference} It is an open question how to best use the pseudo QA dataset that the LLM has generated in the preparation stage. We focus on two aspects, namely selection and format. \paragraph{Clustering-based Retrieval} Some previous works point out that using examples with high semantic similarity as in-context demonstrations brings benefits \citep{liu-etal-2022-makes}, while others claim that a fixed set of examples based on clustering is better \cite{zhang2022automatic}. We propose to combine these two approaches (see the sketch at the end of this section). First, each QA pair in the pool is encoded into a vector representation with Sentence-BERT \citep{reimers-gurevych-2019-sentence}. Supposing $k$ examples are needed as in-context demonstrations, the pseudo QA pairs are clustered into $k$ categories by the k-means algorithm. For a given question, we also use Sentence-BERT to encode it, and retrieve the most similar example from each cluster with a simple cosine similarity. This selection method balances the similarity and diversity of the demonstrations. \paragraph{Answer then Explain} The last aspect is the organization format of these selected examples. In the input sequence, we first put these examples sequentially in the format \textit{Question $\to$ Answer $\to$ Explanation}, and place the test question at the end of the sequence. The specific template is shown in the Appendix. By doing so, the LLM sees much more information than in a plain QA mode, and it can also give a brief explanation for its answer. This is quite different from the common practice in \textit{chain-of-thought} prompting, i.e., generating a rationale before the answer, but our experiments prove the effectiveness of the answer-then-explain choice.
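To make the selection procedure concrete, below is a minimal Python sketch of the clustering-based retrieval (the function and variable names are hypothetical, and the pool is assumed to be a list of texts, e.g. the questions of the pseudo QA pairs; the encoder is the same \texttt{all-mpnet-base-v2} Sentence-BERT model used in our experiments):

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sentence_transformers import SentenceTransformer

def select_demonstrations(test_question, pool_texts, k=10):
    """Pick the most similar pseudo QA example from each of k clusters."""
    encoder = SentenceTransformer('all-mpnet-base-v2')
    pool_emb = encoder.encode(pool_texts, normalize_embeddings=True)
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(pool_emb)
    q_emb = encoder.encode([test_question], normalize_embeddings=True)[0]
    sims = pool_emb @ q_emb  # cosine similarity (normalized vectors)
    selected = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        selected.append(int(idx[np.argmax(sims[idx])]))
    return selected  # indices of the k demonstrations in the pool
\end{verbatim}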
\begin{table} \centering \setlength{\tabcolsep}{3pt} \begin{tabular}{lcccc} \toprule Datasets & Test & Q Words & A Words & Answers \\ \midrule WebQ & 2.0K & 6.8 & 2.5 & 2.4 \\ NQ & 3.6K & 9.1 & 2.2 & 2.0 \\ TriviaQA & 11K & 14.0 & 2.5 & 14.0 \\ \bottomrule \end{tabular}% \caption{Statistics for each dataset. Q Words, A Words, and Answers refer to the average number of words per question, words per answer, and reference answers per sample in the Test split, respectively.} \label{tab:data_stat}% \end{table}% \begin{table*} \centering \setlength{\tabcolsep}{3pt} \begin{tabular}{l|ccc|cccc} \toprule \multirow{2}{*}{Models} & \# Total & Train & External & \multirow{2}{*}{WebQ} & \multirow{2}{*}{NQ} & \multirow{2}{*}{TriviaQA} & \multirow{2}{*}{Avg.} \\ & Params. & Data & Corpus \\ \midrule \multicolumn{8}{l}{\textit{*fine-tuned models without retrieval}} \\ T5-SSM \cite{roberts-etal-2020-much} & 11B & \ding{51} & \ding{55} & 40.8 & 34.8 & 51.0 & 42.2 \\ \midrule \multicolumn{8}{l}{\textit{*retrieval-augmented fine-tuned models}} \\ REALM \cite{guu2020retrieval} & 330M & \ding{51} & \ding{51} & 40.7 & 40.4 & 55.8 & 45.6 \\ DPR \cite{karpukhin-etal-2020-dense} & 330M & \ding{51} & \ding{51} & 41.1 & 41.5 & 56.8 & 46.5 \\ RAG \cite{lewis2020retrieval} & 620M & \ding{51} & \ding{51} & 45.2 & 44.5 & 56.1 & 48.6 \\ \midrule \multicolumn{8}{l}{\textit{*retrieval-augmented prompting LLMs (DPR trained on target datasets)}} \\ Google+InstructGPT & 175B & \ding{55} & \ding{51} & 19.9 & 27.8 & 58.7 & 35.5 \\ DPR+InstructGPT & 175B & \ding{55} & \ding{51} & 20.1 & 29.9 & 55.3 & 35.1 \\ \midrule \multicolumn{8}{l}{\textit{*directly prompting LLMs}} \\ InstructGPT & 175B & \ding{55} & \ding{55} & 18.6 & 20.9 & 52.6 & 30.7 \\ GENREAD (InstructGPT) \cite{yu2022generate} & 175B & \ding{55} & \ding{55} & 24.8 & 28.2 & 59.3 & 37.4 \\ \midrule \multicolumn{8}{l}{\textit{*our method, self prompting by generating pseudo QA for in-context learning}} \\ Self-Prompting (InstructGPT) & 175B & \ding{55} & \ding{55} & 35.6 & 36.2 & 66.8 & 46.2 \\ \bottomrule \end{tabular}% \caption{Main results on three ODQA benchmarks; Self-Prompting is free from any training data and external knowledge corpus. \# Total Params. is the total number of model parameters in the system (e.g., RAG uses two BERT-base encoders with 110M$\times$2 parameters and one BART-large with 400M). Train Data indicates whether the system is trained on training data, and External Corpus indicates whether an external knowledge corpus is used to retrieve documents.} \label{tab:main_res}% \end{table*}% \section{Experiments} \subsection{Datasets and Settings} In this paper we conduct experiments on three ODQA benchmarks, including WebQ \citep{berant-etal-2013-semantic}, NQ \citep{kwiatkowski-etal-2019-natural} and TriviaQA \citep{joshi-etal-2017-triviaqa}. The dataset statistics are given in Table \ref{tab:data_stat}. We use InstructGPT \cite{ouyang2022training} (the \texttt{text-davinci-002} version of GPT3 \cite{brown2020language}) as the LLM, which is consistent with previous works \cite{wei2022chain,kojima2022large,yu2022generate}. The exact model we use as Sentence-BERT is \texttt{all-mpnet-base-v2}, and the number of demonstrations for in-context learning is 10. In passage generation, we design 29 topics in advance. Their names and the numbers of required examples for each topic are given in the Appendix. In question generation, we ban some pronouns like \textit{they, he, she} by setting their \texttt{logit\_bias} to -100 in the API call, to prevent ambiguous questions (e.g.
\textit{what did he do in 1997?}). After the generation process, we filter out QA pairs where the answer has more than 5 words or where the LLM outputs an explanation sentence with no answer span in it. For each passage, the upper limit on the number of generated QA pairs is 10. After filtering duplicated questions, we collect 1,216 passages and 4,883 QA pairs with explanations. The parameters such as \texttt{max\_tokens} and \texttt{temperature} for LLM generation in each step are given in the Appendix. The metric we evaluate is Exact Match (EM), with the same answer normalization as in \citet{karpukhin-etal-2020-dense}. We also observe that in WebQ, if the correct answer contains multiple listed entities, the reference is often given as only one of them, so we perform an additional post-processing step on this dataset to extract only the first entity when the LLM predicts multiple ones (e.g. only returning \textit{A} if the raw prediction is \textit{A, B and C}). The baselines we select include direct prompting: InstructGPT \cite{ouyang2022training}, GENREAD \cite{yu2022generate}; retrieval-augmented LLM prompting: DPR+InstructGPT, Google+InstructGPT; fine-tuned models with no retrieval: T5-SSM 11B \cite{roberts-etal-2020-much}; and retrieval-augmented fine-tuned models: REALM \cite{guu2020retrieval}, DPR \cite{karpukhin-etal-2020-dense}, RAG \cite{lewis2020retrieval}. \subsection{Main Results} The main results are shown in Table \ref{tab:main_res}. Compared to direct prompting methods, our Self-Prompting method surpasses the InstructGPT baseline by +15.5 EM on average, and the previous SOTA method GENREAD by +8.8 EM on average. This strongly indicates that first generating a high-quality, annotated pseudo dataset and then using it for in-context learning can comprehensively invoke the capabilities of the LLM in different aspects, bringing a significant improvement over the simple and crude ways of directly prompting the LLM. Self-Prompting is also better than retrieval-augmented prompting methods, which shows that the LLM itself has stored enough world knowledge in its parameters, so there is no need to explicitly collect a large external corpus for retrieval. We notice that Self-Prompting achieves higher EM than T5-SSM 11B on the two datasets other than WebQ, even though we do not give any training data to InstructGPT, showing that the potential of LLMs for ODQA is large under the zero-shot setting. Finally, we find that Self-Prompting obtains results comparable to some powerful retrieval-augmented fine-tuned models with regard to the average score on the three datasets; on TriviaQA in particular we see a gain of more than +10 EM. On WebQ and NQ, Self-Prompting lags behind these methods, but we find that this is mainly caused by features of these two datasets such as fewer reference answers per question and outdated answers. We discuss this phenomenon in detail in the Analysis section with case studies. \section{Analysis} To reduce the cost of using the OpenAI APIs, we conduct several ablation studies on subsets of the three datasets obtained by randomly selecting 1,000 samples from their test sets. \begin{table} \centering \setlength{\tabcolsep}{2.8pt} \begin{tabular}{ll|cccc} \toprule Demos & \multicolumn{1}{l}{Pred.} & WebQ & NQ & TriviaQA & Avg.
\\ \midrule \midrule \multicolumn{6}{l}{\textit{*one iteration}} \\ QA & Q→A & 35.6 & \textbf{37.8} & 68.2 & 47.2 \\ \midrule QAE & Q→AE & \textbf{38.7} & 37.2 & \textbf{68.7} & \textbf{48.2} \\ QAP & Q→AP & 37.6 & 35.6 & 67.6 & 46.9 \\ QEA & Q→EA & 34.6 & 33.8 & 61.6 & 43.3 \\ QPA & Q→PA & 32.0 & 31.2 & 57.8 & 40.3 \\ QAEP & Q→AEP & 35.4 & 37.6 & 67.8 & 46.9 \\ QAPE & Q→APE & 37.2 & 34.8 & 67.2 & 46.4 \\ \midrule \midrule \multicolumn{6}{l}{\textit{*two iterations}} \\ 1: QP & Q→P & - & - & - & - \\ 2: PQA & PQ→A & 30.6 & 32.0 & 62.0 & 41.5 \\ \midrule 1: QE & Q→E & - & - & - & - \\ 2: EQA & EQ→A & 34.8 & 32.8 & 63.4 & 43.7 \\ \bottomrule \end{tabular}% \caption{Using different formats for in-context learning.} \label{tab:format_ana}% \end{table}% \subsection{How to Use Generated Passages and Explanations} \label{sec:format} A crucial question is how to use the byproducts, namely the passages and explanations, to form a better format for in-context learning. Given a list of 10 demonstrations (Q, A, P, E)$\times$10 (selected by clustering-based retrieval), we investigate several input formats, as shown in Table \ref{tab:format_ana}. The \textit{one iteration} methods require only one API call to generate the answer and the passage/explanation, while in the \textit{two iterations} methods the LLM needs to generate the passage/explanation first, which is then put into the input sequence to generate the answer in a second API call. Detailed templates for these methods are given in the Appendix. From Table \ref{tab:format_ana} we see that the simplest QA format is already sufficient to yield good performance, and only QAE surpasses it. The other three answer-then-explain formats, QAP, QAEP and QAPE, are worse than QA, which indicates that the redundant information in the passages is harmful to LLMs. We also find that the two \textit{chain-of-thought} style formats, QEA and QPA, are much worse than the baselines. In \citet{lu2022learn} and \citet{wei2022chain}, similar findings are reported: \textit{chain-of-thought} prompting is beneficial for complex, multi-hop math reasoning tasks but has little impact on commonsense questions. Finally, the \textit{two iterations} methods also lead to a large performance drop, which makes them the worst settings considering the doubled inference time and cost. \begin{table} \centering \setlength{\tabcolsep}{0.8pt} \begin{tabular}{lcccc} \toprule Selection & WebQ & NQ & TriviaQA & Avg. \\ \midrule Random & $34.7_{1.4}$ & $36.9_{1.5}$ & $67.8_{1.6}$ & $46.4_{1.3}$ \\ Retrieve & 36.2 & 37.0 & 66.8 & 46.7 \\ ClusterCenter & 35.0 & \textbf{37.4} & \textbf{68.8} & 47.1 \\ RetrieveInCluster & \textbf{38.7} & 37.2 & 68.7 & \textbf{48.2} \\ \bottomrule \end{tabular}% \caption{Different ways of selecting demonstrations. We run random selection 5 times with different seeds, and report the mean value and standard deviation.} \label{tab:selection_ana}% \end{table}% \subsection{Ways of Demonstration Selection} Since the size of the pseudo QA dataset automatically generated by the LLM is much larger than the number of examples we put in the input sequence for in-context learning, a proper selection method is necessary. We conduct experiments with four settings: randomly selecting QAs from the pool (Random), retrieving the most similar QAs with cosine similarity globally (Retrieve), selecting the QA closest to the centroid of each cluster (ClusterCenter), and retrieving the most similar QA in each cluster (RetrieveInCluster). The other hyper-parameters are kept the same, i.e., 10 demonstrations and the QAE format.
Results in Table \ref{tab:selection_ana} show that random selection performs the worst, and it also suffers from instability. Both the Retrieve and ClusterCenter methods bring some improvement over Random, but the gains are not consistent across the three datasets. The method we propose, RetrieveInCluster, combines the advantages of both by selecting demonstrations that are both diverse and semantically similar, so it is robust enough to achieve satisfactory scores across all datasets. \begin{figure} \centering \includegraphics[height=0.38\textwidth]{imgs/num_vs_traindata_2.png} \caption{Performance for different numbers of demonstrations in Self-Prompting, as well as for a fixed set of training data used as demonstrations for each dataset.} \label{fig:num_vs_traindata} \end{figure} \begin{table*}[t] \centering \begin{tabular}{p{15.5cm}} \toprule \textbf{Topic:} Movie or TV series \\ \textbf{Example:} Hocus Pocus \\ \textbf{Passage:} \teal{Hocus Pocus is a 1993 American Halloween fantasy comedy horror film directed by Kenny Ortega, produced by Walt Disney Pictures, and written by Neil Cuthbert and Mick Garris.} The film stars Bette Midler, Kathy Najimy, and Sarah Jessica Parker as a trio of witches who are inadvertently resurrected by a teenage boy in Salem, Massachusetts. The film was released on July 16, 1993, and grossed \red{\$39 million} in its opening weekend. It went on to gross \red{\$817 million} worldwide. \\ \textbf{Entity:} \blue{Neil Cuthbert and Mick Garris} \\ \textbf{Question:} Who wrote Hocus Pocus? \\ \textbf{Explanation:} Hocus Pocus is a 1993 American Halloween fantasy comedy horror film directed by Kenny Ortega, produced by Walt Disney Pictures, and written by \blue{Neil Cuthbert and Mick Garris.} \\ \\ \textbf{Topic:} Actor or actress \\ \textbf{Example:} George Clooney \\ \textbf{Passage:} George Clooney is an American actor, director, producer, screenwriter, and \red{businessman}. \teal{He is the recipient of \red{three} Golden Globe Awards and two Academy Awards, one for acting in Syriana (2005) and the other for co-producing Argo (2012).} In 2018, he was the recipient of the AFI Lifetime Achievement Award. \\ \textbf{Entity:} \blue{Argo} \\ \textbf{Question:} What is the name of the film that won George Clooney an Academy Award for co-producing? \\ \textbf{Explanation:} George Clooney won an Academy Award for co-producing the film \blue{"Argo."} \\ \\ \textbf{Topic:} Historical event \\ \textbf{Example:} The Battle of Gallipoli \\ \textbf{Passage:} The Battle of Gallipoli was a military campaign that took place during World War I. The campaign was fought by the British and French against the Ottoman Empire and lasted from \red{April} 1915 to January 1916. \teal{The battle was fought in an effort to force the Ottoman Empire out of the war, and to open up a supply route to Russia through the Dardanelles and the Black Sea.} The campaign ended in failure, and resulted in the deaths of \red{over half a million men}. \\ \textbf{Entity:} \blue{Dardanelles} \\ \textbf{Question:} What is the name of the waterway that was the site of the Battle of Gallipoli? \\ \textbf{Explanation:} The Battle of Gallipoli was fought along the \blue{Dardanelles}, a waterway that connects the Aegean Sea to the Sea of Marmara.
\\ \bottomrule \end{tabular}% \caption{Three examples from the pseudo QA dataset.} \label{tab:selfgen_examples}% \end{table*}% \begin{table*} \centering \begin{tabular}{l|l|ccc} \toprule \multicolumn{1}{c|}{\multirow{2}[2]{*}{Case}} & \multicolumn{1}{c|}{\multirow{2}[2]{*}{Examples}} & \multicolumn{3}{c}{Ratio (\%)} \\ & & WebQ & NQ & TriviaQA \\ \midrule \midrule \multirow{2}[1]{*}{AW, EW} & \textbf{Q:} what is the only anagram of the word `english`? & \multirow{2}[1]{*}{16*} & \multirow{2}[1]{*}{40} & \multirow{2}[1]{*}{59} \\ & \textbf{Ref:} Shingle; \textbf{Pred:} Elinghs & & & \\ \midrule \multirow{2}[2]{*}{Need Details} & \textbf{Q:} what countries does greece share borders with? & \multirow{2}[2]{*}{10} & \multirow{2}[2]{*}{3} & \multirow{2}[2]{*}{1} \\ & \textbf{Ref:} Turkey; \textbf{Pred:} several countries & & & \\ \midrule \midrule \multirow{2}[1]{*}{Form} & \textbf{Q:} who is judy garland father? & \multirow{2}[1]{*}{34} & \multirow{2}[1]{*}{25} & \multirow{2}[1]{*}{31} \\ & \textbf{Ref:} Francis Avent Gumm; \textbf{Pred:} Frank Gumm & & & \\ \midrule \multirow{2}[2]{*}{Multiple} & \textbf{Q:} who are china's neighbors? & \multirow{2}[2]{*}{15} & \multirow{2}[2]{*}{6} & \multirow{2}[2]{*}{5} \\ & \textbf{Ref:} Pakistan; \textbf{Pred:} Russia & & & \\ \midrule \multirow{2}[2]{*}{RW} & \textbf{Q:} who plays caesar flickerman in the hunger games? & \multirow{2}[2]{*}{9} & \multirow{2}[2]{*}{12} & \multirow{2}[2]{*}{1} \\ & \textbf{Ref:} Art Conforti; \textbf{Pred:} Stanley Tucci & & & \\ \midrule \midrule \multirow{2}[1]{*}{Open} & \textbf{Q:} what is there to do for fun in kansas city? & \multirow{2}[1]{*}{9} & \multirow{2}[1]{*}{1} & \multirow{2}[1]{*}{1} \\ & \textbf{Ref:} Kemper Arena; \textbf{Pred:} visit Kansas City Zoo & & & \\ \midrule \multirow{2}[2]{*}{Time} & \textbf{Q:} who did carlos boozer play for? & \multirow{2}[2]{*}{6} & \multirow{2}[2]{*}{6} & \multirow{2}[2]{*}{2} \\ & \textbf{Ref:} Utah Jazz; \textbf{Pred:} the Chicago Bulls & & & \\ \midrule \multirow{2}[2]{*}{Unanswerable} & \textbf{Q:} the legend of heroes trails in the sky the 3rd vita & \multirow{2}[2]{*}{1} & \multirow{2}[2]{*}{7} & \multirow{2}[2]{*}{0} \\ & \textbf{Ref:} July 14, 2016; \textbf{Pred:} PlayStation Vita & & & \\ \bottomrule \end{tabular}% \caption{Error analysis on 100 randomly selected samples with EM score $=0$ from each dataset. To save space, only one reference answer is displayed if there are multiple ones. *We note that in WebQ, there are 2 cases where the answer is wrong but the explanation is correct, and 1 case with a correct answer and a wrong explanation.} \label{tab:error_ana}% \end{table*}% \subsection{Different Numbers of Demonstrations} \label{sec:num_analysis} A natural idea in in-context learning is to put as many examples as possible in the input sequence, so we also investigate the effect of different numbers of demonstrations, shown as the \greenline{green line} in Figure \ref{fig:num_vs_traindata}. We report the average EM score over the three datasets with the number of demonstrations in \{2, 4, \dots, 14, 16\}. For 2--10 demonstrations, the performance of our method generally improves as the number increases, while using more than 10 examples does not bring significant further improvement. As a result, we choose 10 demonstrations in our main experiments for both performance and cost considerations.
\subsection{Comparison between Self-Prompting and Using Training Data} To evaluate the quality of the pseudo dataset generated by the LLM, we randomly select a set of samples from the training sets of the benchmarks for in-context learning, and annotate them with related Wiki passages and short explanation sentences, mirroring what is done automatically in Self-Prompting. Following Section \ref{sec:num_analysis}, we report the average EM score on the three subsets with 2--16 demonstrations, and try both the \blueline{QA} and \orangeline{QAE} formats. The results in Figure \ref{fig:num_vs_traindata} reveal that our Self-Prompting method performs on par with using the manually annotated training data. Compared with Traindata-QA, Self-Prompting is only about 1 EM lower across different numbers of demonstrations, showing that the LLM itself is powerful enough to tackle ODQA with a fine-grained, step-by-step guide, even without any training data. We also observe a stable boost from Traindata-QA to Traindata-QAE, illustrating that the QAE format is effective not only on the pseudo data constructed by the LLM, but also on real training data. \subsection{Data Generation Quality Analysis} To further explore the quality of the pseudo dataset the LLM generated, we present three examples in Table \ref{tab:selfgen_examples} as a case study, with key sentences in the passages highlighted in \teal{teal} and answer entities in \blue{blue}. Overall, the generated passages are accurate, but they still contain some factual mistakes, or hallucinations \citep{ji2022survey}, in the texts (highlighted in \red{red}). The questions generated for the extracted entities cover different types (e.g., person, item, location). They are proper and answerable even with no context given, which is in line with common open-domain questions. As an important part of Self-Prompting, the explanations written by the LLM are of high quality. In the first example, the LLM precisely extracts the key sentence from the passage; in the second example, the LLM successfully conducts co-reference resolution to replace \textit{He} with \textit{George Clooney} and removes redundant text; in the last example, the LLM not only summarizes the key sentence, but also adds extra information not mentioned in the passage to its output. In all, Self-Prompting can automatically generate a pseudo yet high-quality ODQA dataset with passages and explanations as annotations. \subsection{Error Analysis} Finally, we conduct an error analysis to see why Self-Prompting fails on some questions. We randomly select 100 questions with an EM score of 0 from each dataset, and manually examine them. From these 300 samples, we identify 3 major categories and 8 minor types: \textbf{1) True Negative} - AW, EW (both the prediction and the explanation are incorrect), Need Details (the prediction is not specific enough); \textbf{2) False Negative} - Form (the prediction and the reference are the same thing but in different forms), Multiple (the prediction is not in the list of references but is also a correct answer), RW (the reference itself is incorrect); \textbf{3) Bad Question} - Open (an open question with no exact answer), Time (cannot be answered without clarifying the time), Unanswerable (the information needed to answer the question is incomplete). The results are shown in Table \ref{tab:error_ana}. We observe that there is a large number of False Negatives across all three datasets (58, 43, 37), so Self-Prompting is largely underestimated.
Most of them are of the Form type, which indicates that the EM score is a sub-optimal metric for evaluating ODQA systems. The quality of WebQ and NQ is low according to the table, as they contain many poorly annotated answers (Multiple, RW) and bad questions (Open, Time, Unanswerable). In particular, many questions in these two datasets rely on the Wikipedia dump of a certain time point, and using the latest one even hurts performance \citep{izacard2022few,yu2022generate}. In TriviaQA, the True Negative rate is much higher than in the other two, so it better reflects the performance of Self-Prompting. This is also a potential explanation for why Self-Prompting significantly outperforms the fine-tuned baselines on TriviaQA but obtains lower scores on WebQ and NQ. \section{Conclusion} In this paper, we propose Self-Prompting Large Language Models (LLMs) for Open-Domain Question Answering (ODQA). Our method requires the LLM to generate a pseudo QA dataset with matching passages and explanations, which is then used for in-context learning. It successfully stimulates the potential of the LLM in ODQA by explicitly activating various language understanding abilities and eliciting the world knowledge stored in its parameters. With no training data or external corpus, Self-Prompting surpasses the previous SOTA significantly, and performs on par with several retrieval-augmented fine-tuned models. In future work, we will improve the framework, for example by eliminating hallucination in generation and reducing manual design. We will also extend Self-Prompting to other kinds of NLP tasks to demonstrate its general effectiveness, and further unleash the power of LLMs in different fields.
\section{Introduction} Anisotropies in the cosmic microwave background (CMB) radiation and inhomogeneities in the large scale structures of the Universe have nowadays become a fundamental tool to study the early universe~\cite{infla1}. Present and future data will allow us to discriminate among different inflationary models. For this reason, the comparison of observations with inflationary models requires theoretical advances in the predictions of the power spectrum of primordial perturbations beyond the lowest order in the slow-roll parameters $\epsilon_i$ (their definitions will be recalled in Section~\ref{sCP}), first obtained by Stewart and Lyth~\cite{SL}. \par An analytic form for the {\em full\/} inflationary power spectra to second order in the slow-roll parameters was first obtained through the Green's function method (GFM henceforth) in Ref.~\cite{gongstewart}. This provided a characterization of the power spectrum to second order in the slow-roll parameters that was not just derived from the running of the spectral index (whose leading order is precisely ${\cal O}(\epsilon^2)$~\cite{KT}). An equivalent second order characterization has recently been obtained by means of the improved WKB approximation of Refs.~\cite{WKB_PLB,WKB_lungo}, which extended to second order in the slow-roll parameters previous results based on a more standard WKB approach~\cite{MS,WKB1}. The WKB approximation has confirmed the structure of the second order power spectra found with the GFM, within a numerical difference in the ${\cal O}(\epsilon^2)$ coefficients of $5\,\%$ at most~\cite{WKB_PLB}. Compared with the GFM, the WKB approximation has the additional advantage that the slow-roll parameters do not have to be constant in time~\cite{WKB_lungo}, so that it can be applied to a wider class of inflationary models. \par The purpose of this paper is to illustrate the use of the {\em method of comparison equations\/} (MCE in brief)~\cite{MG,dingle,berry} to predict inflationary power spectra. We shall see that this method yields exact results for the case of constant slow-roll parameters (e.g.~power-law inflation) and polynomial structures to second order in the $\epsilon_i$'s in agreement with the GFM and WKB approximation. The MCE, however, has the advantage of being more accurate to lowest (leading) order, whereas other methods (namely, the WKB~\cite{MS}, improved WKB~\cite{WKB1} and GFM~\cite{gongstewart}) reach a similar accuracy only at next-to-leading order. We shall also discuss cases for which our present method appears more flexible than the slow-roll approximation. \par In the next Section we briefly review the general MCE and in Section~\ref{sCP} the theory of cosmological perturbations. We then apply the MCE to cosmological perturbations in Section~\ref{sMCECP}, where we also analyze the error around the ``turning point'' in detail. In Section~\ref{s_app} we analyze power-law inflation, chaotic inflation and the arctan model; we expand our general results to second slow-roll order and compare with analogous results obtained with the GFM. We finally comment on our results in Section~\ref{sC}. Some more technical details are given in two Appendices. \section{Method of comparison equations} \label{sMCE} Let us briefly review the MCE (the name is due to Dingle~\cite{dingle}). It was independently introduced in Refs.~\cite{MG,dingle} and applied to wave mechanics by Berry and Mount in Ref.~\cite{berry}.
The standard WKB approximation and its improvement by Langer~\cite{langer} are just particular cases of this method and, recently, its connection with the Ermakov-Pinney equation was also studied~\cite{KLV}. \par Let us consider the second-order differential equation \begin{eqnarray} \left[ \frac{{\d}^2}{{\d}x^2}+\omega^2(x) \right] \,\chi(x)=0 \ , \label{exact_EQ} \end{eqnarray} where $\omega^2$ is a (not necessarily positive) ``potential'' (or ``frequency''), and suppose that we know an exact solution to a similar second-order differential equation, \begin{eqnarray} \left[ \frac{{\d}^2}{{\d}\sigma^2}+\Theta^2(\sigma) \right] \,U(\sigma)=0 \ , \label{aux_EQ} \end{eqnarray} where $\Theta$ is the ``comparison function''. One can then represent an exact solution of Eq.~(\ref{exact_EQ}) in the form \begin{eqnarray} \chi(x)=\left(\frac{{\d}\sigma}{{\d}x}\right)^{-1/2}\,U(\sigma) \ , \label{exact_SOL} \end{eqnarray} provided the variables $x$ and $\sigma$ are related by \begin{eqnarray} \omega^2(x)\!=\!\left(\frac{{\d}\sigma}{{\d}x}\right)^{2}\Theta^2(\sigma) -\left(\frac{{\d}\sigma}{{\d}x}\right)^{1/2} \frac{{\d}^2}{{\d}x^2}\left(\frac{{\d}\sigma}{{\d}x}\right)^{-1/2} \ . \label{new_EQ} \end{eqnarray} Eq.~(\ref{new_EQ}) can be solved by using some iterative scheme, in general cases~\cite{KLV,hecht} or for specific problems~\cite{mori,pechukas}. If we choose the comparison function sufficiently similar to $\omega$, the second term on the right hand side (r.h.s.) of Eq.~(\ref{new_EQ}) will be negligible with respect to the first one, so that \begin{eqnarray} \omega^2(x)\simeq\left(\frac{{\d}\sigma}{{\d}x}\right)^{2} \Theta^2(\sigma) \ . \label{new_EQ_appr} \end{eqnarray} On selecting a pair of values $x_0$ and $\sigma_0$ such that $\sigma_0=\sigma(x_0)$, the function $\sigma(x)$ can be implicitly expressed as \begin{eqnarray} -\xi(x)\equiv \int_{x_0}^x\sqrt{\pm\,\omega^2(y)}\,{\d}\,y \simeq \int_{\sigma_0}^{\sigma}\sqrt{\pm\,\Theta^2(\rho)}\,{\d}\,\rho \ , \label{new_EQ_int} \end{eqnarray} where the signs are chosen conveniently~\footnote{We recall that $\xi(x)$ is the same quantity as used in Ref.~\cite{WKB1,WKB_PLB,WKB_lungo}.}. The result in Eq.~(\ref{exact_SOL}) leads to a uniform approximation for $\chi(x)$, valid in the whole range of the variable $x$, including ``turning points''~\footnote{With this term, borrowed from point particle mechanics, one usually means a real zero of the ``frequency'' $\omega$.}. The similarity between $\Theta$ and $\omega$ is clearly very important in order to implement this method. For instance, the simplest choice of a constant comparison function, $\Theta^2(\sigma)=1$, has exact solutions $U(\sigma)={\rm e}^{\pm i\,\sigma}$, so that Eq.~(\ref{new_EQ_appr}) yields ${\d}\sigma/{\d}x=\omega$ and Eq.~(\ref{exact_SOL}) reduces to the standard WKB form $\chi\propto\omega^{-1/2}\,{\rm e}^{\pm i\int\omega\,{\d}x}$, accurate only far from the turning points, whereas Langer's improvement corresponds to the linear choice $\Theta^2(\sigma)=\sigma$, whose exact solutions are Airy functions. \section{Cosmological perturbations} \label{sCP} Let us begin by recalling that scalar (density) and tensor (gravitational wave) fluctuations on a flat Robertson-Walker background with scale factor $a$ \begin{eqnarray} \d s^2=a^2\,\left(-\d\eta^2 + \d r^2 + r^2 \d\Omega^2\right) \ , \end{eqnarray} are given respectively by $\mu=\mu_{\rm S}\equiv a\,Q$ and $\mu=\mu_{\rm T}\equiv a\,h$, where $Q$ is the Mukhanov variable~\cite{mukh1,mukh2} and $h$ the amplitude of the two polarizations of gravitational waves~\cite{gris,staro}. The functions $\mu$ must satisfy the one-dimensional Schr\"odinger-like equation \begin{eqnarray} \left[\frac{{\d}^2}{{\d}\eta^2}+\Omega^2(k,\eta)\right] \,\mu=0 \ , \label{osci} \end{eqnarray} together with the initial condition (corresponding to a Bunch-Davies vacuum) \begin{eqnarray} \lim_{\frac{k}{a\,H}\rightarrow +\infty} \mu(k,\eta) \simeq\frac{{\rm e}^{-i\,k\,\eta}}{\sqrt{2\,k}} \ .
\label{init_cond_on_mu} \end{eqnarray} In the above $\eta$ is the conformal time (derivatives with respect to it will be denoted by primes), $k$ the wave-number, $H=a'/a^2$ the Hubble parameter and \begin{eqnarray} \Omega^2(k,\eta)\equiv k^2-\frac{z''}{z} \ , \label{freq} \end{eqnarray} where $z=z_{\rm S}\equiv a^2\,\phi'/H$ for scalar and $z=z_{\rm T}\equiv a$ for tensor perturbations ($\phi$ is the homogenous inflaton). The dimensionless power spectra of scalar and tensor fluctuations are then given by \numparts \begin{eqnarray} \mathcal{P}_{\zeta}\equiv \displaystyle\frac{k^{3}}{2\,\pi^{2}}\, \left|\frac{\mu_{\rm S}}{z_{\rm S}}\right|^{2} \ , \ \ \ \ \mathcal{P}_{h}\equiv \displaystyle\frac{4\,k^{3}}{\pi^{2}}\, \left|\frac{\mu_{\rm T}}{z_{\rm T}}\right|^{2} \label{spectra_def} \end{eqnarray} and the spectral indices and runnings by \begin{eqnarray} && n_{\rm S}-1\equiv \left.\displaystyle\frac{\d\ln \mathcal{P}_{\zeta}} {\d\ln k}\right|_{k=k_{*}} \ , \ \ \ n_{\rm T}\equiv \left.\displaystyle\frac{\d\ln \mathcal{P}_{h}} {\d\ln k}\right|_{k=k_{*}} \label{n_def} \\ && \alpha_{\rm S}\equiv\left. \frac{\d^{2}\ln\mathcal{P}_{\zeta}} {(\d\ln k)^{2}}\right|_{k=k_{*}} \ , \ \ \ \alpha_{\rm T}\equiv\left. \frac{\d^{2}\ln \mathcal{P}_{h}} {(\d\ln k)^{2}}\right|_{k=k_{*}} \label{alpha_def} \end{eqnarray} where $k_*$ is an arbitrary pivot scale. We also define the tensor-to-scalar ratio at $k=k_*$ as \begin{eqnarray} R\equiv\left.\frac{\mathcal{P}_{h}}{\mathcal{P}_{\zeta}} \right|_{k=k_{*}} \ . \label{R_def} \end{eqnarray} \endnumparts Finally, in the following we shall often make use of the hierarchy of horizon flow functions (HFF in short, also referred to as slow-roll parameters) $\epsilon_i$'s defined by~\cite{terrero} \begin{eqnarray} \epsilon_1 \equiv -\frac{\dot{H}}{H^2} \ , \quad\quad \epsilon_{n+1} \equiv \frac{\dot{\epsilon}_n}{H\,\epsilon_n} \quad n\ge1 \label{HFF_def} \end{eqnarray} where dots denote derivatives with respect to the cosmic time $\d t=a\,\d\eta$. \section{MCE and cosmological perturbations} \label{sMCECP} In order to apply the MCE to cosmological perturbations, we shall start from the same equation as was used with the improved WKB method in Refs.~\cite{MS,WKB1}, to which we refer for more details. We recall here that the WKB approximation can be more effectively applied {\em after\/} the following redefinitions of the wave-function and variable, \numparts \begin{eqnarray} && \chi=(1-\epsilon_1)^{1/2}\,{\rm e}^{-x/2}\,\mu \\ && x=\ln\left(\frac{k}{H\,a}\right) \ . \end{eqnarray} \endnumparts This yields an equation of the form~(\ref{exact_EQ}) with the ``frequency'' $\Omega(k,\eta)$ of Eq.~(\ref{osci}) replaced by \begin{eqnarray} \omega^2(x)=\frac{{\rm e}^{2\,x}}{\left[1-\epsilon_1(x)\right]^2}-\nu^2(x) \ , \label{our_freq} \end{eqnarray} with $\nu^2(x)$ given, respectively for scalar and tensor perturbations, by \numparts \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \nu_{\rm S}^2(x)&=& \frac{1}{4}\,\left(\frac{3-\epsilon_1}{1-\epsilon_1}\right)^2 +\frac{(3-2\,\epsilon_1)\,\epsilon_2}{2\,(1-\epsilon_1)^2} +\frac{(1-2\,\epsilon_1)\,\epsilon_2\,\epsilon_3}{2\,(1-\epsilon_1)^3} +\frac{(1-4\,\epsilon_1)\,\epsilon_2^2}{4\,(1-\epsilon_1)^4} \label{nu2_S} \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\nu_{\rm T}^2(x)&=& \frac{1}{4}\,\left(\frac{3-\epsilon_1}{1-\epsilon_1}\right)^2 -\frac{\epsilon_1\,\epsilon_2}{2\,(1-\epsilon_1)^2} -\frac{\epsilon_1\,\epsilon_2\,\epsilon_3}{2\,(1-\epsilon_1)^3} -\frac{(2+\epsilon_1)\,\epsilon_1\,\epsilon_2^2}{4\,(1-\epsilon_1)^4} \ , \label{nu2_T} \end{eqnarray} \endnumparts where we omit the dependence on $x$ in the $\epsilon_i$ for the sake of brevity. The point $x=x_0$ where the frequency vanishes, $\omega(x_0)=0$ (i.e.~the classical ``turning point''), is given by the expression \begin{eqnarray} x_0=\ln\left[\bar{\nu}\,\left(1-\bar{\epsilon}_1\right)\right] \ , \end{eqnarray} where we have defined $\bar{\nu}\equiv\nu(x_0)$ and $\bar{\epsilon}_1\equiv\epsilon_1(x_0)$. We now choose the comparison function \begin{eqnarray} \Theta^2(\sigma)= \frac{{\rm e}^{2\,\sigma}}{(1-\bar{\epsilon}_1)^2}-\bar{\nu}^2 \ , \label{aux_FREQ} \end{eqnarray} and note that then $\sigma_0=x_0$. Solutions to Eq.~(\ref{exact_EQ}) can now be expressed, by means of Eqs.~(\ref{aux_EQ}), (\ref{new_EQ_appr}) and (\ref{aux_FREQ}), as \begin{eqnarray} \chi_{\pm}(x)\simeq\sqrt{\frac{\Theta(\sigma)}{\omega(x)}}\, J_{\pm\bar{\nu}}\left(\frac{{\rm e}^{\sigma}}{1-\bar{\epsilon}_1}\right) \ , \label{exact_SOL_bis} \end{eqnarray} where the $J$'s are Bessel functions~\cite{abram}, and the initial condition~(\ref{init_cond_on_mu}) can be satisfied by taking a linear combination of them. However, in contrast with the WKB method~\cite{MS,WKB1} and as we pointed out in Section~\ref{sMCE}, MCE solutions need not be matched at the turning point, since the functions~(\ref{exact_SOL_bis}) are valid solutions for the whole range of the variable $x$. Eq.~(\ref{new_EQ_int}) at the end of inflation, $x=x_{\rm f}$, becomes \begin{eqnarray} \xi(x_{\rm f}) &\simeq& -\Theta(\sigma_{\rm f}) -\frac{\bar{\nu}}{2}\,\ln\left[ \frac{\bar{\nu}-\Theta(\sigma_{\rm f})} {\bar{\nu}+\Theta(\sigma_{\rm f})}\right] \nonumber \\ &\simeq& -\bar{\nu}\, \left[1+\ln\left(\frac{{\rm e}^{\sigma_{\rm f}}}{1-\bar{\epsilon}_1}\right) -\ln\left(2\,\bar{\nu}\right)\right] \ , \label{new_EQ_integrate} \end{eqnarray} where the super-horizon limit $x_{\rm f}\ll x_0$ ($\sigma_{\rm f}\to-\infty$) has been taken in the second line. One then has \begin{eqnarray} \frac{{\rm e}^{\sigma_{\rm f}}}{1-\bar{\epsilon}_1} \simeq \frac{2\,\bar{\nu}}{\rm e}\,{\rm exp} \left[-\frac{\xi(x_{\rm f})}{\bar{\nu}}\right] \ . 
\label{arg_exp} \end{eqnarray} Finally, on using Eq.~(\ref{arg_exp}), we obtain the general expressions for the power spectra to leading MCE order, \numparts \begin{eqnarray} {\cal P}_\zeta &=& \left[\frac{H^2}{\pi\,\epsilon_1\,m_{\rm Pl}^2} \left(\frac{k}{a\,H}\right)^3 \frac{{\rm e}^{2\,\xi_{\rm S}}} {\left(1-\epsilon_1\right)\,\omega_{\rm S}}\right]_{x=x_{\rm f}} g_{\rm S}(x_0) \label{spectra_S} \\ {\cal P}_h &=& \left[\frac{16\,H^2}{\pi\,m_{\rm Pl}^2}\, \left(\frac{k}{a\,H}\right)^3\, \frac{{\rm e}^{2\,\xi_{\rm T}}} {\left(1-\epsilon_1\right)\,\omega_{\rm T}}\right]_{x=x_{\rm f}} g_{\rm T}(x_0) \label{spectra_T} \ , \end{eqnarray} where $m_{\rm Pl}$ is the Planck mass and the quantities inside the square brackets are evaluated in the super-horizon limit, whereas the functions \begin{eqnarray} g(x_0) \equiv \frac{\pi\,e^{2\,\bar{\nu}}\,\bar{\nu}^{1-2\,\bar{\nu}}} {\left[1-\cos\left(2\,\pi\,\bar{\nu}\right)\right]\, \left[\Gamma\left(1-\bar{\nu}\right)\right]^2} \ , \label{corr_TP} \end{eqnarray} \endnumparts describe corrections that just depend on quantities evaluated at the turning point and represent the main result of the MCE applied to cosmological perturbations~\footnote{Inside the square brackets we recognize the general results given by the WKB approximation~\cite{MS,WKB1}. The ``correction'' $g(x_0)$ accounts for the fact that $\Theta^2$ in Eq.~(\ref{aux_FREQ}) is a better approximation than Langer's~\cite{WKB1,langer}.}. The expression in Eq.~(\ref{corr_TP}) is obtained by simply making use of the approximate solutions~(\ref{exact_SOL_bis}) and their asymptotic expansion at $x\to\infty$ to impose the initial conditions~(\ref{init_cond_on_mu}). In the WKB calculations~\cite{WKB_PLB,WKB_lungo,WKB1} one finds a similar factor but, in that case, using the Bessel functions of order $1/3$ leads to a large error in the amplitudes. The MCE instead uses Bessel functions of order $\bar{\nu}$, with $\bar\nu=3/2$ to leading order in the HFF (i.e.~the right index for the de~Sitter model), which yields a significantly better value for the amplitudes of inflationary spectra. \par The MCE allows one to compute approximate perturbation modes with errors in the asymptotic regions (i.e.~in the sub- and super-horizon limits) which are comparable with those of the standard (or improved) WKB approximation~\cite{MS,WKB1}. Since these methods usually give large errors at the turning point~\cite{WKB1} (which produce equally large errors in the amplitude of the power spectra) it will suffice to estimate the error at the turning point in order to show that the MCE is indeed an improvement. To leading order (that is, on using the approximate solution~(\ref{exact_SOL_bis})), the MCE gives an error at the turning point of the second order in the HFF, which means that we have a small error in the amplitudes of the power spectra. Unfortunately, this error remains of second order in the HFF also for next-to-leading order in the MCE. We shall see this by applying Dingle's analysis~\cite{dingle} for linear frequencies to our case~(\ref{our_freq}). 
We start by rewriting Eq.~(\ref{new_EQ}) as \begin{eqnarray} \left\{\omega^2-\sigma_1^2\,\left[\frac{{\rm e}^{2\,\sigma}} {(1-\bar{\epsilon}_1)^2}-\bar{\nu}^2\right] \right\} + \left[ \frac34\,\frac{\sigma_2^2}{\sigma_1^2}-\frac12\,\frac{\sigma_3}{\sigma_1} \right] =0 \ , \label{new_EQ_4_DING} \end{eqnarray} where we dropped the $x$ dependence in $\omega$ and $\sigma$ and the order of the derivatives is given by their subscripts ($\sigma_1\equiv \d\sigma/\d x$, $\omega_1^2\equiv \d\omega^2/\d x$, etc.). Note that the term in square brackets is the error obtained on using the solutions~(\ref{exact_SOL_bis}). We then evaluate Eq.~(\ref{new_EQ_4_DING}) and its subsequent derivatives at the turning point (i.e.~at $x=x_0$), \numparts \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \left\{ \omega^2-\sigma_1^2\,\Theta^2\left(\sigma\right) \right\} + \left[ \frac34\,\frac{\sigma_2^2}{\sigma_1^2}-\frac12\,\frac{\sigma_3}{\sigma_1} \right] =0 \label{new_EQ_4_DING_TP} \end{eqnarray} \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \left\{ \omega^2_1 -2\,\sigma_2\,\sigma_1\,\Theta^2\left(\sigma\right) -\frac{2\,{\rm e}^{2\,\sigma}\,\sigma_1^3}{\left(1-\bar{\epsilon}_1\right)^2} \right\} + \left[ 2\,\frac{\sigma_3\,\sigma_2}{\sigma_1^2} -\frac32\,\frac{\sigma_2^3}{\sigma_1^3} -\frac12\,\frac{\sigma_4}{\sigma_1} \right] =0 \label{new_EQ_deriv1} \end{eqnarray} \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! && \left\{ \omega^2_2 -2\,\left(\sigma_3\,\sigma_1+\sigma_2^2\right)\,\Theta^2\left(\sigma\right) -\frac{2\,\sigma_1^2\,{\rm e}^{2\,\sigma}}{\left(1-\bar{\epsilon}_1\right)^2}\, \left(2\,\sigma_1^2+5\,\sigma_2\right) \right\} \nonumber\\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! && + \left[ \frac92\,\frac{\sigma_2^4}{\sigma_1^4} -\frac{17}{2}\frac{\sigma_3\,\sigma_2^2}{\sigma_1^3} +\frac52\,\frac{\sigma_4\,\sigma_2}{\sigma_1^2} +2\,\frac{\sigma_3^2}{\sigma_1^2} -\frac12\,\frac{\sigma_5}{\sigma_1} \right] =0 \label{new_EQ_deriv2} \end{eqnarray} \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! && \left\{ \omega^2_3 -2\,\left(\sigma_4\,\sigma_1+3\,\sigma_2\,\sigma_3\right)\,\Theta^2\left(\sigma\right) -\frac{2\,\sigma_1\,{\rm e}^{2\,\sigma}}{\left(1-\bar{\epsilon}_1\right)^2}\, \left(4\,\sigma_1^4 +18\,\sigma_2\,\sigma_1^2 +7\,\sigma_3\,\sigma_1 +12\,\sigma_2^2\right) \right\} \nonumber \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! && +\left[ \frac{87}{2}\,\frac{\sigma_3\,\sigma_2^3}{\sigma_1^4} -18\,\frac{\sigma_2^5}{\sigma_1^5} -\frac{27}{2}\,\frac{\sigma_4\,\sigma_2^2}{\sigma_1^3} -21\,\frac{\sigma_3^2\,\sigma_2}{\sigma_1^3} +3\,\frac{\sigma_5\,\sigma_2}{\sigma_1^2} +\frac{13}{2}\,\frac{\sigma_3\,\sigma_4}{\sigma_1^2} -\frac12\,\frac{\sigma_6}{\sigma_1} \right] =0 \ , \label{new_EQ_deriv3} \end{eqnarray} \endnumparts where $\Theta^2\left(\sigma\right)$ was defined in Eq.~(\ref{aux_FREQ}) and we omit two equations for brevity. In order to evaluate the error at the turning point \begin{eqnarray} \Delta_{\rm TP}= \left[\frac34\,\frac{\sigma_2^2}{\sigma_1^2} -\frac12\,\frac{\sigma_3}{\sigma_1}\right]_{x=x_0} \ , \end{eqnarray} we ignore the terms in square brackets and equate to zero the expressions in the curly brackets in Eqs.~(\ref{new_EQ_4_DING_TP})-(\ref{new_EQ_deriv3}) and so on. 
This leads to \numparts \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sigma &\!=\!& \ln\left[\left(1-\bar{\epsilon}_1\right)\,\bar{\nu}\right] \label{sigma0_0order} \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sigma_1 &\!=\!& \left(\frac{\omega^2_1}{2\,\bar{\nu}^{2}}\right)^{1/3} \label{sigma1_0order} \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sigma_2 &\!=\!& \frac{1}{5\,\left(2\,\bar{\nu}^2\right)^{1/3}} \left[ \frac{\omega^2_2}{\left(\omega^2_1\right)^{2/3}} -\left(2\,\frac{\omega^2_1}{\bar{\nu}}\right)^{2/3} \right] \label{sigma2_0order} \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sigma_3 &\!=\!& -\frac{6 \left(2^{1/3}\,\omega^2_2\right)^2} {175\,\left(\bar{\nu}^{2/5}\,\omega^2_1\right)^{5/3}} -\frac{3\cdot 2^{1/3}\,\omega^2_2} {25\,\left(\bar{\nu}^{4}\,\omega^2_1\right)^{1/3}} +\frac{16\,\omega^2_1}{175\,\bar{\nu}^2} +\frac{\omega^2_3} {7 \left(2^{1/2}\,\bar{\nu}\,\omega^2_1\right)^{2/3}} \ , \label{sigma3_0order} \end{eqnarray} \endnumparts and similar expressions for $\sigma_4$, $\sigma_5$, and $\sigma_6$ which we again omit for brevity. On inserting Eqs.~(\ref{our_freq}), (\ref{nu2_S}) and (\ref{nu2_T}) in the above expressions, we find the errors to leading MCE order \numparts \begin{eqnarray} \Delta^{(0)}_{\rm TP, S}&=& -\frac{32}{315}\,\epsilon_1\,\epsilon_2 -\frac{22}{315}\,\epsilon_2\,\epsilon_3 \label{corr_TP_S_0order} \\ \Delta^{(0)}_{\rm TP, T}&=& -\frac{32}{315}\,\epsilon_1\,\epsilon_2 \ , \label{corr_TP_T_0order} \end{eqnarray} \endnumparts for scalar and tensor modes respectively. \par On iterating this procedure we can further obtain the errors for the next-to-leading MCE order $\Delta^{(1)}_{\rm TP}$. We first compute next-to-leading solutions to Eqs.~(\ref{new_EQ_4_DING_TP})-(\ref{new_EQ_deriv3}) and so on by inserting the solutions found to leading order for $\sigma_1,\sigma_2,\ldots,\sigma_6$ into the corrections (i.e.~the square brackets) and into all terms containing $\Theta^2(\sigma)$~\cite{dingle}. This leads to \numparts \begin{eqnarray} \Delta^{(1)}_{\rm TP, S}&=& -\frac{31712}{331695}\,\epsilon_1\,\epsilon_2 -\frac{21598}{331695}\,\epsilon_2\,\epsilon_3 \label{corr_TP_S_1order} \\ \Delta^{(1)}_{\rm TP, T}&=& -\frac{31712}{331695}\,\epsilon_1\,\epsilon_2 \ , \label{corr_TP_T_1order} \end{eqnarray} \endnumparts which show that the next-to-leading MCE solutions lead to an error of second order in the HFF, too. We suspect that this remains true for higher MCE orders, since there is no {\em a priori\/} relation between the MCE and the slow-roll expansions. Let us however point out that the above expressions were obtained without performing a slow-roll expansion and therefore do not require that the $\epsilon_i$ be small. \section{Applications} \label{s_app} In this section we apply the formalism developed in the previous section to some models of inflation. We shall expand our general expressions (\ref{spectra_S})-(\ref{corr_TP}) to second order in the HFF and compare them with other approximation methods used in the literature. \subsection{Power-law~inflation} \label{power-law} In this model~\cite{PL,LS}, the scale factor is given in conformal time by \begin{eqnarray} a(\eta)=\ell_0\left|\eta\right|^{1+\beta} \ , \label{a} \end{eqnarray} where $\beta \le -2$ and $\ell_0=H^{-1}$ corresponds to the (constant) Hubble radius for de~Sitter ($\beta=-2$). 
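Indeed, for the scale factor~(\ref{a}) the conformal Hubble parameter is $\mathcal{H}\equiv a'/a=(1+\beta)/\eta$, so that the first horizon flow function,
\begin{eqnarray}
\epsilon_1=1-\frac{\mathcal{H}'}{\mathcal{H}^2}=1+\frac{1}{1+\beta} \ ,
\end{eqnarray}
takes a constant value, and all the higher order flow functions vanish identically.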
Since the HFF are constant,
\begin{eqnarray}
\epsilon_1 = \frac{2+\beta}{1+\beta} \ , \quad \quad \epsilon_n = 0 \ , \quad n>1 \ ,
\label{eps_PL}
\end{eqnarray}
the MCE yields the exact power spectra, spectral indices and runnings,
\numparts
\begin{eqnarray}
{\cal P}_{\zeta}= \frac{\ell_{\rm Pl}^2}{\ell_0^2\,\pi\,\epsilon_1}\,f(\beta)\,k^{2\beta+4}
\label{PL_spectra_S} \\
{\cal P}_h= \frac{16\,\ell_{\rm Pl}^2}{\ell_0^2\,\pi}\,f(\beta)\,k^{2\beta+4}
\label{PL_spectra_T}
\end{eqnarray}
where $\ell_{\rm Pl}=m_{\rm Pl}^{-1}$ is the Planck length and
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
f(\beta)=\frac{\pi}{2^{2\,\beta+1}}\, \frac{1}{\left[1-\cos\left(2\,\pi\left|\beta+\frac12\right|\right)\right]\, \Gamma^2\left(\beta+\frac32\right)} \equiv \frac{1}{\pi}\, \left[\frac{\Gamma\left(\left|\beta+\frac12\right|\right)}{2^{\beta+1}}\right]^2 \ ,
\label{f}
\end{eqnarray}
\endnumparts
with $\Gamma$ the Gamma function. The spectral indices are $n_{\rm S}-1=n_{\rm T}=2\beta+4$ and their runnings are $\alpha_{\rm S}=\alpha_{\rm T}=0$. Finally, the tensor-to-scalar ratio becomes
\begin{eqnarray}
R=16\,\frac{2+\beta}{1+\beta} \ ,
\label{R_PL}
\end{eqnarray}
which is constant as well.
\subsection{Leading~MCE and second~slow-roll~order}
\label{lead_MCE_and_2SR}
We now consider the results~(\ref{spectra_S})-(\ref{corr_TP}) given by the MCE to leading order (denoted by the subscript MCE) and evaluate them to second~order in the HFF (labelled by the superscript $(2)$) for a general inflationary scale factor. A crucial point in our method is the computation of the function $\xi$ defined in Eq.~(\ref{new_EQ_int}), which can be found in detail in Section~III of Ref.~\cite{WKB_lungo}. For the sake of brevity, we shall not reproduce that analysis here, but it is important to stress that, in contrast with the GFM and other slow-roll approximations, it does not require {\em a priori\/} any expansion in the HFF, since Eq.~(34) of Ref.~\cite{WKB_lungo} is exact and higher~order terms are discarded only {\em a posteriori\/}. From that expression, upon neglecting terms of order higher than two in the HFF, we obtain the power spectra
\numparts
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
&& \mathcal{P}_{\zeta,\scriptscriptstyle{\rm MCE}}^{(2)} \!\!=\!\! \frac{H^2}{\pi\epsilon_1m_{\rm Pl}^2}\! \left\{1\!-\!2\left(C\!+\!1\right)\epsilon_1\!-\!C\,\epsilon_2 \!+\!\left(2C^2\!+\!2C\!+\!\frac{\pi^2}{2}\!-\!5\right)\epsilon_1^2 \!+\!\left(\frac12C^2\!+\!\frac{\pi^2}{8}\!-\!1\right)\epsilon_2^2 \right. \nonumber \\
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
&& \,\,\,\,\,\, +\!\! \left. \left(2\,C^2\!-\!2\,C\,D_{\scriptscriptstyle{\rm MCE}} \!+\!D_{\scriptscriptstyle{\rm MCE}}^2\!-\!C\!-\!2\,C\,\ln(2) \!+\!2\,D_{\scriptscriptstyle{\rm MCE}}\,\ln(2) \!+\!\frac{7\pi^2}{12}\!-\!\frac{64}{9}\right)\epsilon_1\,\epsilon_2 \right. \nonumber \\
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
&& \,\,\,\,\,\, +\!\! \left. \left(-C\,D_{\scriptscriptstyle{\rm MCE}} \!+\!\frac12\,D_{\scriptscriptstyle{\rm MCE}}^2\!-\!C\,\ln(2) \!+\!D_{\scriptscriptstyle{\rm MCE}}\,\ln(2)\!+\!\frac{\pi^2}{24} \!-\!\frac{1}{18}\right)\epsilon_2\,\epsilon_3 \right. \nonumber \\
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
&& \,\,\,\,\,\, +\!\!\left.
\left[-2\,\epsilon_1\!-\!\epsilon_2\!+\!2\left(2\,C\!+\!1\right) \epsilon_1^2 \!+\!\left(4\,C\!-\!2\,D_{\scriptscriptstyle{\rm MCE}}\!-\!1\right)\epsilon_1\,\epsilon_2 \!+\!C\,\epsilon_2^2\!-\!D_{\scriptscriptstyle{\rm MCE}}\,\epsilon_2\,\epsilon_3\right] \ln\left(\frac{k}{k_*}\right) \right. \nonumber \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! && \,\,\,\,\,\, +\!\! \left. \frac12\left(4\,\epsilon_1^2\! +\!2\,\epsilon_1\,\epsilon_2\! +\!\epsilon_2^2 \!-\!\epsilon_2\,\epsilon_3\right) \ln^2\left(\frac{k}{k_*}\right)\right\} \label{PS_SlowRoll_0_2order} \end{eqnarray} \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \mathcal{P}_{h,\scriptscriptstyle{\rm MCE}}^{(2)} &\!=\!& \frac{16H^2}{\pi m_{\rm Pl}^2} \left\{1-2\left(C+1\right)\epsilon_1 +\left(2C^2+2C+\frac{\pi^2}{2}-5\right)\epsilon_1^2 \nonumber \right. \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! && +\!\! \left. \left(-2CD_{\scriptscriptstyle{\rm MCE}}+D_{\scriptscriptstyle{\rm MCE}}^2 -2C-2C\ln(2)+2D_{\scriptscriptstyle{\rm MCE}}\ln(2) +\frac{\pi^2}{12}-\frac{19}{9}\right)\epsilon_1\epsilon_2 \nonumber \right. \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! && +\!\! \left. \left[-2\epsilon_1+2\left(2C+1\right)\epsilon_1^2 -2\left(D_{\scriptscriptstyle{\rm MCE}}+1\right)\epsilon_1\epsilon_2\right]\, \ln\left(\frac{k}{k_*}\right) \nonumber \right. \\ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! && +\!\! \left.\frac12\left(4\epsilon_1^2 -2\epsilon_1\epsilon_2\right)\ln^2\left(\frac{k}{k_*}\right) \right\} \ , \label{PT_SlowRoll_0_2order} \end{eqnarray} \endnumparts where $D_{\scriptscriptstyle{\rm MCE}}\equiv\frac{1}{3}-\ln 3\approx -0.7652$ and $C\equiv\ln 2+\gamma_{\rm E}-2\approx -0.7296$, with $\gamma_{\rm E}$ the Euler-Mascheroni constant. A clarification concerning the $g(x_0)$'s is in order. Since the turning point does not coincide with the horizon crossing where the spectra are evaluated~\cite{WKB_lungo}, we have used the relation \begin{eqnarray} \epsilon_i(x_0) \simeq \epsilon_i-\epsilon_i\,\epsilon_{i+1}\,\ln\left(\frac32\right) \ , \label{epsilon_n_TPtoHC} \end{eqnarray} in order to express the $g(x_0)$'s as functions of the crossing time (HFF with no explicit argument are evaluated at the horizon crossing). The spectral indices~(\ref{n_def}) are then given by \numparts \begin{eqnarray} && \!\!\!\!\!\!\!\!\!\!\! n_{\rm S,\scriptscriptstyle{\rm MCE}}^{(2)}-1 = -2\,\epsilon_1-\epsilon_2-2\,\epsilon_1^2 -\left(2\,D_{\scriptscriptstyle{\rm MCE}}+3\right)\,\epsilon_1\,\epsilon_2 -D_{\scriptscriptstyle{\rm MCE}}\,\epsilon_2\,\epsilon_3 \label{n_2'order_S} \\ && \!\!\!\!\!\!\!\!\!\!\! n_{\rm T,\scriptscriptstyle{\rm MCE}}^{(2)} = -2\,\epsilon_1-2\,\epsilon_1^2 -2\,\left(D_{\scriptscriptstyle{\rm MCE}}+1\right)\,\epsilon_1\,\epsilon_2 \ , \label{n_2'order_T} \end{eqnarray} and their runnings~(\ref{alpha_def}) by \begin{eqnarray} && \alpha_{\rm S,\scriptscriptstyle{\rm MCE}}^{(2)}= -2\,\epsilon_1\,\epsilon_2 -\epsilon_2\,\epsilon_3 \label{alpha_2'order_S} \\ && \alpha_{\rm T,\scriptscriptstyle{\rm MCE}}^{(2)}= -2\,\epsilon_1\,\epsilon_2 \label{alpha_2'order_T} \ . \end{eqnarray} \endnumparts The tensor-to-scalar ratio~(\ref{R_def}) becomes \begin{eqnarray} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\frac{R^{(2)}_{\scriptscriptstyle{\rm MCE}}}{16\,\epsilon_1} &=& 1+C\,\epsilon_2 +\left(C-\frac{\pi^2}{2}+5\right) \epsilon_1\,\epsilon_2 +\left(\frac{1}{2}\,C^2-\frac{\pi^2}{8}+1\right) \epsilon_2^2 \nonumber \\
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
&& +\left(C\,D_{\scriptscriptstyle{\rm MCE}} -\frac{1}{2}\,D_{\scriptscriptstyle{\rm MCE}}^2 +C\,\ln(2) -D_{\scriptscriptstyle{\rm MCE}}\,\ln(2) -\frac{\pi^2}{24}+\frac{1}{18}\right) \epsilon_2\,\epsilon_3 \ .
\label{R_lead_WKB}
\end{eqnarray}
The polynomial structure in the HFF of the results agrees with that given by the GFM~\cite{gongstewart,LLMS} and the WKB~approximation~\cite{MS,WKB1,WKB_PLB,WKB_lungo} (the same polynomial structure is also found for the spectral indices by means of the uniform approximation~\cite{LA}). Let us also note other aspects of our results: first of all, the factors $g(x_0)$ modify the standard WKB leading order amplitudes~\cite{MS,WKB1} so as to reproduce the standard first order slow-roll results; secondly, we found that $C$ and $D_{\scriptscriptstyle{\rm MCE}}$ are ``mixed'' in the numerical factors in front of second order terms (we recall that $D_{\scriptscriptstyle{\rm MCE}}$ differs from $C$ by about $5\,\%$); further, $D_{\scriptscriptstyle{\rm MCE}}=D_{\scriptscriptstyle{\rm WKB}}$ of Refs.~\cite{MS,WKB1,WKB_PLB,WKB_lungo}. The runnings $\alpha_{\rm S}$ and $\alpha_{\rm T}$ are predicted to be ${\mathcal O}(\epsilon^2)$~\cite{KT}, in agreement with those obtained by the GFM~\cite{gongstewart,LLMS}.
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth]{WKB} \includegraphics[width=0.5\textwidth]{WKBstar} \\
\null \hspace{2cm}$\Delta R_{_{\rm WKB}}$ \hspace{6cm}$\Delta R_{_{\rm WKB*}}$ \\
\includegraphics[width=0.5\textwidth]{MCE} \includegraphics[width=0.45\textwidth]{tex_per_assi_bassi} \\
\null \hspace{2cm}$\Delta R_{_{\rm MCE}}$
\caption{Percentage differences~(\ref{Y_X}) between the tensor-to-scalar ratios given by the GFM and those obtained from the WKB, ${\rm WKB*}$ and MCE.}
\label{compare_grap}
\end{figure}
\subsection{Second~slow-roll~order MCE and WKB versus~GFM}
\label{mixed}
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth]{ns-1abs} \hspace{0.1cm} \includegraphics[width=0.5\textwidth]{PS_3D.eps} \\
\null \hspace{2cm} $|n_{\rm S\,\scriptscriptstyle{MCE}}-n_{\rm S\,\scriptscriptstyle{GFM}}| \times 100$ \hspace{2cm} $\Delta P_{\zeta\,\scriptscriptstyle{\rm MCE}}(k_*)$
\caption{Left box: absolute difference between the scalar spectral indices $n_{\rm S}$ evaluated with the MCE and the GFM (rescaled by a factor of $100$ for convenience). Note that relevant differences of order $0.05$ (shown as $5$) occur at the boundaries of the intervals considered for the $\epsilon_i$'s. Right box: percentage difference~(\ref{Y_X}) for the amplitudes of scalar perturbations $P_\zeta$ at the pivot scale $k=k_*$ between the MCE and the GFM. See Fig.~\ref{compare_grap} for the meaning of the dot size.}
\label{compare_ns}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.7\textwidth]{PS_2D.eps} \raisebox{3cm}{$\Delta P_{\zeta\,\scriptscriptstyle{\rm MCE}}(k_*) |_{\epsilon_2=-2 \epsilon_1}$}
\caption{Percentage difference~(\ref{Y_X}) between the MCE and the GFM for $P_\zeta(k_*)$ restricted to the hypersurface $\epsilon_2=-2\,\epsilon_1$, which corresponds to a scale-invariant spectrum to first order in the slow-roll expansion.
The graph is given for $0<\epsilon_1<0.5$ and $-0.5<\epsilon_3<0.5$.}
\label{figure_PS}
\end{figure}
We shall now compare the slow-roll results obtained from the MCE and from other methods of approximation previously employed~\cite{WKB_lungo} with those obtained using the GFM in the slow-roll expansion. We use the GFM just as a reference, for the purpose of comparing the other methods with each other and of illustrating deviations from pure slow-roll results. For a given inflationary ``observable'' $Y$ evaluated with the method X, we denote the percentage difference with respect to its value given by the GFM as
\begin{eqnarray}
\Delta Y_{_{\rm X}} \equiv 100\,\left|\frac{Y_{_{\rm X}}-Y_{_{\rm GFM}}}{Y_{_{\rm GFM}}}\right|\% \ ,
\label{Y_X}
\end{eqnarray}
where ${\rm X}={\rm WKB}$ stands for the first WKB and second slow-roll orders~\cite{WKB_PLB}, ${\rm X}={\rm WKB*}$ for the second WKB and second slow-roll orders~\cite{WKB_lungo} and, of course, ${\rm X}={\rm MCE}$ for the result obtained in this paper. Note that for the case ${\rm X}={\rm WKB*}$ we shall set the three undetermined parameters $b_{\rm S}=b_{\rm T}=d_{\rm S}=2$ in order to minimize the difference with respect to the results of the GFM (see~\ref{und_par} and Ref.~\cite{WKB_lungo}).
\par
In Fig.~\ref{compare_grap} we show, with dots of variable size, the percentage differences~(\ref{Y_X}) for the tensor-to-scalar ratios $R$ at the pivot scale $k=k_*$ for $0<\epsilon_1<0.5$ and $|\epsilon_2|,\,|\epsilon_3|<0.5$. From the plots it appears that the level of accuracy of the MCE is comparable to that of the ${\rm WKB*}$, and both are (almost everywhere) more accurate than the WKB. However, the MCE achieves such a precision already at leading order and is thus significantly more effective than the ${\rm WKB*}$. In Fig.~\ref{compare_ns} we show the difference in $n_{\rm S}$ and the relative difference in $P_{\zeta}(k_*)$ between the MCE and the GFM. In Fig.~\ref{figure_PS} we finally plot the relative difference in $P_{\zeta}(k_*)$ for $\epsilon_2=-2\,\epsilon_1$ (the scale-invariant case to first order in the slow-roll parameters).
\subsection{GFM and slow-roll approximation}
Let us further compare our results with those obtained by the GFM. As we stated before, the most general result of the present work is given by the expressions for the power spectra~(\ref{spectra_S}) and~(\ref{spectra_T}) with the corrections~(\ref{corr_TP}). In fact, the spectral indices and their runnings are the same as those found with the standard WKB approximation at leading order~\cite{MS,WKB_lungo}. On the other hand, the main results of the GFM are the expressions for the power spectra, spectral indices and runnings ``$\ldots$ in the slow-roll expansion'', that is, for small HFF (see, e.g., Eqs.~(41) and~(43) in Ref.~\cite{gongstewart}). Here, and in some previous work~\cite{MS,WKB1}, we instead obtain results which hold in a general inflationary context, independently of the ``slow-roll conditions''~\footnote{Our approximation method does not require small HFF.} or the ``slow-roll approximation''~\footnote{We expand instead according to the scheme given in Ref.~\cite{WKB_lungo}.}. The slow-roll case is just one possible application of our general expressions, which can be evaluated for any model by simply specifying the scale factor. Our general, and non-local, expressions in fact take into account the whole ``history'' of the HFF during inflation.
For example, we note that the MCE at leading order reproduces the exact result for power-law inflation (see Sec.~\ref{power-law}), whereas the GFM reproduces it only to next-to-leading order~\cite{gongstewart}.
\par
Chaotic inflation~\cite{linde} is another example for which a Taylor approximation of the HFF (such as the one required by the GFM~\cite{gongstewart} or by our Green's function perturbative expansion in~\ref{VENTURI_pert}) may lead to inaccurate results. We consider a quadratic potential
\begin{eqnarray}
V(\phi)=\frac{1}{2}\,m^2\,\phi^2 \ ,
\label{chaos}
\end{eqnarray}
where $m$ is the mass of the inflaton $\phi$. In a spatially flat Robertson-Walker background, the potential energy dominates during the slow-rollover and the Friedmann equation becomes
\begin{eqnarray}
H^2=\frac{4\,\pi}{3\,m_{\rm Pl}^2}\,\left[\dot\phi^2+m^2\,\phi^2\right] \simeq\frac{4\,\pi\,m^2\,\phi^2}{3\,m_{\rm Pl}^2} \ .
\label{hubble}
\end{eqnarray}
\begin{figure}[t!]
\raisebox{3cm}{$H$} \includegraphics[width=0.7\textwidth]{H_chaotic} \\
\null \hspace{6cm}$t$
\caption{Numerical evolution of $H$ (solid line) and its analytic approximation (dashed line). The initial condition corresponds to $H_i\approx 8.2\,m$ (time in units of $1/m$), with the inflaton starting in the slow-roll regime from the value $4\,m_{\rm Pl}$.}
\label{fig:hubble}
\end{figure}
The Hubble parameter evolves according to
\begin{eqnarray}
\dot H=-\frac{4\,\pi}{m_{\rm Pl}^2}\,\dot\phi^2 \ .
\label{hubbleder}
\end{eqnarray}
On using the equation of motion for the scalar field
\begin{eqnarray}
\ddot\phi+3\,H\,\dot\phi+m^2\,\phi=0 \ ,
\label{scalar_hom}
\end{eqnarray}
and neglecting the second derivative with respect to cosmic time, we have $3\,H\,\dot\phi\simeq-\,m^2\,\phi$. Eq.~(\ref{hubbleder}) then yields
\begin{eqnarray}
\dot H\simeq-\frac{m^2}{3}\equiv\dot H_0 \ ,
\label{hdot}
\end{eqnarray}
which leads to a linearly decreasing Hubble parameter and, correspondingly, to a number of e-folds which is not linear in time, i.e.
\numparts
\begin{eqnarray}
&& H(t)\simeq H_0+\dot H_0\,t \ ,
\label{evoH} \\
&& a(t)\simeq a(0)\,\exp\left(H_0\,t+\dot H_0\,\frac{t^2}{2}\right) \equiv a(0)\,\exp\left[N(t)\right] \ .
\label{evoA}
\end{eqnarray}
\endnumparts
In Fig.~\ref{fig:hubble} the analytic approximation~(\ref{evoH}) is shown to be very good when compared with the exact (numerical) evolution. We are interested in the HFF in the slow-roll regime. From the definitions~(\ref{HFF_def}) we then have
\numparts
\begin{eqnarray}
&& \epsilon_1(t)= -\frac{\dot H_0}{\left(H_0+\dot H_0\,t\right)^2}
\label{eps1_epsn_funct1} \\
&& \epsilon_n(t)= -\frac{2\,\dot H_0}{\left(H_0+\dot H_0\,t\right)^2} =2\,\epsilon_1(t) \ , \quad n\ge2 \ ,
\label{eps1_epsn_funct2}
\end{eqnarray}
\endnumparts
which we plot in Fig.~\ref{eps1_epsn}.
\begin{figure}[t!]
\raisebox{3cm}{$\epsilon_i$} \includegraphics[width=0.7\textwidth]{eps1_epsn} \\
\null \hspace{6cm}$t$
\caption{HFF in the slow-roll regime: the dashed line is $\epsilon_1$ and the solid line represents all the $\epsilon_n$'s with $n\ge2$ (time in units of $1/m$).}
\label{eps1_epsn}
\end{figure}
\par
Let us take the end of inflation at $t_{\rm end}\simeq 24/m$ (so that the analytic approximations~(\ref{evoH}) and~(\ref{evoA}) are very good till the end), corresponding to $N_{\rm end}=125$~e-folds.
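Indeed, Eqs.~(\ref{eps1_epsn_funct1}) and~(\ref{evoH}) give $\epsilon_1(t)=m^2/[3\,H^2(t)]$, so that the end of inflation, conventionally set by $\epsilon_1\simeq 1$, is reached when $H\simeq m/\sqrt{3}$, that is at
\begin{eqnarray}
t\simeq\frac{3}{m^2}\left(H_0-\frac{m}{\sqrt{3}}\right)\simeq\frac{23}{m} \ ,
\end{eqnarray}
for $H_0\simeq 8.2\,m$, consistently with the value of $t_{\rm end}$ quoted above.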
We then consider the modes that leave the horizon 60~e-folds before the end of inflation and take that time ($t=t_*\simeq 8/m$, or $N_*=125-60=65$) as the starting point for a Taylor expansion to approximate the HFF~\footnote{One can of course conceive of different expansions, e.g.~using other variables instead of $N$; however, the conclusion would not change as long as equivalent orders are compared.},
\begin{eqnarray}
\epsilon_i\left(\Delta N\right)\simeq \epsilon_i+\epsilon_i\,\epsilon_{i+1}\,\Delta N +\frac12\,\epsilon_i\, \left(\epsilon_{i+1}^2+\epsilon_{i+1}\,\epsilon_{i+2}\right)\, \Delta N^2
\label{epsn_expan} \ ,
\end{eqnarray}
where the $\epsilon_i$'s in the r.h.s.~are all evaluated at the time $t_*$ and $\Delta N=N-N_*$. The percentage error with respect to the analytic expressions~(\ref{eps1_epsn_funct1}) and~(\ref{eps1_epsn_funct2}),
\begin{eqnarray}
\delta_i\equiv100\,\left|\frac{\epsilon_i(t) -\epsilon_i\left(\Delta N\right)}{\epsilon_i(t)} \right|\%
\label{err_epsn_expan} \ ,
\end{eqnarray}
with $\Delta N=N(t)-N_*$, is then plotted in Fig.~\ref{err1_err2} for $i=1$.
\begin{figure}[t]
\raisebox{3cm}{$\delta_1$} \includegraphics[width=0.7\textwidth]{err1_err2} \\
\null \hspace{6cm}$t$
\caption{Percentage error for $\epsilon_1$ to first and second order in $\Delta N$ (dashed and solid line, respectively; time in units of $1/m$).}
\label{err1_err2}
\end{figure}
It obviously becomes large rather quickly away from $t=t_*$, and we can immediately conclude that, had we used a Taylor expansion to approximate the HFF over the whole range of chaotic inflation ($-65<\Delta N<60$), we would have obtained large errors from the regions near both the beginning and the end of inflation.
\par
In general, we expect that the Taylor expansion of the HFF will lead to a poor determination of the numerical coefficients (depending on $C$ in Eqs.~(\ref{PS_SlowRoll_0_2order}) and (\ref{PT_SlowRoll_0_2order})) in the second slow-roll order terms for those models in which the HFF vary significantly in time. In fact, we can calculate the integral of Eq.~(21) in Ref.~\cite{gongstewart} with $\epsilon_1$ instead of the complete $g\left(\ln x\right)$ and with $y_0(u)$ instead of $y(u)$. The percentage difference between this integral calculated using the Taylor expansion~(\ref{epsn_expan}) and the same integral calculated with $\epsilon_1$ from Eqs.~(\ref{eps1_epsn_funct1}) and~(\ref{eps1_epsn_funct2}) is $92\,\%$ and $88\,\%$ to first and second order in $\Delta N$, respectively. The MCE method does not require such an expansion and therefore does not suffer from this restriction~\footnote{We again refer the reader to Section~III of Ref.~\cite{WKB_lungo} for more details.}.
\par
\begin{figure}[t!]
\raisebox{3cm}{${\rm Log}(P_Q)$} \includegraphics[width=0.7\textwidth]{PS_chaotic_QMCE_paper} \\
\null \hspace{6cm}${\rm Log}(k/k_*)$
\caption{Spectrum of the Mukhanov variable $Q$ ($P_Q=k^3\,|Q_k|^2/(2\,\pi^2)$) evaluated at the end of inflation for the chaotic model of Eq.~(\ref{chaos}). The dots represent the numerical values and the solid line the analytic fit based on Eqs.~(\ref{n_2'order_S}) and (\ref{alpha_2'order_S}) obtained by the second slow-roll order MCE approximation. The first order and GFM analytic results are not shown, since they are almost indistinguishable from the MCE result plotted.
$k_*$ crosses the Hubble radius at $\phi_*\simeq 3\,m_{\rm Pl}$ (the lines are normalized at $10^{2.25}\,k_*$).} \label{fig:spe} \end{figure} We have also compared our analytic MCE results in Eqs.~(\ref{n_2'order_S}) and (\ref{alpha_2'order_S}) with the numerical evaluation of the spectrum for scalar perturbations. In particular, we have evolved in cosmic time the modulus of the Mukhanov variable $Q$ (recalling that $P_\zeta=k^3\,|Q_k|^2/(2\,\pi^2)$), which satisfies the associated non-linear Pinney equation~\cite{BFV}. The initial conditions for the numerical evolution are fixed for wave-lengths well within the Hubble radius and correspond to the adiabatic vacuum, i.e.~$|Q_k(t_i)|=1/(a(t_i)\,\sqrt{k})$ and $|\dot Q_k|(t_i) = - H(t_i)\,|Q_k(t_i)|$~\cite{FMVV_1}. The agreement of the analytic MCE approximation with the numerical results is good, as shown in Fig.~\ref{fig:spe}. \subsection{Beyond slow-roll} The slow-roll approximation is quite accurate for a wide class of potentials. However, violations of the slow-roll approximation may occur during inflation, leading to interesting observational effects in the power spectra of cosmological perturbations. An archetypical model to study such violations is given by the potential \begin{eqnarray} V(\phi)=V_0\,\left[1 -\frac{2}{\pi}\,\arctan\left(N\,\frac{\phi}{m_{\rm Pl}}\right)\right] \ , \label{arctan} \end{eqnarray} introduced with $N=5$ in Ref.~\cite{WMS}. In Fig.~\ref{arctan_figure2} we show that the MCE result expanded to second slow-roll order provides a very good fit for the power spectrum of scalar perturbations even in situations where the slow-roll parameters are not very small (see Fig.~\ref{arctan_figure1}). This example also shows that second slow-roll order results are much better than first order ones (analogous results were also obtained with the GFM in Ref.~\cite{LLMS}). \begin{figure}[t!] \raisebox{3cm}{$\epsilon_i$} \includegraphics[width=0.7\textwidth]{eps_arctan_vs_phi} \\ \null \hspace{6cm} $\phi/m_{\rm Pl}$ \caption{Evolution of $\epsilon_i$ with the value of the inflaton $\phi$ (in units of $m_{\rm Pl}$) in the arctan model of Eq.~(\ref{arctan}): $\epsilon_1$ (solid line), $\epsilon_2$ (long-dashed line) and $\dot\epsilon_2/H=\epsilon_2\,\epsilon_3$ (short-dashed line).} \label{arctan_figure1} \end{figure} \begin{figure}[t!] \raisebox{3cm}{${\rm Log}(P_Q)$} \includegraphics[width=0.7\textwidth]{PS_acrtan_before_correct_wider2} \\ \null \hspace{6cm} ${\rm Log} (k/k_*)$ \caption{Spectrum of the Mukhanov variable $Q$ ($P_Q=k^3\,|Q_k|^2/(2\,\pi^2)$) for the arctan model of Eq.~(\ref{arctan}). The dots represent the numerical values, the solid line the analytic results from Eqs.~(\ref{n_2'order_S}) and (\ref{alpha_2'order_S}) obtained by the second slow-roll order MCE approximation and the dashed line those given by the first order slow-roll approximation. $k_*$ crosses the Hubble radius at $\phi_*\simeq -0.3\,m_{\rm Pl}$ (the lines are normalized at $10^{3/20} k_*$) and the spectrum is evaluated at $\simeq 55$~e-folds afterwards.} \label{arctan_figure2} \end{figure} \section{Conclusions} \label{sC} We have presented the application of the method of comparison equations to cosmological perturbations during inflation. By construction (i.e.~by choosing a suitable comparison function), this approach leads to the exact solutions for inflationary models with constant horizon flow functions $\epsilon_i$'s (e.g.~power-law inflation). 
\par The main result is that, on using this approach to leading order, we were able to obtain the general expressions~(\ref{spectra_S})-(\ref{corr_TP}) for the inflationary power spectra which are more accurate than those that any other method in the literature can produce at the corresponding order. In fact, the MCE leads to the correct asymptotic behaviours (in contrast with the standard slow-roll approximation~\cite{SL}) and solves the difficulties in finding the amplitudes which were encountered with the WKB method~\cite{MS}. \par Starting from the general results~(\ref{spectra_S})-(\ref{corr_TP}), we have also computed the full analytic expressions for the inflationary power spectra to second slow-roll order in Eqs.~(\ref{PS_SlowRoll_0_2order}) and~(\ref{PT_SlowRoll_0_2order}) and found that the dependence on the horizon flow functions $\epsilon_i$'s is in agreement with that obtained by different schemes of approximation, such as the GFM~\cite{gongstewart} and the WKB approximation~\cite{WKB_PLB,WKB_lungo}. Moreover, the results obtained with the MCE do not contain undetermined coefficients, in contrast with second slow-roll order results obtained with the WKB$*$~\cite{WKB_lungo}. \par Let us conclude by remarking that, just like the WKB approach, the MCE does not require any particular constraints on the functions $\epsilon_i$'s and therefore has a wider range of applicability than any method which assumes them to be small. As an example, we have discussed in some detail the accuracy of the MCE for the massive chaotic and arctan inflationary models. We have shown that the MCE leads to accurate predictions even for a model which violates the slow-roll approximation during inflation. \ack We would like to thank S.~Leach for discussions, A.~O.~Barvinsky and G.~P.~Vacca for discussions and comments on the manuscript.
\section{Introduction}
This paper is concerned with the qualitative structure of admissible solutions to the strictly hyperbolic $N\times N$ system of conservation laws in one space dimension
\begin{equation} \label{basic equation}
\begin{cases}
u_t+f(u)_x=0 & u:\ensuremath{\mathbb{R}}^+\times\ensuremath{\mathbb{R}}\to\Omega\subset\ensuremath{\mathbb{R}}^N,\ f\in C^2(\Omega,\ensuremath{\mathbb{R}}^N), \crcr
u_{|t=0}=u_0 & u_0\in \mathrm{BV}(\ensuremath{\mathbb{R}};\Omega).
\end{cases}
\end{equation}
We assume strict hyperbolicity in $\Omega$: the eigenvalues $\{\lambda_i(u)\}_{i=1}^N$ of the Jacobian matrix $A(u)=Df(u)$ satisfy
\begin{equation*}
\lambda_1(u)<\dots<\lambda_N(u), \qquad u \in\Omega.
\end{equation*}
Furthermore, as we only consider solutions with small total variation, it is not restrictive to assume that $\Omega$ is bounded and that there exist constants $\{\check{\lambda}_j\}^N_{j=0}$ such that
\begin{equation}\label{lambda}
\check{\lambda}_{k-1}<\lambda_k(u)<\check{\lambda}_{k}, \qquad \forall u\in\Omega,\ k=1,\dots, N.
\end{equation}
Let $\{r_i(u)\}_{i=1}^N$ and $\{l_j(u)\}_{j=1}^N$ be bases of right and left eigenvectors, depending smoothly on $u$, such that
\begin{equation*}\label{assumponri}
l_j(u) \cdot r_i(u) = \delta_{ij} \text{ and } |r_i(u)| \equiv 1, \quad i,j=1,\dots, N.
\end{equation*}
Let $R_i[u_0](\omega)$ be the value at $\omega$ of the solution to the Cauchy problem
\[ \frac{du}{d\omega}=r_i(u(\omega)),\quad u(0)=u_0. \]
We call the curve $R_i[u_0]$ the \emph{$i$-rarefaction curve} through $u_0$. We say that the system \eqref{basic equation} is \emph{piecewise genuinely nonlinear} if the set where $\nabla \lambda_i \cdot r_i = 0$ is covered by $\bar{k}_i$ transversal manifolds: more precisely,
\begin{equation*}
Z_i:= \big\{u \in \Omega:\ |\nabla\lambda_i\cdot r_i(u)|=0 \big\}=\bigcup^{\bar{k}_i}_{j=1} Z_i^j,
\end{equation*}
where each $Z_i^j$ is an $(N-1)$-dimensional manifold such that
\begin{enumerate}
\item each $Z^j_i$ is transversal to the vector field $r_i(u)$, i.e.
\begin{equation}
\big{(} \nabla (\nabla\lambda_i\cdot r_i)\cdot r_i\big{)}(u)\ne 0 \quad \text{for $u\in Z_i^j$};
\end{equation}
\item each rarefaction curve $R_i[u_0]$ crosses all the $Z^j_i$ and, moreover, defining the points $\omega^j[u_0]$ by
\[ R_i[u_0](\omega^j[u_0]) \in Z_i^j, \]
the map $j \mapsto \omega^j[u_0]$ is strictly increasing.
\end{enumerate}
This implies that, along $R_i$, $\lambda_i$ has a finite number of critical points. Denote by $\Delta^j_i$ the set of points $u$ between $Z^j_i$ and $Z^{j+1}_i$:
\[ \Delta^j_i:= \big\{ u\in \Omega: \ \omega^j[u] < 0 < \omega^{j+1}[u] \big\}. \]
Without any loss of generality, we assume that
\begin{subequations}\label{gnhull}
\begin{equation}
\nabla \lambda_i\cdot r_i(u)< 0\ \text{ if $j$ is even},\ u \in \Delta^j_i,
\end{equation}
\begin{equation}
\nabla \lambda_i\cdot r_i(u)> 0\ \text{ if $j$ is odd},\ u \in \Delta^j_i.
\end{equation}
\end{subequations}
From now on, we assume that {\it every characteristic field of \eqref{basic equation} is piecewise genuinely nonlinear}. It is well known that, because of the nonlinear dependence of the characteristic speeds $\lambda_i(u)$ on the state variable $u$, the solution to \eqref{basic equation} develops discontinuities in finite time, even for smooth initial data. Therefore, in order to construct solutions globally defined in time, one considers weak solutions, interpreting the equation \eqref{basic equation} in a distributional sense.
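A simple example to keep in mind is the scalar case $N=1$ with flux $f(u)=u^3/3$: here $\lambda(u)=f'(u)=u^2$, $r\equiv 1$, and $\nabla\lambda\cdot r=f''(u)=2u$ vanishes exactly on the single manifold $Z^1=\{0\}$, transversally, since $\nabla(\nabla\lambda\cdot r)\cdot r=f'''(u)=2\ne 0$. More generally, a scalar flux is piecewise genuinely nonlinear in the above sense precisely when $f''$ has finitely many zeros, each of them simple, as is the case for the flux with two inflection points of Figure~\ref{f:tsirw} below.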
We recall that $u\in C(\ensuremath{\mathbb{R}}^+; L^1_{loc}(\ensuremath{\mathbb{R}};\ensuremath{\mathbb{R}}^N ))$ is a weak solution to the Cauchy problem \eqref{basic equation} if the initial condition is satisfied and, for any test function $\phi\in C^1_c(]0,T[\times \ensuremath{\mathbb{R}})$, there holds
\[ \int^T_0 \int _{\ensuremath{\mathbb{R}}} \phi_t(t, x)\, u(t, x) + \phi_x (t, x)\,f(u(t, x))\,dx\,dt = 0 \ . \]
As a consequence of the weak formulation, it follows that a function with a single jump discontinuity
\begin{equation*}
u(t, x)=\begin{cases} \ensuremath{u^\mathrm{L}} & \text{if}\ x < \hat\sigma t,\\ \ensuremath{u^\mathrm{R}} &\text{if}\ x > \hat\sigma t, \end{cases}
\end{equation*}
is a solution to \eqref{basic equation} if and only if the left and right states $\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}} \in \ensuremath{\mathbb{R}}^N$ and the speed $\hat \sigma$ satisfy the Rankine-Hugoniot condition
\begin{equation}\label{d:R-H condition}
f(\ensuremath{u^\mathrm{R}}) -f(\ensuremath{u^\mathrm{L}} ) = \hat \sigma(\ensuremath{u^\mathrm{R}} -\ensuremath{u^\mathrm{L}} ).
\end{equation}
By strict hyperbolicity, it is known that for any $u^-\in \Omega$ there exist $s_0>0$ and $N$ smooth curves $S_i[u^-]:[-s_0,s_0]\rightarrow \Omega$, together with functions $\hat \sigma_i:[-s_0,s_0]\rightarrow \ensuremath{\mathbb{R}}$, such that
\begin{equation}\label{d:rh}
\hat \sigma_i(s)\,\big[S_i[u^-](s)-u^-\big]=f\big(S_i[u^-](s)\big)-f(u^-)
\end{equation}
and satisfying
\begin{equation*}
S_i[u^-](0)=u^-,\qquad \hat \sigma_i(0)=\ensuremath{\lambda_i}(u^-), \qquad \frac{d}{ds}S_i[u^-](0) = r_i(u^-).
\end{equation*}
The curve $S_i[u^-]$ is called the $i$-th \emph{Hugoniot curve} issuing from $u^-$, and we say that $[u^-,u^+]$ is an \emph{$i$-discontinuity} with speed $\hat \sigma_i(u^-,u^+):=\hat \sigma_i(s)$ if $ u^+= S_i[u^-](s)$. Since weak solutions to \eqref{basic equation} may not be unique, an entropy criterion for admissibility is usually added to rule out nonphysical discontinuities. In \cite{Liu1}, T.P. Liu proposed the following admissibility criterion, valid for weak solutions to general systems of conservation laws. We say that the $i$-discontinuity $[u^-,u^+]$, $u^+ = S_i[u^-](s)$, is \emph{Liu admissible} if the following \emph{Liu admissibility condition} holds: for $s>0$,
\begin{equation*}
\hat \sigma_i(u^+,u^-)\leq \hat \sigma_i(u,u^-),
\end{equation*}
where $u=S_i[u^-](\tau)$, for each $\tau \in ]0,s[$, and for $s < 0$,
\[ \hat \sigma_i(u^+,u^-) \geq \hat \sigma_i(u,u^-), \]
where $u=S_i[u^-](\tau)$, for each $\tau\in]s,0[$. Let $[u^-,u^+]$, $u^+= S_i[u^-](s)$, be a Liu admissible $i$-discontinuity. Following the notation of \cite{Liu1}, we call the jump $[u^-,u^+]$ \emph{simple} if, for all $\tau\in ]0,s[$ when $s>0$ (for all $\tau\in ]s,0[$ when $s<0$),
\begin{equation*}
\hat \sigma_i(u^+,u^-)<\hat \sigma_i(S_i[u^-](\tau),u^-)\quad \big(\hat \sigma_i(u^+,u^-)>\hat \sigma_i(S_i[u^-](\tau),u^-)\big).
\end{equation*}
If $[u^-,u^+]$ is not simple, then we call it a \emph{composition} of the waves $[u^-,u_1],[u_1,u_2],\cdots,[u_l,u^+]$ if
\begin{equation} \label{E_composi_point}
u_k=S_i[u^-](s_k) \quad \text{and} \quad \hat \sigma_i(u_k,u_{k-1})= \hat \sigma_i(u^+,u^-),
\end{equation}
where
\[ 0=s_0<s_1<s_2<\cdots<s_l<s \quad (\text{or} \ s<s_l<\cdots<s_1<s_0=0), \]
and there are no other points $\tau$ such that \eqref{E_composi_point} holds.
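In the scalar case $N=1$, these conditions have a transparent geometric meaning. The Hugoniot curve is the $u$-axis itself and, by \eqref{d:rh}, $\hat\sigma(u,u^-)=\big(f(u)-f(u^-)\big)/(u-u^-)$ is the slope of the chord of the graph of $f$ through $u^-$ and $u$. For $u^+>u^-$, the Liu condition then reduces to Oleinik's condition that, on $[u^-,u^+]$, the graph of $f$ lies above the chord joining $(u^-,f(u^-))$ to $(u^+,f(u^+))$ (below it when $u^+<u^-$); the jump is simple when the graph stays strictly above (below) the chord in the interior, while the interior points where the chord touches the graph are exactly the states $u_1,\cdots,u_l$ appearing in \eqref{E_composi_point}.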
In \cite{Liu1}, under the assumption of piecewise genuine nonlinearity, it is proved by means of the Glimm scheme that, if the initial data has small total variation, there exists a weak BV solution of \eqref{basic equation} satisfying the Liu admissibility condition. Therefore, it enjoys the usual regularity properties of BV functions: $u$ either is approximately continuous or has an approximate jump at each point $(t,x)\in \ensuremath{\mathbb{R}}^+\times\ensuremath{\mathbb{R}}\setminus \mathcal{N}$, where $\mathcal{N}$ is a subset whose one-dimensional Hausdorff measure is zero. In \cite{Liu1}, the author shows that $u$ enjoys a much stronger regularity: the set $\mathcal{N}$ contains at most countably many points, and $u$ is continuous (not just approximately continuous) outside $\mathcal{N}$ and countably many Lipschitz continuous curves. In \cite{BL}, the authors adopt the wave-front tracking approximation to prove a similar result for \eqref{basic equation} under the assumption that each characteristic field is genuinely nonlinear. Moreover, the authors were able to prove that outside the countable set $\mathcal N$ the left and right limits $u^-$, $u^+$ along the jump curves exist in the uniform norm, and that these limits are stable w.r.t. wave-front approximate solutions: more precisely, for each jump point (not an interaction point) of the solution, there exists a jump curve of the approximate solution converging to it and such that its left and right limits converge to $u^-$ and $u^+$ uniformly. In \cite{Bre} (Theorem 10.4), the author generalizes the result of \cite{BL} to the case when some characteristic field may be linearly degenerate. To prove these new regularity estimates one has to overcome additional difficulties, and this is the reason why they have so far been restricted to genuinely nonlinear or linearly degenerate systems: in fact, the proof in \cite{Bre} is based on the wave structure of the solution to genuinely nonlinear or linearly degenerate systems, where only one shock curve passes through each discontinuity point (which is not an interaction point) of the admissible solution. In this paper, we extend the techniques of \cite{Bre} to prove an analogous result about the global structure of admissible solutions to the piecewise genuinely nonlinear system \eqref{basic equation} by means of the wave-front tracking approximation. This not only completes the corresponding result in \cite{Liu1}, but also makes it possible to prove SBV regularity for the solutions of piecewise genuinely nonlinear strictly hyperbolic systems. In fact, one of the key arguments for SBV regularity in the proofs contained in \cite{BCa} and \cite{BY} is that outside the interaction points the left and right values of the jumps are approximated uniformly by wave-front approximate solutions.
\begin{figure}[htbp] \hfill
\begin{minipage}[t]{.45\textwidth} \begin{center}
\begin{picture}(0,0)%
\includegraphics{2-tan-curves.pdf}%
\end{picture}%
\setlength{\unitlength}{1973sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(6465,3589)(2401,-4948)
\put(4126,-2011){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$f(u)$}%
}}}}
\put(2401,-4186){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$u_1$}%
}}}}
\put(4876,-2461){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$u_2$}%
}}}}
\put(5851,-2011){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$u_3$}%
}}}}
\put(8251,-1561){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$u_4$}%
}}}}
\end{picture}%
\caption{A scalar flux $f$ with two inflection points.}
\label{f:tsirw}
\end{center} \end{minipage} \hfill
\begin{minipage}[t]{.45\textwidth} \begin{center}
\begin{picture}(0,0)%
\includegraphics{tan-curves-sol.pdf}%
\end{picture}%
\setlength{\unitlength}{2763sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(3174,2382)(4864,-4681)
\put(5176,-4636){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x_1$}%
}}}}
\put(6151,-4636){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x_2$}%
}}}}
\put(7276,-4636){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x_3$}%
}}}}
\put(7276,-3811){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$u_4$}%
}}}}
\put(5176,-3886){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$u_1$}%
}}}}
\put(6451,-2986){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$P$}%
}}}}
\put(6676,-4261){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$u_3$}%
}}}}
\put(5701,-4261){\makebox(0,0)[lb]{\smash{{\SetFigFont{8}{9.6}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$u_2$}%
}}}}
\end{picture}%
\caption{Two shocks merging into a single shock at the point $P$.}
\label{f:tsc}
\end{center} \end{minipage} \hfill
\end{figure}
As we said, the assumption of genuine nonlinearity made in \cite{BL} implies that only one shock curve passes through each discontinuity point (which is not an interaction point) of the admissible solution. In the piecewise genuinely nonlinear case, however, due to the presence of composite discontinuities, several discontinuity curves may pass through the same discontinuity point, even when it is not an interaction point.
For example, consider a scalar equation where $f$ has two inflection points (it is thus clearly piecewise genuinely nonlinear), as in Figure \ref{f:tsirw}, and let the initial data $u_0$ be
\begin{equation*}
u_0=\begin{cases}
u_1 & \mbox{if $x<x_1$},\\
u_2 & \mbox{if $x_1<x<x_2$},\\
u_3 & \mbox{if $x_2<x<x_3$,}\\
u_4 & \mbox{if $x>x_3$.}
\end{cases}
\end{equation*}
Figure \ref{f:tsc} shows that the two shocks connecting the values $u_1,u_2$ and $u_3,u_4$, respectively, interact with a centered rarefaction wave, eventually reach the same speed at the point $P$ (which is thus \emph{not} an interaction point) and merge into a single shock. Clearly such a wave pattern cannot occur if $f$ is convex or concave. In this paper we prove the following theorem:
\begin{theorem}\label{t:main theorem}
Let $u$ be a Liu admissible solution of the Cauchy problem \eqref{basic equation}. Then there exist a countable set $\Theta$ of interaction points and a countable family $\ensuremath{\mathscr{T}}$ of Lipschitz continuous curves such that $u$ is continuous outside $\Theta$ and $\mathrm{Graph}(\ensuremath{\mathscr{T}})$. Moreover, suppose that $u(t_0,\cdot)$ is discontinuous at $x=x_0$ and that $(t_0,x_0)\notin\Theta$. Write $\ensuremath{u^\mathrm{L}}=u(t_0,x_0-),\ \ensuremath{u^\mathrm{R}}=u(t_0,x_0+)$ and suppose that $\ensuremath{u^\mathrm{R}}=S_i[\ensuremath{u^\mathrm{L}}](s)$ with $s>0\ (s<0)$.
\begin{itemize}
\item If $[\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}}]$ is simple, there exists a Lipschitz curve $y\in \ensuremath{\mathscr{T}}$, with $y(t_0)=x_0$, such that
\begin{equation*}
\ensuremath{u^\mathrm{L}}=\lim_{\genfrac{}{}{0pt}{}{x<y(t)} {(t,x)\rightarrow(t_0,x_0)}}u(t,x),\ \qquad \ensuremath{u^\mathrm{R}}=\lim_{\genfrac{}{}{0pt}{}{x>y(t)} {(t,x)\rightarrow(t_0,x_0)}}u(t,x),
\end{equation*}
the curve $y$ propagates with the shock speed $\hat \sigma(\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}})$ at $(t_0,x_0)$, that is
\begin{equation*}
\dot{y}(t_0)(\ensuremath{u^\mathrm{R}}-\ensuremath{u^\mathrm{L}})=f(\ensuremath{u^\mathrm{R}})-f(\ensuremath{u^\mathrm{L}}),
\end{equation*}
and
\begin{equation*}
\dot{y}(t_0)\leq \hat \sigma_i(S_i[\ensuremath{u^\mathrm{L}}](\tau),\ensuremath{u^\mathrm{L}}),\ \forall \tau\in [0,s] \ (\dot{y}(t_0)\geq \hat \sigma_i(S_i[\ensuremath{u^\mathrm{L}}](\tau),\ensuremath{u^\mathrm{L}}),\ \forall \tau\in [s,0]).
\end{equation*}
\item If $[\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}}]$ is a composition of $[\ensuremath{u^\mathrm{L}},u_1],\ [u_1,u_2],\cdots,[u_l,\ensuremath{u^\mathrm{R}}]$, then there exist $p$ Lipschitz continuous curves $y_1,\cdots,y_{p}\in \ensuremath{\mathscr{T}}$, $p\leq l+1$, satisfying
\begin{itemize}
\item[-] $y_1(t_0)=\cdots=y_{p}(t_0)=x_0$,
\item[-] $\dot{y}_1(t_0)=\cdots=\dot{y}_{p}(t_0)$,
\item[-] $y_1(t)\leq \cdots\leq y_{p}(t)$, for all $t$ in a neighborhood of $t_0$,
\end{itemize}
such that
\[ \ensuremath{u^\mathrm{L}}=\lim_{\genfrac{}{}{0pt}{}{x<y_1(t)} {(t,x)\rightarrow(t_0,x_0)}}u(t,x),\ \qquad \ensuremath{u^\mathrm{R}}=\lim_{\genfrac{}{}{0pt}{}{x>y_{p}(t)} {(t,x)\rightarrow(t_0,x_0)}}u(t,x), \]
and, if in a small neighborhood of $(t_0,x_0)$ the curves $y_j$ and $y_{j+1}$ are not identical, one has
\begin{equation}\label{e:lim inside}
u_j=\lim_{\genfrac{}{}{0pt}{}{y_j(t)<x<y_{j+1}(t)} {(t,x)\rightarrow(t_0,x_0)}}u(t,x).
\end{equation}
Also, these curves propagate with speed $\hat \sigma(\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}})$ at $(t_0,x_0)$, that is
\begin{equation*}
\dot{y}_n(t_0)(\ensuremath{u^\mathrm{R}}-\ensuremath{u^\mathrm{L}})=f(\ensuremath{u^\mathrm{R}})-f(\ensuremath{u^\mathrm{L}}),\qquad n\in\{1,\cdots,p\},
\end{equation*}
and the stability conditions hold:
\begin{equation*}
\dot{y}_n(t_0)\leq \hat \sigma_i(S_i[\ensuremath{u^\mathrm{L}}](\tau),\ensuremath{u^\mathrm{L}}),\ \forall \tau\in [0,s] \ (\dot{y}_n(t_0)\geq \hat \sigma_i(S_i[\ensuremath{u^\mathrm{L}}](\tau),\ensuremath{u^\mathrm{L}}),\ \forall \tau\in [s,0]).
\end{equation*}
\end{itemize}
\end{theorem}
As in \cite{Bre}, the above result is based on the following strong convergence result for approximate wave-front solutions.
\begin{theorem}\label{t:approx_shock_conv}
Consider a sequence of wave-front tracking approximate solutions $u_\nu$ (see Section \ref{s:ft} for the definitions) converging to $u$ in $L^1_{loc}$. Suppose that $P=(\tau,\xi)$ is a discontinuity point of $u$ and write $\ensuremath{u^\mathrm{L}}=u(\tau,\xi-),\ \ensuremath{u^\mathrm{R}}=u(\tau,\xi+)$. Assume that there are exactly $l$ Lipschitz continuous curves $\ensuremath{\mathscr{T}}\ni y_{n}:[t^-_{n},t^+_{n}]\rightarrow\ensuremath{\mathbb{R}},\ n=1,\cdots,l$, passing through the point $P$, and
\[ y_1(t)\leq\cdots\leq y_l(t)\qquad \text{in a small neighborhood of $\tau$.} \]
Then, up to a subsequence, there exist curves $y_{n,\nu}:[t^-_{n,\nu},t^+_{n,\nu}]\rightarrow\ensuremath{\mathbb{R}},\ n=1,\cdots,l$, which are discontinuity curves of $u_\nu$ with uniformly large strengths, such that $t^-_{n,\nu}\to t^-_n$, $t^+_{n,\nu}\to t^+_n$ and
\[ y_{n,\nu}(t)\rightarrow y_n(t)\qquad \text{for every $t\in[t^-_n,t^+_n]$.} \]
Moreover, one has
\begin{subequations}
\begin{equation*}
\lim_{r \rightarrow 0+} \limsup_{\nu \rightarrow \infty} \left( \sup_{\genfrac{}{}{0pt}{}{x<y_{1,\nu}(t)}{(t,x)\in B(P,r)}} \big| u_\nu(t,x) - \ensuremath{u^\mathrm{L}} \big| \right) = 0,
\end{equation*}
\begin{equation*}
\lim_{r \rightarrow 0+} \limsup_{\nu \rightarrow \infty} \left( \sup_{\genfrac{}{}{0pt}{}{x>y_{l,\nu}(t)} {(t,x)\in B(P,r)}} \big| u_\nu(t,x) - \ensuremath{u^\mathrm{R}} \big| \right) = 0.
\end{equation*}
\end{subequations}
\end{theorem}
A brief outline of this paper follows. In Section \ref{s:rp}, we recall the construction of the Riemann solver introduced in \cite{BB}. In Section \ref{s:ft}, we briefly describe the wave-front tracking approximation scheme, originally designed for general strictly hyperbolic systems (see \cite{AM1}). In particular, we introduce the definition of the interaction and cancellation measures. Section \ref{s:sdc} contains the main idea of the paper: the definition of subdiscontinuity curves and of $(\epsilon,k)$-approximate subdiscontinuity curves in the approximate wave-front solution. In this section we show that their number is uniformly bounded with respect to the approximation parameter. In Section \ref{s:pf}, we finally give the proofs of Theorem \ref{t:main theorem} and Theorem \ref{t:approx_shock_conv}. In Section \ref{s:example}, we construct a strictly hyperbolic $2\times 2$ system of conservation laws which is not piecewise genuinely nonlinear and whose admissible solutions, for suitable initial data, do not have the structural properties described in Theorem \ref{t:main theorem}.
\section{Solution of the Riemann problem}\label{s:rp}
As in \cite{B2}, for a fixed point $u^0\in \Omega$ and $i\in\{1,\cdots,N\}$, one can construct smooth vector-valued maps $\r_i=\r_i(u,v_i,\sigma_i)$, $(u,v_i,\sigma_i)\in \ensuremath{\mathbb{R}}^N\times\ensuremath{\mathbb{R}}\times\ensuremath{\mathbb{R}}$, with $\r_i(u,0,\sigma_i)=r_i(u)$ for all $u,\ \sigma_i$. Setting $l^0_i:=l_i(u^0)$, we can normalize $\r_i$ so that
\begin{equation}\label{a:para_Tk}
\langle l^0_j,\r_i(u,v_i,\sigma_i)\rangle=\begin{cases} 1 & i=j,\\ 0 & i\ne j.
\end{cases}
\end{equation}
Writing the speed function $\tilde{\lambda}_i:=l^0_i\cdot Df(u)\, \r_i(u,v_i,\sigma_i)$, we consider, for some fixed $\delta_1,C_0>0$ and for $s>0$, the set
\begin{equation*}
\begin{split}
\Gamma_i(s,u^-):=&\Big{\{}\gamma \in \Lip([0,s],\ensuremath{\mathbb{R}}^{N+2}),\ \gamma(\xi)=(u(\xi),v_i(\xi),\sigma_i(\xi)),\\
&\ u(0)=u^-,\ |u(\tau)-u^-|=\tau,\ v_i(0)=0,\\
&\ |v_i|\leq \delta_1,\ |\sigma_i(\tau)-\lambda_i(u^0)|\leq 2C_0\delta_1\Big{\}}.
\end{split}
\end{equation*}
Given a curve $\gamma\in \Gamma_i(s,u^-)$, we define the scalar flux function
\begin{equation}\label{d:scalar_flux}
\ensuremath{\tilde{f}}_i(\tau;\gamma)=\int^\tau_0 \tilde{\lambda}_i(u(\xi),v_i(\xi),\sigma_i(\xi))\,d\xi.
\end{equation}
Moreover, we define the lower convex envelope of $\ensuremath{\tilde{f}}_i$ on $[a,b]\subset [0,s]$ as
\[
\begin{split}
\conv_{[a,b]}\ensuremath{\tilde{f}}_i(\tau;\gamma):=\inf \Big{\{} \theta \ensuremath{\tilde{f}}_i(\tau';\gamma)&+(1-\theta)\ensuremath{\tilde{f}}_i(\tau'';\gamma);\\
&\theta\in[0,1],\ \tau',\tau''\in[a,b],\ \tau=\theta\tau'+(1-\theta)\tau''\Big{\}}.
\end{split}
\]
Then we define a nonlinear operator $\t_{i,s}:\Gamma_i(s,u^-)\to\Gamma_i(s,u^-)$ by setting $\t_{i,s}\gamma:=\check \gamma=(\check{u},\check{v}_i,\check{\sigma}_i)$, where
\begin{equation}\label{d:element operator}
\begin{cases}
\check{u}(\tau)=u^-+\int^\tau_0\r_i(u(\xi),v_i(\xi),\sigma_i(\xi))\,d\xi,\\
\check{v}_i(\tau)=\ensuremath{\tilde{f}}_i(\tau;\gamma)-\conv_{[0,s]}\ensuremath{\tilde{f}}_i(\tau;\gamma),\\
\check{\sigma}_i(\tau)=\frac{d}{d\tau}\conv_{[0,s]}\ensuremath{\tilde{f}}_i(\tau;\gamma).
\end{cases}
\end{equation}
One can show that $\t_{i,s}$ is a contraction in $\Gamma_i(s,u^-)$ with respect to the distance
\begin{equation*}
D(\gamma,\gamma'):=\delta_1||u-u'||_{L^\infty}+||v_i-v'_i||_{L^1}+||v_i\sigma_i-v'_i\sigma'_i||_{L^1},
\end{equation*}
where
\begin{equation*}
\gamma=(u,v_i,\sigma_i),\ \gamma'=(u',v'_i,\sigma'_i)\in \Gamma_i(s,u^-).
\end{equation*}
Hence, for any $s$ sufficiently small and any $u^-$ in a small neighborhood of $u^0$, $\t_{i,s}$ has a unique fixed point, which is a Lipschitz continuous curve
\begin{equation*}
\bar{\gamma}(\tau)= \big{(}\bar{u}(\tau;u^-,s),\bar{v}_i(\tau;u^-,s),\bar{\sigma}_i(\tau;u^-,s)\big{)},\quad \tau\in[0,s].
\end{equation*}
The elementary curve of the $i$-th family is then defined as
\begin{equation}\label{d:Tk}
T_i[u^-](s):=\bar{u}(s;u^-,s).
\end{equation}
Adopting the notations
\begin{equation*}
\sigma_i[u^-](s,\tau):=\bar{\sigma}_i(\tau;u^-,s),
\end{equation*}
\begin{equation}\label{d:scalar function}
\ensuremath{\tilde{f}}_i[u^-](s,\tau):=\ensuremath{\tilde{f}}_i(\tau;\bar{\gamma}),
\end{equation}
and recalling that the Riemann problem is the Cauchy problem \eqref{basic equation} with piecewise constant initial data of the form
\begin{equation}\label{e:r}
u_0(x)=\left\{\begin{array}{ll} u^\mathrm{L} & \mbox{$x<0$,}\\ u^\mathrm{R} & \mbox{$x>0$,} \end{array}\right.
\end{equation}
where $\ensuremath{u^\mathrm{L}},\ \ensuremath{u^\mathrm{R}}$ are two constant states, one has the following theorem \cite{BB}.
\begin{theorem} \label{t:ec}
For every $u\in\Omega$ and $s>0$ sufficiently small, there exist
\begin{enumerate}
\item $N$ Lipschitz continuous curves $s \mapsto T_i[u](s)\in\Omega,\ i=1,\dots,N$, satisfying $\lim_{s\rightarrow 0}\frac{d}{ds}T_i[u](s)=r_i(u)$,
\item $N$ Lipschitz continuous functions $(s,\tau) \mapsto \sigma_i[u](s,\tau)$, with $0 \leq \tau \leq s$ and $i=1,\dots,N$, such that $\tau \mapsto \sigma_i[u](s,\tau)$ is increasing and $\sigma_i[u](s,0) = \lambda_i(u)$,
\end{enumerate}
with the following properties.
\noindent When $u^\mathrm{L}\in\Omega$ and $u^\mathrm{R}=T_i[\ensuremath{u^\mathrm{L}}](s)$ for some $s$ sufficiently small, the unique Liu admissible solution of the Riemann problem \eqref{basic equation}-\eqref{e:r} is defined a.e. by
\begin{equation}\label{d:Riem_sol}
u(x,t) := \begin{cases}
u^\mathrm{L} & x/t < \sigma_i[u^\mathrm{L}](s,0), \crcr
T_i[u^\mathrm{L}](\tau) & x/t=\sigma_i[u^\mathrm{L}](s,\tau), \tau \in [0,s], \crcr
u^\mathrm{R} & x/t>\sigma_i[u^\mathrm{L}](s,s).
\end{cases}
\end{equation}
\end{theorem}
\vspace{12pt}
For the case $s<0$, the right state $\ensuremath{u^\mathrm{R}}=T_i[\ensuremath{u^\mathrm{L}}](s)$ can be constructed in the same way as before, except that one replaces $\conv_{[0,s]} \ensuremath{\tilde{f}}_i$ in \eqref{d:element operator} with the upper concave envelope of $\ensuremath{\tilde{f}}_i$ on $[s,0]$,
\[
\begin{split}
\conc_{[a,b]}\ensuremath{\tilde{f}}_i(\tau;\gamma):=\sup \Big{\{} \theta \ensuremath{\tilde{f}}_i(\tau';\gamma)&+(1-\theta)\ensuremath{\tilde{f}}_i(\tau'';\gamma);\\
&\theta\in[0,1],\ \tau',\tau''\in[a,b],\ \tau=\theta\tau'+(1-\theta)\tau''\Big{\}},
\end{split}
\]
and looks for the fixed point of the integral system \eqref{d:element operator} on the interval $[s,0]$. Because of the assumption \eqref{a:para_Tk} and the definition \eqref{d:Tk}, the elementary curve $T_i[\ensuremath{u^\mathrm{L}}]$ is parameterized by its $i$-th component relative to the basis $r_1(u^0),\cdots,r_N(u^0)$, i.e.
\begin{equation}\label{e:para_T_k}
s = \langle l_i^0, T_i[\ensuremath{u^\mathrm{L}}](s)-\ensuremath{u^\mathrm{L}}\rangle.
\end{equation}
\begin{remark}
In \cite{BB}, it is proved that if $\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}}\in \Delta^k_i$ for some $k$ odd (even) and $\ensuremath{u^\mathrm{R}}=T_i[\ensuremath{u^\mathrm{L}}](s)$ with $s>0\ (s<0)$, then the solution $u$ of the Riemann problem with initial data \eqref{e:r} is a \emph{centered rarefaction wave}, that is, for $t>0$,
\begin{equation*}
u(x,t)=\begin{cases}
\ensuremath{u^\mathrm{L}} &\text{if}\ x/t<\lambda_i(\ensuremath{u^\mathrm{L}}),\\
\ensuremath{u^\mathrm{R}} &\text{if}\ x/t>\lambda_i(\ensuremath{u^\mathrm{R}}),\\
R_i[\ensuremath{u^\mathrm{L}}](\tau) &\text{if}\ x/t\in[\lambda_i(\ensuremath{u^\mathrm{L}}),\lambda_i(\ensuremath{u^\mathrm{R}})],\ x/t=\lambda_i(R_i[\ensuremath{u^\mathrm{L}}](\tau)),
\end{cases}
\end{equation*}
where $R_i[\ensuremath{u^\mathrm{L}}]$ is parameterized so that $\tau = \langle l_i^0, R_i[\ensuremath{u^\mathrm{L}}](\tau)-\ensuremath{u^\mathrm{L}}\rangle$, with $\tau\in [0,s]\ (\tau\in [s,0])$. Notice that $u$ is continuous for $t>0$.
\end{remark}
\begin{remark}\label{r:shocks_rarefaction}
As shown in \cite{BB} (see also Remark 4 in \cite{AM1} and Section 4 of \cite{Liu1}), under the assumption of piecewise genuine nonlinearity, the solution of the Riemann problem provided by \eqref{d:Riem_sol} is a composed wave of the $i$-th family, containing a finite number of rarefaction waves and Liu admissible discontinuities. Recalling Theorem \ref{t:ec}, one knows that the regions where the $v_i$-component of the solution to \eqref{d:element operator} vanishes correspond to rarefaction waves, while the regions where the $v_i$-component of the solution to \eqref{d:element operator} is different from zero correspond to admissible discontinuities.
\end{remark} \vspace{12pt} The Liu admissible solution \cite{BB} of a Riemann problem for \eqref{basic equation}-\eqref{e:r} is obtained by constructing a Lipschitz continuous map \begin{equation}\label{RMap} \mathbf{s}:=(s_1,\dots,s_N)\mapsto T[u^\mathrm{L}](\mathbf{s}):=T_N\big[T_{N-1}\big[\cdots\left[T_1[u^\mathrm{L}](s_{1})\right]\cdots\big](s_{N-1})\big](s_N)=u^\mathrm{R}, \end{equation} which is one to one from a neighborhood of the origin onto a neighborhood of $u^\mathrm{L}$. Then we can uniquely determine the intermediate states $u^\mathrm{L}=\omega_0,\ \omega_1,\ \dots,\ \omega_N = u^\mathrm{R}$ and the \emph{wave strengths} $s_1,\ s_2,\ \dots,\ s_N$ such that \begin{equation*} \omega_i = T_i[\omega_{i-1}](s_i), \quad i=1,\dots,N, \end{equation*} provided that $|u^\mathrm{L}-u^\mathrm{R}|$ is sufficiently small. By Theorem \ref{t:ec}, each Riemann problem with initial data \begin{equation}\label{e:erp} u_0 = \begin{cases} \omega_{i-1} & x<0, \\ \omega_i & x>0, \end{cases} \end{equation} admits a self-similar solution $u_i$, containing only $i$-waves. We call $u_i$ the $i$-th \emph{elementary composite wave} or simply $i$-\emph{wave}. Therefore, under the strict hyperbolicity assumption, the solution of the Riemann problem with the initial data \eqref{e:r} is obtained by piecing together the self-similar solutions of the Riemann problems given by \eqref{basic equation}-\eqref{e:erp}. Indeed, from the strict hyperbolicity assumption \eqref{lambda}, the speed of each elementary $i$-th wave in the solution $u_i$ is inside the interval $[\check{\lambda}_{i-1},\check{\lambda}_{i}]$ if $s \ll 1$, so that the solution of the general Riemann problem \eqref{basic equation}-\eqref{e:r} is then given by \begin{equation} \label{e:riemann solution} u(x,t) = \begin{cases} u^\mathrm{L} & x/t <\check{\lambda}_{0},\\ u_i(x,t) & \check{\lambda}_{i-1}<x/t<\check{\lambda}_{i}, i=1,\dots,N,\\ u^\mathrm{R} & x/t>\check{\lambda}_{N}. \end{cases} \end{equation} \section{Description of wave-front tracking approximation}\label{s:ft} In \cite{AM1}, the authors provide a wave-front tracking algorithm for vanishing viscosity BV solutions of strictly hyperbolic systems which is much more general than the case discussed here; we modify their algorithm slightly in order to simplify our analysis. By Theorem \ref{t:ec}, one knows that the solution constructed by such an approximation is Liu admissible. Wave-front tracking approximation is an algorithm which produces piecewise constant approximate solutions to the Cauchy problem \eqref{basic equation}. In order to construct approximate wave-front tracking solutions, given a fixed $\epsilon>0$, we first choose a piecewise constant function $u^\epsilon_0$ which is a good approximation to the initial data $u_0$, in the sense that \begin{equation}\label{initial approx} \mathrm{Tot.Var.}\{u^\epsilon_0\}\leq \mathrm{Tot.Var.}\{u_0\}, \quad ||u^\epsilon_0-u_0||_{L^1}<\epsilon, \end{equation} and $u^\epsilon_0$ has only finitely many jumps. Let $x_1<\dots<x_m$ be the jump points of $u^\epsilon_0$. For each $\alpha=1,\dots,m$, we approximately solve the Riemann problem (just shifting the center from $(0,0)$ to $(0,x_\alpha)$) with the initial data given by the jump $[u^\epsilon_0(x_\alpha-),u^\epsilon_0(x_\alpha+)]$ by a function $w(x,t)=\phi(\frac{x-x_\alpha}{t})$, where $\phi$ is a piecewise constant function. The straight lines along which the discontinuities are located are called \emph{wave-fronts} (or just \emph{fronts} for short).
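To fix ideas, the convex-envelope mechanism of \eqref{d:element operator} and Remark \ref{r:shocks_rarefaction} is easy to reproduce numerically for a sampled scalar flux. The following Python sketch is ours and purely illustrative (the flux below is an arbitrary stand-in for $\ensuremath{\tilde{f}}_i(\cdot;\gamma)$, and the piecewise linear envelope is accurate only up to the sampling step): the regions where the flux exceeds its lower convex envelope correspond to discontinuities, and the regions where the two coincide correspond to rarefactions.
\begin{verbatim}
import numpy as np

def lower_convex_envelope(tau, F):
    """Lower convex envelope of the sampled flux F on the grid tau,
    via the monotone-chain lower hull of the points (tau_j, F_j)."""
    hull = [0]                                  # indices of hull vertices
    for j in range(1, len(tau)):
        # drop the last vertex while it lies above the chord to point j
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            cross = (tau[b] - tau[a]) * (F[j] - F[a]) \
                    - (F[b] - F[a]) * (tau[j] - tau[a])
            if cross <= 0.0:
                hull.pop()
            else:
                break
        hull.append(j)
    return np.interp(tau, tau[hull], F[hull])

# A flux with an inflection point: the fan contains a discontinuity
# part (F > env) followed by a rarefaction part (F = env).
tau = np.linspace(0.0, 1.0, 201)
F = np.cos(3.0 * tau)            # stand-in for the scalar flux
env = lower_convex_envelope(tau, F)
v = F - env                      # plays the role of the v_i-component
sigma = np.gradient(env, tau)    # plays the role of the speed sigma_i
\end{verbatim}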
The wave-fronts are prolonged until they interact with other fronts; at each interaction point the corresponding Riemann problem is approximately solved and several new fronts are generated. One then tracks the wave-fronts until they interact with other wave-fronts, and so on. In order to prevent the algorithm from producing infinitely many wave-fronts in finite time, different kinds of approximate Riemann solvers have to be introduced. \subsubsection{The approximate $i$-th elementary wave} \label{Ss_k_gnl} Suppose that $u_i$ is an $i$-th elementary composite wave which is obtained by solving the Riemann problem with initial data \eqref{e:erp}, where $\omega_i=T_i[\omega_{i-1}](s_i)$. For notational convenience, we write $\sigma_i(\tau):=\sigma_i[\omega_{i-1}](s_i,\tau)$. Let \begin{equation*} p:=\left[\frac{\sigma_i(s_i)-\sigma_i(0)}{\epsilon}\right]+1 \end{equation*} and \begin{equation*} \vartheta_l:=\sigma_i(0)+\frac{l}{p}\left[\sigma_i(s_i)-\sigma_i(0)\right],\qquad l=0,\cdots,p-1. \end{equation*} We set \begin{equation*} \omega_{i-1,l}=T_i[\omega_{i-1}](s_{i,l}), \end{equation*} where \begin{equation*} s_{i,l}:=\begin{cases} \text{min}\big{\{}s\in [0,s_i], \sigma_i(s)=\vartheta_l \big{\}},\quad s_i\geq 0,\\ \text{max}\big{\{}s\in [s_i,0], \sigma_i(s)=\vartheta_l \big{\}},\quad s_i\leq 0. \end{cases} \end{equation*} Then the $i$-th elementary composite wave $u_i$ is approximated by $\tilde{u}_i$ as follows: \begin{equation}\label{e:aew} \tilde{u}_i(x,t)=\begin{cases} \omega_{i-1}\quad &x/t<\vartheta_{0},\\ \omega_{i-1,l},\quad &\vartheta_{l-1}<x/t<\vartheta_{l},\quad (l=1,\cdots,p-1),\\ \omega_{i}\quad &x/t>\vartheta_{p-1}. \end{cases} \end{equation} Notice that $\tilde{u}_i$ consists of $p$ fronts with small strength. \subsubsection{Approximate Riemann solver} Suppose that at the point $(t_1,x_1)$, a wave-front $[u^\mathrm{L},u^M]$ of strength $s'$ belonging to the $i'$-th family interacts from the left with a wave-front $[u^M,u^\mathrm{R}]$ of strength $s''$ belonging to the $i''$-th family, for some $i',\ i''\in \{1,\cdots,N\}$, such that \[ u^M=T_{i'}[u^\mathrm{L}](s'),\qquad u^\mathrm{R}=T_{i''}[u^M](s''). \] Assume that $|u^\mathrm{L}-u^\mathrm{R}|$ is sufficiently small. Then at the interaction point, the Riemann problem with initial data given by the jump $[u^\mathrm{L},u^\mathrm{R}]$ may be solved by two kinds of approximate Riemann solvers, according to the situation. \begin{itemize} \item\emph{Accurate Riemann solver}: It replaces each elementary composite wave of the exact Riemann solution (the $u_i$ in the solution \eqref{e:riemann solution}) with an approximate $i$-th elementary wave defined by \eqref{e:aew}. \vspace{6pt} \item\emph{Simplified Riemann solver}: It generates only the approximate elementary waves belonging to the $i'$-th and $i''$-th families, with the same strengths $s'$ and $s''$ as the incoming ones if $i'\ne i''$, or the approximate $i'$-th elementary wave of strength $s'+s''$ if $i'= i''$. The simplified Riemann solver collects the remaining new waves into a single \emph{nonphysical front}, traveling with a constant speed $\hat{\lambda}$, strictly larger than all characteristic speeds. Therefore, the simplified Riemann solver usually generates fewer outgoing fronts after an interaction than the accurate Riemann solver.
\end{itemize} Since the simplified Riemann solver produces nonphysical wave-fronts, and these cannot interact with each other, one needs an approximate Riemann solver defined for the interaction between, say, a physical front of the $i$-th family with strength $s$, connecting $u^M$ and $u^\mathrm{R}$, and a nonphysical front (coming from the left) connecting the left value $u^\mathrm{L}$ and $u^M$ and traveling with speed $\hat{\lambda}$. \begin{itemize} \item\emph{Crude Riemann solver}: It generates an approximate $i$-th elementary wave connecting $u^\mathrm{L}$ and $\tilde{u}^M=T_i[u^\mathrm{L}](s)$ and a nonphysical wave-front joining $\tilde{u}^M$ and $u^\mathrm{R}$, traveling with speed $\hat{\lambda}$. In the following, for simplicity, we just say that the nonphysical fronts belong to the $(N+1)$-th characteristic field. \end{itemize} \begin{remark} It is not restrictive to assume that at each time $t>0$ at most one interaction takes place, involving exactly two incoming fronts, because one can always slightly change the speeds of the incoming fronts if more than two fronts meet at the same point; it is sufficient to require that the speed error vanishes as the approximate solutions converge to the exact solution. Actually, suppose $x=y(t)$ is a front in an approximate solution $u$ with parameter $\epsilon$, and set $u^\mathrm{L}=u(t,x-)$ and $u^\mathrm{R}=u(t,x+)$, so that \begin{equation}\label{R_H} u^\mathrm{R}=T_i[u^\mathrm{L}](s) \end{equation} for some index $i\in \{1,\cdots,N\}$ and wave strength $s$. Then the following holds: \begin{equation}\label{front speed error} \left|\dot{y}(t)-\sigma_i[u^\mathrm{L}](s,\tau)\right|\leq 2\epsilon, \ \forall \tau\in [0,s]. \end{equation} \end{remark} \begin{remark} There are three kinds of physical wave-fronts. Suppose $[\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}}]$ is a wave front in an approximate solution, with $\ensuremath{u^\mathrm{R}}=T_i[\ensuremath{u^\mathrm{L}}](s)\ (s>0)$. Recalling the notation \eqref{d:scalar function} and Remark \ref{r:shocks_rarefaction}, and writing $\tilde{f}_i(\tau):=\tilde{f}_i[\ensuremath{u^\mathrm{L}}](s,\tau)$, one knows that $[\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}}]$ is one of the following three kinds of fronts: \begin{itemize} \item \emph{Discontinuity front} if $\tilde{f}_i(\tau)>\conv_{]0,s[} \tilde{f}_i(\tau),\ \forall \tau\in]0,s[\setminus\mathcal{S}$, where $\mathcal{S}$ is a set of finite cardinality. \item \emph{Rarefaction front} if $\tilde{f}_i(\tau)=\conv_{]0,s[} \tilde{f}_i(\tau),\ \forall \tau\in]0,s[$. In this case, $\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}} \in \Delta^k_i$ for some $k$ even and $\ensuremath{u^\mathrm{R}}=R_i[\ensuremath{u^\mathrm{L}}](s)$. \item \emph{Mixed front} if there exists $]a,b[\subsetneq ]0,s[$ such that $\tilde{f}_i(\tau)>\conv_{]0,s[} \tilde{f}_i(\tau),\ \forall \tau\in]a,b[$, and $]c,d[\subsetneq ]0,s[$ such that $\tilde{f}_i(\tau)=\conv_{]0,s[} \tilde{f}_i(\tau),\ \forall \tau\in]c,d[$. \end{itemize} \end{remark} \subsubsection{Interaction potential and BV estimates} \label{interaction amount} In order to check that the total variations of the approximate solutions are uniformly bounded with respect to time, one needs an estimate of the difference between the strength of the incoming waves and the strength of the outgoing waves produced by an interaction. Suppose two wave-fronts with strengths $s'$ and $s''$ interact, and let $\ensuremath{\tilde{f}}'_i,\ \ensuremath{\tilde{f}}''_i$ be the corresponding scalar flux functions defined by \eqref{d:scalar_flux}.
We define the \emph{amount of interaction $\mathcal{I}(s',s'')$} between $s'$ and $s''$ as follows. When $s'$ and $s''$ belong to different characteristic families, set \begin{equation}\label{d:interation amount ijwaves} \mathcal{I}(s',s'')=|s's''|. \end{equation} When $s',\ s''$ belong to the same family: \begin{itemize} \item[(a)] if $s''>0$, we set \begin{equation*} \begin{split} \ensuremath{\mathcal{I}}(s',s''):=\int^{s'}_0&\big{|}\conv_{[0,s']}\ensuremath{\tilde{f}}'_i(\xi)-\conv_{[0,s'+s'']}(\ensuremath{\tilde{f}}'_i\cup\ensuremath{\tilde{f}}''_i)(\xi)\big{|}d\xi\\ &+\int^{s'+s''}_{s'}\big{|}\ensuremath{\tilde{f}}'_i(s')+\conv_{[0,s'']}\ensuremath{\tilde{f}}''_i(\xi-s')-\conv_{[0,s'+s'']}(\ensuremath{\tilde{f}}'_i\cup\ensuremath{\tilde{f}}''_i)(\xi)\big{|}d\xi, \end{split} \end{equation*} \item[(b)] if $-s'\leq s''<0$, we set \begin{equation*} \begin{split} \ensuremath{\mathcal{I}}(s',s''):=\int^{s'+s''}_0&\big{|}\conv_{[0,s']}\ensuremath{\tilde{f}}'_i(\xi)-\conv_{[0,s'+s'']}\ensuremath{\tilde{f}}'_i(\xi)\big{|}d\xi\\ &+\int^{s'}_{s'+s''}\big{|}\conv_{[0,s']}\ensuremath{\tilde{f}}'_i(\xi)-\conc_{[s'+s'',s']}\ensuremath{\tilde{f}}'_i(\xi)\big{|}d\xi, \end{split} \end{equation*} \item[(c)] if $s''<-s'$, we set \begin{equation*} \begin{split} \ensuremath{\mathcal{I}}(s',s''):=\int^{-s'}_{s''}&\big{|}\conc_{[s'',0]}\ensuremath{\tilde{f}}''_i(\xi)-\conc_{[s'',-s']}\ensuremath{\tilde{f}}''_i(\xi)\big{|}d\xi\\ &+\int^{0}_{-s'}\big{|}\conc_{[s'',0]}\ensuremath{\tilde{f}}''_i(\xi)-\conv_{[-s',0]}\ensuremath{\tilde{f}}''_i(\xi)\big{|}d\xi. \end{split} \end{equation*} \end{itemize} Throughout the paper, we write $A\lesssim B\ (A\gtrsim B)$ if there exists a constant $C>0$ which only depends on the system \eqref{basic equation} such that $A\leq CB\ (A\geq CB)$. Recall the Lipschitz continuous map $T$ defined in \eqref{RMap} and suppose $u^M=T_i[u^\mathrm{L}](s_1),\ u^\mathrm{R}=T_j[u^M](s_2)$ and $u^\mathrm{R}=T[u^\mathrm{L}](\mathbf{s})$. By Glimm's interaction estimates proved in \cite{B2} (see also Lemma 1 in \cite{AM1}), one has \begin{equation}\label{gie} |\mathbf{s}-\mathbf{s}_1-\mathbf{s}_2|\lesssim \ensuremath{\mathcal{I}}(s_1,s_2), \end{equation} where $\mathbf{s}_1$ (resp. $\mathbf{s}_2$) denotes the vector whose $i$-th (resp. $j$-th) component equals $s_1$ (resp. $s_2$), all other components being zero. At each time $t>0$ when no interaction occurs, if the approximate solution $u$ has jumps at $x_1,\dots,x_m$, we denote by \[ \omega_1,\dots,\omega_m, \quad s_1,\dots,s_m, \quad i_1,\dots,i_m, \] their left states, signed strengths and characteristic families respectively: the sign of $s_\alpha$ is given by the respective orientation of $dT_i[u](s)/ds$ and $r_i$, if the jump at $x_\alpha$ belongs to the $i$-th family. The total variation of $u$ will be computed as \[ V(t) := \sum_{\alpha} \big| s_\alpha \big|. \] Since $T_i[u^0]$ is a Lipschitz continuous function and the Lipschitz constant is uniformly bounded for any $u^0\in \Omega$, one has \begin{equation}\label{e:TV_V} {\rm Tot.Var}\{u(\cdot,t)\}\lesssim V(t). \end{equation} Then estimating the increase of ${\rm Tot.Var}\{u(\cdot,t)\}$ reduces to estimating the total amount of interaction. Following \cite{B2}, we define the \emph{Glimm wave interaction potential} as follows: \begin{equation*}\label{d:gp} \begin{split} \ensuremath{\mathcal{Q}}(t) &:= \sum_{\genfrac{}{}{0pt}{}{i_\alpha>i_\beta}{x_\alpha<x_\beta}} \big| s_\alpha s_\beta \big| + \frac{1}{4} \sum_{i_\alpha=i_\beta<N+1} \int^{|s_\alpha|}_0\int^{|s_\beta|}_0 \big| \sigma_{i_\beta}[\omega_\beta](s_\beta,\tau'')-\sigma_{i_\alpha}[\omega_\alpha](s_\alpha,\tau') \big| d\tau'd\tau''.
\end{split} \end{equation*} Denoting the jumps in time of the total variation and of the Glimm potential by \[ \Delta V(\tau)=V(\tau+)-V(\tau-),\ \ \Delta \ensuremath{\mathcal{Q}}(\tau)=\ensuremath{\mathcal{Q}}(\tau+)-\ensuremath{\mathcal{Q}}(\tau-), \] the fundamental estimates are the following (Lemma 5 in \cite{AM1}): when two wave-fronts with strengths $s',\ s''$ interact at time $\tau$, \begin{subequations} \label{e:gpe_ve} \begin{equation} \label{e:gpe} \Delta\ensuremath{\mathcal{Q}}(\tau)\lesssim\mathcal{I}(s',s''), \end{equation} \begin{equation} \label{e:ve} \Delta V(\tau)\lesssim \mathcal{I}(s',s''). \end{equation} \end{subequations} Thus one defines the \emph{Glimm functional} \begin{equation*} \label{e:glimm_funct} \Upsilon(t) := V(t) + C_0 \ensuremath{\mathcal{Q}}(t) \end{equation*} with $C_0$ a suitable constant, so that $\Upsilon$ decreases at any interaction. Using this functional, one can prove that the total variations of the approximate solutions are uniformly bounded with respect to time. Moreover, one can also show that the number of wave-fronts remains finite for all times (see Section 6.1 of \cite{AM1}). This guarantees that the construction of approximate wave-front tracking solutions is well defined. \subsubsection{Construction of the approximate solutions and their convergence to the exact solution} The construction starts at the initial time $t=0$ with a given $\epsilon>0$, by taking a suitable piecewise constant approximation $u^\epsilon_0$ of the initial data $u_0$, satisfying \eqref{initial approx}. At the jump points of $u^\epsilon_0$, we locally solve the Riemann problems by the accurate Riemann solver. The approximate solution can then be prolonged until a first time $t_1$ at which two wave-fronts interact. Again we solve the Riemann problem at the interaction point by an approximate Riemann solver: whenever the amount of interaction (see Section \ref{interaction amount} for the definition) of the incoming waves is larger than some threshold parameter $\rho = \rho(\epsilon) > 0$, we adopt the accurate Riemann solver; when the amount of interaction of the incoming waves is less than $\rho$, we adopt the simplified Riemann solver; and we apply the crude Riemann solver whenever one of the incoming wave-fronts is a nonphysical front. The threshold $\rho$ is suitably chosen so that the number of wave-fronts remains uniformly bounded for all times (see Section 6.2 in \cite{AM1}). We call such approximate solutions \emph{$\epsilon$-approximate front tracking solutions}. At each time $t$ when there is no interaction, the restriction $u_\epsilon(t)$ is a step function whose jumps are located along straight lines in the $(x,t)$-plane. Let $\{\epsilon_\nu\}^\infty_{\nu=1}$ be a sequence of positive real numbers converging to zero, and consider a corresponding sequence of $\epsilon_\nu$-approximate front tracking solutions $u_\nu:=u_{\epsilon_\nu}$ of \eqref{basic equation}: it is standard to show that the functions $t\mapsto u_\nu(t,\cdot)$ are uniformly Lipschitz continuous in the $L^1$ norm. Since \eqref{e:TV_V} and \eqref{e:gpe_ve} hold independently of the parameter $\epsilon_\nu$, the $u_\nu(t,\cdot)$ have uniformly bounded total variation. Therefore, by Helly's theorem, $u_\nu$ converges, up to a subsequence, in $L^1_{\mathrm{loc}}(\ensuremath{\mathbb{R}}^+\times\ensuremath{\mathbb{R}})$ to some function $u$, which is a weak solution of \eqref{basic equation}.
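For orientation, the event-driven structure of this construction can be visualized on a scalar toy model. The sketch below is ours and purely illustrative, and is far simpler than the algorithm of \cite{AM1}: it treats Burgers' equation $u_t+(u^2/2)_x=0$ with decreasing piecewise constant data, so that every front is an admissible shock and every interaction is a merger; nevertheless it exhibits the collision queue that drives any front-tracking scheme.
\begin{verbatim}
import heapq, itertools

def burgers_fronts(xs, us, t_end):
    """Toy front tracking for u_t + (u^2/2)_x = 0 with decreasing
    piecewise-constant data: each front carries its birth point
    (t0, x0), its left/right states and the Rankine-Hugoniot speed."""
    fronts = [{"t0": 0.0, "x0": x0, "ul": ul, "ur": ur,
               "s": 0.5 * (ul + ur), "alive": True}
              for x0, ul, ur in zip(xs, us[:-1], us[1:])]
    tick, heap = itertools.count(), []

    def pos(f, t):                       # straight-line trajectory
        return f["x0"] + f["s"] * (t - f["t0"])

    def push(a, b):                      # schedule a collision of a and b
        ds = a["s"] - b["s"]
        if ds > 0.0:
            t = (pos(b, 0.0) - pos(a, 0.0)) / ds
            if max(a["t0"], b["t0"]) < t <= t_end:
                heapq.heappush(heap, (t, next(tick), a, b))

    order = list(fronts)                 # fronts in left-to-right order
    for a, b in zip(order, order[1:]):
        push(a, b)
    while heap:
        t, _, a, b = heapq.heappop(heap)
        if not (a["alive"] and b["alive"]):
            continue                     # stale event, skip it
        a["alive"] = b["alive"] = False  # merge the two shocks
        merged = {"t0": t, "x0": pos(a, t), "ul": a["ul"], "ur": b["ur"],
                  "s": 0.5 * (a["ul"] + b["ur"]), "alive": True}
        i = order.index(a)
        order[i:i + 2] = [merged]
        if i > 0:
            push(order[i - 1], merged)
        if i + 1 < len(order):
            push(merged, order[i + 1])
    return order

# Three shocks that merge pairwise into a single one:
print(burgers_fronts([-1.0, 0.0, 2.0], [3.0, 2.0, 1.0, 0.0], 10.0))
\end{verbatim}
In the general algorithm, the merger step is replaced by one of the three approximate Riemann solvers above, chosen according to the amount of interaction of the incoming fronts.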
It can be shown that, by the choice of the Riemann solver in Theorem \ref{t:ec}, the solution obtained by the front tracking approximation coincides with the unique vanishing viscosity solution \cite{BB}. Furthermore, there exist a closed domain $\ensuremath{\mathcal{D}}\subset L^1(\ensuremath{\mathbb{R}},\Omega)$ and a unique Lipschitz semigroup $\ensuremath{\mathcal{D}}\times[0,+\infty[\rightarrow \ensuremath{\mathcal{D}}$ whose trajectories are distributional solutions of \eqref{basic equation} and which, for piecewise constant initial data, coincides, for a small time, with the solution of the Cauchy problem obtained by piecing together the standard entropy solutions of the Riemann problems. Moreover, the solution lives in the space of BV functions. For simplicity, the pointwise value of $u$ is taken to be its $L^1$ representative such that the restriction map $t\mapsto u(t)$ is continuous from the right in $L^1$ and $x \mapsto u(x,t)$ is right continuous. \subsubsection{Further estimates} To each $u_\nu$, we associate the \emph{measure $\mu^\mathrm{I}_\nu$ of interaction} and the \emph{measure $\mu^\mathrm{IC}_\nu$ of interaction and cancellation}, concentrated on the set of interaction points, as follows. If two physical fronts belonging to the families $i',i''\in\{1,\dots,N\}$ with strengths $s',\ s''$ interact at a point $P$, we set \begin{subequations}\label{ICmeas} \begin{equation}\label{imeas} \mu^\mathrm{I}_\nu(\{P\}):=\mathcal{I}(s',s''), \end{equation} \begin{equation}\label{icmeas} \mu^\mathrm{IC}_\nu(\{P\}) \;:=\mathcal{I}(s',s'')+\; \left\{\begin{array}{ll} |s'|+|s''|-|s'+s''| & \mbox{$i'=i''$},\\ 0 & \mbox{$i'\neq i''$}. \end{array}\right. \end{equation} \end{subequations} The wave strength estimates \eqref{gie} yield balance principles for the wave strength of approximate solutions. More precisely, consider a polygonal region $\Gamma$ with edges transversal to the waves it encounters. Denote by $W^{i\pm}_{\nu,\mathrm{in}}$, $W^{i\pm}_{\nu,\mathrm{out}}$ the total strengths of the positive $(+)$ or negative $(-)$ $i$-waves in $u_\nu$ entering or exiting $\Gamma$, and let $W^i_{\nu,\mathrm{in}}=W^{i+}_{\nu,\mathrm{in}}-W^{i-}_{\nu,\mathrm{in}}$, $W^i_{\nu,\mathrm{out}}=W^{i+}_{\nu,\mathrm{out}}-W^{i-}_{\nu,\mathrm{out}}$. Then the measure of interaction and the measure of interaction-cancellation control the difference between the amount of exiting $i$-waves and the amount of entering $i$-waves as follows: \begin{subequations} \label{e:bl_bl_1} \begin{equation*} \label{e:bl} |W^i_{\nu,\mathrm{out}}-W^i_{\nu,\mathrm{in}}|\lesssim \mu^\mathrm{I}_\nu(\Gamma), \end{equation*} \begin{equation*} \label{e:bl_1} |W^{i\pm}_{\nu,\mathrm{out}}-W^{i\pm}_{\nu,\mathrm{in}}|\lesssim \mu^\mathrm{IC}_\nu(\Gamma). \end{equation*} \end{subequations} The above estimates are fairly easy consequences of the interaction estimates \eqref{e:gpe_ve} and the definitions of $\mu^\mathrm{I}_\nu$, $\mu^\mathrm{IC}_\nu$. On the other hand, the uniform boundedness of ${\rm Tot.Var.}\{u_\nu(\cdot,t)\}$ w.r.t. time $t$ and parameter $\nu$ implies that $\mu^\mathrm{I}_\nu$ and $\mu^\mathrm{IC}_\nu$ are bounded measures for all $\nu$. By taking a subsequence and using the weak compactness of bounded measures, there exist bounded measures $\mu^{I}$ and $\mu^\mathrm{IC}$ on $\ensuremath{\mathbb{R}}^+\times\ensuremath{\mathbb{R}}$ such that the following weak convergence holds: \begin{equation*}\label{def of muic} \mu^\mathrm{I}_{\nu}\rightharpoonup\mu^\mathrm{I}, \quad \mu^\mathrm{IC}_\nu\rightharpoonup\mu^\mathrm{IC}.
\end{equation*} \section{Construction of subdiscontinuity curves}\label{s:sdc} Suppose $[\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}}]$ with $\ensuremath{u^\mathrm{R}}=T_i[\ensuremath{u^\mathrm{L}}](s)$ is a wave front of the $i$-th family in the approximate solution $u_\nu$, and the wave curve $\tau\mapsto T_i[\ensuremath{u^\mathrm{L}}](\tau)$ (see Theorem \ref{t:ec}) intersects $Z^j_i, \cdots, Z_i^{j+p}$ at $u_j,\cdots,u_{j+p}$ for $0\leq \tau \leq s$, so that \[ u_k=T_i[\ensuremath{u^\mathrm{L}}](s_k),\qquad k=j,\dots,j+p. \] Then we say that the wave front $[u^\mathrm{L},u^\mathrm{R}]$ has $(i,k)$-substrength $s_i^k:=s_{k+1}-s_k$, and we decompose the front into $(i,k)$-subdiscontinuity fronts with strength $s_i^k$, where $k\in\{j,\cdots,j+p-1\}\cap2\ensuremath{\mathbb{Z}}$ when $s>0$, or $k\in\{j,\cdots,j+p-1\}\cap (2\ensuremath{\mathbb{Z}}+1)$ when $s<0$. The points $u_k$ and $u_{k+1}$ are connected by the part of the curve $T_i[u^\mathrm{L}](\cdot)$ inside $\Delta^k_i$. We denote the family of all $(i,k)$-subdiscontinuity fronts by $\S_i^k$. It is obvious that only mixed fronts and discontinuity fronts can have $(i,k)$-substrength $s_i^k>0$ for some $k$, i.e. only they can be decomposed into subdiscontinuity fronts. \begin{lemma}\label{l:1subdis} In a wave-front tracking approximate solution, an interaction can generate at most one subdiscontinuity front with strength $s_i^k$ for each pair $(i,k)$. \end{lemma} \begin{proof} By the construction of the approximate Riemann solvers and the uniformly small total variation of the approximate solutions, it is sufficient to prove that the Lipschitz continuous curve $T_i[u^0](\cdot):[0,s]\rightarrow \ensuremath{\mathbb{R}}^N$ can intersect $Z_i^j$ at most once for any $u^0\in \Omega$ and all $j$, provided $s>0$ is sufficiently small. In fact, since by Theorem \ref{t:ec} $\lim_{t\rightarrow 0}\frac{d}{dt}T_i[u^0](t)=r_i(u^0)$, one has, for $t\in[0,s]$, \[ \left|\frac{d}{dt}T_i[u^0](t)-r_i(T_i[u^0](t))\right|\lesssim t. \] Recalling the assumption $r_i(u)\cdot \mathtt{n}>0$ on $Z^j_i$, one has \[ \frac{d}{dt}T_i[u^0](t)\cdot \mathtt{n}>0\ \text{on}\ Z^j_i \quad \text{for $t\in[0,s]$,} \] as long as $s$ is small enough. Hence $T_i[u^0](\cdot):[0,s]\rightarrow \ensuremath{\mathbb{R}}^N$ can intersect $Z_i^j$ at most once. \end{proof} Suppose $y_1\in \S_i^{k'}$, $y_2\in \S_i^{k''}$ and $y_1, y_2$ belong to the same wave front; then we say, by convention, that $y_1$ is on the left (right) of $y_2$ if $k',k''$ are even (odd) and $k'<k''\ (k'>k'')$. The following lemma is then easy to see. \begin{lemma}\label{l:non-crossing} Suppose two subdiscontinuity fronts $y_1\in \S_i^{k'}$, $y_2\in \S_i^{k''}$ interact and generate two subdiscontinuity fronts $y'_1\in \S_i^{k'}$, $y'_2\in \S_i^{k''}$. Then if $k'<k''\ (k'>k'')$, $y_1$ must be on the left (right) of $y_2$ and also $y'_1$ must be on the left (right) of $y'_2$. \end{lemma} Based on these two lemmas, we can follow the idea of \cite{BC} to define approximate subdiscontinuity curves. \begin{definition}\label{d:asc} Given $\epsilon\ne 0$, an $(\epsilon,i,k)$-approximate subdiscontinuity curve in $u_\nu$ is a polygonal line in the $(x,t)$-plane with nodes $(t_0,x_0),(t_1,x_1),\cdots,(t_n,x_n)$ satisfying: \begin{enumerate} \item $(t_j,x_j)$ are interaction points with $0\leq t_0<t_1<\cdots<t_n$.
\item For $1\leq j \leq n$, the segment joining $(t_{j-1},x_{j-1})$ and $(t_j,x_j)$ is an $(i,k)$-subdiscontinuity front with $s_i^{k}\geq \epsilon/2$ when $\epsilon>0$ ($s_i^{k}\leq \epsilon/2$ when $\epsilon<0$), and there is at least one index $j'\in \{1,\cdots,n\}$ such that the wave front connecting $(t_{j'-1},x_{j'-1})$ and $(t_{j'},x_{j'})$ has $(i,k)$-substrength $s_i^{k}\geq \epsilon$ if $\epsilon>0$ ($s_i^{k}\leq \epsilon$ if $\epsilon<0$). \item At each node one selects the $(i,k)$-subdiscontinuity front with the larger speed; that is, if two fronts with substrengths $s_i^k\geq\epsilon/2$ interact at the node $(t_j,x_j)$, then the front of the $(\epsilon,i,k)$-approximate subdiscontinuity curve is the one coming from the left. \end{enumerate} \end{definition} An $(\epsilon,i,k)$-approximate subdiscontinuity curve which is maximal w.r.t. set inclusion is called a \emph{maximal $(\epsilon,i,k)$-approximate subdiscontinuity curve}. Let $M^{k}_{i,\nu}(\epsilon)$ be the number of maximal $(\epsilon,i,k)$-approximate subdiscontinuity curves in $u_\nu$. \begin{lemma} For fixed $k$ and $\epsilon$, $M^{k}_{i,\nu}(\epsilon)$ is uniformly bounded w.r.t. $\nu$. \end{lemma} \begin{proof} We consider the case $\epsilon>0$ (the case $\epsilon<0$ is similar). Since the total variation of $u_\nu(0,\cdot)$ is uniformly bounded by \eqref{initial approx}, the number of maximal $(\epsilon,i,k)$-approximate subdiscontinuity curves which start at time $t=0$ is clearly of order $\epsilon^{-1}$. The number of $(\epsilon,i,k)$-approximate subdiscontinuity curves which start at a time $t_0>0$ and do not end in finite time is also of order $\epsilon^{-1}$, because the total variation of $u_\nu(\cdot,t)$ is uniformly bounded w.r.t. time $t$ and $\nu$. Now, considering an $(\epsilon,i,k)$-approximate subdiscontinuity curve $y_\nu$ which starts at a time $t_0>0$ and ends in finite time, we claim that \begin{equation}\label{claim_mu} \mu^{IC}_\nu(y_\nu)\gtrsim \epsilon^2. \end{equation} If the claim is true, then, since the total amount of interaction and cancellation in the solution $u_\nu$ is uniformly bounded, the number of such $(\epsilon,i,k)$-approximate subdiscontinuity curves is of order $\epsilon^{-2}$. Thus, combining these situations, we finally obtain the estimate \[ M^{k}_{i,\nu}(\epsilon)\lesssim \epsilon^{-2}, \] which is uniformly valid as $\nu\rightarrow\infty$. Now we prove the claim. Suppose an $(i,k)$-subdiscontinuity front on $y_\nu$ with $(i,k)$-substrength $\alpha>0$ interacts with a front of the $j$-th family with strength $\beta^*$ at a point $P$, generating an $i$-wave with $(i,k)$-substrength $\gamma\geq 0$; this means that either there is an outgoing $i$-front with $(i,k)$-substrength $\gamma>0$, or no outgoing $i$-front carries $(i,k)$-substrength, in which case $\gamma=0$. We also assume that $\gamma<\alpha$, and set $\theta=\alpha-\gamma$. First, we consider the case $i\ne j$; we assume that $i>j$, the case $i<j$ being similar. For notational convenience, we also denote by $\alpha, \beta^*, \gamma$ the fronts themselves. Assume that $\alpha$ lies on the front $[\ensuremath{u^\mathrm{L}},u^M]$ with $u^M=T_i[\ensuremath{u^\mathrm{L}}](\alpha^*)$ and $\ensuremath{u^\mathrm{R}}=T_j[u^M](\beta^*)$, and that $\gamma$ lies on the outgoing $i$-front $[\tilde{u}^L,\tilde{u}^R]$ with $\tilde{u}^R=T_i[\tilde{u}^L](\gamma^*)$.
We know that (see Section 9.9 of \cite{Daf2}) \begin{subequations} \label{e:diffwave} \begin{equation} \label{e:diffwave_L} \tilde{u}^L=u^L+\sum_{j'<i}s_{j'} r_{j'}(u^M)+O(|s^L|)\alpha^*+o(|s^L|), \end{equation} \begin{equation} \label{e:diffwave_R} \tilde{u}^R=u^R-\sum_{j'>i}s_{j'} r_{j'}(u^M)+O(|s^R|)\beta^*+o(|s^R|), \end{equation} \end{subequations} where \[ s^L=(s_1,\cdots,s_{i-1},0,\cdots,0)\quad \text{and} \quad s^R=(0,\cdots,0,s_{i+1},\cdots,s_N). \] Then, by \eqref{gie}, the assumption $i>j$ and $|u^M-u^R|\lesssim |\beta^*|$, we have \begin{subequations}\label{e:dlr} \begin{equation} \label{e:dl} |\tilde{u}^L-u^L|\lesssim |\beta^*|+\ensuremath{\mathcal{I}}(\alpha^*,\beta^*), \end{equation} \begin{equation} \label{e:dr} |u^M-\tilde{u}^R|\lesssim |\beta^*|+ \ensuremath{\mathcal{I}}(\alpha^*,\beta^*). \end{equation} \end{subequations} From Glimm's interaction estimates \eqref{gie}, the parametrization \eqref{e:para_T_k} and the estimates \eqref{e:dlr}, one concludes that the difference of $(i,k)$-substrength between $\alpha$ and $\gamma$ is controlled by the amount of interaction and the strength of the wave $\beta^*$, that is, \begin{equation} \theta=\alpha-\gamma \lesssim \ensuremath{\mathcal{I}}(\alpha^*,\beta^*)+|\beta^*|. \end{equation} From the definition of $\ensuremath{\mathcal{I}}$ (see Section \ref{interaction amount}), we know that here $\ensuremath{\mathcal{I}}(\alpha^*,\beta^*)=|\alpha^*\beta^*|$, so we get $|\beta^*|\gtrsim \theta$ since $|\alpha^*|\ll 1$. Therefore, since $|\alpha^*|\geq \epsilon/2$, we obtain \begin{equation*} \ensuremath{\mathcal{I}}(\alpha^*,\beta^*)\gtrsim \epsilon\theta. \end{equation*} By the definition of the interaction measure \eqref{imeas}, one obtains \begin{equation}\label{ime} \mu^I_\nu(\{P\})\gtrsim \epsilon \theta. \end{equation} Next we consider the case when $i=j$ and $\alpha$ is on the left of $\beta^*$. (The case when $\alpha$ is on the right of $\beta^*$ is similar.) First we assume that $\alpha>0,\ \beta^*<0$. Since the decrease of the $(i,k)$-substrength can only be caused by interaction and cancellation effects, similarly one has \begin{equation*} \theta\lesssim |\beta^*|+ \ensuremath{\mathcal{I}}(\alpha^*,\beta^*). \end{equation*} Second, we assume that $\alpha>0,\ \beta^*>0$. From the equality \eqref{e:diffwave_L}, one knows that \begin{equation}\label{e:same_wave_L} \langle l^0_i, \tilde u^L-u^L\rangle \lesssim \ensuremath{\mathcal{I}}(\alpha^*,\beta^*). \end{equation} From the equality \eqref{e:diffwave_R}, \begin{equation*} \begin{split} &\langle l^0_i, u^M-\tilde u^R\rangle+\langle l^0_i, u^M-u^R\rangle \\ = & \langle l^0_i, u^R-\tilde u^R\rangle\lesssim \ensuremath{\mathcal{I}}(\alpha^*,\beta^*). \end{split} \end{equation*} Since $\langle l^0_i, u^M-u^R\rangle =-\beta^*<0$, one has \begin{equation}\label{e:same_wave_R} \begin{split} \langle l^0_i, u^M-\tilde u^R\rangle \lesssim \ensuremath{\mathcal{I}}(\alpha^*,\beta^*). \end{split} \end{equation} Noticing that \eqref{e:same_wave_L} and \eqref{e:same_wave_R} imply that the decrease of the $(i,k)$-substrength is controlled by the amount of interaction, one has \[ \theta \lesssim \ensuremath{\mathcal{I}}(\alpha^*,\beta^*). \] Therefore, from the definition of $\mu^{IC}_\nu$ \eqref{icmeas}, in the case $i=j$ one has \begin{equation}\label{canem} \mu^{IC}_\nu(\{P\})\gtrsim \theta.
\end{equation} By (2) of Definition \ref{d:asc}, the $(i,k)$-substrength of every front outgoing from the terminal point of $y_\nu$ must be less than $\epsilon/2$, while at least one front on $y_\nu$ has $(i,k)$-substrength larger than $\epsilon$. Then, by \eqref{ime} and \eqref{canem}, one concludes that the claim \eqref{claim_mu} is true. \end{proof} \vspace{12pt} Up to a subsequence, one can assume that $M^{k}_{i,\nu}(\epsilon) = \bar{M}^{k}_{i}(\epsilon)$ is a constant independent of $\nu$. Denote by \begin{equation*} y_{m,\nu}^{k,\epsilon}:[t_{m,\nu}^{k,\epsilon-},t_{m,\nu}^{k,\epsilon+}]\rightarrow \ensuremath{\mathbb{R}},\qquad m=1,\cdots ,\bar{M}^{k}_{i}(\epsilon), \end{equation*} the maximal $(\epsilon,i,k)$-approximate subdiscontinuity curves in $\ensuremath{u_\nu}$. Define $\ensuremath{\mathscr{T}}^k_{i,\nu}(\epsilon)$ as the collection of all maximal $(\epsilon,i,k)$-approximate subdiscontinuity curves in $\ensuremath{u_\nu}$ for fixed $\epsilon,i$ and $k$, i.e. \begin{equation*} \ensuremath{\mathscr{T}}^k_{i,\nu}(\epsilon)=\{y_{m,\nu}^{k,\epsilon}: \ m=1,\cdots ,\bar{M}^{k}_{i}(\epsilon)\}, \end{equation*} and set \begin{equation*} \mathscr{T}_{i,\nu}:= \bigcup_{k,\epsilon}\mathscr{T}_{i,\nu}^k(\epsilon), \end{equation*} the family of all approximate subdiscontinuity curves of the $i$-th family in $u_\nu$. Up to a diagonal argument and a suitable labeling of the curves, one can assume that, for each fixed $k$, $\epsilon$ and $m$, as $\nu\rightarrow \infty$ the Lipschitz continuous curves $y_{m,\nu}^{k,\epsilon}$ converge uniformly to some Lipschitz continuous curves $y_{m}^{k,\epsilon}$, which are called \emph{$(\epsilon,i,k)$-subdiscontinuity curves}. Let us denote by \begin{equation*} \mathscr{T}_i^k(\epsilon) := \{y_{m}^{k,\epsilon}: \ m=1,\cdots, \bar{M}^{k}_{i}(\epsilon)\} \end{equation*} the collection of all these limiting curves for fixed $i,k,\epsilon$, and let \begin{equation*} \mathscr{T}_i:=\bigcup_{k,\epsilon}\mathscr{T}_i^k(\epsilon) \end{equation*} denote the collection of all these \emph{$i$-subdiscontinuity curves}. \begin{lemma} Let $y^{k}_m :]t^-_m,t^+_m[\rightarrow \ensuremath{\mathbb{R}}$ be an $(\epsilon,i,k)$-subdiscontinuity curve. If $t\in ]t^-_m,t^+_m[$ is such that $(t,y^{k}_m(t))\notin \Theta$, then the derivative $\dot{y}^{k}_m(t)$ exists. \end{lemma} \begin{proof} There exist $\epsilon_0>0$ and $\ensuremath{y^{k}_{m,\nu}} \in \ensuremath{\mathscr{T}}^k_{i,\nu}(\epsilon_0)$ such that \[ \ensuremath{y^{k}_{m,\nu}}\rightarrow \ensuremath{y^{k}_m} \] as $\nu\rightarrow \infty$, and the substrength of $\ensuremath{y^{k}_{m,\nu}}$ satisfies $|s^{k}_i|\geq \epsilon_0$. In the wave-front tracking approximation, the change of speed of the $i$-subdiscontinuity fronts is controlled by the measure $\mu^{IC}_\nu$, and $(t,\ensuremath{y^{k}_m}(t))$ is not an atom of $\mu^{IC}$; hence, for any $\delta_\nu\rightarrow 0$, we deduce that \begin{equation*} \limsup_{\nu\rightarrow \infty}\sup_{|t-t'|<\delta_\nu}|\dot{y}^{k}_{m,\nu}(t)-\dot{y}^{k}_{m,\nu} (t')|=0. \end{equation*} From the uniform convergence $\ensuremath{y^{k}_{m,\nu}}\rightarrow \ensuremath{y^{k}_m}$ on a neighborhood of $t$, we obtain \begin{equation}\label{subdis.-conv.} \dot{y}_m^{k}(t)=\lim_{\nu\rightarrow \infty} \dot{y}^{k}_{m,\nu}. \end{equation} \end{proof} We now recall the definition of generalized characteristics, which will be used in the proof of Theorem \ref{t:main theorem}.
\begin{definition} A \emph{generalized $i$-characteristic} associated with the approximate solution $u_\nu$, on the time interval $[t_1,t_2]\subset [0,\infty)$, is a Lipschitz continuous function $\chi:[t_1,t_2]\rightarrow (-\infty,\infty)$ which satisfies the differential inclusion \[ \dot{\chi}(t)\in [\lambda_i(u_\nu(\chi(t)+,t)),\lambda_i(u_\nu(\chi(t)-,t))]. \] \end{definition} For any given $(T,\bar{x})\in \ensuremath{\mathbb{R}}^+\times\ensuremath{\mathbb{R}}$, we consider the \emph{minimal (maximal) generalized $i$-characteristic} through $(T,\bar{x})$ defined as \[ \chi^{-(+)}(t)=\min(\max)\{\chi(t):\chi \text{ is a generalized characteristic, }\chi(T)=\bar{x}\}. \] The properties of approximate solutions yield that there is no wave-front of the $i$-th family crossing $\chi^+$ from the left or crossing $\chi^-$ from the right. Suppose $\ensuremath{\mathscr{T}}_{i,\nu} \ni y'_\nu:[t'^-_\nu,t'^+_\nu]\rightarrow \ensuremath{\mathbb{R}}$ and $ \ensuremath{\mathscr{T}}_{i,\nu} \ni y''_\nu:[t''^-_\nu,t''^+_\nu]\rightarrow \ensuremath{\mathbb{R}}$. By Lemma \ref{l:non-crossing} and the definition of $(\epsilon,i,k)$-approximate subdiscontinuity curves, it turns out that either \[ y'_\nu(t)\leq \ y''_\nu(t),\quad \forall t\in [t'^-_\nu,t'^+_\nu]\cap [t''^-_\nu,t''^+_\nu], \] or \[ y'_\nu(t)\geq \ y''_\nu(t), \quad \forall t\in [t'^-_\nu,t'^+_\nu]\cap [t''^-_\nu,t''^+_\nu]. \] This makes the following definition well posed. \begin{definition}\label{shock order} Suppose $y'\in \ensuremath{\mathscr{T}}^{k'}_{i}(\epsilon'), \ y'' \in \ensuremath{\mathscr{T}}^{k''}_{i}(\epsilon'')$, where $k'$ and $k''$ are both even (odd) numbers, suppose $y'_\nu\rightarrow y',\ y''_\nu \rightarrow y''$ as $\nu\rightarrow \infty$, and assume there exists a point $(t_0,x_0)$ with $t_0>0$ such that $y'(t_0)=y''(t_0)=x_0$. We say $y'\prec y''$ if there exists a neighborhood $[t^-,t^+]$ of the time $t_0$ such that \begin{itemize} \item $y'(t) \leq y''(t)$ for all $t \in [t^-,t^+]$, \item either there exists $t^* \in [t^-,t^+]$ such that $y'(t^*) < y''(t^*)$, or $y'(t)=y''(t)$ for all $t \in [t^-,t^+]$ and $k'<k''\ (k'>k'')$. \end{itemize} \end{definition} The next lemma rules out the situation in which two subdiscontinuity curves with strengths of different signs are tangent at a point which is not an atom of the interaction and cancellation measure. \begin{lemma}\label{l:int of two subcurves} Suppose that $y_1\in\ensuremath{\mathscr{T}}^{k_1}_i,\ y_2\in\ensuremath{\mathscr{T}}^{k_2}_i$ with $k_1$ even and $k_2$ odd, and assume that $y_1$ and $y_2$ pass through the same point $(x_0,t_0)$ with $\mu^{IC}(\{(x_0,t_0)\})=0$, and that there is no subdiscontinuity curve $y_0$ such that \[ y_1(t)\leq y_0(t)\leq y_2(t) \] on some neighborhood of $t_0$. Then $\dot{y}_1(t_0)\ne \dot{y}_2(t_0)$. \end{lemma} \begin{proof} Suppose $\ensuremath{\mathscr{T}}^{k_1}_{i,\nu}\ni y_{1,\nu}\to y_1,\ \ensuremath{\mathscr{T}}^{k_2}_{i,\nu}\ni y_{2,\nu}\to y_2$ and $t_{1,\nu},t_{2,\nu}\to t_0$. Let us denote the points $y_{1,\nu}(t_{1,\nu}),\ y_{2,\nu}(t_{2,\nu})$ by $A_\nu, B_\nu$ respectively. Since there is no subdiscontinuity curve between $y_1$ and $y_2$, the strengths of all fronts crossing the segment $\overline{A_\nu B_\nu}$ tend to zero. Moreover, the total strength of the fronts of the other families tends to zero. In fact, if not, either they are canceled in a neighborhood of $(x_0,t_0)$ or they interact with $y_{1,\nu},\ y_{2,\nu}$, which implies the uniform positivity of $\mu_\nu^{IC}$ on a small region $\Gamma_\nu$.
This contradicts the assumption that $\mu^{IC}(\{(x_0,t_0)\})=0$. Therefore, the values of each $u_\nu$ along the segment $\overline{A_\nu B_\nu}$ remain arbitrarily close to the $i$-rarefaction curve. On the other hand, \emph{since the signs of the strengths of $y_1$ and $y_2$ are different}, one can always find $A'_\nu,\ B'_\nu$ on $\overline{A_\nu B_\nu}$ and a positive constant $c$ such that $u_\nu(A'_{\nu}),u_\nu(B'_{\nu})\in \Delta^k_i$ for some $k$ and $|u_\nu(A'_{\nu})-u_\nu(B'_{\nu})|>c$, therefore \[ |\lambda_i(u_\nu(A'_\nu))-\lambda_i(u_\nu(B'_\nu))|\gtrsim c. \] Up to a subsequence, we can assume that for all $\nu$ \begin{subequations} \label{char.diff} \begin{equation} \label{char.diff-1} \lambda_i(u_\nu(A'_\nu))-\lambda_i(u_\nu(B'_\nu))>c/2, \end{equation} \begin{equation} \label{char.diff-2} \text{or } \lambda_i(u_\nu(A'_\nu))-\lambda_i(u_\nu(B'_\nu))<-c/2. \end{equation} \end{subequations} Let us consider the case \eqref{char.diff-1}; the other case is analogous. We take $\chi^+$ through $A'_\nu$ and $\chi^-$ through $B'_\nu$. Since $A'_\nu$ and $B'_\nu$ are in the same $\Delta_i^k$, if no uniformly large interaction occurs on $\chi^+,\ \chi^-$, they will intersect each other. We consider the region $\Gamma_\nu$ bounded by $\overline{A_\nu B_\nu},\ \chi^+$ and $\chi^-$. Since no fronts can leave $\Gamma_\nu$ through $\chi^+$ or $\chi^-$, by \eqref{d:interation amount ijwaves} and \eqref{ICmeas} we obtain that $\mu^{I}_\nu(\Gamma_\nu)\gtrsim c$, which contradicts the assumption $\mu^{IC}(\{(x_0,t_0)\})=0$. \end{proof} \section{Proof of Theorem \ref{t:main theorem}}\label{s:pf} Before proving the theorem, we recall the definition of space-like curves. \begin{definition} Let $\hat{\lambda}$ be a constant larger than the absolute value of all characteristic speeds. We say a curve $x=y(t),\ t\in[a,b]$, is \emph{space-like} if \[ |y(t_2)-y(t_1)|>\hat{\lambda}(t_2-t_1)\quad {\rm for\ all}\ a<t_1<t_2<b. \] \end{definition} From the definition one knows that any front can cross a space-like curve at most once. \begin{proof}[Proof of Theorem \ref{t:main theorem}] Let $\Theta$ consist of all jump points of the initial data, the atoms of the interaction and cancellation measure $\mu^{IC}$, and the points where two subdiscontinuity curves of different families cross each other. Consider a point $P=(\tau,\xi)\notin \Theta$. Since $u(\cdot,\tau)$ has bounded variation, there exist the limits \[ u^\mathrm{L}:=\lim_{x\rightarrow \xi-}u(x,\tau),\qquad u^\mathrm{R}:=\lim_{x\rightarrow \xi+}u(x,\tau). \] Assume that $\ensuremath{u^\mathrm{R}}=T_i[\ensuremath{u^\mathrm{L}}](s)$. We only consider the case $s>0$; the case $s<0$ is analogous. Applying the tame oscillation condition (see p.295 of \cite{BB}), one obtains \begin{equation}\label{e:lim in triangle} \lim_{\genfrac{}{}{0pt}{}{(x,t)\rightarrow(\xi,\tau)}{\tau\leq t <\tau+(\xi-x)/\hat{\lambda}} }u(x,t) = u^\mathrm{L},\quad \lim_{\genfrac{}{}{0pt}{}{(x,t)\rightarrow(\xi,\tau)}{\tau\leq t <\tau+(x-\xi)/\hat{\lambda}} }u(x,t) = u^\mathrm{R}, \end{equation} for some constant $\hat \lambda$ larger than all characteristic speeds. Suppose that there are $i$-subdiscontinuity curves $y_1^{k_1},\cdots,y_l^{k_l}$ satisfying $y_j^{k_j}\in \ensuremath{\mathscr{T}}^{k_j}_i(\epsilon_j)$ with $\epsilon_j\ne 0,\ 1\leq j\leq l$, \[ y^{k_1}_1(\tau)=\cdots=y^{k_l}_l(\tau)=\xi, \] and that there is no other subdiscontinuity curve passing through the point $P$. This can be done because of the conclusion of Lemma \ref{l:int of two subcurves} and the fact that $P\notin \Theta$.
It is easy to show that the $\epsilon_j$ must all have the same sign, i.e. $\epsilon_{j_1}\epsilon_{j_2}>0$ for any $j_1,j_2\in\{1,\cdots,l\}$. By rearranging the indices, we can assume that \begin{equation*} y^{k_1}_1\prec y^{k_2}_2\prec \cdots \prec y^{k_l}_l, \end{equation*} and \begin{itemize} \item[(H)] there is no $y^{k_0}_0 \in \ensuremath{\mathscr{T}}_i$ such that $ y^{k_0}_0 \prec y^{k_1}_1$ with $y^{k_0}_0(\tau)=y^{k_1}_1(\tau)$, or $y^{k_{l}}_{l}\prec y^{k_{0}}_{0}$ with $y^{k_{0}}_{0}(\tau)=y^{k_{l}}_{l}(\tau)$. \end{itemize} {\bf Step 1.} By the definition, there exist $y_{1,\nu}^{k_1},\cdots, y_{l,\nu}^{k_l}\in \ensuremath{\mathscr{T}}_{i,\nu}$ such that $y_{j,\nu}^{k_j}\rightarrow y_j^{k_j},\ \forall j\in\{1,\cdots,l\}$. We claim that \begin{subequations} \label{e:left_right_lim} \begin{equation} \label{e:leftlim} \lim_{r \rightarrow 0+} \limsup_{\nu \rightarrow \infty} \left( \sup_{\genfrac{}{}{0pt}{}{x<y_{1,\nu}^{k_1}(t)}{(x,t)\in B(P,r)}} \big| u_\nu(x,t) - u^\mathrm{L} \big| \right) = 0, \end{equation} \begin{equation} \label{e:rightlim} \lim_{r \rightarrow 0+} \limsup_{\nu \rightarrow \infty} \left( \sup_{\genfrac{}{}{0pt}{}{x>y_{l,\nu}^{k_l}(t)} {(x,t)\in B(P,r)}} \big| u_\nu(x,t) - u^\mathrm{R} \big| \right) = 0, \end{equation} \end{subequations} where $B(P,r)$ is the ball centred at $P$ with radius $r$. Indeed, if \eqref{e:leftlim} is not true, then by the first limit in \eqref{e:lim in triangle} and the pointwise a.e. convergence $u_\nu \rightarrow u$, there exist two sequences of points $P_\nu,\ Q_\nu$ converging to $P$, lying on the left of $\ensuremath{y^{k_1}_{1,\nu}}$, such that the segment $\overline{\ensuremath{P_\nu} \ensuremath{Q_\nu}}$ is space-like, \[ u_\nu(P_\nu)\rightarrow u^\mathrm{L}, \] and \[ |u_\nu(P_\nu)-\ensuremath{u_\nu}(Q_\nu)|\geq \epsilon_0 \] for some $\epsilon_0>0$. It is not restrictive to assume that the direction $\overrightarrow{P_\nu Q_\nu}$ points towards $\ensuremath{y^{k_1}_{1,\nu}}$. Let $\Lambda_j(\ensuremath{\overline{P_\nu Q_\nu}})$ be the total wave strength of the wave-fronts of the $j$-th family which cross the segment $\ensuremath{\overline{P_\nu Q_\nu}}$. Then one has $\Lambda_j(\ensuremath{\overline{P_\nu Q_\nu}})\gtrsim \epsilon_0$ for some $j\in\{1,\cdots,N\}$. We consider the following three cases. {\bf Case 1.} If $j>i$, we take the maximal forward generalized $j$-characteristic $\chi^+$ through $P_\nu$ and the minimal generalized $j$-characteristic $\chi^-$ through $Q_\nu$. If $\chi^+$ and $\chi^-$ intersect each other at $O_\nu$ before hitting $\ensuremath{y^{k_1}_{1,\nu}}$, we consider the region $\Gamma_\nu$ bounded by $\ensuremath{\overline{P_\nu Q_\nu}},\ \chi^+$ and $\chi^-$; since no fronts can leave $\Gamma_\nu$ through $\chi^+$ or $\chi^-$, by \eqref{d:interation amount ijwaves} and \eqref{ICmeas} we obtain that the amount of interaction and cancellation $\ensuremath{\mu^{IC}_\nu}(\bar \Gamma_\nu)$ for $u_\nu$ within the closure of $\Gamma_\nu$ remains uniformly positive as $\nu \to \infty$. If $\chi^+$ intersects $\ensuremath{y^{k_1}_{1,\nu}}$ at $A_\nu$ and $\chi^-$ intersects $\ensuremath{y^{k_1}_{1,\nu}}$ at $B_\nu$, we consider the region $\Gamma_\nu$ bounded by $\ensuremath{\overline{P_\nu Q_\nu}},\ \chi^+,\ \chi^-$ and $\ensuremath{y^{k_1}_{1,\nu}}$. Then either there exists a constant $0<c'_0<1$ such that $\ensuremath{\mu^{IC}_\nu}(\Gamma_\nu)>c'_0 \epsilon_0$, or there exists a constant $0<c''_0<1$ such that fronts with total strength larger than $c''_0 \epsilon_0$ hit $\overline{A_\nu B_\nu}$.
In this second situation, by \eqref{d:interation amount ijwaves} and \eqref{ICmeas} we can again conclude that $\ensuremath{\mu^{IC}_\nu}(\bar{\Gamma}_\nu)\gtrsim \epsilon_0$ uniformly. In both of the above situations, $\Gamma_\nu$ is contained in a ball $B(P,\r_\nu)$ with $\r_\nu\to 0$ as $\nu\to \infty$, which implies that $\ensuremath{\mu^{IC}}(\{P\})>0$. This contradicts the assumption $P\notin \Theta$. {\bf Case 2.} If $j<i$, we consider the minimal backward generalized $j$-characteristic through the point $P_\nu$ and the maximal backward generalized $j$-characteristic through the point $Q_\nu$. Then, by an argument similar to the case $j>i$, we get $\ensuremath{\mu^{IC}}(\{P\})>0$, against the assumptions. {\bf Case 3.} Suppose $j=i$ and, for any $ j'\ne i,\ 1\leq j' \leq N$, $\Lambda_{j'}(\ensuremath{\overline{P_\nu Q_\nu}})\rightarrow 0$ as $\nu \rightarrow \infty$. We claim that the maximum of the strengths of all fronts which cross $\ensuremath{\overline{P_\nu Q_\nu}}$ tends to zero as $\nu \rightarrow \infty$. If this is not true, then, since $\Lambda_{j'}(\ensuremath{\overline{P_\nu Q_\nu}})\rightarrow 0$ for $j'\ne i$, there must be fronts of the $i$-th family across $\ensuremath{\overline{P_\nu Q_\nu}}$ with uniformly large strength. Up to a subsequence, we may assume that their $(i,k_0)$-substrengths $s^{k_0}_\nu$ are uniformly large for some $k_0\in \{1,\cdots,\bar{k}_i\}$, that is, $|s^{k_0}_\nu|\gtrsim \epsilon$ for some $\epsilon>0$. Then by Definition \ref{d:asc} there must be, for some $\epsilon_0>0$, $(\epsilon_0,i,k_0)$-approximate subdiscontinuity curves $y^{k_0}_{0,\nu}$ which contain the wave fronts $s^{k_0}_\nu$ (we denote the fronts by their strengths for convenience); since the $y^{k_0}_{0,\nu}$ are uniformly Lipschitz continuous curves, up to a subsequence there is a Lipschitz continuous curve $y^{k_0}_0$ such that \[ y^{k_0}_{0,\nu} \rightarrow y^{k_0}_0\in \ensuremath{\mathscr{T}}^{k_0}_i,\quad \nu \rightarrow \infty, \] and $y^{k_0}_0(\tau)=\xi$. By Definition \ref{shock order} we obtain $y^{k_0}_0\prec y^{k_1}_1$, which contradicts the assumption (H). So we can always choose $\ensuremath{Q_\nu}',\ \ensuremath{P_\nu}'\in \overline{\ensuremath{P_\nu} \ensuremath{Q_\nu}}$ such that \[ \ensuremath{u_\nu}(\ensuremath{Q_\nu}')\rightarrow u^\mathrm{L} \] and \[ |\ensuremath{u_\nu}(\ensuremath{Q_\nu}')-\ensuremath{u_\nu}(\ensuremath{P_\nu}')|\geq c_0\epsilon_0, \] where $0<c_0<1$ and $\ensuremath{u_\nu}(\ensuremath{Q_\nu}'),\ \ensuremath{u_\nu}(\ensuremath{P_\nu}')$ lie in the same $\Delta_i^k$ for some $k$. Since, for $j'\ne i$, $\Lambda_{j'}(\ensuremath{\overline{P_\nu Q_\nu}})$ is arbitrarily small when $\nu$ is large enough, and the strength of each front of the $i$-th family is small, one has \[ \max_{H_\nu\in \overline{P'_\nu Q'_\nu}}\min_{s^*\geq 0}|u_\nu(H_\nu)-R_i[u_\nu(P'_\nu)](s^*)|\ll 1, \] which means that the values of each $u_\nu$ along the segment $\overline{P'_\nu Q'_\nu}$ remain arbitrarily close to the $i$-rarefaction curve through $u^\mathrm{L}$. Then, by an argument analogous to the proof of Lemma \ref{l:int of two subcurves}, one gets the contradiction $\ensuremath{\mu^{IC}}(\{P\})>0$. Therefore, we conclude that \eqref{e:leftlim} is true; \eqref{e:rightlim} is proved similarly. {\bf Step 2.} Define $\ensuremath{\mathscr{T}}=\bigcup_i \ensuremath{\mathscr{T}}_i$.
If $P\notin \Theta\cup \mathrm{Graph}(\ensuremath{\mathscr{T}})$ and $u$ is not continuous at $P$, then there exist $\epsilon>0$ and $\ensuremath{P_\nu},\ Q_\nu\rightarrow P$ such that $\ensuremath{\overline{P_\nu Q_\nu}}$ is space-like and \begin{equation*} \ensuremath{u_\nu}(\ensuremath{P_\nu})\rightarrow u(P), \qquad |\ensuremath{u_\nu}(\ensuremath{Q_\nu})-u(P)|\geq \epsilon \text{ for all }\nu. \end{equation*} Up to a subsequence, we consider the following two cases. \begin{itemize} \item[1)] There exist $j\ne j'$ such that $\min\{\Lambda_j(\ensuremath{\overline{P_\nu Q_\nu}}),\ \Lambda_{j'}(\ensuremath{\overline{P_\nu Q_\nu}})\} \geq \epsilon_0$. This situation can be ruled out by the argument in Case 1 of Step 1. \item[2)] For some $j\in\{1,\cdots,N\}$ and all $\nu$, $\Lambda_j(\ensuremath{\overline{P_\nu Q_\nu}}) \geq\epsilon_0$, while for all $ j'\ne j$, $\Lambda_{j'}(\ensuremath{\overline{P_\nu Q_\nu}})\rightarrow 0$ as $\nu\rightarrow \infty$. Then one can use the argument in Case 3 of Step 1 to obtain a contradiction. \end{itemize} Therefore we get the continuity of $u$ outside $\mathrm{Graph}(\ensuremath{\mathscr{T}})\cup \Theta$, which proves the first part of the theorem. {\bf Step 3.} We now establish the Rankine-Hugoniot condition \eqref{d:R-H condition} for curves in $\ensuremath{\mathscr{T}}$. Let $P=(t_0,x_0)\in \mathrm{Graph}(\ensuremath{\mathscr{T}})\setminus\Theta$, and write \begin{equation*} \ensuremath{u^\mathrm{L}}=\lim_{x\rightarrow x_0-} u(x,t_0),\qquad \ensuremath{u^\mathrm{R}}=\lim_{x\rightarrow x_0+} u(x,t_0). \end{equation*} We consider two cases. {\bf Case 1.} There is only one curve $y \in \ensuremath{\mathscr{T}}_i$ passing through the point $P$. From \eqref{e:left_right_lim}, we know that the discontinuity $[\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}}]$ must be simple. Suppose that $\ensuremath{\mathscr{T}}_{i,\nu} \ni y_\nu \rightarrow y$ as $\nu\rightarrow \infty$. By \eqref{R_H} and \eqref{front speed error}, we obtain \begin{equation*} \begin{split} \sigma_i(\ensuremath{u^\mathrm{L}}_\nu,\ensuremath{u^\mathrm{R}}_\nu)[\ensuremath{u^\mathrm{L}}_\nu-\ensuremath{u^\mathrm{R}}_\nu]=f(\ensuremath{u^L_\nu})-f(\ensuremath{u^R_\nu}),\\ |\dot{y}_\nu-\sigma_i(\ensuremath{u^L_\nu},\ensuremath{u^R_\nu})|<2\epsilon_\nu, \end{split} \end{equation*} where \begin{equation*} \ensuremath{u^\mathrm{L}}_\nu=\lim_{x\rightarrow y_\nu(t_\nu)-} u_\nu(t_\nu, x),\qquad \ensuremath{u^\mathrm{R}}_\nu=\lim_{x\rightarrow y_\nu(t_\nu)+} u_\nu(t_\nu, x), \quad t_\nu\rightarrow t_0 \ \text{as}\ \nu\rightarrow \infty. \end{equation*} Then, by \eqref{e:left_right_lim}, for every $\epsilon>0$ there exists $\bar{\nu}(\epsilon)$ such that, $\forall \nu>\bar{\nu}$, \begin{equation*} |\dot{y}_\nu-\sigma_i(\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}})|\leq |\dot{y}_\nu-\sigma_i(\ensuremath{u^L_\nu},\ensuremath{u^R_\nu})|+|\sigma_i(\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}})-\sigma_i(\ensuremath{u^L_\nu},\ensuremath{u^R_\nu})|<\epsilon. \end{equation*} From \eqref{subdis.-conv.} and the fact that the $i$-waves of the Riemann problem constructed by Theorem \ref{t:ec} are Liu admissible, one deduces that \[ \dot{y}(t_0)(u^\mathrm{R}-u^\mathrm{L})=f(u^\mathrm{R})-f(u^\mathrm{L}), \] and \begin{equation*} \dot{y}(t_0)\leq \hat \sigma_i(S_i[\ensuremath{u^\mathrm{L}}](\tau),\ensuremath{u^\mathrm{L}}),\ \forall \tau\in [0,s].
\end{equation*} {\bf Case 2.} Suppose the discontinuity $[\ensuremath{u^\mathrm{L}},\ensuremath{u^\mathrm{R}}]$ is a composition of $[u_0,u_1],\ [u_1,u_2],\cdots,[u_l,u_{l+1}]$, where $u_0=\ensuremath{u^\mathrm{L}}$, $u_{l+1}=\ensuremath{u^\mathrm{R}}$, $u_j=T_i[\ensuremath{u^\mathrm{L}}](s_j)$ and $\ensuremath{u^\mathrm{L}}\in\Delta^{k^*_1}_i,\ \ensuremath{u^\mathrm{R}}\in\Delta^{k^*_2}_i$. Let \begin{align*} k_1&=\min\{k\ \text{even}:\ k\geq k^*_1\},\\ k_{p}&=\max\{k\ \text{even}:\ k\leq k^*_2\}, \end{align*} and define $p$ by $k_p=k_1+2(p-1)$. One has $p\geq l+1$. According to \eqref{e:left_right_lim}, there exist $P_\nu,Q_\nu\rightarrow P$ such that \[ u_\nu(P_\nu)\rightarrow \ensuremath{u^\mathrm{L}},\ u_\nu(Q_\nu)\rightarrow \ensuremath{u^\mathrm{R}}, \] and the segment $\ensuremath{\overline{P_\nu Q_\nu}}$ is space-like. We claim that there exist $p$ subdiscontinuity curves $y_1,\cdots,y_{p} \in \ensuremath{\mathscr{T}}_i$ passing through the point $P$, with $y_j\in\ensuremath{\mathscr{T}}^{k_1+2(j-1)}_i,\ j=1,\cdots,p$. In fact, let $\S^k_i(\ensuremath{\overline{P_\nu Q_\nu}})$ denote the maximal $(i,k)$-substrength of all fronts across $\ensuremath{\overline{P_\nu Q_\nu}}$. It is sufficient to show that, up to a subsequence, there is a constant $C>0$ such that \[ \S^k_i(\ensuremath{\overline{P_\nu Q_\nu}})\geq C,\ \forall k\in\{k_1,\cdots,k_{p}\}. \] If not, there exists $k_0\in \{k_1,\cdots,k_{p}\}$ such that $\S^{k_0}_i(\ensuremath{\overline{P_\nu Q_\nu}})\rightarrow 0$ as $\nu\rightarrow \infty$. Since $\mu^{IC}(\{P\})=0$, we have \[ \Lambda_j(\ensuremath{\overline{P_\nu Q_\nu}})\rightarrow 0\ \text{as}\ \nu\rightarrow \infty,\ \forall j\ne i. \] By Lemma \ref{l:int of two subcurves}, we conclude that $\S^k_i(\ensuremath{\overline{P_\nu Q_\nu}})\rightarrow 0$ as $\nu\rightarrow \infty$ for all $k$ odd. Since the $(i,k_0)$-substrengths of all wave fronts are arbitrarily small, up to a subsequence one can always find points $P'_\nu,Q'_\nu$ on $\ensuremath{\overline{P_\nu Q_\nu}}$ such that $u_\nu(P'_\nu),\ u_\nu(Q'_\nu)\in \Delta^{k_0}_i$ and all fronts are either admissible discontinuities with left and right values inside $\Delta^{k_0}_i$ or rarefaction fronts. Therefore, by an argument analogous to Case 3 of Step 1, there exists a constant $c>0$ independent of $\nu$ such that, in a small neighborhood $\Gamma_\nu$ of $P$, one has $\mu^{IC}_\nu(\Gamma_\nu)\geq c$. This contradicts the assumption $\mu^{IC}(\{P\})=0$ and proves the claim. Moreover, by Lemma \ref{l:int of two subcurves} and the equalities \eqref{e:left_right_lim}, there are exactly $p$ subdiscontinuity curves passing through $P$; otherwise $P$ would be an interaction point with $\mu^{IC}(\{P\})>0$. Suppose $u_{n+1}=T_i[u_n](s_n)$ for some $s_n>0$, $n\in\{0,\cdots,l\}$, and $T_i[u_n](\cdot)$ intersects $Z_i^{k_{n_1}},\cdots,Z_i^{k_{n_q}}$ at $u_{n,1},\cdots,u_{n,q}$. Then the subdiscontinuity curves with substrengths $s^{k_{n_1}}_i,\cdots,s^{k_{n_q}}_i$ must coincide in a neighborhood of the time $t_0$. If in a neighborhood of $P$ the curves $y_j$ and $y_{j+1}$ are not identical, by an argument similar to the proof of \eqref{e:left_right_lim} one can show that \eqref{e:lim inside} is true. Suppose $\ensuremath{\mathscr{T}}_{i,\nu} \ni y_{j,\nu}\rightarrow y_{j},\ j\in \{1,\cdots,p\}$. As discussed above, it is not restrictive to assume that $p=l+1$ and that there is a neighborhood $U(t_0)$ of $t_0$ such that, $\forall t\in U(t_0)$, \[ y_{1,\nu}(t)<\cdots<y_{p,\nu}(t).
\] Then, using an argument similar to Step 1, besides \eqref{e:leftlim} and \eqref{e:rightlim} one can also show that \begin{equation*} \begin{split} & \lim_{r \rightarrow 0+} \limsup_{\nu \rightarrow \infty} \left( \sup_{\genfrac{}{}{0pt}{}{x<y_{m,\nu}^{k_m}(t)}{(x,t)\in B(P,r)}} \big| u_\nu(x,t) - u_{m-1} \big| \right) = 0, \ m=2,\cdots, p,\\ &\lim_{r \rightarrow 0+} \limsup_{\nu \rightarrow \infty} \left( \sup_{\genfrac{}{}{0pt}{}{x>y_{n,\nu}^{k_n}(t)}{(x,t)\in B(P,r)}} \big| u_\nu(x,t) - u_{n} \big| \right) = 0,\ n=1,\cdots,l. \end{split} \end{equation*} For notational convenience, we write $u_0=u^\mathrm{L},\ u_{p}=u^\mathrm{R}$. Therefore, by the same argument as in Case 1, we obtain, for $n=1,\cdots,p$, \[ \dot{y}_n(t_0)(u_n-u_{n-1})=f(u_n)-f(u_{n-1}) \] and \[ \dot{y}_n(t_0)\leq \hat \sigma_i(u_n,S_i[\ensuremath{u^\mathrm{L}}](\tau)),\ \forall \tau\in [s_{n-1},s_{n}]. \] Adding these together, one finally obtains, for $m=1,\cdots,p$, \[ \dot{y}_m(t_0)(u^\mathrm{R}-u^\mathrm{L})=f(u^\mathrm{R})-f(u^\mathrm{L}) \] and \begin{equation*} \dot{y}_m(t_0)\leq \hat \sigma_i(\ensuremath{u^\mathrm{L}},S_i[\ensuremath{u^\mathrm{L}}](\tau)),\quad \forall \tau\in [0,s]. \end{equation*} \end{proof} The proof of Theorem \ref{t:main theorem} already implies the results of Theorem \ref{t:approx_shock_conv}. Considering a sequence of exact solutions of \eqref{basic equation} such that $u_\nu\rightarrow u$ in $L^1_{loc}$, one can approximate each $u_\nu$ by a sequence of wave-front tracking approximations $u_{m,\nu}\rightarrow u_\nu$, and by taking a suitable diagonal sequence $u_{m(\nu),\nu}\to u$ one obtains the following corollary. \begin{corollary}[stability of discontinuity curves] Consider a sequence of exact solutions $u_\nu$ such that $u_\nu\rightarrow u$ in $L^1_{loc}$. Then: \begin{enumerate} \item Let $y_\nu:[t^-_\nu,t^+_\nu]\rightarrow \ensuremath{\mathbb{R}}$ be discontinuity curves of $u_\nu$ as described in Theorem \ref{t:main theorem}. Assume $t^-_\nu\to t^-$, $t^+_\nu\to t^+$ and $y_\nu(t)\to y(t)$ for each $t\in[t^-,t^+]$. Then $y(\cdot)$ is a discontinuity curve of the limiting solution $u$ with the properties stated in Theorem \ref{t:main theorem}. \item Vice versa, let $y:[t^-,t^+]\rightarrow \ensuremath{\mathbb{R}}$ be a discontinuity curve of $u$ for a.e. $t\in[t^-,t^+]$. Then there exists a sequence of discontinuity curves $y_\nu:[t^-_\nu,t^+_\nu]\rightarrow \ensuremath{\mathbb{R}}$ of $u_\nu$ such that \[ t^-_\nu\to t^-,\ t^+_\nu\to t^+,\quad \lim_{\nu\to \infty}y_\nu(t)=y(t), \] for a.e. $t\in[t^-,t^+]$. \end{enumerate} \end{corollary} \section{A remark on general strictly hyperbolic systems} \label{s:example} We construct a strictly hyperbolic system of conservation laws with one characteristic family which is neither linearly degenerate nor piecewise genuinely nonlinear, so that neither the assumption of Theorem \ref{t:main theorem} nor that of Theorem 10.4 in \cite{Bre} holds. We show that the set of jump points of its admissible solution for suitable initial data cannot be ``exactly'' covered by countably many Lipschitz continuous curves. Consider the following $2\times 2$ system \begin{equation} \label{e:example} \begin{cases} u_t+f(u,v)_x=0,\\ v_t-v_x=0, \end{cases} \end{equation} where $f$ is a smooth function and $u,v$ are the unknowns. The Jacobian matrix of the flux function is \[ DF(u,v) = \begin{pmatrix} f_u & f_v \\ 0 & -1 \end{pmatrix}. \] The eigenvalues are \[ \lambda_1=-1,\quad \lambda_2=f_u, \] and the corresponding right eigenvectors are \[ r_1(u,v)=(f_v,-f_u-1)^T,\quad r_2=(1,0)^T. \] The system is strictly hyperbolic if $f_u>-1$.
Obviously, one has \[ Z_1=\{(u,v):\nabla \lambda_1\cdot r_1(u,v)=0\}=\ensuremath{\mathbb{R}}^2, \] which means that the first characteristic family is linearly degenerate. Later we will also show that \begin{equation}\label{e:inflection_manifold_2} Z_2=\{(u,v):\nabla \lambda_2\cdot r_2=0\}=\{(u,v):f_{uu}(u,v)=0\}=\{v=0\}. \end{equation} This yields that the vector field $r_2$ is tangent to the manifold $Z_2$, therefore the second characteristic family is neither piecewise genuinely nonlinear nor linearly degenerate. Define $f(u,v)=e^{-1/v}u^2/2$ when $v>0$ and $f(u,0)\equiv 0$. In the following, we define the value of $f$ for $v<0$. Let the initial data be \begin{equation} u_0(x)=\begin{cases} u_l & x<0,\\ u_r & x>0, \end{cases} \qquad v_0(x)=\begin{cases} -a & x<h,\\ a & x>h,\\ \end{cases} \end{equation} for some small constants $u_l>u_r$ and $a,h>0$. Since the second equation in \eqref{e:example} is a linear transport equation, one has \begin{equation} v(x,t)=\begin{cases} -a & x+t<h,\\ a & x+t>h. \end{cases} \end{equation} Then one can solve the system \eqref{e:example} by regarding it as a scalar conservation law for $u$, \[ u_t+f(u,v)_x=0, \] with discontinuous coefficients, when $v$ is a piecewise constant function. If, w.r.t.\ $u$, $f$ is concave for some small fixed $v<0$ and convex for some fixed $v>0$, then $u$ is a centered rarefaction wave in the area $\{x+t<h\}$. Immediately after the rarefaction wave crosses the characteristic line $x+t=h$, it turns into a compressive wave, since $f(u,a)$ is a convex function of $u$. It may generate a shock after a short time. Indeed, this can be done in the following way. Consider a centered rarefaction wave of $u$ in the area $\{x+t<h\}$, \begin{equation} u(x,t)=\begin{cases} u_l & x/t< f_u(u_l,-a),\\ g(x/t) & x/t= f_u(g(x/t),-a),\\ u_r & x/t> f_u(u_r,-a), \end{cases} \end{equation} where $g(\cdot)$ is the inverse function of $f_u(\cdot,-a)$. Let $u^-= g(\xi^*)$ for some $\xi^*$. The value of $u$ will be $u^-$ along the characteristic line \[ x=t f_u(u^-,-a) \] until it intersects the other characteristic line $x+t=h$. Solving the equations \begin{equation} \begin{cases} x=t f_u(u^-,-a),\\ x=-t+h, \end{cases} \end{equation} we get the intersection point of the two characteristics: \begin{equation} \begin{cases} x_0=h f_u(u^-,-a)/[1+f_u(u^-,-a)],\\ t_0={h}/[1+f_u(u^-,-a)]. \end{cases} \end{equation} Next, we compute the value of $u$ after the characteristic crosses the line ${x+t=h}$. For each point $(x_0,t_0)$ on the line $x+t=h$, let \[ u^+(x_0,t_0)=\lim_{\substack{x+t>h\\ (x,t)\to(x_0,t_0)}}u(x,t). \] By the Rankine-Hugoniot condition, one has \[ -(u^+-u^-)=\frac{e^{-1/a}(u^+)^2}{2}-f(u^-,-a). \] Solving the resulting quadratic equation $\frac{e^{-1/a}}{2}(u^+)^2+u^+-\big(f(u^-,-a)+u^-\big)=0$ for the root near the origin yields \begin{equation} \label{e:value_u+} u^+=\frac{-1+\sqrt{1+2e^{-1/a}(f(u^-,-a)+u^-)}}{e^{-1/a}}. \end{equation} Since $f(u,a)={e^{-1/a}u^2}/{2}$ on the area $\{h<x+t<2h\}$, the characteristic line of the equation \[ u_t+f(u,a)_x=0 \] starting at $(x_0,t_0)$ is \[ x-\frac{h f_u(u^-,-a)}{1+f_u(u^-,-a)}=e^{-1/a} u^+ (t-\frac{h}{1+f_u(u^-,-a)}). \] We require that it passes through the point $(0,2h)$, that is, \[ -\frac{h f_u(u^-,-a)}{1+f_u(u^-,-a)}=e^{-1/a} u^+ (2h-\frac{h}{1+f_u(u^-,-a)}), \] which yields \begin{equation} \label{e:value_fu} f_u(u^-,-a)=\frac{-e^{-1/a}u^+}{2e^{-1/a }u^+ +1}.
\end{equation} Substituting \eqref{e:value_u+} into \eqref{e:value_fu}, we get \begin{equation*} f_u(u^-,-a)=\frac{1-g(u^-,-a)}{2g(u^-,-a)-1}, \end{equation*} where $g(u^-,-a)=\sqrt{1+2e^{-1/a}(f(u^-,-a)+u^-)}.$ Now, we consider the Cauchy problem of an ODE with parameter $a$, \begin{equation} \label{e:ode} \begin{cases} \frac{d}{du}F(u,a)=\frac{1-G(F,u,a)}{2G(F,u,a)-1},\\ F(0,a)=0, \end{cases} \end{equation} where $G(F,u,a)=\sqrt{1+2e^{-1/a}(F+u)}$. By standard ODE theory, since $G$ is smooth when $(u,a,F)$ lies in a small neighborhood of the origin, there is a unique smooth solution $F$ defined on some interval $[-b,b]\ (b>0)$, depending smoothly on the parameter $a$. Therefore we define $f$ for small $v<0$ as \[ f(u,v)=F(u,v). \] By our construction, $f(u,v)$ is concave in $u$ for fixed negative $v$. In fact, from \eqref{e:ode} one finds \[ f_{uu}=\frac{-g_u}{(2 g-1)^2}<0, \] since $g_u=\frac{e^{-1/v}(f_u+1)}{g}>0$. This shows that $f$ is concave with respect to $u$. It is not difficult to verify that all partial derivatives of $f$ are continuous on $\{v=0\}$. As $f$ is smooth on $\{v\ne 0\}$, we conclude that $f$ is smooth and independent of $h$. \begin{figure}[htbp] \begin{center} \begin{picture}(0,0)% \includegraphics{construction_f.pdf}% \end{picture}% \setlength{\unitlength}{1579sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(11203,7794)(-599,-6901) \put(8251,-6811){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2h$}% }}}} \put(5851,-6811){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$h$}% }}}} \put(10051,-6511){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x$}% }}}} \put(5701,-511){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}shock}% }}}} \put(6451,-2686){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}compressive wave}% }}}} \put(-599,-5536){\makebox(0,0)[lb]{\smash{{\SetFigFont{10}{12.0}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}rarefaction wave}% }}}} \end{picture}% \caption{The construction of $f$: the rarefaction wave issued for $x+t<h$ becomes compressive in $\{h<x+t<2h\}$ and generates a shock at $(0,2h)$.} \label{f:construction_f} \end{center} \end{figure} By the above construction, as shown in Figure \ref{f:construction_f}, the rarefaction wave turns compressive in the area $\{h<x+t<2h\}$, and this eventually generates a shock starting at the point $(0,2h)$ and propagating along the line $x=0$. Notice that there is another shock starting from the point $(h,0)$. However, we can modify the initial data a little to get rid of this shock. In fact, recalling the formula \eqref{e:value_u+} and letting \[ u_1=\frac{-1+\sqrt{1+2e^{-1/a}(f(u_r,-a)+u_r)}}{e^{-1/a}}, \] we can replace $u_0$ in the initial data by \[ \tilde{u}_0=\begin{cases} u_l & x<0,\\ u_r & 0<x<h,\\ u_1 & x>h \end{cases} \] so that the characteristics starting from $\{h<x<2h\}$ have the same speed as the nearby ones. By the total variation estimate for the general system, \[ {\rm Tot.Var.}\{u(\cdot,t)\}\lesssim {\rm Tot.Var.}\{u_0(\cdot)\}, \] it is not restrictive to assume that the total variation of $\tilde{u}_0$ is sufficiently small.
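The defining ODE \eqref{e:ode} is also easy to examine numerically. The following is a minimal sketch (ours, assuming NumPy/SciPy, with an arbitrary illustrative value of the parameter $a$); it integrates \eqref{e:ode} and confirms the concavity $f_{uu}<0$ established above:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate dF/du = (1 - G)/(2G - 1) with G = sqrt(1 + 2*exp(-1/a)*(F + u))
# and F(0) = 0, then check that the resulting flux is concave in u.
a = 0.5  # illustrative parameter value (corresponds to v = -a < 0)

def rhs(u, F):
    G = np.sqrt(1.0 + 2.0 * np.exp(-1.0 / a) * (F[0] + u))
    return [(1.0 - G) / (2.0 * G - 1.0)]

u_grid = np.linspace(0.0, 0.2, 201)
sol = solve_ivp(rhs, (0.0, 0.2), [0.0], t_eval=u_grid, rtol=1e-10, atol=1e-12)

Fu = np.gradient(sol.y[0], u_grid)   # finite-difference approximation of f_u
Fuu = np.gradient(Fu, u_grid)        # finite-difference approximation of f_uu
print("max f_uu on interior grid points:", Fuu[2:-2].max())  # negative, as claimed
```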
Now we look for initial data for which \eqref{e:example} has an admissible solution containing ``Cantor set'' shocks. Consider the initial data \begin{equation}\label{e:initial_example} u_0(x)=\begin{cases} u_l & x<0,\\ u_r & x>0, \end{cases} \qquad v_0(x)=\begin{cases} 0 & x<2h,\\ -a &2h< x<3h,\\ a & 3h<x<4h,\\ 0 & x>4h. \end{cases} \end{equation} Since the second equation in \eqref{e:example} is a linear transport equation, one has \begin{equation} v(x,t)=\begin{cases} 0 & x+t<2h,\\ -a &2h< x+t<3h,\\ a & 3h<x+t<4h,\\ 0 & x+t>4h. \end{cases} \end{equation} In the area $\{x+t<2h\}$, the first equation in \eqref{e:example} becomes $u_t=0$; then one has \begin{equation} u(x,t)=\begin{cases} u_l & x<\min\{2h-t,0\},\\ u_r & x>\max\{2h-t,0\}. \end{cases} \end{equation} Next, we compute the value of $u$ in $\{2h<x+t<3h\}$. For any $(x',t')\ne(0,2h)$ on the line $x+t=2h$, write \[ u^-=\lim_{\substack{(x,t)\to(x',t')\\ x+t<2h}} u(x,t),\qquad u^+=\lim_{\substack{(x,t)\to(x',t')\\ x+t>2h}} u(x,t). \] By the Rankine-Hugoniot conditions, one has \[ f(u^+,-a)-f(u^-,0)=-(u^+-u^-), \] which yields \[ u^-=u^++f(u^+,-a). \] Regarding $u^+$ as a function of $u^-$ and differentiating the above equation on both sides with respect to $u^-$, one gets \[ (f_u+1) (u^+)'=1. \] By \eqref{e:value_fu}, one has \[ (u^+)'=\frac{1}{1+f_u}=\frac{2g-1}{g}>0 \] in a small neighborhood of the origin. Thus $u^+$ is strictly increasing w.r.t. $u^-$. It follows that at the point $(0,2h)$ the left value of $u$ is still larger than the right value of $u$, i.e. \[ u^+_l>u^+_r, \] where \[ u^+_l=\lim_{\substack{(x,t)\to(0,2h)\\ 2h-t<x<0}} u(x,t),\qquad u^+_r=\lim_{\substack{(x,t)\to(0,2h)\\ x>\max\{2h-t,0\}}} u(x,t). \] As we discussed before, one gets a rarefaction wave by solving the Riemann problem in $\{2h<x+t<3h\}$ and a compressive wave in the area $\{3h<x+t<4h\}$. Next, we compute the value of $u$ in the zone $\{x+t>4h\}$. For any point $(x_0,t_0)\ne(0,4h)$ on the line $x+t=4h$, let \[ u^+(x_0,t_0)=\lim_{\substack{x+t>4h\\ (x,t)\to(x_0,t_0)}}u(x,t),\quad u^-(x_0,t_0)=\lim_{\substack{x+t<4h\\ (x,t)\to(x_0,t_0)}}u(x,t). \] By the Rankine-Hugoniot conditions, one has \[ f(u^+,0)-f(u^-,a)=-(u^+-u^-). \] Then since $f(u^+,0)=0$ and $f(u^-,a)={e^{-1/a}(u^-)^2}/{2}$, one obtains \[ u^+=\frac{e^{-1/a}(u^-)^2}{2}+u^-. \] We claim that $u^+_L>u^+_R$, where \[ u^+_L=\lim_{\substack{(x,t)\to(0,4h)\\ 4h-t<x<0}} u(x,t),\qquad u^+_R=\lim_{\substack{(x,t)\to(0,4h)\\ x>\max\{4h-t,0\}}} u(x,t) \] are the left/right limits of $u$ across the line $x+t=4h$. In fact, since $u^-_L>u^-_R$, where \[ u^-_L=\lim_{\substack{(x,t)\to(0,4h)\\ x+t<4h\\ x<\hat \lambda (t-4h) }} u(x,t),\qquad u^-_R=\lim_{\substack{(x,t)\to(0,4h)\\ x+t<4h\\ x>-\hat \lambda (t-4h)}} u(x,t), \] one has \begin{align*} & u^+_L-u^+_R\\ =&\left(\frac{e^{-1/a}(u^-_L)^2}{2}+u^-_L\right)-\left(\frac{e^{-1/a}(u^-_R)^2}{2}+u^-_R\right)\\ =& (u^-_L-u^-_R)\left(1+\frac{e^{-1/a}}{2}(u^-_L+u^-_R)\right)>0, \end{align*} since $a$ and $(u^-_L,u^-_R)$ are sufficiently small. Therefore, as the first equation in \eqref{e:example} is $u_t=0$ in $\{x+t>4h\}$, the jump propagates along $x=0$ and turns out to be a shock. Similarly, by modifying the initial data a little, we can guarantee that there are no other shocks in the solution. (See Figure \ref{f:1_example}.)
\begin{figure}[htbp] \begin{center} \begin{picture}(0,0)% \includegraphics{1_example.pdf}% \end{picture}% \setlength{\unitlength}{1184sp}% \begingroup\makeatletter\ifx\SetFigFont\undefined% \gdef\SetFigFont#1#2#3#4#5{% \reset@font\fontsize{#1}{#2pt}% \fontfamily{#3}\fontseries{#4}\fontshape{#5}% \selectfont}% \fi\endgroup% \begin{picture}(13799,10194)(-974,-9301) \put(10726,-9211){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$6h$}% }}}} \put(8251,-9211){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$4h$}% }}}} \put(3451,-9211){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}0}% }}}} \put(5851,-9211){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$2h$}% }}}} \put(12226,-8911){\makebox(0,0)[lb]{\smash{{\SetFigFont{7}{8.4}{\familydefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x$}% }}}} \end{picture}% \caption{After modifying the initial data \eqref{e:initial_example} a little, the admissible solution contains two shocks along the $t$-axis.} \label{f:1_example} \end{center} \end{figure} Next, we construct an admissible solution whose shock points form a one-dimensional Cantor-type set. Let \[ B^-_{m,n}=\left(\frac{3n+1}{3^m},\frac{3n+3/2}{3^m}\right)\cdot 6,\qquad B^+_{m,n}=\left(\frac{3n+3/2}{3^m},\frac{3n+2}{3^m}\right)\cdot 6, \quad n=0,1,\cdots,3^{m-1}-1, \] and \[ B_{m,n}=B^-_{m,n}\cup B^+_{m,n}\ \text{ and } B_m=\bigcup^{3^{m-1}-1}_{n=0} B_{m,n}. \] We consider the initial data \begin{equation*} u_{0,m}(x)=\begin{cases} u_l & x<0,\\ u_r & x>0, \end{cases} \qquad v_{0,m} =\begin{cases} a_n & x\in B^-_{m,n},\\ -a_n & x\in B^+_{m,n},\\ 0 & x\in\ensuremath{\mathbb{R}}\setminus B_{m}, \end{cases} \end{equation*} where one can always choose $u_l,\ u_r$ and a suitable sequence $\{a_n\}$ such that the total variation of $(u_{0,m},v_{0,m})$ is sufficiently small. Modifying $u_{0,m}$ properly, as before, to get rid of extra shocks, one finds that the admissible solution $(u_m,v_m)$ of the system \eqref{e:example} is continuous inside $\hat{K}_m$, where \[ \hat{K}_m:=\bigcup^{3^{m-1}-1}_{n=0} \{(x,t)\in \ensuremath{\mathbb{R}}^+\times\ensuremath{\mathbb{R}}:\ \frac{6(3n+1)}{3^m}-x<t<\frac{6(3n+2)}{3^m}-x \}, \] and its shocks belonging to the first family are located on \[ \Big{\{}(0,t):t\in[0,\infty)\Big{\}}\setminus\bigcup^{3^{m-1}-1}_{k=0} \left\{(0,t):\ \frac{6(3k+1)}{3^m}<t<\frac{6(3k+2)}{3^m}\right\}. \] Obviously, $(u,v):=\lim_{m\rightarrow \infty} (u_m,v_m)$ is the admissible solution of \eqref{e:example} with initial data $(u_0,v_0):=\lim_{m\rightarrow\infty} (u_{0,m},v_{0,m})$. As $m\rightarrow \infty$, $C_m:=[0,6]\setminus B_m$ converges to the Cantor set $C$ (scaled by 6). Since the Cantor set is uncountable and does not contain any interval of non-zero length, it is impossible to find countably many Lipschitz continuous curves which exactly cover the discontinuities of $(u,v)$, that is, such that on these curves either $(u,v)$ is discontinuous or the point is an interaction point. This means that Theorem \ref{t:main theorem} fails in this situation.
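As an aside, the interval families $B^\pm_{m,n}$ above are straightforward to enumerate. The short script below (an illustration of ours, not part of the argument) lists them and shows that each level $B_m$ has total length $2$; since the levels overlap, the union $\bigcup_m B_m$ exhausts the complement of the (scaled) Cantor set:

```python
# Enumerate the intervals B^-_{m,n} and B^+_{m,n} from the construction above.
def B_intervals(m):
    out = []
    for n in range(3 ** (m - 1)):
        left = 6 * (3 * n + 1) / 3 ** m
        mid = 6 * (3 * n + 1.5) / 3 ** m
        right = 6 * (3 * n + 2) / 3 ** m
        out.append((left, mid))    # B^-_{m,n}
        out.append((mid, right))   # B^+_{m,n}
    return out

for m in range(1, 6):
    ivs = B_intervals(m)
    total = sum(b - a for a, b in ivs)
    print(f"m={m}: {len(ivs):4d} intervals, total length of B_m = {total:.4f}")
```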
\section{Introduction} There have been many tantalizing signals which may be evidence for particle dark matter. Most recently, the PAMELA experiment has reported an excess of cosmic ray positrons with energies in the 10-100 GeV range \cite{Pamela}, which is consistent with annihilating dark matter \cite{theoryexcess}, confirming the excess observed by the HEAT \cite{Heat} and AMS \cite{AMS} experiments. The ATIC and PPB-BETS balloon experiments have likewise observed an excess, consistent with the PAMELA, HEAT and AMS results. ATIC and PPB-BETS suggest a dark matter particle annihilating to leptons with mass in the 500-800 GeV range \cite{ATIC}; the other observations are consistent with dark matter mass in this range. The recent Fermi results suggest, however, that the ATIC excess may be instrumental in origin. If this is the case, the annihilating DM particle may be much lighter, with mass in the $\sim 100-200 \mbox{ GeV}$ range to explain the PAMELA excess only. In addition, there is the observation of the synchrotron radiation toward the galactic center, the so-called ``WMAP haze,'' which is indicative of dark matter annihilating to electrons which emit photons in the galactic magnetic field \cite{Haze}. Indeed, an annihilation cross-section to $e^+e^-$ which produces the WMAP haze is roughly the right size (up to a boost factor) to produce the AMS, HEAT, ATIC, PPB-BETS and PAMELA excesses. The size of these signals is also roughly consistent with the freeze-out annihilation cross-section predicted for a thermal relic WIMP. In direct detection, the DAMA experiment has reported an $8.2\sigma$ significance modulation in the rate of recoils in their experiment \cite{Bernabei}. The phase and amplitude of their signal are consistent with a light elastically scattering WIMP with mass in the $\sim3-10$ GeV range \cite{PetrielloZurek} (though see \cite{DAMAfits} for a discussion of the effect of the lowest DAMA recoil bin on the fit in this window). While these signals are intriguing, detailed explanations of them in terms of standard models of WIMP dark matter, such as supersymmetry, may be challenging. One difficulty in explaining the AMS, HEAT, PAMELA, ATIC, PPB-BETS and haze excesses is that the dark matter must have a large annihilation cross-section to leptons and a small annihilation cross-section to hadrons, since the data shows a positron excess but no excess of anti-protons \cite{Pamela,HooperWeiner,Cirelli}. This is challenging for two reasons. First, hadrons carry an enhancement in the annihilation cross-section which goes like $N_c$, the number of colors; hence in many models, annihilation to colored particles is the preferred mode. Secondly, when the dark matter particle is Majorana, as in SUSY models, there is a chiral suppression which disfavors annihilation to light modes. In SUSY, annihilation to $\bar{b}b$, $\tau^+\tau^-$, and $W^+W^-$ is preferred; it has been shown that an annihilation cross-section big enough to produce the positron excess through this mode will produce too many anti-protons through the hadronic decays of these states (see e.g. \cite{Cirelli,Salati} for the case of $W^+W^-$). In this paper we develop models which naturally overcome this challenge, where the dark matter effectively carries lepton number, and hence annihilation to leptons is the only mode allowed. We also show that within this class of models, the dark matter may in fact also quite naturally be multi-component.
A heavier component explains the PAMELA and synchrotron excesses, while the lighter component, residing in the hidden sector, may have a much lower mass, and may explain the DAMA signal. These low mass states may be reachable with low threshold analyses currently being planned by the CDMS and XENON experiments \cite{lowthresh}. The addition of these low mass hidden sectors with multi-component dark matter naturally suggests rich dynamics in the hidden sector. In many cases, there are new forces, both scalar and vector, which give rise to novel phenomenology, and in many ways, the rich dynamics of these low mass hidden sector dark matter models is motivated by the Hidden Valley \cite{hv}. The components of the model we discuss here, with multiple dark forces and low mass dark matter states coupled to the SM through kinetic mixing or TeV mass states, resemble features of the low mass hidden dark matter models constructed in \cite{ADM,Pospelov,HooperZurek}. Because multiple forces may reside in the hidden sector, these models may also provide a natural context for solving a second challenge for a model of DM explaining the positron excesses. That is, there must be a boost in the annihilation of the dark matter in the halo today relative to the cross-section required at thermal freeze-out, $\langle \sigma v \rangle \sim 3 \times 10^{-26} \mbox{ cm}^3/\mbox{s}$. For dark matter in the 500-800 GeV range, the boost is typically quite large, $\sim 100-1000$, for direct annihilation to $e^+e^-$ \cite{HooperWeiner,ATIC}. A smaller, though still significant, boost is required for lighter DM in the $\sim 100 \mbox{ GeV}$ range \cite{HooperWeiner}. The boost factor may come from a large overdensity in the dark matter locally in the galaxy, though simulations suggest that a boost factor much larger than $\sim5$ is difficult to produce. A boost factor may instead imply that the size of the dark matter annihilation cross-section in the halo today is larger than the annihilation at thermal freeze-out. A possible source of the needed enhancement of the cross-section today is the so-called Sommerfeld effect \cite{Nojiri}. This effect gives rise to an enhancement of the annihilation cross-section at low velocity $v$, so that the annihilation cross-section for particles locally in our halo ($v \sim 10^{-3}$) is enhanced with respect to the freeze-out cross-section ($v \sim 0.3$). One of the additional dark forces may provide for such an enhancement. (See \cite{nontherm} for a model where late decay of a meta-stable state produces the needed boost.) Models with such large boosted annihilation cross-sections potentially run into phenomenological constraints. It was shown in \cite{Cirelli} that gamma ray constraints from HESS in the galactic center and galactic ridge require $ B \langle \sigma v \rangle \lesssim 10^{-24} \mbox{ cm}^3/\mbox{s}$, where $B$ is the astrophysical boost factor, for DM in the several hundred GeV mass window. This constraint is problematic for the window preferred by ATIC for larger DM masses. For smaller DM masses consistent with the PAMELA signal the constraint is less significant due to the smaller annihilation cross-sections required. The constraint can be alleviated, however, even for dark matter in the several hundred GeV range. The astrophysical boost factor may be smaller in the galactic center than it is at the solar radius, where the positrons originate. This is expected where tidal disruption of dense objects in the galactic center occurs. 
Thus while $ B \langle \sigma v \rangle \lesssim 10^{-24} \mbox{ cm}^3/\mbox{s}$ at the galactic center, $B \langle \sigma v \rangle$ may be much larger locally to produce the large positron excesses. A second less stringent constraint on these Sommerfeld boosted cross-sections is derived from BBN \cite{BBN}, $\sigma v \lesssim 7 \times 10^{-24} \mbox{ cm}^3/s$ for annihilation to electrons and $\sigma v \lesssim 2 \times 10^{-23} \mbox{ cm}^3/s$ for annihilation to muons and taus. These constraints can be satisfied even for moderately large DM masses in the class of models we consider with boost factors which are on the order of a few. Although dark matter with multiple components could potentially be quite complicated, in this paper we propose that the dark sectors take on a simple basic structure. (See \cite{HyeSung} for another model of dark matter with more than one stable state.) To the SM sector, we add an ``$X$ sector.'' The $X$ sector contains generally high mass states, in the 100's of GeV range, which communicate to the SM through ${\cal O}(1)$ operators which carry lepton number. Thus the lightest state in this sector annihilates primarily to electron-positron (or unobserved neutrino) pairs, producing the observed PAMELA and synchrotron excesses, without any excess in anti-protons. The dark matter is essentially a sterile neutrino, which is stable by virtue of a $Z_2$ symmetry. When such a model is supersymmetrized, there is now a second stable state by $R$-parity. To the $X$-sector, we may add a hidden dark matter (hDM) sector. The hDM sector does two things. First, the hDM sector makes the MSSM LSP unstable to decay to hDM states with the same $R$-symmetry charge. Thus the lightest $R$-symmetry odd state may in fact be much lighter than the weak scale. This was pointed out in \cite{SUSYHV} for hidden valleys and applied to dark matter in the context of a supersymmetric MeV hidden sector in \cite{HooperZurek}. Here we are interested in the case where that state has a mass in the 1-10 GeV range, and explains the DAMA signal. Second, the hDM sector provides a means for breaking the symmetry of the new dark forces and giving masses to the gauged mediators. In some cases, these forces may give rise to a Sommerfeld enhancement. The mass of those dark forces should be in the sub-50 GeV range, since the Sommerfeld enhancement is effective for mediator masses $m_M$ satisfying $g_D^2 M_X/(4 \pi) \gtrsim m_M$, where $g_D$ is the coupling of the dark force to the dark matter $X$. Thus for $g_D \sim 0.1-1$ we see that the 1-10 GeV range is motivated both for mediators of the Sommerfeld effect and for a dark matter candidate to explain the DAMA signal. Such light scalar or gauged mediators are natural in the presence of hidden sectors, as shown in \cite{HooperZurek} in the context of MeV dark matter. On the basis of naturalness considerations, one expects scalar forces (or massive gauged particles which get their masses from such scalars) to be at the weak scale. However, such light scalars can be natural if the hidden sector is shielded from MSSM SUSY breaking (which tends to push the mass of the force mediators to the weak scale) by a weak coupling to the MSSM sector. We will consider the case where the weak coupling is either a mixing angle $\theta$ between the dark force $U(1)_D$ and hypercharge $U(1)_Y$, or a small coupling $\lambda_D$ of a visible sector singlet scalar with the hDM sector.
These weak couplings set the mass scale $m_D$ in the hDM sector to be $m_D \sim \theta m_{SUSY}$, or $m_D \sim \lambda_D m_{SUSY}$ (up to loop factors), where $m_{SUSY} \sim 0.1-1 \mbox{ TeV}$ are the MSSM SUSY breaking masses and $m_D$ is the typical scale for the dark forces and the DM in the hidden sector. Since the kinetic mixing between the two sectors may typically be a loop factor $\theta \sim 10^{-2}$, or a somewhat small coupling $\lambda_D \sim 10^{-1-2}$, the low mass 1-10 GeV scale is further motivated. While such a mechanism was introduced in the context of MeV dark matter for smaller mixings $\theta \sim 10^{-5}$, it was shown to be quite general for higher mass hidden sectors in the 0.1-100 GeV range \cite{FengKumar,ArkaniWeiner}. We now turn to constructing the $X$ model explicitly. To the MSSM we add \begin{equation} \Delta W = y'_i L_i H' \bar{X} + \lambda_X S_X \bar{X} X + \kappa_X S_X^3 \label{SUSYLDMW} \end{equation} where $H'$ is an electroweak doublet. There is a $Z_2$ symmetry under which $H'$ and $X$ are odd, and also an $R$-symmetry. If a component of $X$ is the lightest $Z_2$ odd particle, it is a stable dark matter candidate, and it effectively carries lepton number, explaining why it annihilates predominantly to leptons (for another leptophilic model see \cite{ErichPaddy}) through $t$-channel $H'$ exchange. The mass of such a dark matter state is in the 100's of GeV range, and it must be a fermion to get an $s$-wave annihilation cross-section (which is unsuppressed at low velocities in the halo today). The annihilation cross-section is $ \sigma_{\rm ann} v = {y'_i}^4 m_{X}^2 /(16 \pi m_{H'}^4), $ which must be $\sigma_{\rm ann} v \simeq 3 \times10^{-26} \mbox{cm}^3/\mbox{s}$ in order to be consistent with the observed relic abundance. Thus for $m_{X} \approx 700 \mbox{ GeV}$, we find $y'_i \lesssim 0.6$. The scalar component $\tilde{X}$ annihilating through $t$-channel Higgsino $\tilde{H'}$ exchange to $e^+ e^-$, on the other hand, gives a $p$-wave suppressed annihilation. Thus to have a viable model, $\tilde{X}$ must be heavier than $X$, and rapidly decay to the $X$ fermion plus the lightest $R$-symmetry odd state, which will reside in the hDM sector. With this annihilation cross-section to electron-positron pairs, the rate is a factor $\sim 100-1000$ below what is required to reproduce the ATIC and PAMELA signals together, and a factor $\sim 10$ below what is required to produce the PAMELA signal alone. The required boost from a Sommerfeld enhancement may be mediated by a singlet scalar $S_X$ which generates the mass for $X$, $\lambda_X \langle S_X \rangle = m_X$. This enhancement is relevant if $\lambda_X^2/(4 \pi) m_{X} \gtrsim m_{S_X}$ is satisfied. Since $\langle S_X \rangle = \frac{m_{S_X}}{3 \kappa_X}$, we find that the Sommerfeld condition is satisfied if $\frac{\lambda_X^3}{12 \pi} \gtrsim \kappa_X$, which is fulfilled for $\lambda_X \simeq 1$ and a relatively small $\kappa_X$. This singlet $S_X$ must have a relatively small mixing angle with the Higgs in order not to violate direct detection bounds for the DM candidate $X$ (though it may be possible that this scalar is that of the NMSSM, see \cite{NomuraThaler} for a possible model). We will see next that one of the scalars residing in the hDM sector may also quite naturally mediate the boost. To this point, we have two stable states: the DM fermion $X$ stable by the $Z_2$ and the LSP (either the scalar $\tilde{X}$ or an MSSM superpartner).
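Before adding the hidden sector, we note that the estimate $y'_i \lesssim 0.6$ above is easy to cross-check numerically. The following sketch is ours; the $H'$ mass is an assumed benchmark, since the text does not fix it:

```python
import numpy as np

# sigma_ann * v = y'^4 m_X^2 / (16 pi m_H'^4), set equal to the thermal relic value.
# Unit conversion: 1 GeV^-2 corresponds to (hbar c)^2 * c ~ 1.17e-17 cm^3/s.
GEV2_TO_CM3S = 3.894e-28 * 2.998e10

m_X = 700.0          # GeV, as quoted in the text
m_Hp = 800.0         # GeV, assumed benchmark for the H' mass (our choice)
sigv_target = 3e-26  # cm^3/s, freeze-out annihilation cross-section

yp = (16 * np.pi * m_Hp**4 * sigv_target / (m_X**2 * GEV2_TO_CM3S)) ** 0.25
print(f"y' ~ {yp:.2f}")  # ~0.6 for m_H' not far above m_X
```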
With the addition of a supersymmetrized low mass hidden sector, the LSP becomes unstable to decay to the hidden sector, so that the LSP mass may be much lighter than the weak scale. For the purposes of this toy model, we consider the minimal hDM superpotential, \begin{equation} W_h = \lambda_D S_D \bar{D} D + \kappa_D S_D^3. \label{hiddenW} \end{equation} This hidden toy model is fashioned after that discussed in \cite{HooperZurek}, and is to be added to the $X$-sector super-potential, Eq.~(\ref{SUSYLDMW}). Here $S_D$ is a dark singlet field, and the dark Higgses, $\bar{D},~D$, may be charged under a new hidden gauge group $U(1)_D$, which is a dark force. $U(1)_D$ mixes with hypercharge through the kinetic term $\theta F_D^{\mu \nu}F_{\mu \nu}$. The lightest state in this hDM sector may be a candidate to explain the DAMA signal, if its mass is in the 1-10 GeV range. This mass may naturally be induced radiatively from two sources. First, kinetic mixing between hypercharge and $U(1)_D$ is $\theta \sim 10^{-2}-10^{-3}$, as expected when the mixing is induced by a loop of heavy particles \cite{Holdom}. This kinetic mixing introduces SUSY breaking into the hidden sector by a two loop gauge mediation diagram, with messengers in the loop, as in \cite{HooperZurek}. We term this mechanism for SUSY breaking in the hidden sector ``little gauge mediation.'' The size of the radiatively induced $D,~\bar{D}$ masses is $ m_{D,rad}^2 = \frac{3}{5} g_{D}^2 g_{Y}^2 \theta^2 m_{SUSY}^2, $ where $m_{SUSY} = \langle F_{mess} \rangle/(16\pi^2M_{mess})$ is the SUSY breaking mass in the messenger sector, $g_D$ is the gauge coupling of $U(1)_D$ and $g_Y$ the hypercharge gauge coupling. With $\theta \sim 10^{-2}-10^{-3}$, and ${\cal O}(1)$ couplings, we can see that the GeV mass scale is naturally generated in the hidden sector. In order to break $U(1)_D$, this mass-squared must be negative. One loop graphs with the scalar $S_D$ in the loop may easily induce such a negative mass-squared, $ m_{D,rad}^2 \simeq -\frac{4 \lambda_D^4 m_{S_D}^2}{16 \pi^2} \log \left(\frac{\Lambda^2}{m_{SUSY}^2} \right), $ where $\Lambda$ is the scale where the soft masses are generated, and $m_{S_D}^2$ is the soft SUSY breaking mass of $S_D$ (we assume that the singlet receives a moderate SUSY breaking mass in the 10 to 100's of GeV range through a coupling to the SUSY breaking messenger fields). For $\lambda_D \approx 10^{-1}-1$, soft masses for $D$ in the few GeV range result which are negative, even with the contribution from little gauge mediation included. With $\langle S_D \rangle = 0$ and $\langle D, \bar{D} \rangle \neq 0$, we review the spectrum briefly, but see \cite{HooperZurek} for details. With ${\cal O}(10^{-1-2})$ gauge coupling $g_D$ and Yukawa term $\lambda_D$, all masses in the hidden sector are ${\cal O}(\mbox{GeV})$. The $U(1)_D$ symmetry is broken by $\langle D, \bar{D} \rangle$ and the gauge boson acquires a mass. We have scalar mass eigenstates $m_{D_1}^2 = - \frac{4 g_D^2-2 \lambda_D^2}{\lambda_D^2} m_{D,rad}^2$, $m_{D_2}^2 = -2 m_{D,rad}^2$, and $m_{U_D}^2 = 4 g_D^2 \langle D \rangle^2$ from the breaking of the $U(1)_D$ with $\langle D,\bar{D} \rangle^2=-m_{D,rad}^2/\lambda_D^2$. The fermion masses arise through $\tilde{D},\tilde{\bar{D}},\tilde{U}_D,\tilde{S}_D$ mixing, two with masses $2 g_D \langle D \rangle$ (a $\tilde{U}_D$ gaugino-$\tilde{D}$ Higgsino mix) and two with masses $\sqrt{2}\lambda_D \langle D \rangle$ (a $\tilde{S}_D$ singlino-$\tilde{D}$ Higgsino mix). 
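To get a feel for the numbers, the following hedged benchmark (all inputs are illustrative choices of ours within the quoted ranges, not values fixed by the text) evaluates the spectrum formulas above:

```python
import numpy as np

# Little-gauge-mediation estimate of the hidden-sector scale and spectrum.
g_D, lam_D, g_Y = 0.3, 0.1, 0.36      # hidden gauge, Yukawa, hypercharge couplings
theta, m_SUSY = 1e-2, 500.0           # kinetic mixing; SUSY-breaking mass (GeV)

# m_{D,rad}^2 = (3/5) g_D^2 g_Y^2 theta^2 m_SUSY^2, taken negative to break U(1)_D
m2_rad = -(3.0 / 5.0) * g_D**2 * g_Y**2 * theta**2 * m_SUSY**2
vev = np.sqrt(-m2_rad) / lam_D                                  # <D> = <Dbar>
m_D1 = np.sqrt(-(4 * g_D**2 - 2 * lam_D**2) / lam_D**2 * m2_rad)
m_D2 = np.sqrt(-2.0 * m2_rad)
m_UD = 2.0 * g_D * vev                                          # U(1)_D gauge boson
m_f1, m_f2 = 2.0 * g_D * vev, np.sqrt(2.0) * lam_D * vev        # fermion masses

print(f"<D> = {vev:.2f} GeV, m_UD = {m_UD:.2f} GeV")
print(f"scalars: m_D1 = {m_D1:.2f}, m_D2 = {m_D2:.2f} GeV")
print(f"fermions: {m_f1:.2f} GeV (gaugino-Higgsino), {m_f2:.2f} GeV (singlino-Higgsino)")
```

With these inputs every hidden-sector mass comes out at the GeV scale, and $g_D \gtrsim \lambda_D$ guarantees $m_{D_1}^2>0$.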
We assume $g_D \gtrsim \lambda_D$ so that the fermions with mass $\sqrt{2} \lambda_D \langle D \rangle$ are stable dark matter candidates, provided they are lighter than the gravitino. Now, we can see that such a sector can plausibly give rise to a signal in DAMA in the elastically scattering WIMP window. We take the $\tilde{D}-\tilde{S}_D$ fermions to be the dark matter with mass $m_{hDM}$ in the 3-10 GeV range. The DM may annihilate to the axion associated with the angular components of $D,\bar{D}$ which is light, as in the NMSSM. The annihilation cross-section of the hidden dark matter to these axions is \begin{eqnarray} \sigma_{ann} &\simeq& \frac{\lambda_D^4}{16 \pi} \frac{1}{m_{hDM}^2} \\ &\sim& 10^{-35} \mbox{ cm}^2 \left(\frac{\lambda_D}{0.1}\right)^4 \left(\frac{8~ {\rm GeV}}{m_{hDM}}\right)^2. \nonumber \end{eqnarray} This cross-section is of the order $\sim 10^{-36} \mbox{ cm}^2$ necessary to produce the correct relic density (this candidate need not be all the dark matter). The direct detection cross-section by exchanging a $U_D$ gauge boson is \begin{eqnarray} \sigma_{SI} & \simeq & \frac{g_D^2 g_Y^2 \theta^2}{\pi} \frac{m_{r}^2}{m_{U_D}^4} \\ & \sim & 10^{-40} \mbox{ cm}^2 \left(\frac{g_D g_Y \theta}{10^{-4}}\right)^2 \left(\frac{8 \mbox{ GeV}}{m_{U_D}} \right)^4\nonumber, \end{eqnarray} where $m_r$ is the reduced mass of the nucleon-DM system. We see that a hidden sector, which simultaneously generates natural GeV mediators and GeV scale dark matter candidates, produces a direct detection cross-section in the right range to explain the DAMA signal. In addition, if the singlet $S_D$ couples to the visible Higgs through a term in the superpotential $\zeta S_D H_u H_d$, this provides an additional channel for direct detection. The size of the scattering cross-section is \begin{eqnarray} \sigma_n &\simeq& \frac{m_r^2}{2 \pi}N_n^2\, \lrf{\lambda_D\zeta\,v_u\langle D \rangle}{m_{h^0}^2}^2\frac{1}{m_{D_1}^4}\\ &\simeq& 2\times 10^{-41}\, \mbox{ cm}^2\, \lrf{N_n}{0.1}^2\lrf{\lambda_D}{0.1}^2 \lrf{\zeta}{10^{-2}}^2\nonumber\\ &&\times\lrf{\langle D \rangle}{20\,{\rm ~GeV }}^2\lrf{100\,{\rm ~GeV }}{m_{h^0}}^4 \lrf{10\,{\rm ~GeV }}{m_{D_1}}^4,\nonumber \end{eqnarray} where $N_n$ comes from the effective coupling of the exchanged scalar to the target nucleus and $h^0$ is the MSSM Higgs. We see again that this mechanism results in a scattering cross-section in the $10^{-41}-10^{-39} \mbox{ cm}^2$ window for explaining the DAMA signal with light WIMPs (if the light state only composes a fraction of the DM, scattering cross-sections should be correspondingly larger). Alternatively, if the DAMA signal turns out not to be from DM scattering, it is easy to evade direct detection bounds by lowering the mixing $\zeta$ or correspondingly raising the mass of the mediators; these lower mass WIMPs may still be in reach of the low threshold runs of CDMS \cite{lowthresh} and XENON. The general conclusion here is that such hidden sectors with GeV mass dark matter particles and dark forces of GeV mass mediators arise naturally in a framework where the hidden sector communicates to the SM through kinetic mixing of the dark force with hypercharge, or through mixing of a singlet scalar with both the hidden and visible sectors. The mixing simultaneously provides motivation for observation of these states by direct detection experiments.
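As a sanity check on the quoted $\sigma_{SI}$ benchmark, here is a short sketch (ours; the hidden DM mass is an illustrative choice within the stated 3-10 GeV range):

```python
import numpy as np

# sigma_SI = (g_D^2 g_Y^2 theta^2 / pi) * m_r^2 / m_UD^4, converted to cm^2.
GEV2_TO_CM2 = 3.894e-28           # (hbar c)^2 in cm^2 * GeV^2

m_hDM, m_n = 8.0, 0.939           # hidden DM and nucleon masses (GeV)
m_UD = 8.0                        # U(1)_D gauge boson mass (GeV), benchmark of the text
gDgYtheta = 1e-4                  # benchmark combination g_D * g_Y * theta

m_r = m_hDM * m_n / (m_hDM + m_n)                  # nucleon-DM reduced mass
sigma_SI = gDgYtheta**2 / np.pi * m_r**2 / m_UD**4 * GEV2_TO_CM2
print(f"sigma_SI ~ {sigma_SI:.1e} cm^2")  # ~2e-40 cm^2, in the DAMA light-WIMP window
```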
These light gauged or scalar mediators may in fact mediate the Sommerfeld enhancement as well, if $X$ is charged under the $U(1)_D$, or if $S_D$ also couples to $X$ in addition to $D$. We have discussed multi-component dark matter models in which the dark sector is more complex than a single weakly interacting field. In many cases, these models give rise to additional dark forces which enrich the dark matter dynamics. Phenomenologically, the focus of this paper has been on explanations of the PAMELA, ATIC, PPB-BETS, HEAT, AMS, and DAMA excesses. In the models discussed here, the dark matter candidate which explains the positron excess carries lepton number; it is stable by an additional $Z_2$ symmetry. We showed that in supersymmetric models of this type, there are naturally two dark matter candidates--the lighter candidate may explain the DAMA signal, and may be observable by low threshold runs of CDMS and XENON. We also showed how dark forces that arise in hidden sector dark matter models may naturally have their masses generated at the GeV scale, further establishing the low mass WIMP window as a well-motivated target for direct detection of dark matter. Dark matter dynamics and dark matter sectors may be rich. As multiple experiments with varied detection techniques probe the dark sector, we may discover a dark hidden world in place of a single weakly interacting particle. \bigskip This work has been supported by the US Department of Energy, including grant DE-FG02-95ER40896, and by NASA grant NAG5-10842. We thank Paddy Fox, Dan Hooper, Peter Ouyang, Frank Petriello, and Erich Poppitz for discussions, and Alessandro Strumia for a comment on the first version.
\section{Introduction} Burkina Faso is located in the heart of West Africa, in the Sahel region. It is bordered to the East by Niger, to the North and West by Mali, and to the South by the Ivory Coast, Ghana, Togo and Benin. Its area is 274,000 $\textrm{km}^2$. Its climate is dry and divided into two seasons: the dry season (from mid-October to mid-May) and the rainy season (from mid-May to mid-October). Its average temperature is between 30 and 35 degrees Celsius. The official language is French; there are also about sixty national languages, the three main ones being Moor\'e, Fulfuld\'e (or Fulani) and Dioula. The main foreign languages spoken in Burkina Faso are English, German and Arabic. Its capital is Ouagadougou (about 1.6 million inhabitants); the second largest city, Bobo-Dioulasso (600,000 inhabitants), is the economic capital. The population of Burkina-Faso was estimated at about 17 million in 2014. The country was colonized by France in the late 19th century. The Colony of Upper Volta was created in 1919. In 1932, the Upper Volta was suppressed and divided between the Ivory Coast, Mali and Niger. It was reestablished as a territorial entity in 1947 and became independent in 1960. The Upper Volta was renamed Burkina Faso on the 4th of August 1984. Burkina Faso is a country of agriculture and stock raising. The very large majority (80\%) lives off agriculture and livestock raising. The rest of the population is involved in various commercial and craft activities, private business and industry, or works in the public service for the State. The modern economy of Burkina Faso depends upon mining, industries processing raw materials such as fruits, vegetables, cereals and meat, and textile industries based on cotton. Burkina Faso is considered the biggest producer of cotton in Africa. The education system in Burkina Faso has three sectors: \begin{itemize} \item the formal sector, which includes pre-school, primary education, secondary education, higher education and professional training organized in the context of schools; \item the non-formal sector, which includes rural education and adult literacy. This form of education is organized outside the school context. \item Finally, the informal sector, which takes into account the education received in the family circle or in a group. \end{itemize} The enrollment rate in 2014 was $72\%$ in primary school, $22\%$ in secondary school, and $6\%$ in higher education. These $6\%$ correspond to about $0.3\%$ of the population, which is well below the UNESCO standard value of $2\%$. We are interested in the formal sector of education. Preschool is optional and concerns children from 3 to 6 years old. Primary education runs from age 6-7 to age 12-13 and lasts six years. At the end of the six years, the elementary student is required to pass the exam of the Certificate of Primary and Elementary Education (CEPE), which is required to access secondary school. Secondary education is divided into two steps. The first step lasts four years; in the fourth year the students take the exam of the first step studies (BEPC). The second step of secondary school lasts 3 years and is divided into three main options: the general option, the technical option and the professional option. In the last year of the second step of secondary school, the students take an exam called the BAC, considered as the first degree of university, which is compulsory for registration at university.
Higher education in Burkina-Faso began to be structured in the 1960s, with the country's accession to independence, through the establishment of institutes and schools of higher education. In Burkina-Faso there are 4 national universities, 8 private universities and about 65 private higher education institutions. The oldest university is the University of Ouagadougou, established in 1974 and renamed in December 2015 as Universit\'e Ouaga I Pr Joseph Ki-Zerbo. The paper is organized as follows. In section \ref{sec1}, we briefly describe the mathematics programmes in Burkina-Faso, starting with the general education system. Then in section \ref{sec2}, we list the African mathematicians native of Burkina-Faso. Most of them are faculty members in Burkina-Faso and the rest are faculty members in universities abroad. \section{Mathematics in Burkina-Faso}\label{sec1} Long before the colonial penetration, mathematics existed in Burkina Faso, although not in a formal way. It was applied in everyday life: for example, the counting system in dialects, the construction of traditional houses such as round huts, the decoration of traditional houses with geometric figures, the making of masks, etc. Then there was some evolution with the introduction of the western-style education system. Since then mathematics has enjoyed a certain prestige in Burkina-Faso. The teaching style and the programmes were quite similar to the ones in France until the 1980s. Since then, different approaches, initiated in Africa and supported by France, have proposed slightly different programmes. Let us briefly describe the mathematics programmes in the formal education system, the details being given in \cite{laure}. \begin{itemize} \item Mathematics begins at the preschool/kindergarten, where the pupil learns to count, do activities and acquire skills that will be extended in future mathematical learning, such as the introduction of calculations, subtraction and division in primary school. In Burkina Faso, the teaching of geometry begins as early as kindergarten. At this stage, the child must be able to recognize and classify certain objects by referring to their shapes. \item Arithmetic, geometry and the metric system are the sub-disciplines of mathematics taught in primary school. \item At the first step of secondary school, which lasts 4 years, the programmes focus on the basics of pure geometry, arithmetic, vector calculus and algebra. \item At the second step of secondary school, which lasts 3 years, the mathematics programmes depend on the option, which varies among scientific studies, literature studies and technical studies. For the scientific option, the programmes of the first year of this level are based on numerical functions of a real variable, equations and inequations in $\mathbb{R}$, vectors of the plane, geometry in the plane, transformations in the plane, geometry in space and statistics. In the second year of the scientific option, the programmes are based on algebraic and numerical problems, numerical sequences, numerical functions, angles and trigonometry, transformations in the plane, geometry in space, statistics, enumerations. In the last year the programmes of mathematics are about arithmetic, probabilities, complex numbers, numerical sequences, numerical functions, integral calculation, planar curves, vector calculation and configuration (plane and space), transformations and configurations.
\item Before 1974, the year of the creation of the University of Ouagadougou, and later the creation of the Institute of Mathematics and Physics (IMP), the first students in mathematics were trained abroad, mainly in France. Although some had begun their studies of mathematics in Africa, particularly in Dakar and Abidjan, where the biggest African universities were located in the 1960s-1970s, the first mathematicians completed their theses in France. In Burkina Faso, only the University of Ouagadougou has a complete training in mathematics. The current programme is the Licence-Master-Doctorate (LMD) programme, which started in 2009. More details about the LMD programme in mathematics are given in \cite{laure}. \end{itemize} There are 4 research laboratories in mathematics: \begin{itemize} \item Le laboratoire d'analyse num\'erique, d'informatique et de biomath\'ematiques (LANIBIO)\\ Director: Professor Blaise Som\'e \item Le laboratoire de math\'ematiques et informatique (LAMI)\\ Director: Professor Hamidou Tour\'e \item Le laboratoire de th\'eorie des nombres, alg\`ebre, g\'eom\'etrie alg\'ebrique, topologie alg\'ebrique (TN-AGATA)\\ Director: Professor Moussa Ouattara \item Le Laboratoire de math\'ematiques, Universit\'e Polytechnique de Bobo-Dioulasso\\ Director: Professor Marie Yves Th\'eodore Tabsoba \end{itemize} \section{African mathematicians native of Burkina-Faso}\label{sec2} \begin{enumerate} \item Albert OUEDRAOGO (Male)\\ 1969: Doctorat de 3\`eme cycle\\ Title: Probl\`eme inverse de la diffusion et g\'en\'eralisation de l'\'equation de Marchenko (Inverse problem of diffusion and generalisation of Marchenko's equation).\\ Universit\'e Pierre et Marie Curie (Paris 6), France.\\ Directeur: J. L. Destouches\\ 1981: Doctorat d'Etat\\ Title: Contr\^oles ponctuels de syst\`emes elliptiques et paraboliques d'ordre 2m: application \`a un syst\`eme parabolique avec masse de Dirac (Punctual controls of elliptic and parabolic systems of order 2m).\\ Universit\'e Pierre et Marie Curie (Paris 6), France.\\ Directeur: Jacques Louis Lions \item Akry KOULIBALY (Male) [Deceased]\\ 1976: Doctorat de 3\`eme cycle\\ Title: Alg\`ebres de Malcev de basses dimensions (Malcev algebras of low dimensions).\\ Universit\'e de Montpellier 2, France.\\ Directeur: Artibano Micali\\ 1984: Doctorat d'Etat\\ Title: Contributions \`a la th\'eorie de Malcev (Contributions to the theory of Malcev algebras).\\ Universit\'e Montpellier 2, France.\\ Directeur: Artibano Micali \item Ousseynou NAKOULIMA (Male)\\ 1977: Doctorat de 3\`eme cycle\\ Title: Etude d'une in\'equation variationnelle bilat\'erale et d'un syst\`eme d'in\'equations quasi-variationnelles unilat\'erales associ\'ees.\\ Universit\'e de Bordeaux 1, France.\\ 1981: Doctorat Sc. Math.\\ Title: In\'equations variationnelles et in\'equations quasivariationnelles bilat\'erales associ\'ees \`a des probl\`emes de jeux stochastiques \`a somme nulle ou non nulle, Universit\'e de Bordeaux 1, France. \item G\'erard KIENTEGA (Male)\\ 1980: Doctorat de 3\`eme cycle\\ Title: Sur les corps alg\'ebriques du quatri\`eme degr\'e.\\ Universit\'e Pierre et Marie Curie (Paris 6), France.
\\ Directeur: Pierre Barrucand\\ 1992: PhD\\ Title: M\'etriques g\'en\'eralis\'ees et alg\`ebres affinement compl\`etes.\\ Universit\'e de Montr\'eal, Canada.\\ Directeur: Ivo Rosenberg \item Aboubakary Seynou (Male)\\ 1981: Doctorat de 3\`eme cycle\\ Title: Compatibilit\'e temporelle, Universit\'e Louis Pasteur de Strasbourg, France.\\ Directeur: Claude Dellacherie \item Alfred TOURE (Male)\\ 1981: Doctorat de 3\`eme cycle\\ Title: Divers aspects des connexions conformes.\\ Universit\'e Pierre et Marie Curie (Paris 6), France.\\ Directeur: Jacqueline Lelong Ferrand\\ 1993: Doctorat Unique\\ Title: G\'eom\'etrie diff\'erentielle de certains fibr\'es unitaires.\\ Universit\'e de Montpellier 2, France.\\ Directeur: Jacques Lafontaine \item Hamidou TOURE (Male)\\ 1982: Doctorat de 3\`eme cycle\\ Title: Sur l'\'equation g\'en\'erale par la th\'eorie des semi-groupes non lin\'eaires dans L1 (Non linear semi-group theory in L1 for a general equation).\\ Universit\'e de Franche Comt\'e, Besan\c{c}on, France.\\ Directeur: Philippe Benilan\\ 1994: Doctorat Unique\\ Title: Etude de probl\`emes fortement d\'eg\'en\'er\'es en une dimension d'espace (Study of strongly degenerate parabolic problems in one space dimension).\\ Universit\'e de Franche Comt\'e, Besan\c{c}on, France.\\ Directeur: Philippe Benilan\\ 1995: Doctorat d'Etat\\ Title: Etude de probl\`emes paraboliques hyperboliques non lin\'eaires (On nonlinear hyperbolic parabolic problems).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeurs: Philippe Benilan, Albert Ouedraogo \item Dembo GADIAGA (Male)\\ 1982: Doctorat de 3\`eme cycle\\ Title: Sur une classe de tests qui contient le test $\chi^2$: le cas d'un processus stationnaire (On a class of tests which contains the $\chi^2$-test for a stationary process).\\ Universit\'e de Lille 1, France.\\ Directeur: Denis Bosq\\ 2003: Doctorat d'Etat\\ Title: Test fonctionnel d'ajustement et de non influence pour des variables al\'eatoires d\'ependantes (Functional tests and no effects hypothesis for dependent random variables).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeurs: Denis Bosq, Albert Ouedraogo \item Sabeko Marcel BONKIAN (Male)\\ 1983: Doctorat de 3\`eme cycle\\ Title: Contribution \`a l'\'etude des mesures al\'eatoires du second ordre (Contribution to the study of random measures of the second order).\\ Universit\'e des Sciences et Techniques de Lille 1, France.\\ Directeur: Pierre Jacob \item Ba Amidou Boubacar YOBI (Male)\\ 1983: Doctorat de 3\`eme cycle\\ Title: Contribution \`a l'\'etude des diagrammes de De Finetti (Contribution to the study of De Finetti diagrams).
\\ Universit\'e des Sciences et Techniques du Languedoc, Montpellier, France.\\ Directeur: Artibano Micali \item Blaise SOME (Male)\\ 1984: Doctorat de 3\`eme cycle\\ Title: Identification, contr\^ole optimal et optimisation dans les syst\`emes diff\'erentiels compartimentaux (Identification, optimal control and optimisation in compartmental differential systems).\\ Universit\'e Pierre et Marie Curie (Paris 6), France.\\ Directeur: Yves Cherruault\\ 1994: Doctorat d'Etat\\ Title: Algorithmiques num\'eriques et r\'esolution de probl\`emes de contr\^ole optimal et d'\'equations int\'egrales (Numerical algorithms and resolution of optimal control and integral equation problems).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Yves Cherruault \item Longin SOME (Male)\\ 1984: Doctorat de 3\`eme cycle\\ Title: Mise en oeuvre informatique de quelques m\'ethodes multigrilles dans le cadre de la m\'ethode des \'el\'ements finis (Implementation of some multigrid methods in the framework of the finite element method).\\ Universit\'e Pierre et Marie Curie (Paris 6), France.\\ Directeur: Pierre-Arnaud Raviart\\ 2007: Doctorat Unique\\ Title: M\'ethode de grille mobile sous la m\'ethode des lignes pour la r\'esolution num\'erique d'\'equations aux d\'eriv\'ees partielles mod\'elisant des ph\'enom\`enes \'evolutifs.\\ Universit\'e de Ouagadougou, Burkina-Faso et Facult\'e Polytechnique de Mons, Belgique.\\ Directeurs: Albert OUEDRAOGO, Philippe SAUCEZ \item Marie Yves Th\'eodore TABSOBA (Male)\\ 1987: Doctorat de 3\`eme cycle\\ Title: Complexit\'e de suites automatiques, Universit\'e Aix Marseille 2, France.\\ 1999: Doctorat d'Etat\\ Title: Contribution \`a l'\'etude des suites automatiques, Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeurs: G\'erard Rauzy et Akry Koulibaly \item Moussa OUATTARA (Male)\\ 1988: Doctorat Unique\\ Title: Alg\`ebre de Jordan et alg\`ebres g\'en\'etiques (Jordan algebras and genetic algebras).\\ Universit\'e de Montpellier 2, France. \\ Directeur: Artibano Micali\\ 1991: Doctorat d'Etat\\ Title: Alg\`ebres de la g\'en\'etique des populations (Algebras of population genetics).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeurs: Artibano Micali et Akry Koulibaly \item Kaka Bernard BONZI (Male)\\ 1990: Doctorat Unique\\ Title: Etude des \'equilibres thermiques d'un supraconducteur, existence et stabilit\'e. Universit\'e de Nancy, France.\\ Directeur: Lanchon-Ducauquis H\'el\`ene \item Kalifa TRAORE (Male)\\ 1990: Doctorat 3\`eme cycle\\ Title: Cohomologie des alg\`ebres de Malcev (Cohomology of Malcev algebras).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Akry Koulibaly\\ 2006: PhD\\ Title: Etudes des pratiques math\'ematiques d\'evelopp\'ees en contexte par les Siamous du Burkina Faso.\\ Universit\'e du Qu\'ebec \`a Montr\'eal, Canada.\\ Directeurs: Nadine Bednarz, Philippe Jonnaert. \item Bourama TONI (Male)\\ 1994: PhD\\ Title: Bifurcations de p\'eriodes critiques locales.\\ Universit\'e de Montr\'eal, Canada.\\ Directeur: Christiane Rousseau. \item Sado TRAORE (Male)\\ 1994: Doctorat Unique\\ Title: Approche variationnelle de la dualit\'e quasi convexe (Variational approach to quasi convex duality).\\ Universit\'e d'Avignon et des pays du Vaucluse, Avignon, France.\\ Directeur: Michel Volle.
\item Nakelgbamba Boukary PILABRE (Male)\\ 1995: Doctorat 3\`eme cycle\\ Title: Sur la Lie admissibilit\'e de la dupliqu\'ee non commutative d'une alg\`ebre (On the Lie admissibility of the noncommutative duplication of an algebra).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeur: Akry Koulibaly\\ 2011: Doctorat de l'Universit\'e de Ouagadougou.\\ Title: Dupliqu\'ee et quelques structures alg\'ebriques. \\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Moussa Ouattara \item C\^ome Jean Antoine BERE (Male)\\ 1997: Doctorat de 3\`eme cycle\\ Title: Superalg\`ebres de Malcev.\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeur: Akry Koulibaly \item Pierre Clovis NITIEMA (Male)\\ 1998: PhD\\ Title: Approximation des puissances sectionn\'ees et des classes de fonctions \`a l'aide de polyn\^omes alg\'ebriques.\\ Universit\'e d'Etat de Dniepropetrovsk, Facult\'e de M\'ecanique et de Math\'ematiques, Ukraine.\\ Directeur: Motornyi Vitaly Pavlovitch. \item Mamadou SANGO (Male)\\ 1998: PhD (Th\`ese Unique)\\ Title: Valeurs propres et vecteurs propres de probl\`emes elliptiques nonautoadjoints avec un poids ind\'efini.\\ Universit\'e de Valenciennes et du Hainaut-Cambr\'esis, France.\\ Directeur: Serge Nicaise \item Sibiri TRAORE (Male)\\ 1998: PhD\\ Title: Conception d'une architecture neuronique de grande taille inspir\'ee de l'anatomie et de la physiologie du cerveau humain pour la commande de robots.\\ Universit\'e Laval, Canada.\\ Directeur: Michel Guillot \item Joseph BAYARA (Male)\\ 1999: Doctorat 3\`eme cycle\\ Title: Sur les alg\`ebres d'\'evolution (On evolution algebras).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Moussa Ouattara\\ 2013: Doctorat de l'Universit\'e de Ouagadougou\\ Title: Alg\`ebres train et d\'erivations dans les alg\`ebres associatives.\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Moussa Ouattara \item Marie Fran\c{c}oise OUEDRAOGO (Female)\\ 1999: Doctorat 3\`eme cycle\\ Title: Sur les superalg\`ebres triples de Lie (On Lie Triple superalgebras).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Akry Koulibaly\\ 2009: Doctorat de l'Universit\'e Blaise Pascal de Clermont-Ferrand\\ Title: Extension of the canonical trace and associated determinants.\\ Universit\'e Blaise Pascal de Clermont-Ferrand, France.\\ Directeurs: Sylvie Paycha et Akry Koulibaly \item Andr\'e CONSEIBO (Male)\\ 2001: Doctorat 3\`eme cycle \\ Title: Alg\`ebre de Bernstein monog\`ene et \'equation diff\'erentielle (Monogenic Bernstein algebra and differential equation).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Moussa Ouattara\\ 2013: Doctorat Unique\\ Title: Alg\`ebres train de degr\'e 2 et d'exposant 3 (Train algebras of degree 2 and exponent 3).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Moussa Ouattara \item Stanislas OUARO (Male)\\ 2001: Doctorat Unique \\ Title: Etude de probl\`emes elliptiques-paraboliques non lin\'eaires en une dimension d'espace (On non-linear elliptic parabolic problems in one space dimension).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Hamidou Tour\'e \item Lucie Patricia ZOUNGRANA (Female)\\ 2001: Doctorat Unique\\ Title: Sur les sous boucles de Cartan d'une boucle homog\`ene et les sous alg\`ebres de Cartan d'une alg\`ebre triple de Lie [On Cartan subloops of a homogeneous loop and Cartan subalgebras of a Lie triple algebra].\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Akry Koulibaly \item Mahamadi Jacob WARMA (Male)\\ 2002: PhD\\ Title: The Laplacian with
Robin boundary conditions on arbitrary domains.\\ University of Ulm, Germany.\\ Directeur: Wolfgang Arendt. \item Oumar TRAORE (Male)\\ 2002: Doctorat Unique\\ Title: Contr\^ole de probl\`emes dynamiques de la population (Control in population dynamics problems).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Albert Ouedraogo. \item Idrissa KABORE (Male)\\ 2004: Doctorat Unique\\ Title: Combinatoire des mots et caract\'erisation de certaines classes.\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Marie Yves Th\'eodore Tapsoba. \item Somdouda SAWADOGO (Male)\\ 2005: Doctorat Unique\\ Title: Contr\^olabilit\'e de syst\`emes dynamiques \`a deux temps. Application \`a la th\'eorie des sentinelles (Controllability of two-time dynamical systems. Application to the sentinel theory). \\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeurs: Ousseynou Nakoulima et Albert Ouedraogo \item Ousseni SO (Male)\\ 2005: Doctorat Unique\\ Title: Mod\'elisation Math\'ematique et simulation num\'erique de la physiologie et du contr\^ole optimal des \'echanges gazeux respiratoires chez l'homme (Mathematics modeling and numerical simulation of physiology and optimal control of the human respiratory system).\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Blaise Som\'e \item Laure GOUBA (Female)\\ 2005: PhD \\ Title: Th\'eories de Jauge Ab\'eliennes Scalaires et Spinorielles \`a 1+1 Dimensions: une Etude non Perturbative.\\ Institut de Math\'ematiques et de Sciences Physiques (IMSP), Porto-Novo, Universit\'e d'Abomey Calavi (UAC), R\'epublique du B\'enin.\\ Directeurs: Jan Govaerts, Norbert Mahouton Hounkonnou \item Balira Ousmane KONFE (Male)\\ 2005: Th\`ese Unique\\ Title: Optimisation globale: M\'ethode Alienor.\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Blaise Som\'e \item Genevi\`eve BARRO (Female)\\ 2005: Th\`ese Unique\\ Title: Contribution \`a la r\'esolution num\'erique de quelques probl\`emes de r\'eactions diffusion non lin\'eaires.\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeurs: Benjamin Mampassi, Blaise Som\'e. \item Mikailou COMPAORE (Male)\\ 2007: Doctorat Unique \\ Title: Sur les d\'eformations infinit\'esimales des flots et la g\'eom\'etrie diff\'erentielle des fibres unitaires de certains espaces sym\'etriques de rang 1.\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Edmond Fedida \item W. Jacob YOUGBARE (Male)\\ 2007: Th\`ese Unique \\ Title: Data envelopment analysis/ M\'ethodologie, th\'eorie et relation avec l'optimisation multicrit\`eres: application aux syst\`emes de l'enseignement de base, de la sant\'e et de quelques entreprises du Burkina-Faso.\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeur: Blaise Som\'e \item Jean de Dieu ZABSONRE (Male)\\ 2008: Doctorat Unique\\ Title: Mod\`eles visqueux en s\'edimentation et stratification (obtention formelle, stabilit\'e th\'eorique et sch\'emas volumes finis bien \'equilibr\'es).\\ Universit\'e de Savoie, France.\\ Directeurs: Hamidou Tour\'e, Didier Bresch, E. Fernandez-Nieto. \item Aboudramane GUIRO (Male)\\ 2009: Doctorat Unique\\ Title: Sur quelques probl\`emes d'observateurs: Applications \`a certains mod\`eles d'\'ecosyst\`eme aquatique.\\ Universit\'e de Ouagadougou, Burkina-Faso.\\ Directeurs: Hamidou Tour\'e et Abderrahman IGGIDR.
\item Adama OUEDRAOGO (Male)\\ 2009: Doctorat Unique\\ Title: Solutions renormalis\'ees pour des probl\`emes paraboliques fortement d\'eg\'en\'er\'es: cas isotrope et non isotrope (Renormalized solutions for strongly degenerate parabolic problems: isotropic and non-isotropic cases).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Hamidou Tour\'e et Mohamed Maliki \item Issa ZABSONRE (Male)\\ 2009: Doctorat Unique\\ Title: Contributions \`a l'\'etude d'une classe d'\'equations int\'egro-diff\'erentielles \`a retard en dimension infinie (Contributions to the study of a class of delay integro-differential equations in infinite dimension).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Hamidou Tour\'e et Khalil Ezzinbi. \item Gilbert BAYILI (Male)\\ 2009: Doctorat Unique\\ Title: Contr\^ole des coefficients de singularit\'es et contr\^olabilit\'e exacte dans un domaine polygonal avec fissures (Control of the singularity coefficients and exact controllability in a polygonal domain with cracks).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Hamidou Tour\'e et Mary Teuw Niane. \item Pascal ZONGO (Male)\\ 2009: Th\`ese Unique\\ Title: Mod\'elisation math\'ematique de la dynamique de transmission du paludisme (Mathematical modeling of the transmission dynamics of malaria).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeur: Blaise Som\'e \item Seydou Eric TRAORE (Male)\\ 2010: Doctorat Unique\\ Title: Ensemble et Syst\`emes Flous et Applications \`a Quelques Mod\`eles de D\'ecisions en Sciences Environnementales (Fuzzy sets and systems and applications to some decision models in environmental sciences).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Hamidou Tour\'e et Akry Koulibaly \item Hermann Wendpayande Baudoin SORE (Male)\\ 2010: Doctorat de l'Universit\'e de Hambourg\\ Title: The Dold-Kan correspondence and coalgebra structures.\\ Universit\"at Hamburg, Germany.\\ Directeur: Birgit Richter. \item Youssouf PARE (Male)\\ 2010: Th\`ese Unique\\ Title: R\'esolution de quelques \'equations fonctionnelles par la m\'ethode num\'erique SBA (Solution of some functional equations by the SBA numerical method).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeur: Blaise Som\'e \item Elis\'ee GOUBA (Male)\\ 2010: Th\`ese Unique\\ Title: Identification de param\`etres dans les syst\`emes distribu\'es \`a donn\'ees manquantes. Mod\`eles math\'ematiques de la performance en sport (Parameter identification in distributed systems with missing data. Mathematical models of performance in sport).\\ Universit\'e de Ouagadougou, Burkina Faso et Universit\'e des Antilles et de la Guyane, France.\\ Directeurs: Olivier Hue, Ousseynou Nakoulima et Blaise Som\'e \item Diakarya BARRO (Male)\\ 2010: Th\`ese Unique\\ Title: Contributions \`a la mod\'elisation statistique des valeurs extr\^emes multivari\'ees (Contributions to the statistical modeling of multivariate extreme values).\\ Laboratoire LANIBIO, UFR-SEA, Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Dossou-Gb\'et\'e Simplice et Blaise Som\'e \item Safimba SOMA (Male)\\ 2011: Doctorat Unique\\ Title: Etude de probl\`emes elliptiques non lin\'eaires avec des donn\'ees mesures (Study of nonlinear elliptic problems with measure data).
\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Stanislas Ouaro et Nouredine Igbida \item Blaise KONE (Male)\\ 2011: Doctorat Unique\\ Title: Etudes de probl\`emes anisotropiques non lin\'eaires (Studies of nonlinear anisotropic problems).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeur: Stanislas Ouaro \item Isma\"el NYANKINI (Male)\\ 2012: Doctorat Unique\\ Title: Etude de probl\`emes elliptiques non lin\'eaires sous des conditions assez g\'en\'erales sur les donn\'ees (Study of nonlinear elliptic problems under fairly general conditions on the data).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeur: Stanislas Ouaro \item Olivier SAWADOGO (Male)\\ 2012: Th\`ese Unique\\ Title: Mod\'elisation hydrog\'eologique: Ecoulement en milieux poreux, fractur\'es, probl\`eme inverse et transport de polluants (Hydrogeological modeling: flow in porous and fractured media, inverse problem and pollutant transport).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeur: Blaise Som\'e \item Malicki ZOROM (Male)\\ 2012: Th\`ese Unique\\ Title: Mod\'elisation compartimentale: dynamique de la vuln\'erabilit\'e socio-\'economique des ruraux sah\'eliens \`a la variabilit\'e climatique et contr\^ole optimal d'un mod\`ele de type metapopulation du paludisme (Compartmental modeling: dynamics of the socio-economic vulnerability of rural Sahelians to climate variability, and optimal control of a metapopulation-type malaria model).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeur: Blaise Som\'e \item Boureima SANGARE (Male)\\ 2012: Th\`ese Unique\\ Title: Adaptation dynamique de maillage pour la r\'esolution des \'equations aux d\'eriv\'ees partielles \'evolutives (Dynamic mesh adaptation for the solution of evolutionary partial differential equations).\\ Universit\'e de Ouagadougou, Burkina Faso et Universit\'e des Sciences, des Techniques et des Technologies de Bamako, Mali.\\ Directeurs: Ouateni Diallo et Longin Som\'e \item Moumini KERE (Male)\\ 2012: Th\`ese Unique\\ Title: Contr\^olabilit\'e simultan\'ee et application \`a un probl\`eme d'identification simultan\'ee de param\`etre (Simultaneous controllability and application to a simultaneous parameter identification problem).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Ousseynou Nakoulima et Blaise Som\'e \item Sadou TAO (Male)\\ 2012: Th\`ese Unique\\ Title: Contr\^olabilit\'e de syst\`emes paraboliques et application aux sentinelles (Controllability of parabolic systems and application to sentinels).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Ousseynou Nakoulima et Blaise Som\'e \item Victorien Fourtoua KONANE (Male)\\ 2013: Doctorat Unique\\ Title: Etude de Syst\`emes Dynamiques (Mod\'elisation de batteries non rechargeables) [Study of dynamical systems (modeling of non-rechargeable batteries)].\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Dembo Gadiaga et Ingemar Kaj \item Bila Adolphe KYELEM (Male)\\ 2013: Doctorat Unique\\ Title: Contribution \`a l'\'etude d'existence de solutions p\'eriodiques pour une classe de probl\`emes d'\'evolution \`a retard et applications (Contribution to the study of the existence of periodic solutions for a class of delay evolution problems, with applications).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Stanislas Ouaro et Khalil Ezzinbi \item Ibrahim NONKANE (Male)\\ 2013: Doctorat de l'Universit\'e de Ouagadougou\\ Title: G\'eom\'etrie des modules et des modules diff\'erentiels associ\'es aux repr\'esentations du groupe sym\'etrique (Geometry of the modules and differential modules associated with representations of the symmetric group).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Rikard Bogvad et G\'erard Kientega \item Duni Yegbonoma Fr\'ed\'eric ZONGO (Male)\\ 2013: Doctorat Unique\\ Title: Etude de probl\`emes anisotropiques non lin\'eaires et d'\'equations quasi relativistes de type Choquard (Study of nonlinear anisotropic problems and of quasi-relativistic equations of Choquard type).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Stanislas Ouaro et Michael Melgaard \item Francis BASSONO (Male)\\ 2013: Th\`ese Unique\\ Title: Etude de quelques \'equations fonctionnelles par les m\'ethodes SBA, d\'ecompositionnelle d'Adomian et des perturbations (Study of some functional equations by the SBA, Adomian decomposition and perturbation methods).
\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Gabriel Bissanga et Blaise Som\'e \item Dalomi BAHAN (Male)\\ 2013: Th\`ese Unique\\ Title: M\'ethode Ali\'enor pour la r\'esolution des probl\`emes d'optimisation quadratiques en variables binaires: applications et complexit\'e (The Alienor method for solving quadratic optimization problems in binary variables: applications and complexity).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeur: Blaise Som\'e \item Kounhinir SOME (Male)\\ 2013: Th\`ese Unique\\ Title: Nouvelle m\'etaheuristique bas\'ee sur la m\'ethode Ali\'enor pour la r\'esolution des probl\`emes d'optimisation multi-objectif: th\'eorie et applications (A new metaheuristic based on the Alienor method for solving multi-objective optimization problems: theory and applications).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeurs: Blaise Som\'e et Berthold Ulungu \item Arouna OUEDRAOGO (Male)\\ 2014: Doctorat Unique\\ Title: Etude de probl\`emes elliptiques et paraboliques non lin\'eaires (Study of nonlinear elliptic and parabolic problems).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeur: Stanislas Ouaro \item Salifou NIKIEMA (Male)\\ 2015: Doctorat de l'Universit\'e de Ouagadougou\\ Title: Polyn\^omes \`a coefficients entiers: hauteurs et bornes pour les facteurs (Polynomials with integer coefficients: heights and bounds for the factors).\\ Universit\'e de Ouagadougou, Burkina Faso.\\ Directeur: G\'erard Kientega \end{enumerate}
\section{Introduction} In the Influence Maximization (IM) problem, we are given a social network, a stochastic model for diffusion of influence over the network, and a budget $k$, and we ask to find a set of $k$ nodes, called \emph{seeds}, that maximize their \emph{spread of influence}, which is the expected number of nodes reached by a cascade of influence diffusion generated by the seeds according to the given diffusion model. One of the most studied models for influence diffusion is the Independent Cascade (IC), where each edge is associated with an independent probability of transmitting influence from the source node to the target node. In the IC model the spread of influence is a monotone submodular function of the seed set; therefore, a greedy algorithm guarantees a $1-\frac{1}{e}$ approximation factor for the IM problem~\cite{Kempe2015a}. Since its definition by Domingos and Richardson~\cite{Domingos2001,Richardson2002} and its formalization as an optimization problem by Kempe et al.~\cite{Kempe2003,Kempe2015a}, the IM problem and its variants have been extensively investigated, motivated by applications in viral marketing~\cite{Chen10}, adoption of technological innovations~\cite{Goldberg2013}, and outbreak or failure detection~\cite{Leskovec2007}. See~\cite{DBLP:series/synthesis/2013Chen,Li2018} for surveys on the IM problem. Recently, Golovin and Krause~\cite{Golovin2011a} initiated the study of the IM problem under the framework of adaptive optimization, where, instead of selecting all the seeds at once at the beginning of the process, we can select one seed at a time and observe, to some extent, the portion of the network reached by each newly selected seed. The advantage is that the decision on the next seed to choose can be based on the observed spread of previously selected seeds, usually called \emph{feedback}. Two main feedback models have been introduced: in the \emph{full-adoption} feedback the whole spread from each seed can be observed, while in the \emph{myopic} feedback one can only observe the direct neighbors of each seed. Golovin and Krause considered the Independent Cascade model and showed that, under full-adoption feedback, the objective function satisfies the property of \emph{adaptive submodularity} (introduced in the same paper) and therefore a greedy algorithm achieves a $1-\frac{1}{e}$ approximation for the adaptive IM problem. They also conjectured that there exists a constant factor approximation algorithm for the myopic feedback model, which was indeed found by Peng and Chen~\cite{Peng2019}, who proposed a $\frac{1}{4}\left(1-\frac{1}{e}\right)$-approximation algorithm. However, the approximation ratio for the adaptive IM problem, which compares a given adaptive algorithm with an optimal adaptive one, does not measure the benefits of implementing adaptive policies over non-adaptive ones. To this aim, Chen and Peng~\cite{Chen2019,Peng2019} introduced the adaptivity gap, which is the supremum, over all possible inputs, of the ratio between the spread of an optimal adaptive policy and that of an optimal non-adaptive one. In~\cite{Peng2019}, Peng and Chen considered the independent cascade model with myopic feedback and showed that the adaptivity gap is between $\frac{e}{e-1}$ and 4 for any graph. In~\cite{Chen2019}, the same authors showed some upper and lower bounds on the adaptivity gap in the case of full-adoption feedback, still under the independent cascade model, for some particular graph classes.
Specifically, they showed that the adaptivity gap is in the interval $\left[ \frac{e}{e-1}, \frac{2e}{e-1}\right]$ for in-arborescences and it is in the interval $\left[ \frac{e}{e-1},2\right]$ for out-arborescences. Moreover, it is equal to $\frac{e}{e-1}$ in one-directional bipartite graphs. In order to show these bounds, they followed an approach introduced by Asadpour and Nazerzadeh~\cite{Asadpour16} which consists in transforming an adaptive policy into a non-adaptive one by means of multilinear extensions, and constructing a Poisson process to relate the influence spread of the non-adaptive policy to that of the adaptive one. For general graphs and full-adoption feedback, the only known upper bounds on the adaptivity gap are linear in the size of the graph and can be trivially derived. In this paper, we consider the independent cascade model with full-adoption feedback, and show the first sub-linear upper bound on the adaptivity gap that holds for general graphs. In detail, we show that the adaptivity gap is at most $\lceil n^{1/3}\rceil$, where $n$ is the number of nodes in the graph. Moreover, we tighten the upper bound on the adaptivity gap for in-arborescences by showing that it is at most $\frac{2e^2}{e^2-1}<\frac{2e}{e-1}$. Using similar techniques, we study the adaptivity gap of \emph{$\alpha$-bounded graphs}, which is the class of undirected graphs where the sum of the node degrees higher than two is at most $\alpha$. We show that the adaptivity gap is upper-bounded by $\sqrt{\alpha}+O(1)$, which is smaller than $O(n^{1/3})$ for several graph classes. In $0$-bounded graphs, i.e., undirected graphs in which each connected component is a path or a cycle, the adaptivity gap is at most $\frac{3e^3}{e^3-1}$. To prove our bounds, we introduce new techniques to connect adaptive policies with non-adaptive ones that may be of independent interest. \subsection*{Related Work} \subparagraph*{Influence Maximization.} Several studies based on general graphs \cite{Lowalekar2016, 7403743,Schoenebeck2019, Tang2014} have been conducted since the seminal paper by Kempe et al.~\cite{Kempe2015a}. Schoenebeck and Tao~\cite{Schoenebeck2019} study the influence maximization problem on undirected graphs and prove that it is APX-hard under both the independent cascade and the linear threshold models. Borgs et al.~\cite{Borgs2014} propose an efficient algorithm that runs in quasilinear time and still guarantees an approximation factor of $1-\frac{1}{e}-\epsilon$, for any $\epsilon>0$. Tang et al.~\cite{Tang2014} propose an algorithm which is experimentally close to the optimal one under the independent cascade model. Mihara et al.~\cite{7403743} consider unknown graphs for the influence maximization problem and devise an algorithm which achieves a fraction between 0.6 and 0.9 of the influence spread with minimal knowledge of the graph topology. Extensive literature reviews on influence maximization and its machinery are provided by Chen et al.~\cite{DBLP:series/synthesis/2013Chen} and Li et al.~\cite{Li2018}. Several works on the adaptive influence maximization problem~\cite{Han2018a,Sun2018, Tang2019, Tong2019, Tong2017, DBLP:journals/corr/VaswaniL16, Yuan2017} evolved after the concept introduced by Golovin and Krause~\cite{Golovin2011a}, and explore adaptive optimization under different feedback models. The myopic model (in which one can only observe the nodes influenced by the seed nodes) has been studied in~\cite{Peng2019,Salha2018}.
Sun et al.~\cite{Sun2018} capture the scenario in which, instead of considering one round, the diffusion process takes place over $T$ rounds, and a seed set of at most $k$ nodes is selected at each round. The authors design a greedy approximation algorithm that guarantees a constant approximation ratio. Tong and Wang~\cite{Tong2015} introduce a new version of the adaptive influence maximization problem by adding a time constraint. Besides the classic full-adoption and myopic feedback models, Yuan and Tang~\cite{Yuan2017} and Tong and Wang~\cite{Tong2019} have also introduced different feedback models that use different parameters to circumvent the need for submodularity in guaranteeing a good approximation. Han et al.~\cite{Han2018a} propose a framework which uses existing non-adaptive techniques to construct a strong approximation for a generalization of the adaptive influence maximization problem in which a batch of nodes is selected at each step. \subparagraph*{Adaptivity Gaps.} Adaptivity gaps for the problem of maximizing stochastic monotone submodular functions have been studied by Asadpour and Nazerzadeh~\cite{Asadpour16}. A series of works studied adaptivity gaps for a two-step adaptive influence maximization problem~\cite{Badanidiyuru2016,Rubinstein2015, Seeman2013, Singer2016}. Gupta et al.~\cite{Jiang2020, Gupta2017} worked on the adaptivity gaps for stochastic probing. A recent line of studies~\cite{Chen2019, Chen2019a, Peng2019} focuses on bounding the adaptivity gap for different graph classes under the classical feedback models. Peng and Chen~\cite{Peng2019} confirmed a conjecture of Golovin and Krause~\cite{Golovin2011a}, which states that the adaptive greedy algorithm with myopic feedback is a constant approximation of the adaptive optimal solution. They show that the adaptivity gap of the independent cascade model with myopic feedback belongs to $[\frac{e}{e-1}, 4]$. Chen et al.~\cite{Chen2019a} introduced the greedy adaptivity gap, which compares the performance of the adaptive and the non-adaptive greedy algorithms. They show that the infimum of the greedy adaptivity gap is $1-\frac{1}{e}$ for every combination of diffusion and feedback models. The work most closely related to our results is that of Chen and Peng~\cite{Chen2019}, who derive upper and lower bounds on the adaptivity gap under the independent cascade model with full-adoption feedback, when the considered graphs are in-arborescences, out-arborescences, or one-directional bipartite graphs. In particular, they show that the adaptivity gaps of in-arborescences and out-arborescences are in the intervals $\left[ \frac{e}{e-1}, \frac{2e}{e-1}\right]$ and $\left[ \frac{e}{e-1}, 2\right]$, respectively, and they provide a tight bound of $\frac{e}{e-1}$ on the adaptivity gap of one-directional bipartite graphs. \subsection*{Organization of the Paper} In Section \ref{sec_prel} we give the preliminary definitions and notation on which this work is based. Sections~\ref{sec_inarb}--\ref{sec_other} are devoted to the main technical contributions of the paper (i.e., adaptivity gaps of in-arborescences, general graphs, and $\alpha$-bounded graphs). In Section \ref{sec_future}, we highlight some future research directions. Due to space constraints, some proofs are deferred to the appendix. \section{Preliminaries}\label{sec_prel} For two integers $h$ and $k$, $h\leq k$, let $[k]_h:=\{h,h+1,\ldots, k\}$ and $[k]:=[k]_1$.
\subparagraph*{Independent Cascade Model.} In the {\em independent cascade model} (IC), we have an {\em influence graph} $G=(V,E,(p_{uv})_{(u,v)\in E})$, where $p_{uv}\in [0,1]$ is an {\em activation probability} associated with each edge $(u,v)\in E$. Given a set of {\em seed nodes} $S\subseteq V$ which are initially \emph{active}, the diffusion process in the IC model is defined in $t\geq 0$ discrete steps as follows: (i) let $A_t$ be the set of nodes which are activated at step $t\geq 0$; (ii) $A_0:=S$; (iii) given a step $t\geq 0$, for any edge $(u,v)$ such that $u\in A_t$, node $u$ can activate node $v$ with probability $p_{uv}$ independently of any other node, and, in case of success, $v$ is included in $A_{t+1}$; (iv) the diffusion process ends at a step $r\geq 0$ such that $A_{r}=\emptyset$, i.e., no further node can be activated, and $\bigcup_{t\leq r} {A_t}$ is the {\em influence spread}, i.e., the set of nodes activated/reached by the diffusion process. The above diffusion process can be equivalently defined as follows. The {\em live-edge graph} $L=(V,L(E))$ of $G$ is a random graph obtained from $G$, where $L(E)\subseteq E$ is a subset of edges such that each edge $(u,v)\in E$ is included in $L(E)$ with probability $p_{uv}$, independently of the other edges. Given a live-edge graph $L$, let $R(S,L):=\{v\in V:\text{ there exists a path from $u$ to $v$ in $L$ for some $u\in S$}\}$, i.e., the set of nodes reached by nodes in $S$ in the live-edge graph $L$. Informally, if $S$ is the set of seed nodes and $L$ is a live-edge graph, $R(S,L)$ equivalently denotes the set of nodes which are reached/activated by the above diffusion process. Given a set of seed nodes $S$, the {\em expected influence spread} of $S$ is defined as $\sigma(S):=\mathbb{E}_L[|R(S,L)|]$. \subparagraph*{Non-adaptive Influence Maximization.} The {\em non-adaptive influence maximization problem under the IC model} is the computational problem that, given an influence graph $G$ and an integer $k\geq 1$, asks to find a set of seed nodes $S\subseteq V$ with $|S|=k$ such that $\sigma(S)$ is maximized. \subparagraph*{Adaptive Influence Maximization.} In contrast to the non-adaptive setting, in which all the seed nodes are activated at the beginning and then the influence spread is observed, an {\em adaptive policy} activates the seeds sequentially in $k$ steps, one seed node at each step, and the decision on the next seed node to select is based on the feedback resulting from the observed spread of previously selected nodes. The feedback model considered in this work is {\em full-adoption}: when a node is selected, the adaptive policy observes its entire influence spread. An adaptive policy under the full-adoption feedback model is formally defined as follows. Given a live-edge graph $L$, the {\em realisation} $\phi_L:V\rightarrow 2^V$ associated with $L$ assigns to each node $v\in V$ the value $\phi_L(v):=R(\{v\},L)$, i.e., the set of nodes activated by $v$ under a live-edge graph $L$. Given a set $S\subseteq V$, a {\em partial realisation} $\psi:S\rightarrow 2^V$ is the restriction to $S$ of some realisation, i.e., there exists a live-edge graph $L$ such that $\psi(v)=\phi_L(v)$ for any $v\in S$.
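Before proceeding with the formal machinery of adaptive policies, we note that the live-edge formulation also yields a simple way to estimate $\sigma(S)$ in practice: sample live-edge graphs and average $|R(S,L)|$. The following Python sketch is an illustration only; the graph representation and the function names are our own choices and are not part of the formal development.
\begin{verbatim}
import random

def sample_spread(nodes, probs, seeds):
    """One sample: draw a live-edge graph L and return |R(S, L)|.

    `probs` maps each directed edge (u, v) to its activation
    probability p_uv; each edge enters L(E) independently.
    """
    live = {u: [] for u in nodes}
    for (u, v), p in probs.items():
        if random.random() < p:      # edge (u, v) is kept in L(E)
            live[u].append(v)
    reached, stack = set(seeds), list(seeds)
    while stack:                     # graph search from the seed set S
        u = stack.pop()
        for v in live[u]:
            if v not in reached:
                reached.add(v)
                stack.append(v)
    return len(reached)              # this is |R(S, L)|

def estimate_sigma(nodes, probs, seeds, samples=10_000):
    """Monte Carlo estimate of sigma(S) = E_L[|R(S, L)|]."""
    total = sum(sample_spread(nodes, probs, seeds) for _ in range(samples))
    return total / samples
\end{verbatim}
Monte Carlo estimation of this kind is the standard way the spread is evaluated in practice; we now return to the formal development.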
Given a partial realisation $\psi:S\rightarrow 2^V$, let $dom(\psi):=S$, i.e., $dom(\psi)$ is the domain of the partial realisation $\psi$, let $R(\psi):=\bigcup_{v\in S}\psi(v)$, i.e., $R(\psi)$ is the set of nodes reached/activated by the diffusion process when the set of seed nodes is $S$, and let $f(\psi):=|R(\psi)|$. A partial realisation $\psi'$ is a {\em sub-realisation} of $\psi$ (or, equivalently, $\psi'\subseteq \psi$), if $dom(\psi')\subseteq dom(\psi)$ and $\psi'(v)=\psi(v)$ for any $v\in dom(\psi')$. We observe that a partial realisation $\psi$ can be equivalently represented as $\{(v,R(\{v\},L)):v\in dom(\psi)\}$ for some live-edge graph $L$. An adaptive policy $\pi$ takes as input a partial realisation $\psi$ and either returns a node $\pi(\psi)\in V$ and activates it as a seed, or interrupts the activation of new seed nodes, e.g., by returning a string $\pi(\psi):=STOP$. An adaptive policy $\pi$ can be run as in Algorithm \ref{ad_alg}. \begin{algorithm}[ht] \caption{Adaptive algorithm} \label{ad_alg} \begin{algorithmic}[1] \REQUIRE an influence graph $G$ and an adaptive policy $\pi$; \ENSURE a partial realisation; \STATE let $L$ be the live-edge graph; \STATE let $\psi:=\emptyset$ (i.e., $\psi$ is the empty partial realisation); \WHILE{$\pi(\psi)\neq STOP$} \STATE $v:=\pi(\psi)$; \STATE $\psi:=\psi\cup \{(v,R(\{v\},L))\}$; \ENDWHILE \RETURN $\psi_{\pi,L}:=\psi$; \end{algorithmic} \end{algorithm} The {\em expected influence spread} of an adaptive policy $\pi$ is defined as $\sigma(\pi):=\mathbb{E}_L[f(\psi_{\pi,L})]$, i.e., it is the expected value (taken over all the possible live-edge graphs) of the number of nodes reached by the diffusion process at the end of Algorithm \ref{ad_alg}. We say that $|\pi|=k$ if policy $\pi$ always returns a partial realisation $\psi_{\pi,L}$ with $|dom(\psi_{\pi,L})|=k$. The {\em adaptive influence maximization problem under the IC model} is the computational problem that, given an influence graph $G$ and an integer $k\geq 1$, asks to find an adaptive policy $\pi$ that maximizes the expected influence spread $\sigma(\pi)$ subject to the constraint $|\pi|=k$. \subparagraph*{Adaptivity gap.} Given an influence graph $G$ and an integer $k\geq 1$, let $OPT_N(G,k)$ (resp. $OPT_A(G,k)$) denote the optimal value of the non-adaptive (resp. adaptive) influence maximization problem with input $G$ and $k$. Given a class of influence graphs $\mathcal{G}$ and an integer $k\geq 1$, the {\em $k$-adaptivity gap} of $\mathcal G$ is defined as $$AG(\mathcal{G},k):=\sup_{G\in\mathcal{G}}\frac{OPT_A(G,k)}{OPT_N(G,k)},$$ and measures how much an adaptive policy outperforms a non-adaptive solution for the influence maximization problem applied to influence graphs in $\mathcal{G}$, when the maximum number of seed nodes is $k$. The {\em adaptivity gap} of $\mathcal{G}$ is defined as $AG(\mathcal{G}):=\sup_{k\geq 1}AG(\mathcal{G},k)$. We observe that for $k=1$ or $n\leq k$ the $k$-adaptivity gap is trivially equal to 1; thus, we omit such cases in the following. \section{Adaptivity Gap for In-arborescences}\label{sec_inarb} An {\em in-arborescence} is a graph $G=(V,E)$ that can be constructed from a rooted tree $T=(V,F)$ by adding to $E$ an edge $(v,u)$ if $u$ is the parent of $v$ in tree $T$. An upper bound of $\frac{2e}{e-1}\approx 3.16$ on the adaptivity gap of in-arborescences has been provided in \cite{Chen2019}. In this section, we provide an improved upper bound for such graphs.
\begin{theorem}\label{thm1} If $\mathcal{G}$ is the class of all the in-arborescences, then $$AG(\mathcal{G},k)\leq \frac{2}{1-(1-2/k)^k}\leq \frac{2e^2}{e^{2}-1}\approx 2.31,\ \forall k\geq 2.$$ \end{theorem} Let $G=(V=[n],E,(p_{uv})_{(u,v)\in E})$ be an in-arborescence, where $n>k$ is the number of nodes. To show the claim of Theorem \ref{thm1}, we need some preliminary notation and lemmas. Given a partial realisation $\psi$, and a node $i\in [n]$, let $$\Delta(i|\psi):=\mathbb{E}_L[f(\psi\cup \{(i,R(\{i\},L))\})-f(\psi)|\psi\subseteq \phi_L],$$ i.e., $\Delta(i|\psi)$ is the expected increment of the influence spread due to node $i$ when the observed partial realisation is $\psi$. We have the following claim (from \cite{Golovin2011a}), which holds even for general graphs and whose proof is straightforward. \begin{claim}[Adaptive Submodularity, \cite{Golovin2011a}]\label{lem0} Let $G$ be an arbitrary influence graph. For any partial realisations $\psi,\psi'$ of $G$ such that $\psi\subseteq \psi'$, and any node $i\notin R(\psi')$, we have that $\Delta(i|\psi')\leq \Delta(i|\psi)$. \end{claim} An adaptive policy $\pi$ is called {\em randomized} if, for any partial realisation $\psi$, node $\pi(\psi)$ is not selected deterministically (in general), but randomly (according to a probability distribution $p_{\psi}$ depending on $\psi$). Given a vector $\bm y=(y_1,\ldots, y_n)$ such that $y_i\in [0,1]$ for any $i\in [n]$, we say that $\mathbb{P}(\pi)=\bm y$ if the probability that each node $i$ belongs to $dom(\psi_{\pi,L})$ is $y_i$, where $\psi_{\pi,L}$ is the partial realisation returned by Algorithm \ref{ad_alg} with policy $\pi$. Let $OPT_A(G,\bm y)$ be the optimal expected influence spread $\sigma(\pi)$ over all the randomized adaptive policies $\pi$ such that $\mathbb{P}(\pi)=\bm y$.\footnote{We observe that, if $\bm y$ is arbitrary, a deterministic policy $\pi$ verifying $\mathbb{P}(\pi)=\bm y$ might not exist, and the introduction of randomization solves this issue.} Let $\pi^*$ be an optimal adaptive policy for the adaptive influence maximization problem (with $|\pi^*|=k$), and let $\bm x=(x_1,\ldots, x_n)$ be the vector such that $\mathbb{P}(\pi^*)=\bm x$. As $|\pi^*|=k$, we have that $\sum_{i\in [n]} x_i=k$. For any $t\in [k]_0$, let $S_t$ be the optimal set of $t$ seed nodes in the non-adaptive influence maximization problem, i.e., such that $OPT_N(G,t)=\mathbb{E}_L(|R(S_t,L)|)$. Let $\psi_{t,L}$ be the random variable denoting the sub-realisation of $\phi_L$ such that $dom(\psi_{t,L})=S_t$. Let $\rho$ be the random variable equal to node $i\in [n]$ with probability $x_i/k$. Observe that the above random variable is well-defined, as $\sum_{i\in [n]}(x_i/k)=k/k=1$. For any $t\in [k]$, let ${\psi}_{\rho,t,L}$ be the random variable denoting the sub-realisation of $\phi_L$ such that $dom({\psi}_{\rho,t,L})=S_{t-1}\cup\{\rho\}$. We observe that ${\psi}_{\rho,t,L}$ is the partial realisation coming from the following {\em hybrid non-adaptive policy}: initially, we activate the first $t-1$ seed nodes as in the optimal non-adaptive solution, guaranteeing an expected influence spread of $OPT_N(G,t-1)$; then, we randomly choose a node $v$ according to the random variable $\rho$ and we select $v$ as the $t$-th seed node (if not already selected as a seed). We use this hybrid non-adaptive policy as a main tool to obtain an improved upper bound on the adaptivity gap for in-arborescences.
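As a simple illustration of the construction (with hypothetical numbers), suppose that $k=2$ and that the optimal adaptive policy $\pi^*$ always selects node $1$ as its first seed and then, depending on the observed feedback, selects node $2$ or node $3$, each with probability $1/2$. Then $\bm x=(1,1/2,1/2,0,\ldots,0)$, and $\rho$ takes value $1$ with probability $1/2$ and each of the values $2$ and $3$ with probability $1/4$.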
In the following lemma, which holds even for general graphs, we relate the hybrid non-adaptive policy and the optimal non-adaptive solution to the optimal adaptive policy. The proof structure exhibits some similarities with Lemma 6 of \cite{Asadpour16} and Lemma 3.3 of \cite{Chen2019}; however, their approach relates non-adaptive policies, built on Poisson processes and multilinear extensions, to the optimal adaptive policy. \begin{lemma}\label{lem1} Let $G$ be an arbitrary influence graph. For any $t\in [k]$, and any fixed partial realisation $\psi$ of $G$ such that $\mathbb{P}[\psi_{t-1,L}=\psi]>0$, we have \begin{equation*} \sigma(R(\psi))+k\cdot \mathbb{E}_{L,\rho}\left[f({\psi}_{\rho,t,L})-f(\psi_{t-1,L})|\psi_{t-1,L}=\psi \right]\geq OPT_A(G,k). \end{equation*} \end{lemma} \begin{proof} We have \begin{align} &k\cdot \mathbb{E}_{L,\rho}\left[f({\psi}_{\rho,t,L})-f(\psi_{t-1,L})|\psi_{t-1,L}=\psi \right]\nonumber\\ =&k\cdot \sum_{i\in [n]}\mathbb{P}[\rho=i]\cdot \Delta(i|\psi)\nonumber\\ =&k\cdot \sum_{i\in [n]\setminus R(\psi)}\frac{x_i}{k}\cdot \Delta(i|\psi)\label{eq0}\\ =&\sum_{i\in [n]\setminus R(\psi)}x_i\cdot \Delta(i|\psi),\label{eq1} \end{align} where \eqref{eq0} holds since $\Delta(i|\psi)=0$ for any $i\in R(\psi)$. Let $\bm x'=(x_1',\ldots, x_n')$ be the vector such that $x_i'=1$ if $i\in R(\psi)$, and $x_i'=x_i$ otherwise. As $x_i'\geq x_i$ for any $i\in [n]$, we have \begin{equation}\label{eq2} OPT_A(G,k)\leq OPT_A(G,\bm x)\leq OPT_A(G,\bm x'). \end{equation} Let $\pi'$ be the optimal randomized adaptive policy such that $\mathbb{P}(\pi')=\bm x'$. Policy $\pi'$ selects each node in $R(\psi)$ with probability $1$; thus, we can assume that such seed nodes are selected at the beginning and that the adaptive policy starts by observing the resulting partial realisation. Furthermore, we can assume that, for any partial realisation $\psi'$, $\pi'$ does not select any node $i\in R(\psi')$, since otherwise there would be no increase of the influence spread.
Given $j\in [n]$, let $\Delta'(j)$ denote the expected increment of the influence spread when $\pi'$ selects the $j$-th seed node (in order of selection, and without counting the initial seeds of $R(\psi)$); analogously, let $\Delta'(j|i)$ denote the expected increment of the influence spread when $\pi'$ selects the $j$-th seed node, conditioned on the fact that the $j$-th seed is node $i$.\footnote{If an execution of $\pi'$ requires fewer than $j$ steps, we assume that the increase of the influence spread at step $j$ (which contributes to the expected values $\Delta'(j)$ and $\Delta'(j|i)$) is null.} We get \begin{align} &OPT_A(G,\bm x')\nonumber\\ =&\sigma(R(\psi))+\sum_j \Delta'(j)\nonumber\\ =&\sigma(R(\psi))+\sum_j \sum_{i\in [n]\setminus R(\psi)}\mathbb{P}[\text{the $j$-th seed node is $i$}]\cdot \Delta'(j|i)\nonumber\\ =&\sigma(R(\psi))+\sum_{i\in [n]\setminus R(\psi)}\sum_j \mathbb{P}[\text{the $j$-th seed node is $i$}]\cdot \Delta'(j|i)\nonumber\\ =&\sigma(R(\psi))+\sum_{i\in [n]\setminus R(\psi)}\sum_j \mathbb{P}[\text{the $j$-th seed node is $i$}]\cdot\nonumber\\ &\ \ \ \cdot \mathbb{E}_{\pi'}[\Delta(i|\psi')|\text{$i=\pi'(\psi')$ for some $\psi'\supseteq \psi$ observed at step $j$}]\nonumber\\ \leq &\sigma(R(\psi))+\sum_{i\in [n]\setminus R(\psi)}\sum_j \mathbb{P}[\text{the $j$-th seed node is $i$}]\cdot \Delta(i|\psi)\label{eq3.0}\\ =&\sigma(R(\psi))+\sum_{i\in [n]\setminus R(\psi)}\mathbb{P}[\text{$i$ is selected as seed}]\cdot \Delta(i|\psi)\nonumber\\ =& \sigma(R(\psi))+\sum_{i\in [n]\setminus R(\psi)}x_i'\cdot \Delta(i|\psi)\nonumber\\ = & \sigma(R(\psi))+\sum_{i\in [n]\setminus R(\psi)}x_i\cdot \Delta(i|\psi),\label{eq3} \end{align} where \eqref{eq3.0} holds since $\Delta(i|\psi')\leq \Delta(i|\psi)$ for any partial realisation $\psi'\supseteq \psi$ by adaptive submodularity (Claim \ref{lem0}). By putting together \eqref{eq1}, \eqref{eq2}, and \eqref{eq3}, we get \begin{align*} & \sigma(R(\psi))+k\cdot \mathbb{E}_{L,\rho}\left[f({\psi}_{\rho,t,L})-f(\psi_{t-1,L})|\psi_{t-1,L}=\psi \right]\\ =& \sigma(R(\psi))+\sum_{i\in [n]\setminus R(\psi)}x_i\cdot \Delta(i|\psi)\\ \geq & OPT_A(G,\bm x')\\ \geq & OPT_A(G,k), \end{align*} thus showing the claim. \end{proof} The following lemma is similar to Lemma 3.8 in~\cite{Chen2019} (for the proof, see the appendix). \begin{lemma}\label{lem2} When the input influence graph $G$ is an in-arborescence, we have that \begin{equation*} \sigma(R(\psi_{t-1,L}))\leq f(\psi_{t-1,L})+OPT_N(G,t-1) \end{equation*} for any $t\in [k]$ and any live-edge graph $L$. \end{lemma} Armed with the above lemmas, we can now prove Theorem~\ref{thm1}.
\begin{proof}[Proof of Theorem~\ref{thm1}] For any $t\in [k]$, we have \begin{align} &k\cdot (OPT_N(G,t)-OPT_N(G,t-1))\nonumber\\ =&k\cdot (\sigma(S_t)-\sigma(S_{t-1}))\nonumber\\ =&k\cdot (\mathbb{E}_L[f(\psi_{t,L})]-\mathbb{E}_L[f(\psi_{t-1,L})])\nonumber\\ \geq &k\cdot (\mathbb{E}_{L,\rho}[f({\psi}_{\rho,t,L})]-\mathbb{E}_L[f(\psi_{t-1,L})])\label{eqthm1.0}\\ = &k\cdot (\mathbb{E}_{L,\rho}[f({\psi}_{\rho,t,L})]-\mathbb{E}_{L,\rho}[f(\psi_{t-1,L})])\nonumber\\ =&k\cdot \mathbb{E}_{L,\rho}[f({\psi}_{\rho,t,L})-f(\psi_{t-1,L})]\nonumber\\ =& \mathbb{E}_{\psi_{t-1,L}}\left[k\cdot \mathbb{E}_{L,\rho}[f({\psi}_{\rho,t,L})-f(\psi_{t-1,L})|\psi_{t-1,L}]\right]\nonumber\\ \geq &\mathbb{E}_{\psi_{t-1,L}}[OPT_A(G,k)-\sigma(R(\psi_{t-1,L}))]\label{eqthm1}\\ \geq &\mathbb{E}_{\psi_{t-1,L}}[OPT_A(G,k)-f(\psi_{t-1,L})-OPT_N(G,t-1)]\label{eqthm3}\\ =& \mathbb{E}_{\psi_{t-1,L}}[OPT_A(G,k)]-\mathbb{E}_{\psi_{t-1,L}}[f(\psi_{t-1,L})]-\mathbb{E}_{\psi_{t-1,L}}[OPT_N(G,t-1)]\nonumber\\ = &OPT_A(G,k)-\sigma(S_{t-1})-OPT_N(G,t-1)\nonumber\\ = &OPT_A(G,k)-2\cdot OPT_N(G,t-1)\label{eqthm4}, \end{align} where \eqref{eqthm1.0} holds since $dom(\psi_{t,L})$ is the optimal set of $t$ seed nodes for the non-adaptive influence maximization problem, \eqref{eqthm1} comes from Lemma \ref{lem1}, and \eqref{eqthm3} comes from Lemma \ref{lem2}. Thus, by \eqref{eqthm4}, we get $k\cdot (OPT_N(G,t)-OPT_N(G,t-1))\geq OPT_A(G,k)-2\cdot OPT_N(G,t-1)$, which, after some manipulations, leads to the following recursive relation: \begin{equation}\label{fundeqthm} OPT_N(G,t)\geq \frac{1}{k}\cdot OPT_A(G,k)+\left(1-\frac{2}{k}\right)\cdot OPT_N(G,t-1),\quad \forall t\in [k]. \end{equation} By applying \eqref{fundeqthm} iteratively, we get \begin{equation*} OPT_N(G,k)\geq \frac{1}{k}\cdot \sum_{t=0}^{k-1}\left(1-\frac{2}{k}\right)^{t}\cdot OPT_A(G,k)=\frac{1-\left(1-2/k\right)^k}{2}\cdot OPT_A(G,k), \end{equation*} which leads to \begin{equation} \frac{OPT_A(G,k)}{OPT_N(G,k)}\leq \frac{2}{1-(1-2/k)^k}\leq \frac{2}{1-e^{-2}} = \frac{2e^2}{e^{2}-1}, \end{equation} and this shows the claim. \end{proof} \section{Adaptivity Gap for General Influence Graphs}\label{sec_gen} In this section, we exhibit upper bounds on the $k$-adaptivity gap of general graphs. In the following theorem, we first give an upper bound that is linear in the number of seed nodes (see the appendix for the proof). \begin{theorem}\label{lemk} Given an arbitrary class of influence graphs $\mathcal{G}$ and $k\geq 2$, we get $AG(\mathcal{G},k)\leq k$. \end{theorem} In the next theorem, we give an upper bound on the adaptivity gap that is sublinear in the number of nodes of the considered graph. \begin{theorem}\label{thm2} If $\mathcal{G}$ is the class of influence graphs having at most $n$ nodes, we get $AG(\mathcal{G})\leq \lceil n^{1/3}\rceil$. \end{theorem} Let $G=(V,E,(p_{uv})_{(u,v)\in E})$ be the input influence graph. To show Theorem \ref{thm2}, we recall the preliminary notation considered for the proof of Theorem \ref{thm1}, and we give a further preliminary lemma (see the appendix for the proof of the lemma). \begin{lemma}\label{lemthm2} Given a set $U\subseteq V$ of cardinality $h\geq k$, we have $$\sigma(U)\leq \frac{h}{k}\cdot OPT_N(G,k).$$ \end{lemma} We use Theorem~\ref{lemk} and Lemma~\ref{lemthm2} to show Theorem~\ref{thm2}. \begin{proof}[Proof of Theorem~\ref{thm2}] We assume w.l.o.g. that $k>\lceil n^{1/3}\rceil $ and that $OPT_N(G,k)<(\lceil n^{1/3}\rceil)^2$.
Indeed, if $k\leq \lceil n^{1/3}\rceil $, by Theorem \ref{lemk} the claim holds, and if $OPT_N(G,k)\geq (\lceil n^{1/3}\rceil)^2$, then $\frac{OPT_A(G,k)}{OPT_N(G,k)}\leq \frac{|V|}{OPT_N(G,k)}\leq \frac{n}{(\lceil n^{1/3}\rceil)^2}\leq \lceil n^{1/3}\rceil $, and the claim holds as well. For any $t\in [k]$, we have \begin{align} &k\cdot (OPT_N(G,t)-OPT_N(G,t-1))\nonumber\\ \geq &k\cdot (\mathbb{E}_{L,\rho}[f({\psi}_{\rho,t,L})]-\mathbb{E}_{L,\rho}[f(\psi_{t-1,L})])\nonumber\\ =& \mathbb{E}_{\psi_{t-1,L}}\left[k\cdot \mathbb{E}_{L,\rho}[f({\psi}_{\rho,t,L})-f(\psi_{t-1,L})|\psi_{t-1,L}]\right]\nonumber\\ \geq &\mathbb{E}_{\psi_{t-1,L}}[OPT_A(G,k)-\sigma(R(\psi_{t-1,L}))]\label{eqthm12}\\ =&\mathbb{E}_{\psi_{t-1,L}}[OPT_A(G,k)]-\mathbb{E}_{\psi_{t-1,L}}[\sigma(R(\psi_{t-1,L}))]\nonumber\\ \geq &\mathbb{E}_{\psi_{t-1,L}}[OPT_A(G,k)]-\mathbb{E}_{\psi_{k,L}}[\sigma(R(\psi_{k,L}))]\nonumber\\ \geq &\mathbb{E}_{\psi_{t-1,L}}[OPT_A(G,k)]-\mathbb{E}_{\psi_{k,L}}\left[\frac{|R(\psi_{k,L})|}{k}\cdot OPT_N(G,k)\right]\label{eqthm32}\\ = &OPT_A(G,k)-\frac{\mathbb{E}_{\psi_{k,L}}[|R(\psi_{k,L})|]}{k}\cdot OPT_N(G,k)\nonumber\\ \geq &OPT_A(G,k)-\frac{\mathbb{E}_{\psi_{k,L}}[|R(\psi_{k,L})|]}{\lceil n^{1/3}\rceil+1}\cdot ((\lceil n^{1/3}\rceil)^2-1)\label{eqthm42}\\ =&OPT_A(G,k)-(\lceil n^{1/3}\rceil-1)\cdot \mathbb{E}_{\psi_{k,L}}[|R(\psi_{k,L})|]\nonumber\\ =&OPT_A(G,k)-(\lceil n^{1/3}\rceil-1)\cdot OPT_N(G,k)\label{eqthm52}, \end{align} where \eqref{eqthm12} comes from Lemma \ref{lem1}, \eqref{eqthm32} comes from Lemma \ref{lemthm2}, and \eqref{eqthm42} comes from the hypotheses $k>\lceil n^{1/3}\rceil $ and $OPT_N(G,k)<(\lceil n^{1/3}\rceil)^2$. By \eqref{eqthm52}, we get $OPT_N(G,t)-OPT_N(G,t-1) \geq (OPT_A(G,k)-(\lceil n^{1/3}\rceil-1)\cdot OPT_N(G,k))/k$ for any $t\in [k]$, and by summing this inequality over all $t\in [k]$, we get \begin{align} & OPT_N(G,k)\nonumber\\ =&\sum_{t=1}^k (OPT_N(G,t)-OPT_N(G,t-1))\nonumber\\ \geq &\sum_{t=1}^k\frac{OPT_A(G,k)-(\lceil n^{1/3}\rceil-1)\cdot OPT_N(G,k)}{k}\nonumber\\ =&OPT_A(G,k)-(\lceil n^{1/3}\rceil-1)\cdot OPT_N(G,k).\label{eqthm62} \end{align} Finally, \eqref{eqthm62} implies that $OPT_A(G,k)\leq \lceil n^{1/3}\rceil \cdot OPT_N(G,k)$, and this shows the claim. \end{proof} \section{Adaptivity Gap for Other Influence Graphs}\label{sec_other} In this section, we extend the results obtained in Theorem \ref{thm1}, and we obtain upper bounds on the adaptivity gap of other classes of influence graphs. In particular, we consider the class of {\em $\alpha$-bounded graphs}: a class of undirected graphs parametrized by an integer $\alpha\geq 0$ that includes several known graph topologies. In the following, when we refer to undirected influence graphs, we implicitly assume that, for any undirected edge $\{u,v\}$, there are two directed edges $(u,v)$ and $(v,u)$ with respective (possibly distinct) probabilities $p_{uv}$ and $p_{vu}$. \subparagraph*{$\alpha$-bounded graphs.} Given an undirected graph $G=(V,E)$ and a node $v\in V$, let $deg_v(G)$ be the degree of node $v$ in graph $G$. Given an integer $\alpha\geq 0$, graph $G$ is an {\em $\alpha$-bounded graph} if $\sum_{v\in V:deg_v(G)>2}deg_v(G)\leq \alpha$, i.e., the sum of all the node degrees higher than $2$ is at most $\alpha$.
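As a quick illustration of the definition, the smallest $\alpha$ for which a given undirected graph is $\alpha$-bounded can be computed in one pass over the nodes. The following Python sketch (with a hypothetical adjacency-map representation chosen for this illustration only) does so and checks two of the examples listed below.
\begin{verbatim}
def alpha_of(adj):
    """Smallest alpha for which the undirected graph is alpha-bounded,
    i.e. the sum of all node degrees larger than 2.

    `adj` maps each node to the set of its neighbours.
    """
    return sum(len(nbrs) for nbrs in adj.values() if len(nbrs) > 2)

# A path on 4 nodes: every degree is at most 2, so it is 0-bounded.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
assert alpha_of(path) == 0

# A star with 3 edges: the centre has degree 3, so it is 3-bounded.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
assert alpha_of(star) == 3
\end{verbatim}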
In the following, we exhibit some interesting classes of $\alpha$-bounded graphs: \begin{itemize} \item the set of $0$-bounded graphs consists of all the graphs $G$ such that each connected component of $G$ is either an undirected path or an undirected cycle; \item if graph $G$ is homeomorphic to a star with $h$ edges, then $G$ is an $h$-bounded graph; \item if graph $G$ is homeomorphic to a parallel-link graph with $h$ edges, then $G$ is a $2h$-bounded graph; \item if graph $G$ is homeomorphic to a cycle with $h$ chords, then $G$ is a $6h$-bounded graph; \item if graph $G$ is homeomorphic to a clique with $h$ nodes, then $G$ is an $h(h-1)$-bounded graph. \end{itemize} In the following, we provide an upper bound on the adaptivity gap of $\alpha$-bounded influence graphs for any $\alpha\geq 0$. \begin{theorem}\label{thmlast} Given $\alpha\geq 0$, let $\mathcal{G}$ be the class of $\alpha$-bounded influence graphs. Then $$AG(\mathcal{G},k)\leq \min\left\{k,\frac{\alpha}{k}+2+ \frac{1}{1-(1-1/k)^k}\right\}\leq \frac{\sqrt{4(e-1)^2\alpha+(3e-2)^2}+3e-2}{2(e-1)}$$ for any $k\geq 2$, i.e., $AG(\mathcal{G})\leq \sqrt{\alpha}+O(1)$. \end{theorem} Let $G=(V=[n],E,(p_{uv})_{(u,v)\in E})$ be an $\alpha$-bounded influence graph; we again use the preliminary notation introduced for Theorem \ref{thm1}. The proof of Theorem \ref{thmlast} is a non-trivial generalization of that of Theorem \ref{thm1}. In particular, the proof resorts to Theorem \ref{lemk} to get the upper bound of $k$, and, by following the approach of Theorem \ref{thm1}, the following technical lemma is used in place of Lemma \ref{lem2} to get the final upper bound. \begin{lemma}\label{lemlast} When the input influence graph $G$ is an $\alpha$-bounded graph with $\alpha\geq 0$, we have that \begin{equation*} \sigma(R(\psi_{t-1,L}))\leq f(\psi_{t-1,L})+\left(\frac{\alpha}{k}+2\right)\cdot OPT_N(G,k), \end{equation*} for any $t\in [k]$ and any live-edge graph $L$. \end{lemma} \begin{proof} Given a subset $U\subseteq V$, let $\partial U:=\{u\in U: \exists (u,v)\in E,v\notin U\}$. We have that $\sigma(R(\psi))\leq |R(\psi)|+\sigma(\partial R(\psi))=f(\psi)+\sigma(\partial R(\psi))$ for any partial realisation $\psi$. Thus, to show the claim, it suffices to show that \begin{equation*} \sigma(\partial R(\psi_{t-1,L}))\leq \left(\frac{\alpha}{k}+2\right)\cdot OPT_N(G,k). \end{equation*} Let $U\subseteq V$ be such that $U$ has at most $k$ connected components. Let $A$ be the union of the connected components of $U$ containing at least one node of degree higher than $2$, and let $B$ be the union of the remaining components, i.e., of those containing nodes with degree in $[2]_0$ only. By definition of $A$ and $B$, we necessarily have that $|\partial A|\leq \sum_{v\in V:deg_v(G)>2}deg_v(G)\leq \alpha$ and $|\partial B|\leq 2k$. Thus $|\partial U|\leq |\partial A|+|\partial B|\leq \alpha+2k$, and the next claim follows. \begin{claim}\label{lastclaim} Given a subset $U\subseteq V$ made of at most $k$ connected components, then $|\partial U|\leq \alpha+2k$. \end{claim} Now, we have that \begin{align} &\sigma(\partial R(\psi_{t-1,L}))\nonumber\\ \leq & \sigma(\partial R(\psi_{k,L}))\nonumber\\ \leq & \frac{|\partial R(\psi_{k,L})|}{k}\cdot OPT_N(G,k)\label{lastlem_eq2}\\ \leq & \frac{\alpha+2k}{k}\cdot OPT_N(G,k),\label{lastlem_eq3} \end{align} where \eqref{lastlem_eq2} comes from Lemma \ref{lemthm2}, and \eqref{lastlem_eq3} holds since $R(\psi_{k,L})$ has at most $k$ connected components and because of Claim \ref{lastclaim}. Thus, by \eqref{lastlem_eq3}, the claim of the lemma follows.
\end{proof} We can now prove Theorem~\ref{thmlast}. \begin{proof}[Proof of Theorem~\ref{thmlast}] For any $t\in [k]$, we have \begin{align} &k\cdot (OPT_N(G,t)-OPT_N(G,t-1))\nonumber\\ \geq &k\cdot (\mathbb{E}_{L,\rho}[f({\psi}_{\rho,t,L})]-\mathbb{E}_{L,\rho}[f(\psi_{t-1,L})])\nonumber\\ =& \mathbb{E}_{\psi_{t-1,L}}\left[k\cdot \mathbb{E}_{L,\rho}[f({\psi}_{\rho,t,L})-f(\psi_{t-1,L})|\psi_{t-1,L}]\right]\nonumber\\ \geq &\mathbb{E}_{\psi_{t-1,L}}[OPT_A(G,k)-\sigma(R(\psi_{t-1,L}))]\label{eqthm1_last}\\ \geq &\mathbb{E}_{\psi_{t-1,L}}\left[OPT_A(G,k)-f(\psi_{t-1,L})-\left(\frac{\alpha}{k}+2\right)\cdot OPT_N(G,k)\right]\label{eqthm3_last}\\ =& \mathbb{E}_{\psi_{t-1,L}}[OPT_A(G,k)]-\mathbb{E}_{\psi_{t-1,L}}[f(\psi_{t-1,L})]-\left(\frac{\alpha}{k}+2\right)\cdot \mathbb{E}_{\psi_{t-1,L}}\left[OPT_N(G,k)\right]\nonumber\\ = &OPT_A(G,k)-\sigma(S_{t-1})-\left(\frac{\alpha}{k}+2\right)\cdot OPT_N(G,k)\nonumber\\ = &OPT_A(G,k)-\left(\frac{\alpha}{k}+2\right)\cdot OPT_N(G,k)-OPT_N(G,t-1)\label{eqthm4_last}, \end{align} where \eqref{eqthm1_last} comes from Lemma \ref{lem1} and \eqref{eqthm3_last} comes from Lemma \ref{lemlast}. Thus, by \eqref{eqthm4_last}, we get the following recursive relation: \begin{equation}\label{fundeqthm_last} OPT_N(G,t)\geq \frac{1}{k}\cdot \left(OPT_A(G,k)-\left(\frac{\alpha}{k}+2\right)\cdot OPT_N(G,k)\right)+\left(1-\frac{1}{k}\right)\cdot OPT_N(G,t-1), \end{equation} for any $t\in [k]$. By applying \eqref{fundeqthm_last} iteratively, we get \begin{align*} &OPT_N(G,k)\\ \geq &\frac{1}{k}\cdot \left(OPT_A(G,k)-\left(\frac{\alpha}{k}+2\right)\cdot OPT_N(G,k)\right)\cdot \sum_{t=0}^{k-1} \left(1-\frac{1}{k}\right)^t\\ = &\left(OPT_A(G,k)-\left(\frac{\alpha}{k}+2\right)\cdot OPT_N(G,k)\right)\cdot \left(1-\left(1-\frac{1}{k}\right)^k\right), \end{align*} which, after some manipulations, leads to \begin{equation}\label{semifinal_bound} \frac{OPT_A(G,k)}{OPT_N(G,k)}\leq \frac{\alpha}{k}+2+ \frac{1}{1-(1-1/k)^k} \leq \frac{\alpha}{k}+2+ \frac{1}{1-e^{-1}}~. \end{equation} By Theorem \ref{lemk}, we have that $\frac{OPT_A(G,k)}{OPT_N(G,k)}\leq k$; thus, by \eqref{semifinal_bound}, we get \begin{align} &\frac{OPT_A(G,k)}{OPT_N(G,k)}\nonumber\\ \leq & \min\left\{k, \frac{\alpha}{k}+2+ \frac{1}{1-(1-1/k)^k} \ \right\}\nonumber\\ \leq & \min\left\{k, \frac{\alpha}{k}+2+ \frac{1}{1-e^{-1}} \ \right\}\nonumber\\ \leq & \frac{\sqrt{4(e-1)^2\alpha+(3e-2)^2}+3e-2}{2(e-1)}\label{last_eqq}, \end{align} where the right-hand side of \eqref{last_eqq} equals the real value of $k\geq 0$ such that $k=\frac{\alpha}{k}+2+ \frac{1}{1-e^{-1}}$. By \eqref{last_eqq}, the claim follows. \end{proof} For the particular case of $0$-bounded influence graphs, the following theorem provides a better upper bound on the adaptivity gap (the proof is analogous to that of Theorem \ref{thm1}, and is deferred to the appendix). \begin{theorem}\label{thm_0bou} Let $\mathcal{G}$ be the class of $0$-bounded influence graphs. Then $$AG(\mathcal{G},k)\leq\min\left\{k,\frac{3}{1-(\max\{0,1-3/k\})^k}\right\}\leq \frac{3e^3}{e^{3}-1}\approx 3.16,\ \forall k\geq 2.$$ \end{theorem} \section{Future Works}\label{sec_future} The first problem that is left open by our results is the gap between the constant lower bound provided by Chen and Peng~\cite{Chen2019} and our upper bound on the adaptivity gap for general graphs. Besides trying to lower the upper bound, a possible direction could be that of increasing the lower bound by finding instances with a non-constant adaptivity gap.
Since the lower bound given in~\cite{Chen2019} holds even when the graph is a directed path, one direction could be to exploit different graph topologies. Although in this work we have improved the upper bound on the adaptivity gap of in-arborescences, there is still a gap between the upper and the lower bound; closing it is another open problem. It would also be interesting to find better bounds on the adaptivity gap of other graph classes, such as out-arborescences. A further interesting research direction is to study the adaptivity gap of some graph classes modelling real-world networks, both theoretically and experimentally. The study of the adaptive IM problem in the Linear Threshold model is still open, in terms of both approximation ratio and adaptivity gap. We observe that, in this model, the objective function is not adaptive submodular under either the myopic or the full-adoption feedback, and therefore the greedy approach by Golovin and Krause~\cite{Golovin2011a} cannot be applied. The techniques introduced in this paper to relate adaptive policies with non-adaptive ones might be useful to find better upper bounds on the adaptivity gaps in different feedback models, such as the myopic one, or in different graph classes.
\section{Introduction} \setcounter{equation}{0} Define the Fourier transform ${\mathcal F}\! f$ (or $\hat f$ for brevity) of an integrable function $f$ on the $d$-dimensional Euclidean space ${\mathbb{R}}^d$ by \begin{equation}\label{fouriertransform.def} {\mathcal F}\! f(\xi):=\int_{{\mathbb{R}}^d} e^{-i \langle {\bf x}, \xi\rangle} f({\bf x}) d{\bf x},\end{equation} and extend the above definition to all tempered distributions as usual. Here we denote by $\langle \cdot, \cdot\rangle$ and $\|\cdot\|$ the standard inner product and norm on ${\mathbb{R}}^d$ respectively. Let ${\mathcal S}:={\mathcal S}({\mathbb{R}}^d)$ be the space of all Schwartz functions on ${\mathbb{R}}^d$ and ${\mathcal S}':={\mathcal S}'({\mathbb{R}}^d)$ the space of all tempered distributions on ${\mathbb{R}}^d$. For $\gamma>0$, define the {\em fractional Laplacian} $(-\triangle)^{\gamma/2}$ by \begin{equation}\label{fractionallaplacian.def} {\mathcal F}((-\triangle)^{\gamma/2}f)(\xi):=\|\xi\|^\gamma \ {\mathcal F} f(\xi),\quad f\in {\mathcal S}. \end{equation} The fractional Laplacian has the remarkable property of being dilation-invariant. It plays a crucial role in the definition of thin plate splines \cite{duchon1977}, is intimately tied to fractal stochastic processes (e.g., fractional Brownian fields) \cite{mandelbrot1968,tafti2009} and stable L\'evy processes \cite{chen10}, and has been used in the study of singular obstacle problems \cite{caffarelli08, silvestre06}. In this paper, we present a detailed mathematical investigation of the functional properties of dilation-invariant differential operators together with a characterization of their inverses. Our primary motivation is to provide a rigorous operator framework for solving the stochastic partial differential equation \begin{equation}\label{randompde.def} (-\triangle)^{\gamma/2} \Phi=w\end{equation} with white noise $w$ as its driving term. We will show that this is feasible via the specification of a novel family of dilation-invariant left-inverses of the fractional Laplacian $(-\triangle)^{\gamma/2}$ which have appropriate $L^p$-boundedness properties. We say that a continuous linear operator $I$ from ${\mathcal S}$ to ${\mathcal S}'$ is {\em dilation-invariant} if there exists a real number $\gamma$ such that \begin{equation} I (\delta_t f)= t^\gamma \delta_t (If)\quad{\rm for\ all} \ f\in {\mathcal S}\ {\rm and} \ t>0, \end{equation} and {\em translation-invariant} if \begin{equation}I (\tau_{{\bf x}_0} f)= \tau_{{\bf x}_0} (If)\quad {\rm for\ all} \ f\in {\mathcal S}\ {\rm and} \ {\bf x}_0\in {\mathbb{R}}^d, \end{equation} where the {\em dilation operator} $\delta_t, t>0$ and the {\em translation operator} $\tau_{{\bf x}_0}, {\bf x}_0\in {\mathbb R}^d$ are defined by $(\delta_t f) ({\bf x})= f(t {\bf x})$ and $\tau_{{\bf x}_0} f({\bf x})= f({\bf x}-{\bf x}_0), f\in {\mathcal S}$, respectively. One may verify that the fractional Laplacian $(-\triangle)^{\gamma/2}, \gamma>0$, is dilation-invariant and translation-invariant, central properties used in the definition of thin plate splines \cite{duchon1977}. Next, we define the {\em Riesz potential} $I_\gamma$ (\cite{riesz49}) by \begin{equation} \label{rieszpotential.eq1} I_\gamma f({\bf x})=\pi^{-d/2} 2^{-\gamma} \frac{\Gamma((d-\gamma)/2)}{\Gamma(\gamma/2)}\int_{{\mathbb R}^d} \|{\bf x}-{\bf y}\|^{\gamma-d} f({\bf y}) d{\bf y}, \quad f\in {\mathcal S}, \end{equation} where $0<\gamma<d$.
Here the Gamma function $\Gamma$ is given by $\Gamma(z)=\int_0^\infty t^{z-1} e^{-t} dt$ when the real part ${\rm Re}\ z$ is positive, and is extended analytically to a meromorphic function on the complex plane. For any Schwartz function $f$, $I_\gamma f$ is continuous and satisfies \begin{equation} \label{rieszpotential.eq2} |I_\gamma f({\bf x})|\le C_\epsilon \Big(\sup_{{\bf z}\in {\mathbb R}^d} |f({\bf z})| (1+\|{\bf z}\|)^{d+\epsilon}\Big) (1+\|{\bf x}\|)^{\gamma-d}\quad {\rm for \ all}\ {\bf x}\in {\mathbb R}^d, \end{equation} where $\epsilon>0$ and $C_\epsilon$ is a positive constant; see also Theorem \ref{generalizedrieszomega1.tm}. Then the Riesz potential $I_\gamma$ is a continuous linear operator from ${\mathcal S}$ to ${\mathcal S}'$. Moreover, one may verify that $I_\gamma$ is dilation-invariant and translation-invariant, and also that $I_\gamma, 0<\gamma<d$, is the inverse of the fractional Laplacian $(-\triangle)^{\gamma/2}$; i.e., \begin{equation}\label{rieszpotential.eq3} I_\gamma (-\triangle)^{\gamma/2} f=(-\triangle)^{\gamma/2} I_\gamma f=f\quad {\rm for\ all} \ f\in {\mathcal S} \end{equation} because \begin{equation}\label{rieszpotential.eq4} {\mathcal F}(I_\gamma f)(\xi)= \|\xi\|^{-\gamma} {\mathcal F}f (\xi), \ f\in {\mathcal S}. \end{equation} A natural question then is as follows: {\bf Question 1}: {\em For any $\gamma>0$, is there a continuous linear operator $I$ from ${\mathcal S}$ to ${\mathcal S}'$ that is translation-invariant and dilation-invariant, and that is an inverse of the fractional Laplacian $(-\triangle)^{\gamma/2}$?} In the first result of this paper (Theorem \ref{generalizedriesz.tm}), we give an affirmative answer to the above existence question for all positive non-integer numbers $\gamma$ with the invertibility replaced by the left-invertibility, and further prove the uniqueness of such a continuous linear operator. To state that result, we recall some notation and definitions. Denote the dual pair between a Schwartz function and a tempered distribution using angle bracket $\langle \cdot, \cdot\rangle$, which is given by $\langle f, g\rangle=\int_{{\mathbb R}^d} f({\bf x}) {g({\bf x})} d{\bf x}$ when $f, g\in {\mathcal S}$ (we remark that the dual pair between two complex-valued square-integrable functions is different from their standard inner product). A tempered distribution $f$ is said to be {\em homogeneous of degree $\gamma$} if $\langle f, \delta_t g\rangle= t^{-\gamma-d} \langle f, g\rangle$ for all Schwartz functions $g$ and all positive numbers $t$. We notice that the multiplier $\|\xi\|^{-\gamma}$ in the Riesz potential $I_\gamma$, see \eqref{rieszpotential.eq4}, is a homogeneous function of degree $-\gamma\in (-d, 0)$. This observation inspires us to follow the definition of homogeneous tempered distribution in \cite{hormanderbook} and then to extend the definition of the Riesz potential $I_\gamma$ to any non-integer number $\gamma>d$ as follows: \begin{eqnarray}\label{generalizedriesz.def} I_\gamma f({\bf x})& := & \frac{(2\pi)^{-d}\Gamma(d-\gamma)}{ \Gamma(d+k_0-\gamma)} \int_{S^{d-1}} \int_0^\infty r^{k_0-\gamma+d-1}\nonumber \\ & & \times \Big(-\frac{d}{dr}\Big)^{k_0} \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi')\Big) dr d\sigma(\xi'), \quad f\in {\mathcal S}, \end{eqnarray} where $S^{d-1}=\{\xi'\in {\mathbb{R}}^d: \ \|\xi'\|=1\}$ is the unit sphere in ${\mathbb{R}}^d$, $d\sigma$ is the area element on $S^{d-1}$, and $k_0$ is a nonnegative integer larger than $\gamma-d$.
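As an aside, the multiplier relations \eqref{fractionallaplacian.def} and \eqref{rieszpotential.eq4}, and hence the inversion property \eqref{rieszpotential.eq3}, are easy to probe numerically. The following Python sketch (a periodized FFT approximation on a finite grid, for illustration only; all names are our own) applies $(-\triangle)^{\gamma/2}$ to a Gaussian and then undoes it with the discrete analogue of $I_\gamma$. We stress that this is only a sanity check on a finite grid, not a substitute for the distributional analysis below.
\begin{verbatim}
import numpy as np

n, h, gamma = 256, 0.1, 1.0          # grid size, spacing, 0 < gamma < d (d = 2)
x = (np.arange(n) - n // 2) * h
X, Y = np.meshgrid(x, x, indexing="ij")
f = np.exp(-(X**2 + Y**2))           # a Schwartz function (a Gaussian)

k = 2 * np.pi * np.fft.fftfreq(n, d=h)   # discrete frequency grid xi
KX, KY = np.meshgrid(k, k, indexing="ij")
xi_norm = np.sqrt(KX**2 + KY**2)

F = np.fft.fft2(f)
lap_f = np.fft.ifft2(xi_norm**gamma * F).real   # fractional Laplacian of f

safe = np.where(xi_norm > 0, xi_norm, 1.0)      # avoid dividing by ||xi|| = 0
G = np.fft.fft2(lap_f) / safe**gamma            # multiply by ||xi||^(-gamma)
G[0, 0] = F[0, 0]    # the zero frequency is annihilated by the forward step;
                     # restoring it mimics the continuum inversion
g = np.fft.ifft2(G).real

print(np.max(np.abs(g - f)))         # tiny: the Riesz potential inverts
                                     # the fractional Laplacian on the grid
\end{verbatim}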
Integration by parts shows that the above definition \eqref{generalizedriesz.def} of $I_\gamma f$ is independent of the nonnegative integer $k_0$ as long as it is larger than $\gamma-d$, and also that it coincides with the classical Riesz potential when $0<\gamma<d$ by letting $k_0=0$ and recalling that the inverse Fourier transform ${\mathcal F}^{-1} f$ of an integrable function $f$ is given by \begin{equation}\label{inversefouriertransform.def} {\mathcal F}^{-1} f({\bf x}):=(2\pi)^{-d} \int_{{\mathbb{R}}^d} e^{i \langle {\bf x}, \xi\rangle} f(\xi) d\xi.\end{equation} Because of this consistency of definitions, we call the continuous linear operator $I_\gamma, \gamma\in (0, \infty)\backslash ({\mathbb{Z}}_++d)$ in \eqref{generalizedriesz.def} the {\em generalized Riesz potential}, where ${\mathbb{Z}}_+$ is the set of all nonnegative integers. \begin{Tm} \label{generalizedriesz.tm} Let $\gamma$ be a positive number with $\gamma-d\not\in {\mathbb{Z}}_+$, and let $I_\gamma$ be the linear operator defined by \eqref{generalizedriesz.def}. Then $I_\gamma$ is the {\bf unique} continuous linear operator from ${\mathcal S}$ to ${\mathcal S}'$ that is dilation-invariant and translation-invariant, and that is a left-inverse of the fractional Laplacian $(-\triangle)^{\gamma/2}$. \end{Tm} Let $L^p:=L^p({\mathbb R}^d), 1\le p\le \infty$, be the space of all $p$-integrable functions on ${\mathbb R}^d$ with the standard norm $\|\cdot\|_p$. The Hardy-Littlewood-Sobolev fractional integration theorem (\cite{steinbook}) says that the Riesz potential $I_\gamma$ is a bounded linear operator from $L^q$ to $L^p$ when $1<p\le \infty, 0<\gamma<d(1-1/p)$ and $q=pd/(d+\gamma p)$. Hence $I_\gamma f\in L^p$ for any Schwartz function $f$ when $0<\gamma<d(1-1/p)$. We observe that for any non-integer number $\gamma$ larger than or equal to $d(1-1/p)$, there exists a Schwartz function $f$ such that $I_\gamma f\not\in L^p$, see Corollary \ref{nonintegrable.cr}. An implication of this negative result, which will become clearer in the sequel (cf. Section 4), is that we cannot generally use the translation-invariant inverse $I_\gamma$ to solve the stochastic partial differential equation \eqref{randompde.def}. What is required instead is a special left-inverse of the fractional Laplacian that is dilation-invariant and $p$-integrable. Square-integrability in particular ($p=2$) is a strict requirement when the driving noise is Gaussian and has been considered in prior work \cite{tafti2009}; it leads to a fractional Brownian field solution, which is the multi-dimensional extension of Mandelbrot's celebrated fractional Brownian motion \cite{blu, mandelbrot1968}. Our desire to extend this method of solution to non-Gaussian types of noise leads to the second question. {\bf Question 2}: {\em Let $1\le p\le \infty$ and $\gamma>0$. Is there a continuous linear operator $I$ from ${\mathcal S}$ to ${\mathcal S}'$ that is dilation-invariant and a left-inverse of the fractional Laplacian $(-\triangle)^{\gamma/2}$ such that $If\in L^p$ for all Schwartz functions $f$?} In the second result of this paper (Theorem \ref{integrablefractionalderivative.tm}), we give an affirmative answer to the above question when both $\gamma$ and $\gamma-d(1-1/p)$ are not integers, and show the uniqueness of such a continuous linear operator. To state that result, we introduce some additional multi-integer notation.
For ${\bf x}=(x_1, \ldots, x_d)\in {\mathbb R}^d$ and ${\bf j}=(j_1, \ldots, j_d)\in {\mathbb Z}_+^d$ (the $d$-copies of the set ${\mathbb Z}_+$), we set $|{\bf j}|:=|j_1|+\cdots+|j_d|$, ${\bf j}!:=j_1!\cdots j_d!$ with $0!:=1$, ${\bf x}^{\bf j}:=x_1^{j_1}\cdots x_d^{j_d}$ and $\partial^{\bf j} f({\bf x}):=\partial^{j_1}_{x_1}\cdots\partial^{j_d}_{x_d} f({\bf x})$. For $1\le p\le \infty$ and $\gamma>0$, we define the linear operator $I_{\gamma, p}$ from ${\mathcal S}$ to ${\mathcal S}'$ with the help of the Fourier transform: \begin{equation}\label{fractionalderivative.veryolddef} {\mathcal F}(I_{\gamma, p} f)(\xi)=\Big({\mathcal F} f(\xi)-\sum_{|{\bf j}|\le \gamma-d(1-1/p)} \frac{\partial^{\bf j} ({\mathcal F} f)({\bf 0})}{{\bf j}!} \xi^{\bf j}\Big) \|\xi\|^{-\gamma}, \quad f\in {\mathcal S}, \end{equation} which is the natural $L^p$ extension of the fractional integral operator that was introduced in \cite{blu, tafti2009,taftu2010} for $p=2$ and $\gamma\not\in {\mathbb{Z}}/2$. We call $I_{\gamma, p}$ the {\em $p$-integrable Riesz potential of degree $\gamma$}, or the {\em integrable Riesz potential} for brevity. Indeed, when both $\gamma$ and $\gamma-d(1-1/p)$ are non-integers, the linear operator $I_{\gamma, p}$ is the unique left-inverse of the fractional Laplacian $(-\triangle)^{\gamma/2}$ that enjoys the following dilation-invariance and stability properties. \begin{Tm}\label{integrablefractionalderivative.tm} Let $1\le p\le \infty$, and $\gamma$ is a positive number such that both $\gamma$ and $\gamma-d+d/p$ are not nonnegative integers. Then $I_{\gamma, p}$ in \eqref{fractionalderivative.veryolddef} is the {\bf unique} dilation-invariant left-inverse of the fractional Laplacian $(-\triangle)^{\gamma/2}$ such that its image of the Schwartz space ${\mathcal S}$ is contained in $L^p$. \end{Tm} One of the primary application of the $p$-integrable Riesz potentials is the construction of generalized random processes by suitable functional integration of white noise \cite{tafti2009, taftu2010, Unser2009}. These processes are defined by the stochastic partial differential equation \eqref{randompde.def}, the motivation being that the solution should essentially display the same invariance properties as the defining operator (fractional Laplacian). In particular, these processes will exhibit some level of self-similarity (fractality) because $I_{\gamma, p}$ is dilation-invariant. However, they will in general not be stationary because the requirement for a stable inverse excludes translation invariance. It is this last aspect that deviates from the classical theory of stochastic processes and requires the type of mathematical safeguards that are provided in this paper. While the case of a white Gaussian noise excitation is fairly well understood \cite{tafti2009}, it is not yet so when the driving term is impulse Poisson noise which leads to the specification of sparse stochastic processes with a finite rate of innovation. The current status has been to use the operator $I_{\gamma,2}$ to specify sparse processes with the restriction that the impulse amplitude distribution must be symmetric \cite[Theorem 2]{Unser2009}. Our present contribution is to show that one can lift this restriction by considering the operator $I_{\gamma,1}$, which is the proper inverse to handle general impulsive Poisson noise. To state our third result, we recall some concepts about generalized random processes and Poisson noises. 
Let ${\mathcal D}$ be the space of all compactly supported $C^\infty$ functions with standard topology. A {\em generalized random process} is a random functional $\Phi$ defined on ${\mathcal D}$ (i.e., a random variable $\Phi(f)$ associated with every $f\in {\mathcal D}$) which is linear, continuous and compatible \cite{gelfandbook}. The white Poisson noise \begin{equation}\label{whitepoisson.def} w({\bf x}):=\sum_{k\in {\mathbb Z}} a_k \delta({\bf x}-{\bf x}_k)\end{equation} is a generalized random process such that the random variable associated with a function $f\in {\mathcal D}$ is given by \begin{equation}w(f):=\sum_{k\in {\mathbb{Z}}} a_k f({\bf x}_{k}),\end{equation} where the $a_k$'s are i.i.d. random variables with probability distribution $P(a)$, and where the ${\bf x}_k$'s are random point locations in ${\mathbb R}^n$ which are mutually independent and follow a spatial Poisson distribution with Poisson parameter $\lambda>0$. The random point locations ${\bf x}_k$ in ${\mathbb R}^n$ follow a {\em spatial Poisson distribution} with Poisson parameter $\lambda>0$ meaning that for any measurable set $E$ with finite Lebesgue measure $|E|$, the probability of observing $n$ events in $E$ (i.e., the cardinality of the set $\{k| \ {\bf x}_k\in E\}$ is equal to $n$) is $\exp(-\lambda |E|) (\lambda |E|)^n/ n!$. Thus, the Poisson parameter $\lambda$ represents the average number of random impulses per unit. As the white Poisson noise $w$ is a generalized random process, the stochastic partial differential equation \eqref{randompde.def} can be interpreted as the following: \begin{equation}\label{randompde.def2} \langle \Phi, (-\triangle)^{\gamma/2} f\rangle=\langle w, f\rangle\quad {\rm for\ all} \ f\in {\mathcal D}. \end{equation} So if $I$ is a left-inverse of the fractional Laplacian operator $(-\triangle)^{\gamma/2}$, then \begin{equation}\Phi=I^* w\end{equation} is {\em literally} the solution of the stochastic partial differential equation \eqref{randompde.def} as \begin{equation} \langle I^*w, (-\triangle)^{\gamma/2} f\rangle=\langle w, I(-\triangle)^{\gamma/2} f\rangle=\langle w, f\rangle \quad {\rm for \ all} \ f\in {\mathcal D}, \end{equation} where $I^*$ is the conjugate operator of the continuous linear operator $I$ from ${\mathcal S}$ to ${\mathcal S}'$ defined by $$\langle I^*f, g\rangle:=\langle f, Ig\rangle\quad {\rm for\ all} \ f, g\in {\mathcal S}.$$ The above observation is usable only if we can specify a left-inverse (or equivalently we can impose appropriate boundary condition) so that $I^*w$ defines a bona fide generalized random process in the sense of Gelfand and Vilenkin; mathematically, the latter is equivalent to providing its characteristic functional by the Minlos-Bochner Theorem (cf. Section 4). The following result establishes that $P_\gamma w:=I_{\gamma, 1}^*w$ is a proper solution of the stochastic partial differential equation \eqref{randompde.def}, where $w$ is the Poisson noise defined by (\ref{whitepoisson.def}). \begin{Tm}\label{generalizedpoisson.tm} Let $\gamma$ be a positive non-integer number, $\lambda$ be a positive number, $P(a)$ be a probability distribution with $\int_{{\mathbb{R}}} |a| dP(a)<\infty$, and $I_{\gamma, 1}$ be defined as in \eqref{fractionalderivative.veryolddef}. For any $f\in {\mathcal D}$, define the random variable $P_\gamma w$ associated with $f$ by \begin{equation}\label{generalizedpoisson.tm.eq1} P_\gamma w (f):=\sum_{k} a_k I_{\gamma, 1}(f)({\bf x}_k) \end{equation} where the $a_k$'s are i.i.d. 
random variables with probability distribution $P(a)$, and the ${\bf x}_k$'s are random point locations in ${\mathbb R}^n$ which are mutually independent and follow a spatial Poisson distribution with Poisson parameter $\lambda$. Then $P_{\gamma} w$ is the generalized random process associated with the characteristic functional \begin{equation}\label{generalizedpoisson.tm.eq2} {\mathcal Z}_{P_\gamma w}(f)= \exp\Big(\lambda \int_{{\mathbb{R}}^d}\int_{{\mathbb{R}}} \big(e^{-ia (I_{\gamma, 1}f)({\bf x})}-1\big) dP(a)d{\bf x}\Big), \quad f\in {\mathcal D}. \end{equation} \end{Tm} \bigskip The organization of the paper is as follows. In Section \ref{grp.section}, we first introduce a linear operator $J_\Omega$ for any homogeneous function $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ of degree $-\gamma$, where $\gamma-d\not\in {\mathbb{Z}}_+$. The linear operator $J_\Omega$ becomes the generalized Riesz potential $I_\gamma$ in \eqref{generalizedriesz.def} when $\Omega(\xi)=\|\xi\|^{-\gamma}$; conversely, any derivative of the generalized Riesz potential $I_\gamma$ is a linear operator $J_\Omega$ associated with some homogeneous function $\Omega$: $$\partial^{\bf j} I_\gamma f= J_{\Omega_{\bf j}} f\quad {\rm for\ all} \ f\in {\mathcal S}\ {\rm and} \ {\bf j}\in {\mathbb{Z}}_+^d,$$ where $\Omega_{\bf j}(\xi)=(i\xi)^{\bf j} \|\xi\|^{-\gamma}$. We then study various properties of the above linear operator $J_\Omega$, such as polynomial decay property, dilation-invariance, translation-invariance, left-invertibility, and non-integrability in the spatial domain and in the Fourier domain. The proof of Theorem \ref{generalizedriesz.tm} is given at the end of Section \ref{grp.section}. In Section \ref{irp.section}, we introduce a linear operator $U_{\Omega,p}$ for any homogeneous function $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ of degree $-\gamma$, where $1\le p\le \infty$. The above linear operator $U_{\Omega, p}$ becomes the operator $I_{\gamma,p}$ in \eqref{fractionalderivative.veryolddef} when $\Omega(\xi)=\|\xi\|^{-\gamma}$, and the operator $J_\Omega$ in \eqref{fractionalderivative.def} when $0<\gamma<d(1-1/p)$. We show that the linear operator $U_{\Omega,p}$ is dilation-invariant, translation-variant and $p$-integrable, and is a left-inverse of the fractional Laplacian $(-\triangle)^{\gamma/2}$ when $\Omega(\xi)=\|\xi\|^{-\gamma}$. The proof of Theorem \ref{integrablefractionalderivative.tm} is given at the end of Section \ref{irp.section}. In Section \ref{poisson.section}, we give the proof of Theorem \ref{generalizedpoisson.tm} and show that the generalized random process $P_\gamma w$ can be evaluated pointwise in the sense that we can replace the function $f$ in \eqref{generalizedpoisson.tm.eq1} by the delta functional $\delta$. In this paper, the capital letter $C$ denotes an absolute positive constant which may vary depending on the occurrence. \section{Generalized Riesz Potentials}\label{grp.section} Let $\gamma$ be a real number such that $\gamma-d\not\in {\mathbb{Z}}_+$, and let $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a homogeneous function of degree $-\gamma$. Following the definition of homogenous tempered distributions in \cite{hormanderbook}, we define the linear operator $J_\Omega$ from ${\mathcal S}$ to ${\mathcal S}'$ by \begin{eqnarray}\label{fractionalderivative.def} J_\Omega f({\bf x})\!\! & := & \!\! 
\frac{(2\pi)^{-d}\Gamma(d-\gamma)}{ \Gamma(d+k_0-\gamma)} \int_{S^{d-1}} \int_0^\infty \Omega(\xi')r^{k_0-\gamma+d-1}\nonumber \\ & & \times \Big(-\frac{d}{dr}\Big)^{k_0} \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi')\Big) dr d\sigma(\xi'), \quad f\in {\mathcal S}, \end{eqnarray} where $S^{n-1}=\{\xi'\in {\mathbb{R}}^d: \ \|\xi'\|=1\}$ is the unit sphere in ${{\mathbb{R}}}^d$, $d\sigma$ is the area element on $S^{n-1}$, and $k_0$ is a nonnegative integer larger than $\gamma-d$. Note that the linear operator $J_\Omega$ in \eqref{fractionalderivative.def} becomes the generalized Riesz potential $I_\gamma$ in \eqref{generalizedriesz.def} when $\Omega(\xi)=\|\xi\|^{-\gamma}$ and $\gamma>0$. Therefore we call the linear operator $J_\Omega$ in \eqref{fractionalderivative.def} {\em the generalized Riesz potential associated with the homogeneous function $\Omega$ of degree $-\gamma$}, or {\em the generalized Riesz potential} for brevity. The above definition of the generalized Riesz potential $J_\Omega$ is independent on the nonnegative integer $k_0$ as long as it satisfies $k_0>\gamma-d$, that can be shown by integration by parts. Then, for $\gamma \in (-\infty, d)$, we may take $k_0=0$ and reformulate \eqref{fractionalderivative.def} as follows: \begin{equation}\label{fractionalderivative.neweq2} J_\Omega f({\bf x})=(2\pi)^{-d} \int_{{\mathbb{R}}^d} e^{i\langle {\bf x}, \xi\rangle} \Omega(\xi) \hat f(\xi) d\xi \quad {\rm for \ all} \ f\in {\mathcal S}, \end{equation} or equivalently \begin{equation} \widehat{J_\Omega f}(\xi)= \Omega(\xi) \hat f(\xi)\quad {\rm for \ all} \ f\in {\mathcal S},\end{equation} so that the role of the homogeneous function $\Omega(\xi)$ in \eqref{fractionalderivative.def} is essentially that of the Fourier symbol for a conventional translation-invariant operator. Let ${\mathcal S}_\infty$ be the space of all Schwartz functions $f$ such that $ \partial^{\bf i}\hat f({\bf 0})=0$ for all ${\bf i}\in {\mathbb{Z}}_+^d$, or equivalently that $\int_{{\mathbb R}^d} {\bf x}^{\bf j} f({\bf x}) d{\bf x}=0$ for all ${\bf j}\in {\mathbb{Z}}_+^d$. Given a homogenous function $\Omega\in C^\infty({\mathbb R}^d\backslash \{{\bf 0}\})$, define the linear operator $i_\Omega$ on ${\mathcal S}_\infty$ by \begin{equation}\label{fractionalderivative.def1} \widehat{i_\Omega f}(\xi)= \Omega(\xi) \hat f(\xi),\quad f \in {\mathcal S}_\infty. \end{equation} Clearly $i_\Omega$ is a continuous linear operator on the closed linear subspace ${\mathcal S}_\infty$ of ${\mathcal S}$. For any function $f\in {\mathcal S}_\infty$, applying the integration-by-parts technique $k_0$ times and noticing that $\lim_{\epsilon\to 0} \epsilon^{-\gamma} |\partial^{\bf i} \hat f(\epsilon \xi')|=0$ for all $\xi'\in S^{d-1}$ and ${\bf i}\in {\mathbb{Z}}_+^d$, we obtain that \begin{eqnarray}\label{extension.eq} J_\Omega f({\bf x}) & = & \frac{(2\pi)^{-d}\Gamma(d-\gamma)}{ \Gamma(d+k_0-\gamma)} \lim_{\epsilon\to 0}\int_{S^{d-1}} \int_\epsilon^\infty r^{k_0+d-\gamma-1}\Omega(\xi')\nonumber \\ & & \quad \times \Big(-\frac{d}{dr}\Big)^{k_0} \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi')\Big) dr d\sigma(\xi')\nonumber\\ & = & (2\pi)^{-d} \lim_{\epsilon\to 0} \int_{S^{d-1}} \int_\epsilon^\infty \Omega(\xi')r^{d-\gamma-1} e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi') dr d\sigma(\xi')\nonumber\\ & = & (2\pi)^{-d}\int_{{\mathbb{R}}^d} e^{i\langle {\bf x}, \xi\rangle} \Omega(\xi) \hat f(\xi) d\xi= i_\Omega f({\bf x}). 
\end{eqnarray} Hence the generalized Riesz potential $J_\Omega$ is the extension of the linear operator $i_\Omega$ from the closed subspace ${\mathcal S}_\infty$ to the whole space ${\mathcal S}$. In the sequel, we will study further properties of the generalized Riesz potential $J_\Omega$, such as the polynomial decay property (Theorem \ref{generalizedrieszomega1.tm}), the continuity as a linear operator from ${\mathcal S}$ to ${\mathcal S}'$ (Corollary \ref{generalizedrieszomega1.cr}), the translation-invariance and dilation-invariance (Theorem \ref{maintheorem.iomega1}), the composition and left-inverse property (Theorem \ref{composition.tm} and Corollary \ref{composition.cr}), the uniqueness of various extensions of the linear operator $i_\Omega$ from the closed subspace ${\mathcal S}_\infty$ to the whole space ${\mathcal S}$ (Theorems \ref{iomega2.tm} and \ref{iomega4.tm}), the non-integrability in the spatial domain (Theorem \ref{time1.tm}), and the non-integrability in the Fourier domain (Theorem \ref{frequency.tm1}). Some of those properties will be used to prove Theorem \ref{generalizedriesz.tm}, which is included at the end of this section. \subsection{Polynomial decay property and continuity} \begin{Tm} \label{generalizedrieszomega1.tm} Let $\gamma$ be a positive number with $\gamma-d\not\in {\mathbb{Z}}_+$, $k_0$ be the smallest nonnegative integer larger than $\gamma-d$, and let $\Omega\in C^\infty ({\mathbb{R}}^d\backslash \{\bf 0\})$ be a homogeneous function of degree $-\gamma$. If there exist positive constants $\epsilon$ and $C_\epsilon$ such that \begin{equation}\label{generalizedrieszomega1.tm.eq1} |f({\bf x})|\le C_\epsilon (1+\|{\bf x}\|)^{-k_0-d-\epsilon} \ {\rm for \ all} \ {\bf x}\in {\mathbb R}^d, \end{equation} then there exists a positive constant $C$ such that \begin{eqnarray}\label{generalizedrieszomega1.tm.eq2} |J_\Omega f({\bf x})| \le C \Big(\sup_{{\bf z}\in {\mathbb R}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_0+d+\epsilon}\Big) (1+\|{\bf x}\|)^{\gamma-d}, \ \ {\bf x}\in {\mathbb R}^d. \end{eqnarray} \end{Tm} \begin{proof} Noting that $ \big(\frac{d}{dr}\big)^{s} e^{ir\langle {\bf x}, \xi'\rangle} = s! \Big(\sum_{|{\bf i}|=s} \frac{(i{\bf x})^{\bf i} \xi'^{\bf i}}{ {\bf i}!} \Big)e^{ir\langle {\bf x}, \xi'\rangle}$ and $\big(\frac{d}{dr}\big)^{k_0-s} \hat f(r\xi')= (k_0-s)! \sum_{|{\bf j}|=k_0-s} \frac{ (\xi')^{\bf j} \partial^{\bf j} \hat f(r\xi')}{{\bf j}!} $ for all $0\le s\le k_0$, we obtain from the Leibniz rule that \begin{eqnarray*} \Big(\frac{d}{dr}\Big)^{k_0} \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi')\Big) & = & \sum_{s=0}^{k_0}\binom{k_0}{s} \Big\{\Big(\frac{d}{dr}\Big)^{k_0-s} e^{ir\langle {\bf x}, \xi'\rangle}\Big\}\cdot \Big\{\Big(\frac{d}{dr}\Big)^{k_0} \hat f(r\xi')\Big\}\nonumber\\ & = & \Big(\sum_{|{\bf i}|+|{\bf j}|=k_0}\frac{k_0!}{{\bf i}!{\bf j}!} { (i{\bf x})}^{\bf i} (\xi')^{{\bf i}+{\bf j}} \partial^{\bf j} \hat f(r\xi')\Big)e^{ir\langle {\bf x}, \xi'\rangle}. 
\end{eqnarray*} Substituting the above expression into \eqref{fractionalderivative.def} we get \begin{eqnarray}\label{generalizedrieszomega1.tm.pf.eq1} J_\Omega f({\bf x} & = & (-1)^{k_0}\sum_{|{\bf i}|+|{\bf j}|=k_0}\frac{k_0!}{{\bf i}!{\bf j}!} (i{\bf x})^{\bf i} \Big\{\frac{(2\pi)^{-d}\Gamma(d-\gamma)}{ \Gamma(d+k_0-\gamma)}\nonumber\\ & & \times \int_{{\mathbb{R}}^d} e^{i\langle {\bf x}, \xi\rangle} \big(\xi^{{\bf i}+{\bf j}}\Omega(\xi)\big) \partial^{\bf j}\hat f(\xi)d\xi\Big\}\nonumber\\ & = & \frac{\Gamma(d-\gamma)}{\Gamma(d+k_0-\gamma)} \sum_{|{\bf i}|+|{\bf j}|=k_0}\frac{k_0! }{{\bf i}!{\bf j}!} (-{\bf x})^{\bf i} J_{\Omega_{{\bf i}+{\bf j}}} (f_{\bf j})({\bf x}), \end{eqnarray} where $\Omega_{{\bf i}+{\bf j}}(\xi)= (i\xi)^{{\bf i}+{\bf j}}\Omega(\xi)$ and $f_{\bf j}({\bf x})= {\bf x}^{\bf j} f({\bf x})$. Denote the inverse Fourier transform of $\Omega_{{\bf k}}, |{\bf k}|=k_0$, by $K_{{\bf k}}$. Then $K_{{\bf k}}\in C^\infty({\mathbb R}^d\backslash \{{\bf 0}\})$ is a homogeneous function of degree $\gamma-k_0-d$ (\cite[Theorems 7.1.16 and 7.1.18]{hormanderbook}), and hence there exists a positive constant $C$ such that \begin{equation}\label{generalizedrieszomega1.tm.pf.eq2} |K_{\bf k}({\bf x})|\le C \|{\bf x}\|^{\gamma-k_0-d} \quad {\rm for \ all} \ {\bf x}\in {\mathbb R}^d\backslash \{{\bf 0}\}. \end{equation} For any $\epsilon>0$ and $\beta\in (0, d)$, we have \begin{eqnarray}\label{generalizedrieszomega1.tm.pf.eq3} & & \int_{{\mathbb R}^d} \|{\bf x}-{\bf y}\|^{-\beta} (1+\|{\bf y}\|)^{-d-\epsilon} d{\bf y}\nonumber\\ & \le & \Big( \int_{\|{\bf y}\|\le (\|{\bf x}\|+1)/2}+\int_{(\|{\bf x}\|+1)/2\le \|{\bf y}\|\le 2(\|{\bf x}\|+1)}+ \int_{\|{\bf y}\|\ge 2(\|{\bf x}\|+1)} \Big) \nonumber\\ & & \qquad \|{\bf x}-{\bf y}\|^{-\beta} (1+\|{\bf y}\|)^{-d-\epsilon} d{\bf y}\nonumber\\ & \le & C (1+\|{\bf x}\|)^{-\beta}. \end{eqnarray} Combining \eqref{generalizedrieszomega1.tm.pf.eq1}, \eqref{generalizedrieszomega1.tm.pf.eq2} and \eqref{generalizedrieszomega1.tm.pf.eq3} yields \begin{eqnarray*} |J_\Omega f({\bf x})| & \le & C \sum_{|{\bf i}|+|{\bf j}|=k_0} |{\bf x}|^{|{\bf i}|} \Big|\int_{{\mathbb R}^d} K_{{\bf i}+{\bf j}}({\bf x}-{\bf y}) {\bf y}^{\bf j} f({\bf y}) \Big| d{\bf y}\nonumber\\ & \le & C (1+\|{\bf x}\|)^{k_0} \int_{{\mathbb R}^d} \|{\bf x}-{\bf y}\|^{\gamma-k_0-d} (1+\|{\bf y}\|)^{k_0} |f({\bf y})| d{\bf y}\nonumber\\ & \le & C \Big(\sup_{{\bf z}\in {\mathbb R}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_0+d+\epsilon}\Big) (1+\|{\bf x}\|)^{\gamma-d}.\end{eqnarray*} This proves the desired polynomial decay estimate \eqref{generalizedrieszomega1.tm.eq2}. 
\end{proof} For any $f\in {\mathcal S}$ and ${\bf j}\in {\mathbb{Z}}_+^d$ with $|{\bf j}|=1$, it follows from \eqref{fractionalderivative.def} that \begin{eqnarray*} \partial^{\bf j}(J_\Omega f) ({\bf x}) &= & J_\Omega(\partial^{\bf j} f) ({\bf x})\nonumber\\ & = & \frac{(2\pi)^{-d}\Gamma(d-\gamma)}{ \Gamma(d+k_0-\gamma)} \int_{S^{d-1}} \int_0^\infty \Omega(\xi') (i\xi')^{\bf j} r^{k_0+d-\gamma-1}\nonumber \\ & & \times \Big(-\frac{d}{dr}\Big)^{k_0} \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi') r \Big) dr d\sigma(\xi')\nonumber\\ & = & \frac{(2\pi)^{-d}\Gamma(d-\gamma)}{ \Gamma(d+k_0-\gamma)} \int_{S^{d-1}} \int_0^\infty \Omega(\xi') (i\xi')^{\bf j} r^{k_0+d-\gamma-1}\nonumber \\ & & \times \Big\{ r \Big(-\frac{d}{dr}\Big)^{k_0} \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi') \Big) \nonumber\\ & & \quad -{k_0}\Big(-\frac{d}{dr}\Big)^{k_0-1} \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi') \Big)\Big\} dr d\sigma(\xi')\nonumber\\ & = & \Big(\frac{d+k_0-\gamma}{d-\gamma}-k_0\frac{1}{d-\gamma}\Big) J_{\Omega_{\bf j}} f({\bf x})= J_{\Omega_{\bf j}} f({\bf x}), \end{eqnarray*} where $\Omega_{\bf j}(\xi)=(i\xi)^{\bf j} \Omega(\xi) $. Applying the argument inductively leads to \begin{equation} \label{fractionalderivative.eq00} \partial^{\bf j} (J_\Omega f)= J_\Omega(\partial^{\bf j} f) = J_{\Omega_{\bf j}} f \quad {\rm for \ all} \ f\in {\mathcal S} \ {\rm and} \ {\bf j}\in {\mathbb{Z}}_+^d, \end{equation} where $\Omega_{\bf j}(\xi)= (i\xi)^{\bf j}\Omega(\xi)$. This together with Theorem \ref{generalizedrieszomega1.tm} shows that $J_\Omega f$ is a smooth function on ${\mathbb R}^d$ for any Schwartz function $f$. \begin{Cr} \label{generalizedrieszomega1.cr0} Let $\gamma, k_0$ and $\Omega$ be as in Theorem \ref{generalizedrieszomega1.tm}. If $f$ satisfies \eqref{generalizedrieszomega1.tm.eq1} for some positive constants $\epsilon$ and $C_\epsilon$, then for any ${\bf j}\in {\mathbb{Z}}_+^d$ with $|{\bf j}|<\gamma$ there exists a positive constant $C_{\bf j}$ such that \begin{equation}\label{generalizedrieszomega1.cr0.eq2} |\partial^{\bf j} (J_\Omega f)({\bf x})| \le C_{\bf j} \Big(\sup_{{\bf z}\in {\mathbb R}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_0+d+\epsilon}\Big) (1+\|{\bf x}\|)^{\gamma-|{\bf j}|-d}, \ {\bf x}\in {\mathbb{R}}^d. \end{equation} \end{Cr} An easy consequence of the above smoothness result about $J_\Omega f$ is the continuity of the generalized Riesz potential $J_\Omega$ from ${\mathcal S}$ to ${\mathcal S}'$. \begin{Cr}\label{generalizedrieszomega1.cr} Let $\gamma$ be a positive number with $\gamma-d\not\in {\mathbb{Z}}_+$, and let $\Omega\in C^{\infty} ({\mathbb{R}}^d\backslash \{\bf 0\})$ be a homogeneous function of degree $-\gamma$. Then the generalized Riesz potential $J_\Omega $ associated with the homogeneous function $\Omega$ is a continuous linear operator from ${\mathcal S}$ to ${\mathcal S}'$. \end{Cr} Now consider the generalized Riesz potential $J_\Omega$ when $\Omega$ is a homogeneous function of positive degree $\alpha$. In this case, $$ J_\Omega f({\bf x}) = (2\pi)^{-d} \int_{{\mathbb{R}}^d} e^{i\langle {\bf x}, \xi\rangle} \Omega(\xi)\hat f(\xi) d\xi\quad {\rm for \ all} \ f\in {\mathcal S}$$ by \eqref{fractionalderivative.neweq2}. 
Applying the integration-by-parts technique then gives $$ J_\Omega f({\bf x}) = (2\pi)^{-d} (-i {\bf x}^{\bf i})^{-1} \sum_{{\bf j}+{\bf k}={\bf i}} \frac{{\bf i}!}{{\bf j}!{\bf k}!} \int_{{\mathbb{R}}^d} e^{i\langle {\bf x}, \xi\rangle} \partial^{\bf j} \Omega(\xi) \partial^{\bf k}\hat f(\xi) d\xi $$ for any ${\bf i}\in {\mathbb{Z}}_+^d$. This, together with the identity $$1=\sum_{|{\bf l}|=\lceil \alpha\rceil-|{\bf j}|} \frac{(\lceil \alpha \rceil-|{\bf j}|)!}{{\bf l}!} \Big(\frac{i\xi}{\|{\bf \xi}\|^2}\Big)^{\bf l} (-i\xi)^{\bf l},\quad |{\bf j}|\le \lceil \alpha\rceil,$$ leads to the following estimate of $J_\Omega f({\bf x})$: \begin{eqnarray*} |J_\Omega f({\bf x})|\!\! & \le & \!\! C (1+\|{\bf x}\|)^{-\lceil \alpha\rceil} \sum_{|{\bf j}|+|{\bf k}|\le \lceil \alpha\rceil, |{\bf l}|=\lceil \alpha\rceil-|{\bf j}|} \Big|\int_{{{\mathbb{R}}}^d} e^{i\langle {\bf x}, \xi\rangle} \Omega_{{\bf j}, {\bf l}}(\xi) \xi^{\bf l} \partial^{\bf k} \hat f(\xi) d\xi\Big| \nonumber\\ \!\! & \le & \!\! C (1+\|{\bf x}\|)^{-\lceil \alpha\rceil} \sum_{|{\bf j}|+|{\bf k}|\le \lceil \alpha\rceil, |{\bf l}|+|{\bf j}|=\lceil \alpha\rceil} |I_{\Omega_{{\bf j}, {\bf l}}} f_{{\bf l}, {\bf k}}({\bf x})|, \end{eqnarray*} where $\lceil \alpha\rceil$ is the smallest integer larger than $\alpha$, $\Omega_{{\bf j}, {\bf l}}(\xi)=\partial^{\bf j}\Omega(\xi) (i\xi/\|\xi\|^2)^{\bf l}$, and $\widehat{ f_{{\bf l}, {\bf k}}} (\xi)=(-i\xi)^{\bf l} \partial^{{\bf k}} \hat f(\xi)$. Note that $\Omega_{{\bf j}, {\bf l}}\in C^\infty({\mathbb{R}}^d\backslash \{{\bf 0}\})$ is a homogeneous function of degree $\alpha-\lceil \alpha\rceil<0$ when $|{\bf j}|+|{\bf l}|=\lceil \alpha\rceil$, and also that functions $f_{{\bf l}, {\bf k}}({\bf x}), |{\bf k}|, |{\bf l}|\le \lceil \alpha\rceil$ are linear combinations of ${\bf x}^{\bf i} \partial^{\bf j} f({\bf x}), |{\bf i}|, |{\bf j}|\le \lceil \alpha\rceil$. We then apply Theorem \ref{generalizedrieszomega1.tm} to obtain the following polynomial decay estimate of $J_\Omega f$ when $\Omega$ is a homogeneous function of positive degree: \begin{pr}\label{positivegeneralizedrieszomega1.cr} Let $\alpha$ be a positive non-integer number, and $\Omega\in C^\infty ({\mathbb{R}}^d\backslash \{\bf 0\})$ be a homogeneous function of degree $\alpha$. If there exist positive constants $\epsilon$ and $C_\epsilon$ such that \begin{equation*}\label{positivegeneralizedrieszomega1.cr.eq1} \sum_{|{\bf i}|\le \lceil \alpha\rceil} |\partial^{\bf i} f({\bf x})|\le C_\epsilon (1+\|{\bf x}\|)^{-\lceil \alpha\rceil-d-\epsilon} \ {\rm for \ all} \ {\bf x}\in {\mathbb R}^d, \end{equation*} then there exists a positive constant $C$ such that \begin{equation} \label{positivegeneralizedrieszomega1.cr.eq2} |J_\Omega f({\bf x})| \le C \Big(\sum_{|{\bf i}|\le \lceil \alpha\rceil} \sup_{{\bf z}\in {\mathbb R}^d} |\partial^{\bf i} f({\bf z})| (1+\|{\bf z}\|)^{\lceil \alpha\rceil+d+\epsilon}\Big) (1+\|{\bf x}\|)^{-\alpha-d} \end{equation} for all ${\bf x}\in {\mathbb R}^d$. \end{pr} The estimates in \eqref{generalizedrieszomega1.tm.eq2} and \eqref{positivegeneralizedrieszomega1.cr.eq2} indicate that the generalized Riesz potential $J_\Omega f$ has faster polynomial decay at infinity when the degree of the homogeneous function $\Omega$ becomes larger. 
Next, we show that the generalized Riesz potential $J_\Omega f$ has faster polynomial decay at infinity when $f$ has vanishing moments up to some order; i.e., \begin{equation}\label{momentcondition} \int_{{\mathbb{R}}^d} {\bf x}^{\bf i} f({\bf x}) d{\bf x}=0,\ |{\bf i}|\le m_0 \end{equation} where $m_0\ge 0$. In this case, $\partial^{\bf i}\hat f({\bf 0})=0$ for all $|{\bf i}|\le m_0$, and hence \begin{equation} \hat f(\xi)= \sum_{|{\bf k}|=m_0+1}\frac{m_0+1}{{\bf k}!} \int_0^1 \xi^{\bf k} \partial^{\bf k} \hat f(t\xi) (1-t)^{m_0} dt \end{equation} by the Taylor expansion to $\hat f$ at the origin. Now we assume that $\Omega\in C^\infty({\mathbb{R}}^d\backslash\{{\bf 0}\})$ is a homogeneous function of degree $\alpha\in (-m_0-1, \infty)\backslash {\mathbb{Z}} $. Then \begin{eqnarray}\label{momentgeneralizedrieszomega1.pr.pf1} |J_\Omega f({\bf x})|\!\! \!& \le &\!\! C \sum_{|{\bf k}|=m_0+1}\int_0^1\int_{\|\xi\|\le 1} |\xi|^{\alpha+m_0+1} |\partial^{\bf k} \hat f(t\xi)| d\xi dt+ C \int_{|\xi|\ge 1} |\xi|^{\alpha} |\hat f(\xi)|d\xi\nonumber\\ & \le & C \sum_{|{\bf i}|\le m_0+1} \sup_{\xi \in {\mathbb{R}}^d} \big((1+\|\xi\|)^{\lceil \alpha\rceil +d} |\partial^{\bf i}\hat f(\xi)|\big) \end{eqnarray} for all ${\bf x}\in {\mathbb{R}}^d$ with $\|{\bf x}\|\le 1$, and \begin{eqnarray}\label{momentgeneralizedrieszomega1.pr.pf2} |J_\Omega f({\bf x})|\!\! &\le &\!\! C \sum_{|{\bf k}|=m_0+1} \int_0^1 \Big|\int_{{\mathbb{R}}^d} e^{-i\langle {\bf x}, \xi\rangle} \phi(\|{\bf x}\| \xi) \xi^{\bf k} \Omega(\xi) \partial^{\bf k} \hat f(t\xi) d\xi\Big| dt\nonumber\\ \!\! & & \!\! +C\sum_{|{\bf k}|=m_0+1} \int_0^1 \Big|\int_{{\mathbb{R}}^d} e^{-i\langle {\bf x}, \xi\rangle} \big(\phi(\xi)-\phi(\|{\bf x}\| \xi)\big)\xi^{\bf k} \Omega(\xi) \partial^{\bf k} \hat f(t\xi) d\xi\Big|dt\nonumber \\ \!\!& &\!\! + C \Big|\int_{{\mathbb{R}}^d} e^{-i\langle {\bf x}, \xi\rangle} \big(1-\phi(\xi)\big) \Omega(\xi) \hat f(\xi) d\xi\Big|\nonumber\\ \!\!&\le & \!\! C (1+\|{\bf x}\|)^{-\lceil \alpha\rceil-m_0-d} \Big\{\sum_{|{\bf k}|=m_0+1, |{\bf j}|\le \lceil \alpha\rceil+m_0+d}\nonumber\\ \!\!& &\!\! \quad \int_0^1\int_{{\mathbb{R}}^d} \Big|\partial^{\bf j} \big(\phi(\|{\bf x}\|\xi) \xi^{\bf k} \Omega(\xi) \partial^{\bf k}\hat f(t\xi)\big)\Big| d\xi dt\Big\} \nonumber\\ \!\!& &\!\! + C (1+\|{\bf x}\|)^{-\lceil \alpha\rceil-m_0-d-1} \Big\{\sum_{|{\bf k}|=m_0+1, |{\bf j}|\le \lceil \alpha\rceil+m_0+d+1}\nonumber\\ \!\!& &\!\! \quad \int_0^1\int_{{\mathbb{R}}^d} \Big|\partial^{\bf j}\big( (\phi(\xi)-\phi(\|{\bf x}\|\xi)) \xi^{\bf k} \Omega(\xi) \partial^{\bf k}\hat f(t\xi)\big)\Big| d\xi dt\Big\}\nonumber\\ \!\!& &\!\! + C (1+\|{\bf x}\|)^{-\lceil \alpha\rceil-m_0-d-1} \nonumber\\ & & \quad \times \Big\{\sum_{|{\bf j}|\le \lceil \alpha\rceil+m_0+d+1} \int_{{\mathbb{R}}^d} \Big|\partial^{\bf j}\big( (1-\phi(\xi)) \Omega(\xi) \hat f(\xi)\big)\Big| d\xi\Big\} \nonumber\\ \!\!& \le & \!\! C \Big(\sum_{|{\bf i}|\le \lceil \alpha\rceil+2m_0+d+2} \sup_{\xi\in {\mathbb{R}}^d} (1+\|\xi\|)^{\lceil \alpha\rceil+d} |\partial^{\bf i} \hat f(\xi)|\Big) (1+\|{\bf x}\|)^{-\alpha-m_0-d-1} \end{eqnarray} for all ${\bf x}\in {\mathbb{R}}^d$ with $\|{\bf x}\|\ge 1$, where $\phi$ is a $C^\infty$ function such that $\phi(\xi)=1$ for all $\xi$ in the unit ball $B({\bf 0}, 1)$ centered at the origin, and $\phi(\xi)=0$ for all $\xi$ not in the ball $B({\bf 0}, 2)$ with radius 2 and center at the origin. This proves the following result about the generalized Riesz potential $J_\Omega f$ when $f$ has vanishing moments upto some order. 
\begin{pr}\label{momentgeneralizedrieszomega1.pr} Let $m_0\ge 0, \alpha\in (-m_0-1, \infty)\backslash {\mathbb{Z}}$, and $\Omega\in C^\infty({\mathbb{R}}^d\backslash\{{\bf 0}\})$ be a homogeneous function of degree $\alpha$. Then the following statements hold. \begin{itemize} \item [{(i)}] If $f$ satisfies \eqref{momentcondition} and \begin{equation} \sum_{|{\bf i}|\le\lceil \alpha\rceil+2m_0+d+2} \sup_{\xi\in {\mathbb{R}}^d} (1+\|\xi\|)^{\lceil \alpha\rceil+d} |\partial^{\bf i} \hat f(\xi)|<\infty, \end{equation} then there exists a positive constant $C$ such that \begin{eqnarray} |J_\Omega f({\bf x})| & \le & C \Big(\sum_{|{\bf i}|\le\lceil \alpha\rceil+2m_0+d+2} \sup_{\xi\in {\mathbb{R}}^d} (1+\|\xi\|)^{\lceil \alpha\rceil+d} |\partial^{\bf i} \hat f(\xi)|\Big) \nonumber\\ & & \quad \times (1+\|{\bf x}\|)^{-\alpha-m_0-d-1} \quad {\rm for \ all} \ {\bf x}\in {\mathbb{R}}^d. \end{eqnarray} \item[{(ii)}] If $f$ satisfies \eqref{momentcondition} and \begin{equation} \sum_{|{\bf i}|\le \max(\lceil \alpha\rceil +d, 0)} \sup_{{\bf z}\in {\mathbb{R}}^d} \big( (1+\|{\bf z}\|)^{\lceil \alpha\rceil+2m_0+2d+2+\epsilon} |\partial^{\bf i} f({\bf z})|\big)<\infty \end{equation} for some $\epsilon>0$, then \begin{eqnarray} |J_\Omega f({\bf x})| & \le & C \Big( \sum_{|{\bf i}|\le \max(\lceil \alpha\rceil +d, 0)} \sup_{{\bf z}\in {\mathbb{R}}^d} \big( (1+\|{\bf z}\|)^{\lceil \alpha\rceil+2m_0+2d+2+\epsilon} |\partial^{\bf i} f({\bf z})| \Big) \nonumber\\ & & \quad \times (1+\|{\bf x}\|)^{-\alpha-m_0-d-1} \quad {\rm for \ all} \ {\bf x}\in {\mathbb{R}}^d. \end{eqnarray} \end{itemize} \end{pr} The conclusions in Proposition \ref{momentgeneralizedrieszomega1.pr} do not apply to the generalized Riesz potential $J_\Omega f$ where $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{{\bf 0}\})$ is a homogeneous function of degree zero. In this case, applying the argument used to establish \eqref{momentgeneralizedrieszomega1.pr.pf1} and \eqref{momentgeneralizedrieszomega1.pr.pf2}, we have that \begin{eqnarray}\label{momentgeneralizedrieszomega2.pr.pf1} |J_\Omega f({\bf x}) & \le & C \sum_{|{\bf i}|\le m_0+1} \sup_{\xi \in {\mathbb{R}}^d} \big((1+\|\xi\|)^{d+\epsilon} |\partial^{\bf i}\hat f(\xi)|\big) \end{eqnarray} for all ${\bf x}\in {\mathbb{R}}^d$ with $\|{\bf x}\|\le 1$, and \begin{eqnarray}\label{momentgeneralizedrieszomega2.pr.pf2} |J_\Omega f({\bf x}) \!\!&\le & \!\! C (1+\|{\bf x}\|)^{-m_0-d} \nonumber\\ & & \times \Big\{ \sum_{|{\bf k}|=m_0+1, |{\bf j}|\le m_0+d} \int_0^1\int_{{\mathbb{R}}^d} \big|\partial^{\bf j} \big(\phi(\|{\bf x}\|\xi) \xi^{\bf k} \Omega(\xi) \partial^{\bf k}\hat f(t\xi)\big)\big| d\xi dt\Big\} \nonumber\\ \!\!& &\!\! + C (1+\|{\bf x}\|)^{-m_0-d-1} \sum_{|{\bf k}|=m_0+1, |{\bf j}|+|{\bf l}|\le m_0+d+1, |{\bf j}|\le m_0+d}\nonumber\\ \!\!& &\!\! \quad \int_0^1\int_{{\mathbb{R}}^d} \big|\partial^{\bf j}\big( (\phi(\xi)-\phi(\|{\bf x}\|\xi)) \xi^{\bf k} \Omega(\xi)\big)\big|\times \big| \partial^{{\bf k}+{\bf l}}\hat f(t\xi)\big)\big| d\xi dt\Big\}\nonumber\\ \!\!& &\!\! + C (1+\|{\bf x}\|)^{-m_0-d-2} \sum_{|{\bf k}|=m_0+1, |{\bf j}|+|{\bf l}|\le m_0+d+2, |{\bf l}|\le 1}\nonumber\\ \!\!& &\!\! \quad \int_0^1\int_{{\mathbb{R}}^d} \big|\partial^{\bf j}\big( (\phi(\xi)-\phi(\|{\bf x}\|\xi)) \xi^{\bf k} \Omega(\xi)\big)\big|\times \big| \partial^{{\bf k}+{\bf l}}\hat f(t\xi)\big)\big| d\xi dt\Big\}\nonumber\\ \!\!& &\!\! 
+ C (1+\|{\bf x}\|)^{-m_0-d-1} \sum_{|{\bf j}|\le m_0+d+1} \int_{{\mathbb{R}}^d} \big|\partial^{\bf j}\big( (1-\phi(\xi)) \Omega(\xi) \hat f(\xi)\big)\big| d\xi \nonumber\\ \!\!& \le & \!\! C \Big(\sum_{|{\bf i}|\le 2m_0+d+2} \sup_{\xi\in {\mathbb{R}}^d} (1+\|\xi\|)^{d+\epsilon} |\partial^{\bf i} \hat f(\xi)|\Big) (1+\|{\bf x}\|)^{-m_0-d-1} \end{eqnarray} for all ${\bf x}\in {\mathbb{R}}^d$ with $\|{\bf x}\|\ge 1$, where $\epsilon\in (0,1)$. Therefore \begin{pr}\label{momentgeneralizedrieszomega2.pr} Let $\Omega\in C^\infty({\mathbb{R}}^d\backslash\{{\bf 0}\})$ be a homogeneous function of degree zero. Then the following statements hold. \begin{itemize} \item [{(i)}] If $f$ satisfies \eqref{momentcondition} for some $m_0\ge 0$ and $$ \sum_{|{\bf i}|\le 2m_0+d+2} \sup_{\xi\in {\mathbb{R}}^d} (1+\|\xi\|)^{d+\epsilon} |\partial^{\bf i} \hat f(\xi)|<\infty$$ for some $\epsilon>0,$ then there exists a positive constant $C$ such that $$ |J_\Omega f({\bf x})| \le C \Big(\sum_{|{\bf i}|\le 2m_0+d+2} \sup_{\xi\in {\mathbb{R}}^d} (1+\|\xi\|)^{d+\epsilon} |\partial^{\bf i} \hat f(\xi)|\Big) (1+\|{\bf x}\|)^{-m_0-d-1} \quad {\rm for \ all} \ {\bf x}\in {\mathbb{R}}^d. $$ \item[{(ii)}] If $f$ satisfies \eqref{momentcondition} for some $m_0\ge 0$ and $$ \sum_{|{\bf i}|\le d+1} \sup_{{\bf z}\in {\mathbb{R}}^d} \big( (1+\|{\bf z}\|)^{2m_0+2d+2+\epsilon} |\partial^{\bf i} f({\bf z})|\big)<\infty $$ for some $\epsilon>0$, then \begin{eqnarray*} |J_\Omega f({\bf x})| & \le & C \Big( \sum_{|{\bf i}|\le d+1} \sup_{{\bf z}\in {\mathbb{R}}^d} \big( (1+\|{\bf z}\|)^{2m_0+2d+2+\epsilon} |\partial^{\bf i} f({\bf z})| \Big) \nonumber\\ & & \quad \times (1+\|{\bf x}\|)^{-m_0-d-1} \quad {\rm for \ all} \ {\bf x}\in {\mathbb{R}}^d. \end{eqnarray*} \end{itemize} \end{pr} \subsection{Translation-invariance and dilation-invariance} In this subsection, we show that the generalized Riesz potential $J_\Omega$ from ${\mathcal S}$ to ${\mathcal S}'$ is dilation-invariant and translation-invariant, and that its restriction on the closed subspace ${\mathcal S}_\infty$ of ${\mathcal S}$ is the same as the linear operator $i_\Omega$ on ${\mathcal S}_\infty$. \begin{Tm}\label{maintheorem.iomega1} Let $\gamma\in {\mathbb{R}}$ with $\gamma-d\not \in {\mathbb{Z}}_+$, $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a homogeneous function of degree $-\gamma$, and let $J_\Omega$ be defined by \eqref{fractionalderivative.def}. Then \begin{itemize} \item[{(i)}] $J_\Omega$ is dilation-invariant; \item[{(ii)}] $J_\Omega$ is translation-invariant; and \item[{(iii)}] $\widehat{J_\Omega f}(\xi)= \Omega(\xi) \hat f(\xi)$ for any function $f\in {\mathcal S}_\infty$. \end{itemize} \end{Tm} \begin{proof} {\em (i)}\quad For any $f\in {\mathcal S}$ and any $t>0$, \begin{eqnarray*} J_\Omega (\delta_t f) ({\bf x}) & = & \frac{(2\pi t)^{-d}\Gamma(d-\gamma)}{ \Gamma(d+k_0-\gamma)} \int_{S^{d-1}} \int_0^\infty \Omega(\xi')r^{k_0-\gamma+d-1}\nonumber \\ & & \times \Big(-\frac{d}{dr}\Big)^{k_0} \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi'/t)\Big) dr d\sigma(\xi') = t^{-\gamma} \delta_t(J_\Omega f)({\bf x}), \end{eqnarray*} where the first equality follows from $\widehat {\delta_t f}(\xi)=t^{-d} \hat f(\xi/t)$ and the second equality is obtained by change of variables. This leads to the dilation-invariance of the generalized Riesz potential $J_\Omega$. 
{\em (ii)}\quad For any $f\in {\mathcal S}$ and a vector ${\bf x}_0\in {\mathbb{R}}^d$, we obtain from \eqref{fractionalderivative.def} that \begin{eqnarray*} J_\Omega (\tau_{{\bf x}_0}f)({\bf x}) & = & \frac{(2\pi)^{-d}\Gamma(d-\gamma)}{ \Gamma(d+k_0-\gamma)} \int_{S^{d-1}} \int_0^\infty r^{k_0-\gamma+d-1} \Omega(\xi') \nonumber \\ & & \times \big(-\frac{d}{dr}\big)^{k_0} \Big(e^{ir\langle {\bf x}-{\bf x}_0, \xi'\rangle} \hat f(r\xi')\Big) dr d\sigma(\xi')= J_\Omega f( {\bf x}-{\bf x}_0), \end{eqnarray*} where $k_0$ is a nonnegative integer larger than $\gamma-d$. This shows that the generalized Riesz potential $J_\Omega$ is translation-invariant. {\em (iii)}\quad The third conclusion follows by taking Fourier transform of the equation \eqref{extension.eq} on both sides. \end{proof} \subsection{Composition and left-inverse} In this subsection, we consider the composition and left-inverse properties of generalized Riesz potentials. \begin{Tm}\label{composition.tm} Let $\gamma_1$ and $\gamma_2\in {\mathbb{R}}$ satisfy $\gamma_2<d, \gamma_1+\gamma_2<d$ and $\gamma_1-d\not\in {\mathbb{Z}}_+$, and let $\Omega_1, \Omega_2\in C^\infty({\mathbb{R}}^d\backslash\{{\bf 0}\})$ be homogeneous functions of degree $-\gamma_1$ and $-\gamma_2$ respectively. Then \begin{equation}\label{composition.tm.eq1} J_{\Omega_1}(J_{\Omega_2} f)=J_{\Omega_1\Omega_2} f \quad {\rm for\ all} \ f\in {\mathcal S}. \end{equation} \end{Tm} As a consequence of Theorem \ref{composition.tm}, we have the following result about left-invertibility of the generalized Riesz potential $J_\Omega$. \begin{Cr}\label{composition.cr} Let $\gamma\in (-d, \infty)$ with $\gamma-d\not\in {\mathbb{Z}}_+$ and $\Omega\in C^\infty({\mathbb{R}}^d\backslash\{{\bf 0}\})$ be homogeneous of degree $-\gamma$ with $\Omega(\xi)\ne 0$ for all $\xi\in S^{d-1}$. Then $J_{\Omega}J_{\Omega^{-1}}$ is an identity operator on ${\mathcal S}$. If we further assume that $\gamma\in (-d, d)$, then both $J_{\Omega^{-1}}J_\Omega$ and $J_{\Omega}J_{\Omega^{-1}}$ are identity operators on ${\mathcal S}$. \end{Cr} Taking $\Omega(\xi)=\|\xi\|^{-\gamma}$ in the above corollary yields that the linear operator $I_\gamma$ in \eqref{generalizedriesz.def} is a left-inverse of the fractional Laplacian $(-\triangle)^{\gamma/2}$. \begin{Cr}\label{invariantleftinverse.cr} Let $\gamma$ be a positive number with $\gamma-d\not\in {\mathbb{Z}}_+$. Then $I_\gamma$ is a left-inverse of the fractional Laplacian $(-\triangle)^{\gamma/2}$. \end{Cr} \begin{proof}[Proof of Theorem \ref{composition.tm}] Let $k_0$ be the smallest nonnegative integer such that $k_0-\gamma_1+d>0$, and set $\Omega(\xi)=\Omega_1(\xi)\Omega_2(\xi)$. If $k_0=0$, then the conclusion \eqref{composition.tm.eq1} follows from \eqref{fractionalderivative.neweq2}. Now we assume that $k_0\ge 1$. Then \begin{eqnarray*}\label{composition.tm.pf.eq2} J_{\Omega_1}(J_{\Omega_2} f)({\bf x}) \!\! 
& = & \frac{(2\pi)^d\Gamma(d-\gamma_1)}{ \Gamma(d+k_0-\gamma_1)} \lim_{\epsilon\to 0} \int_{S^{d-1}} \int_\epsilon^\infty \Omega(\xi') r^{k_0+d-\gamma_1-1} \nonumber \\ & &\times \Big\{ r \Big(-\frac{d}{dr}\Big)^{k_0} \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi') r^{-\gamma_2-1} \Big)\nonumber\\ & & -k_0 \Big(-\frac{d}{dr}\Big)^{k_0-1} \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi') r^{-\gamma_2-1} \Big)\Big\} dr d\sigma(\xi')\nonumber\\ & = & \frac{(2\pi)^d\Gamma(d+1-\gamma_1)}{ \Gamma(d+k_0-\gamma_1)} \lim_{\epsilon\to 0} \int_{S^{d-1}} \int_\epsilon^\infty \Omega(\xi') r^{k_0+d-\gamma_1-1} \nonumber \\ & &\times \Big(-\frac{d}{dr}\Big)^{k_0-1} \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi') r^{-\gamma_2-1} \Big) dr d\sigma(\xi')\nonumber\\ & = & \cdots\nonumber\\ & = & \frac{ (2\pi)^{-d}\Gamma(d+k_0-\gamma_1)}{ \Gamma(d+k_0-\gamma_1)} \lim_{\epsilon\to 0} \int_{S^{d-1}} \int_\epsilon^\infty \Omega(\xi') r^{k_0+d-\gamma_1-1} \nonumber \\ & &\times \Big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi') r^{-\gamma_2-k_0} \Big) dr d\sigma(\xi')\nonumber\\ & = & J_{\Omega_1\Omega_2} f({\bf x})\quad {\rm for\ all} \ {\bf x}\in {\mathbb{R}}^d, \end{eqnarray*} where the second equality is obtained by applying the integration-by-parts technique and using the fact that $ \epsilon^{k_0+d-\gamma_1} \big(\frac{d}{dr}\big)^{k_0-1} \big(e^{ir\langle {\bf x}, \xi'\rangle} \hat f(r\xi') r^{-\gamma_2-1} \big)\big|_{r=\epsilon}$ converges to zero uniformly on $\ \xi\in S^{d-1}$ under the assumption that $\gamma_1+\gamma_2<d$. The conclusion \eqref{composition.tm.eq1} then follows. \end{proof} \subsection{Translation-invariant and dilation-invariant extensions of the linear operator $i_\Omega$} In this subsection, we show that the generalized Riesz potential $J_\Omega$ in \eqref{fractionalderivative.def} is the {\bf only} continuous linear operator from ${\mathcal S}$ to ${\mathcal S}'$ that is translation-invariant and dilation-invariant, and that is an extension of the linear operator $i_\Omega$ in \eqref{fractionalderivative.def1} from the closed subspace ${\mathcal S}_\infty$ to the whole space ${\mathcal S}$. \begin{Tm}\label{iomega2.tm} Let $\gamma$ be a positive number with $\gamma-d\not\in {\mathbb{Z}}_+$, $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a nonzero homogeneous function of degree $-\gamma$, and let $J_\Omega$ be defined by \eqref{fractionalderivative.def}. Then $I$ is a continuous linear operator from ${\mathcal S}$ to ${\mathcal S}'$ such that $I$ is dilation-invariant and translation-invariant, and that the restriction of $I$ on ${\mathcal S}_\infty$ is the same as the linear operator $i_\Omega$ in \eqref{fractionalderivative.def1} if and only if $I=J_\Omega$. \end{Tm} To prove Theorem \ref{iomega2.tm}, we need two technical lemmas about extensions of the linear operator $i_\Omega$ on ${\mathcal S}_\infty$. \begin{Lm}\label{homogeneous1.lm} Let $\gamma$ be a positive number with $\gamma-d\not\in {\mathbb{Z}}_+$, $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a homogeneous function of degree $-\gamma$, and let $J_\Omega$ be defined by \eqref{fractionalderivative.def}. 
Then a continuous linear operator $I$ from ${\mathcal S}$ to ${\mathcal S}'$ is an extension of the linear operator $i_\Omega$ on ${\mathcal S}_\infty$ if and only if \begin{equation}\label{homogeneous1.lm.eq1} If= J_\Omega f+\sum_{|{\bf i}|\le N} \frac{\partial^{\bf i} \hat f({\bf 0})}{{\bf i}!} H_{\bf i} \end{equation} for some integer $N$ and tempered distributions $H_{\bf i}, {\bf i}\in {\mathbb{Z}}_+^d$ with $|{\bf i}|\le N$. \end{Lm} \begin{proof} The sufficiency follows from Theorem \ref{maintheorem.iomega1} and the assumption that $H_{\bf i}, |{\bf i}|\le N$, in \eqref{homogeneous1.lm.eq1} are tempered distributions. Now the necessity. By Corollary \ref{generalizedrieszomega1.cr} and Theorem \ref{maintheorem.iomega1}, $I-J_\Omega$ is a continuous linear operator from ${\mathcal S}$ to ${\mathcal S}'$ that satisfies that $(I-J_\Omega)f=0$ for all $f\in {\mathcal S}_\infty$. This implies that the inverse Fourier transform of the tempered distribution $(I-J_\Omega)^* g$ is supported on the origin for any Schwartz function $g$. Hence there exist an integer $N$ and tempered distribution $H_{\bf i}, |{\bf i}|\le N$, such that ${\mathcal F}^{-1}((I-J_\Omega)^* g)=\sum_{|{\bf i}|\le N} \langle g, H_{\bf i}\rangle \delta^{({\bf i})}/{{\bf i}!}$, where the tempered distributions $\delta^{({\bf i})}, {\bf i}\in {\mathbb{Z}}_+^d$, are defined by $\langle \delta^{({\bf i})}, f\rangle=\partial^{\bf i} f({\bf 0})$ \cite[Theorem 2.3.4]{hormanderbook}. Then $\langle (I-J_\Omega)f, g\rangle= \langle \hat f, {\mathcal F}^{-1}(I-J_\Omega)^*g\rangle= \sum_{|{\bf i}|\le N} \langle H_{\bf i}, g\rangle {\partial^{\bf i} \hat f({\bf 0})}/{{\bf i}!} $ for all Schwartz functions $f$ and $g$, and hence \eqref{homogeneous1.lm.eq1} is established. \end{proof} \begin{Lm}\label{homogeneous2.lm} Let $\gamma$ be a positive number with $\gamma-d\not\in {\mathbb{Z}}_+$, and consider the continuous linear operator $K$ from ${\mathcal S}$ to ${\mathcal S}'$: \begin{equation}\label{homogeneous2.lm.eq-1} Kf=\sum_{|{\bf i}|\le N} \frac{\partial^{\bf i} \hat f({\bf 0})}{{\bf i}!} H_{\bf i}, \quad f\in {\mathcal S}\end{equation} where $N\in {\mathbb Z}_+$ and $H_{\bf i}, |{\bf i}|\le N$, are tempered distributions, Then the following statements hold. \begin{itemize} \item[{(i)}] The equation \begin{equation} \label{homogeneous2.lm.eq0} K (\delta_t f)= t^{-\gamma} \delta_t (Kf)\end{equation} holds for any $f\in {\mathcal S}$ and $t>0$ if and only if for every ${\bf i}\in {\mathbb{Z}}_+^d$ with $|{\bf i}|\le N$, $H_{\bf i}$ is homogeneous of degree $\gamma-d-|{\bf i}|$. \item[{(ii)}] The linear operator $K$ is translation-invariant if and only if there exists a polynomial $P$ of degree at most $N$ such that $H_{\bf i}=(-i\partial)^{\bf i} P$ for all ${\bf i}\in {\mathbb{Z}}_+^d$ with $|{\bf i}|\le N$. \item[{(iii)}] The linear operator $K$ is translation-invariant and satisfies \eqref{homogeneous2.lm.eq0} if and only if $H_{\bf i}=0$ for all ${\bf i}\in {\mathbb{Z}}_+^d$ with $|{\bf i}|\le N$. \end{itemize} \end{Lm} \begin{proof} {\em (i)}\quad The sufficiency follows from the homogeneous assumption on $H_{\bf i}, |{\bf i}|\le N$, and the observation that \begin{equation}\label{homogeneous2.lm.pf.eq1} \partial^{\bf i} \widehat {\delta_t f}({\bf 0})=t^{-d-|{\bf i}|} \partial^{\bf i} \hat f({\bf 0})\quad {\rm for\ all}\ f\in {\mathcal S} \ {\rm and} \ {\bf i}\in {\mathbb{Z}}_+^d.\end{equation} Now the necessity. 
Let $\phi$ be a $C^\infty$ function such that $\phi(\xi)=1$ for all $\xi\in B({\bf 0}, 1)$ and $\phi(\xi)=0$ for all $\xi\not\in B({\bf 0}, 2)$, where $B({\bf x}, r)$ is the ball with center ${\bf x}\in {\mathbb{R}}^d$ and radius $r>0$. Define $\psi_{\bf i}\in {\mathcal S}, {\bf i}\in {\mathbb{Z}}_+^d,$ with the help of the Fourier transform by \begin{equation}\label{homogeneous2.lm.pf.eq7} \widehat {\psi_{\bf i}}(\xi)=\frac{\xi^{\bf i}}{{\bf i} !} \phi(\xi). \end{equation} One may verify that \begin{equation}\label{homogeneous2.lm.pf.eq8} \partial^{{\bf i}'} \widehat{\psi_{\bf i}}({\bf 0})=\left\{\begin{array}{ll} 1 & {\rm if} \ {\bf i}'={\bf i},\\ 0 & {\rm if} \ {\bf i}'\ne {\bf i}.\end{array}\right. \end{equation} For any ${\bf i}\in {\mathbb{Z}}_+^d$ with $|{\bf i}|\le N$, the homogeneous property of the tempered distribution $H_{\bf i}$ follows by replacing $f$ in \eqref{homogeneous2.lm.eq0} by $\psi_{\bf i}$ and using \eqref{homogeneous2.lm.pf.eq8}. {\em (ii)}\quad ($\Longleftarrow$) Given $f\in {\mathcal S}$ and ${\bf x}_0\in {\mathbb{R}}^d$, \begin{eqnarray} \label{homogeneous2.lm.pf.eq10} K(\tau_{{\bf x}_0}f) ({\bf x}) & = & \sum_{|{\bf i}|\le N} \sum_{{\bf j}+{\bf k}= {\bf i}} \frac{ (-i{\bf x}_0)^{{\bf k}}}{{\bf k}!} \frac{\partial^{\bf j} \hat f({\bf 0})}{{\bf j}!} (-i\partial)^{\bf i}P({\bf x})\nonumber\\ &= & \sum_{|{\bf j}|\le N} \frac{(-i)^{\bf j}\partial^{\bf j} \hat f({\bf 0})}{{\bf j}!} \Big(\sum_{|{\bf k}|\le N-|{\bf j}|}\frac{\partial^{{\bf j}+{\bf k}} P({\bf x})}{{\bf k}!} (-{\bf x}_0)^{\bf k}\Big) \nonumber\\ & = & \sum_{|{\bf j}|\le N} \frac{(-i)^{\bf j}\partial^{\bf j} \hat f({\bf 0})}{{\bf j}!}\partial^{{\bf j}} P({\bf x}-{\bf x}_0)=Kf({\bf x}-{\bf x}_0), \end{eqnarray} where the first equality follows from \begin{equation} \label{homogeneous2.lm.pf.eq11} \partial^{\bf i} \widehat {\tau_{{\bf x}_0}f}({\bf 0})=\sum_{{\bf j}\le {\bf i}} \binom{{\bf i}}{{\bf j}} (-i{\bf x}_0)^{{\bf i}-{\bf j}} \partial^{\bf j} \hat f({\bf 0}), \end{equation} and the third equality is deducted from the Taylor expression of the polynomial $\partial^{\bf j} P$ of degree at most $N-|{\bf j}|$. ($\Longrightarrow$) By \eqref{homogeneous2.lm.pf.eq11} and the translation-invariance of the linear operator $K$, \begin{equation} \label{homogeneous2.lm.pf.eq12} \sum_{|{\bf i}|\le N} \sum_{{\bf j}+{\bf k}= {\bf i}} \frac{ (-i{\bf x}_0)^{{\bf k}}}{{\bf k}!} \frac{\partial^{\bf j} \hat f({\bf 0})}{{\bf j}!} H_{\bf i}= \sum_{|{\bf i}|\le N} \frac{\partial^{\bf j} \hat f({\bf 0})}{{\bf j}!} \tau_{ {\bf x}_0}H_{\bf j} \end{equation} holds for any Schwartz function $f$ and ${\bf x}_0\in {\mathbb{R}}^d$. Replacing $f$ in the above equation by the function $\psi_{\bf 0}$ in \eqref{homogeneous2.lm.pf.eq7} and then using \eqref{homogeneous2.lm.pf.eq8}, we get \begin{equation} \label{homogeneous2.lm.pf.eq13} \tau_{{\bf x}_0} H_{\bf 0} = \sum_{|{\bf i}|\le N} \frac{ (-i{\bf x}_0)^{{\bf i}}}{{\bf i}!} H_{\bf i}. \end{equation} This implies that $ \langle H_{{\bf 0}}, g(\cdot+{\bf x}_0)\rangle= \sum_{ |{\bf i}|\le N} \frac{ (-i{\bf x}_0)^{{\bf i}}}{{\bf i}!} \langle H_{{\bf i}}, g\rangle $ for any Schwartz function $g$. 
By taking partial derivatives $\partial^{\bf k}, |{\bf k}|=N+1$, with respect to ${\bf x}_0$ of both sides of the above equation, using the fact that $\partial^{{\bf k}} {\bf x}^{{\bf i}}=0$ for all ${\bf k}\in {\mathbb{Z}}_+$ with $|{\bf k}|=N+1$, and then letting ${\bf x}_0={\bf 0}$, we obtain that $\langle H_{{\bf 0}}, \partial^{\bf k} g\rangle =0 $ holds for any $g\in {\mathcal S}$ and ${\bf k}\in {\mathbb{Z}}_+$ with $|{\bf k}|=N+1$. Hence $H_{\bf 0}=P$ for some polynomial $P$ of degree at most $N$. The desired conclusion about $H_{\bf i}, |{\bf i}|\le N$, then follows from \eqref{homogeneous2.lm.pf.eq13} and $\tau_{{\bf x}_0}H_{\bf 0}({\bf x})=\sum_{|{\bf i}|\le N} \frac{ (-{\bf x}_0)^{\bf i}}{{\bf i}!} \partial^{\bf i} P({\bf x}) $ by the Taylor expansion of the polynomial $P$. {\em (iii)} \quad Clearly if $H_{\bf i}=0$ for all $|{\bf i}|\le N$, then $Kf=0$ for all $f\in {\mathcal S}$ and hence $K$ is translation-invariant and satisfies \eqref{homogeneous2.lm.eq0}. Conversely, if $K$ is translation-invariant and satisfies \eqref{homogeneous2.lm.eq0}, it follow from the conclusions (i) and (ii) that for every ${\bf i}\in {\mathbb{Z}}_+^d$ with $|{\bf i}|\le N$, $H_{\bf i}$ is homogeneous of degree $\gamma-d-|{\bf i}|\not\in {\mathbb{Z}}$ and also a polynomial of degree at most $N-|{\bf i}|$. Then $H_{\bf i}=0$ for all $ {\bf i}\in {\mathbb{Z}}_+^d$ with $|{\bf i}|\le N$ because the homogeneous degree of any nonzero polynomial is a nonnegative integer if it is homogeneous. \end{proof} We now have all of ingredients to prove Theorem \ref{iomega2.tm}. \begin{proof}[Proof of Theorem \ref{iomega2.tm}] The sufficiency follows from Corollary \ref{generalizedrieszomega1.cr} and Theorem \ref{maintheorem.iomega1}. Now the necessity. By Lemma \ref{homogeneous1.lm}, there exist an integer $N$ and tempered distributions $H_{\bf i}, |{\bf i}|\le N$, such that \eqref{homogeneous1.lm.eq1} holds. Define $Kf=\sum_{|{\bf i}|\le N} \frac{\partial^{\bf i} \hat f({\bf 0})}{{\bf i}!} H_{\bf i}$ for any $f\in {\mathcal S}$. Then $Kf $ is a continuous linear operator from ${\mathcal S}$ to ${\mathcal S}'$ and \begin{equation}\label{iomega2.tm.pf.eq1} If=J_\Omega f+Kf, \quad \ f\in {\mathcal S}.\end{equation} Moreover the linear operator $K$ satisfies \eqref{homogeneous2.lm.eq0} and is translation-invariant by \eqref{iomega2.tm.pf.eq1}, Theorem \ref{maintheorem.iomega1} and the assumption on $I$. Then $Kf=0$ for all $f\in {\mathcal S}$ by Lemma \ref{homogeneous2.lm}. This together with \eqref{iomega2.tm.pf.eq1} proves the desired conclusion that $I=J_\Omega$. % \end{proof} \subsection{Translation-invariant extensions of the linear operator $i_\Omega$ with additional localization in the Fourier domain} Given a nonzero homogeneous function $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ of degree $-\gamma$, we recall from \eqref{fractionalderivative.neweq2} and Theorem \ref{maintheorem.iomega1} that $J_\Omega$ is translation-invariant and the Fourier transform of $J_\Omega f$ belongs to $K_1$ when $\gamma\in (0, d)$, where \begin{equation} K_1=\Big\{h:\ \int_{{\mathbb{R}}^d} |h(\xi)| (1+\|\xi\|)^{-N} d\xi<\infty \ \ {\rm for \ some} \ N\ge 1\Big\}.\end{equation} In fact, the generalized Riesz potential $J_\Omega$ is the {\bf only} extension of the linear operator $i_\Omega$ on ${\mathcal S}_\infty$ to the whole space ${\mathcal S}$ with the above two properties. 
\begin{Tm}\label{iomega4.tm} Let $\gamma>0$ with $\gamma-d\not\in {\mathbb{Z}}_+$, $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a nonzero homogeneous function of degree $-\gamma$, and the continuous linear operator $I$ from ${\mathcal S}$ to ${\mathcal S}'$ be an extension of the linear operator $i_\Omega$ on ${\mathcal S}_\infty$ such that the Fourier transform of $If$ belongs to $K_1$ for all $f\in {\mathcal S}$. Then $I$ is translation-invariant if and only if $I=J_\Omega$ and $\gamma\in (0, d)$. \end{Tm} \begin{proof} The sufficiency follows from \eqref{fractionalderivative.neweq2} and Theorem \ref{maintheorem.iomega1}. Now we prove the necessity. By the assumption on the linear operator $I$, applying an argument similar to the proof of Lemma \ref{homogeneous1.lm}, we can find a family of functions $g_{\bf i}\in K_1, |{\bf i}|\le N$, such that \begin{eqnarray}\label{iomega4.tm.pf.eq0} \widehat{If}(\xi) & = & \Big(\hat f(\xi)- \sum_{|{\bf i}|\le \gamma-d} \frac{\partial^{{\bf i}} \hat f({\bf 0})}{ {\bf i}!} \xi^{\bf i}\Big) \Omega(\xi) +\sum_{|{\bf i}|\le N} \frac{\partial^{{\bf i}} \hat f({\bf 0})}{ {\bf i}!} g_{\bf i}(\xi) \end{eqnarray} for any Schwartz function $f$. This together with \eqref{homogeneous2.lm.pf.eq11} and the translation-invariance of the linear operator $I$ implies that \begin{eqnarray*}\label{iomega4.tm.pf.eq1} & & - \sum_{ |{\bf i}|\le \gamma-d} \sum_{{\bf j}+{\bf k}={\bf i}} \frac{\partial^{\bf j}\hat f({\bf 0})}{{\bf k}! {\bf j}!} (-i{\bf x}_0)^{{\bf k}} \xi^{\bf i}\Omega(\xi) + \sum_{ |{\bf i}|\le N} \sum_{{\bf j}+{\bf k}= {\bf i}} \frac{\partial^{\bf j}\hat f({\bf 0})}{{\bf k}! {\bf j}!} (-i{\bf x}_0)^{{\bf k}} g_{\bf i}(\xi)\nonumber\\ & = & \ e^{i{\bf x}_0\xi} \Big(-\sum_{|{\bf i}|\le \gamma-d} \frac{\partial^{\bf i}\hat f({\bf 0})}{{\bf i}!} \xi^{\bf i}\Omega(\xi) +\sum_{|{\bf i}|\le N} \frac{\partial^{\bf i}\hat f({\bf 0})}{{\bf i}!} g_{\bf i}(\xi)\Big). \end{eqnarray*} As ${\bf x}_0\in {\mathbb{R}}^d$ in \eqref{iomega4.tm.pf.eq1} is chosen arbitrarily, we conclude that \begin{equation*}\label{iomega4.tm.pf.eq2} -\sum_{|{\bf i}|\le \gamma-d} \frac{\partial^{\bf i}\hat f({\bf 0})}{{\bf i}!} \xi^{\bf i}\Omega(\xi) +\sum_{|{\bf i}|\le N} \frac{\partial^{\bf i}\hat f({\bf 0})}{{\bf i}!} g_{\bf i}(\xi)=0\quad {\rm for \ all} \ f\in {\mathcal S}.\end{equation*} Substituting the above equation into \eqref{iomega4.tm.pf.eq0}, we then obtain $\widehat{If}(\xi)= \hat f(\xi) \Omega(\xi) $ for all $f \in {\mathcal S}$. This, together with the observation that $\hat f\Omega\in K_1$ for all $f\in {\mathcal S}$ if and only if $\gamma<d$, leads to the desired conclusion that $I=J_\Omega$ and $\gamma\in (0, d)$. \end{proof} \subsection{Non-integrability in the spatial domain} Let $\gamma>0$ with $\gamma-d\not\in {\mathbb{Z}}_+$ and $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a nonzero homogeneous function of degree $-\gamma$. For any Schwartz function $f$, there exists a positive constant $C$ by Theorem \ref{generalizedrieszomega1.tm} such that $|J_\Omega f({\bf x})|\le C (1+\|{\bf x}\|)^{\gamma-d}$ for all ${\bf x}\in {\mathbb{R}}^d$. Hence $J_\Omega f\in L^p, 1\le p\le \infty$, when $\gamma<d(1-1/p)$. In this subsection, we show that the above $p$-integrability property for the generalized Riesz potential $J_\Omega$ is no longer true when $\gamma\ge d(1-1/p)$. 
\begin{Tm}\label{time1.tm} Let $1\le p\le \infty, 0<\gamma\in [d(1-1/p), \infty)\backslash {\mathbb{Z}}$ and $\Omega\in C^\infty ({\mathbb{R}}^d\backslash \{\bf 0\})$ be a nonzero homogeneous function of degree $-\gamma$. Then there exists a Schwartz function $f$ such that $J_\Omega f\not\in L^p$. \end{Tm} Letting $\Omega(\xi)=\|{\xi}\|^{-\gamma}$ in Theorem \ref{time1.tm} leads to the conclusion mentioned in the abstract: \begin{Cr}\label{nonintegrable.cr} Let $1\le p\le \infty$ and $d(1-1/p)\le \gamma\not\in {\mathbb{Z}}_+$. Then $I_\gamma f$ is {\bf not} $p$-integrable for some function $f\in {\mathcal S}$. \end{Cr} \begin{proof}[Proof of Theorem \ref{time1.tm}] Let the Schwartz functions $\phi$ and $\psi_{\bf i}, {\bf i}\in {\mathbb{Z}}_+^d$, be as in the proof of Lemma \ref{homogeneous2.lm}. We examine three cases to prove the theorem. {\em Case I:\quad $d(1-1/p)\le\gamma< \min(d, d(1-1/p)+1)$}.\quad In this case, $1\le p<\infty$ and \begin{equation}\label{time1.tm.pf.eq1} J_\Omega \psi_{\bf 0}({\bf x})=\int_{{\mathbb{R}}^d} K({\bf x}-{\bf y}) \psi_{\bf 0}({\bf y}) d{\bf y}, \end{equation} by \eqref{fractionalderivative.neweq2}, where $K$ is the inverse Fourier transform of $\Omega$. By \cite[Theorems 7.1.16 and 7.1.18]{hormanderbook}, $K\in C^\infty({\mathbb{R}}^d\backslash\{0\})$ is a homogeneous function of order $\gamma-d\in (-d,0)$, which implies that \begin{equation}\label{time1.tm.pf.eq2} |\partial^{\bf i} K({\bf x})|\le C \|{\bf x}\|^{\gamma-d-|{\bf i}|} \quad {\rm for \ all}\ {\bf i}\in {\mathbb{Z}}_+^d \ {\rm with} \ |{\bf i}|\le 1. \end{equation} Using \eqref{time1.tm.pf.eq1} and \eqref{time1.tm.pf.eq2}, and noting that $\psi_{\bf 0}\in {\mathcal S}$ satisfies $\int_{{\mathbb{R}}^d} \psi_{\bf 0}({\bf y})d{\bf y}=1$, we obtain that for all ${\bf x}\in {\mathbb{R}}^d$ with $\|{\bf x}\|\ge 1$, \begin{eqnarray}\label{time1.tm.pf.eq3} |J_\Omega \psi_{\bf 0} ({\bf x})-K({\bf x})| \!\! & \le &\!\! \int_{\|{\bf y}\|\le \|{\bf x}\|/2} |K({\bf x}-{\bf y})-K({\bf x})| |\psi_{\bf 0}({\bf y})| d{\bf y} \nonumber\\ & &\!\! + \Big(\int_{\|{\bf x}\|/2\le \|{\bf y}\|\le 2 \|{\bf x}\|}+\int_{2\|{\bf x}\|\le \|{\bf y}\|}\Big) |K({\bf x}-{\bf y})||\psi_{\bf 0}({\bf y})| d{\bf y} \nonumber\\ & &\!\! + |K({\bf x})| \int_{\|{\bf y}\|\ge \|{\bf x}\|/2} |\psi_{\bf 0}({\bf y})| d{\bf y} \nonumber\\ \!\! & \le &\!\! C (1+\|{\bf x}\|)^{\gamma-d-1}. \end{eqnarray} We notice that $\int_{\|{\bf x}\|\ge 1} (1+\|{\bf x}\|)^{(\gamma-d-1)p}d {\bf x}<\infty$ and $\int_{\|{\bf x}\|\ge 1} |K({\bf x})|^p d{\bf x}=\infty$ because $K$ is a nonzero homogenous function of degree $\gamma-d$ and $d-p<(d-\gamma)p\le d$. The above two observations together with the estimate in \eqref{time1.tm.pf.eq3} prove that $J_\Omega \psi_{\bf 0} \not\in L^p$, the desired conclusion with $f=\psi_{\bf 0}$. {\em Case II: $d<\gamma<d(1-1/p)+1$}.\quad In this case, $d<p\le \infty$ and \begin{eqnarray}\label{time1.tm.pf.eq4} J_\Omega \psi_{\bf 0}({\bf x}) & = &\frac{1}{d-\gamma} \sum_{|{\bf j}|=1} J_{\Omega_{\bf j}}(\varphi_{\bf j}) ({\bf x}) + \frac{1}{d-\gamma}\sum_{|{\bf i}|=1} (-{\bf x})^{\bf i} J_{\Omega_{\bf i}}\psi_{\bf 0} ({\bf x}) \end{eqnarray} by taking $k_0=1$ in \eqref{generalizedrieszomega1.tm.pf.eq1}, where $\Omega_{\bf i}(\xi)=(i\xi)^{\bf i} \Omega(\xi)$ and $\varphi_{\bf i}({\bf x})={\bf x}^{\bf i} \psi_{\bf 0}({\bf x})$. Let $K_{\bf i}$ be the inverse Fourier transform of the function $\Omega_{\bf i}, |{\bf i}|=1$. 
Noticing that $\Omega_{\bf i}$ is homogeneous of degree $-\gamma+1$ and that $\int_{{\mathbb{R}}^d} \varphi_{\bf i}({\bf x}) d{\bf x}=0$, we then apply an argument similar to the one used in establishing \eqref{time1.tm.pf.eq3} and obtain \begin{eqnarray*} |J_{\Omega_{\bf i}}(\varphi_{\bf i}) ({\bf x})|+ |J_{\Omega_{\bf i}}\psi_{\bf 0} ({\bf x})-K_{\bf i}({\bf x})| \le C \|{\bf x}\|^{\gamma-d-2}\quad {\rm if } \ \|{\bf x}\|\ge 1. \end{eqnarray*} Hence \begin{equation} \label{time1.tm.pf.eq3b} \int_{\|{\bf x}\|\ge 1} \big|J_\Omega \psi_{\bf 0}({\bf x})-\frac{1}{d-\gamma}\sum_{|{\bf i}|=1} (-{\bf x})^{\bf i} K_{\bf i}({\bf x})\big|^p d{\bf x}\le C \int_{\|{\bf x}\|\ge 1} \|{\bf x}\|^{(\gamma-d-1)p} d{\bf x}<\infty \end{equation} if $d<p<\infty$ and \begin{equation}\label{time1.tm.pf.eq4} \sup_{\|{\bf x}\|\ge 1} \big|J_\Omega \psi_{\bf 0}({\bf x})-\frac{1}{d-\gamma}\sum_{|{\bf i}|=1} (-{\bf x})^{\bf i} K_{\bf i}({\bf x})\big|\le C \sup_{\|{\bf x}\|\ge 1} \|{\bf x}\|^{\gamma-d-1} <\infty \end{equation} if $p=\infty$. Set $K({\bf x}):= \sum_{|{\bf i}|=1} (-{\bf x})^{\bf i} K_{\bf i}({\bf x})$. Then $K$ is homogeneous of degree $\gamma-d$ by the assumption on $\Omega$, and is not identically zero because \begin{eqnarray*} \langle K, g\rangle & = & \int_{{\mathbb{R}}^d} \Omega(\xi) \Big(\sum_{|{\bf i}|=1} \xi^{\bf i} \partial^{\bf i} \hat g(\xi)\Big) d\xi = -\int_{{\mathbb{R}}^d} \Big(\sum_{|{\bf i}|=1}\partial^{\bf i}( \xi^{\bf i} \Omega(\xi))\Big) \hat g(\xi) d\xi \\ & = & -\int_{S^{d-1}} \int_0^\infty \big(d\Omega(r\xi')+ r \frac{d}{dr} \Omega(r\xi')\big) \hat g(r\xi') r^{d-1} dr d\sigma(\xi')\\ & =& (\gamma-d) \int_{{\mathbb{R}}^d} \Omega(\xi) \hat g (\xi) d\xi \ne 0 \end{eqnarray*} for a suitable choice of $g\in {\mathcal S}_\infty$, since $\Omega$ is nonzero. Thus $\int_{\|{\bf x}\|\ge 1} |K({\bf x})|^p d{\bf x}=+\infty$ when $d<p<\infty$, and $K({\bf x})$ is unbounded on ${\mathbb{R}}^d\backslash B({\bf 0},1)$ when $p=\infty$. This together with \eqref{time1.tm.pf.eq3b} and \eqref{time1.tm.pf.eq4} proves that $J_\Omega \psi_{\bf 0}\not\in L^p$, and hence the desired conclusion holds with $f=\psi_{\bf 0}$. {\em Case III: $\gamma\ge d(1-1/p)+1$.} Let $k_0$ be the integer such that $d(1-1/p)\le \gamma-k_0<d(1-1/p)+1$, and set $\Omega_{\bf j}(\xi)=(i\xi)^{\bf j} \Omega(\xi), |{\bf j}|=k_0$. Noting that $J_\Omega \psi_{\bf j}({\bf x})= J_{\Omega_{\bf j}} \psi_{\bf 0} ({\bf x})/{{\bf j}!}$ and $\Omega_{\bf j}$ is homogeneous of degree $-\gamma+k_0$, it follows from the conclusions in the first two cases that $J_\Omega \psi_{\bf j} \not\in L^p$. Hence the desired conclusion follows by letting $f=\psi_{\bf j}$ with $|{\bf j}|=k_0$. \end{proof} \subsection{Non-integrability in the Fourier domain} If $\gamma<d$, it follows from \eqref{fractionalderivative.neweq2} that for Schwartz functions $f$ and $g$, $\langle J_\Omega f, g\rangle$ can be expressed as a weighted integral of $\hat g$: \begin{equation}\label{frequency.eq1} \langle J_\Omega f, g\rangle=\int_{{\mathbb{R}}^d} h(\xi) \hat g(\xi) d\xi, \end{equation} where $h(\xi)=(2\pi)^{-d}\Omega(-\xi) \hat f(-\xi)\in K_1$. In this subsection, we show that the above reformulation \eqref{frequency.eq1} to define $\langle J_\Omega f, g\rangle$ via a weighted integral of $\hat g$ {\bf cannot} be extended to $\gamma>d$. \begin{Tm}\label{frequency.tm1} Let $\gamma\in (d, \infty)\backslash {\mathbb{Z}}$, $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a nonzero homogeneous function of degree $-\gamma$, and let $J_\Omega$ be defined by \eqref{fractionalderivative.def}. 
Then there exists a Schwartz function $f$ such that the Fourier transform of $J_\Omega f$ does not belong to $K_1$. \end{Tm} \begin{proof} Let $\phi$ and $\psi_{\bf 0}$ be the Schwartz functions in the proof of Lemma \ref{homogeneous2.lm}, and let $g\in {\mathcal S}_\infty$ be so chosen that its Fourier transform $\hat g$ is supported in $B({\bf 0}, 1)$ and satisfies $\int_{{\mathbb{R}}^d} \Omega(\xi){ \hat g(-\xi)}d\xi=1$. Now we prove that $\widehat{J_\Omega \psi_{\bf 0}}\not\in K_1$. Suppose on the contrary that $\widehat{J_\Omega \psi_{\bf 0}}\in K_1$. Then \begin{eqnarray}\label{frequency.tm1.pf.eq1} \langle J_\Omega \psi_{\bf 0}, n^{-d} g(\cdot/n)\rangle & = &\frac{ (2\pi)^{-d}\Gamma(d-\gamma)} {\Gamma(d+k_0-\gamma)} \int_{S^{d-1}}\int_\epsilon^\infty r^{k_0+d-\gamma-1} \Omega(\xi')\nonumber\\ & & \Big(-\frac{d}{dr}\Big)^{k_0}\Big(\widehat \psi_{\bf 0}(r\xi') \hat g(-rn\xi')\Big) dr d\sigma(\xi') \nonumber\\ & = & (2\pi)^{-d} \int_{{\mathbb{R}}^d} {\hat g(-n\xi)} \Omega(\xi)d\xi = (2\pi)^{-d} n^{\gamma-d}\int_{{\mathbb{R}}^d} \Omega(\xi) {\hat g(-\xi)} d\xi \nonumber\\ &\to& +\infty \quad {\rm as} \ n\to \infty \end{eqnarray} by \eqref{fractionalderivative.def} and \eqref{extension.eq}. On the other hand, \begin{eqnarray}\label{frequency.tm1.pf.eq2} & & |\langle J_\Omega \psi_{\bf 0}, n^{-d} g(\cdot/n)\rangle| = (2\pi)^{-d} \Big|\int_{{\mathbb{R}}^d} \widehat{J_\Omega \psi_{\bf 0}}(\xi) \hat g(-n\xi) d\xi\Big|\nonumber\\ & \le & (2\pi)^{-d} \|\hat g\|_\infty \int_{|\xi|\le 1/n} |\widehat{J_\Omega \psi_0}(\xi)| d\xi \to 0 \ {\rm as} \ n\to \infty, \end{eqnarray} where we have used the hypothesis that $\widehat{J_\Omega \psi_{\bf 0}}\in K_1$ to obtain the limit. The limits in \eqref{frequency.tm1.pf.eq1} and \eqref{frequency.tm1.pf.eq2} contradict each other, and hence the Fourier transform of $J_\Omega \psi_{\bf 0}$ does not belong to $K_1$. \end{proof} \subsection{Proof of Theorem \ref{generalizedriesz.tm}} Observe that $J_\Omega=I_\gamma$ when $\Omega(\xi)=\|\xi\|^{-\gamma}$ and $\gamma>0$, and that \begin{equation} \label{generalized.tm.pf.eq1} J_\Omega=(-\triangle)^{-\gamma/2}\quad {\rm if}\ \Omega(\xi)=\|\xi\|^{-\gamma}\quad {\rm and} \ \gamma<0. \end{equation} Then the necessity holds by Theorem \ref{iomega2.tm}, while the sufficiency follows from Corollary \ref{generalizedrieszomega1.cr}, Theorem \ref{maintheorem.iomega1}, and Corollary \ref{composition.cr}. \section{Integrable Riesz Potentials}\label{irp.section} In Section \ref{grp.section}, we have shown that the various attempts at defining a proper (integrable) Riesz potential that is translation-invariant are doomed to failure for $\gamma>d$. We now proceed by providing a fix which is possible if we drop the translation-invariance requirement. Let $1\le p\le \infty, \gamma\in {\mathbb{R}}$, and $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a homogeneous function of degree $-\gamma$. 
We define the linear operator $U_{\Omega,p} $ from ${\mathcal S}$ to ${\mathcal S}'$ with the help of the Fourier transform by \begin{equation}\label{newfractionalderivative.def} {\mathcal F}({U_{\Omega,p}f})(\xi)= \Big(\hat f (\xi)-\sum_{|{\bf i}|\le \gamma-d(1-1/p)} \frac{\partial^{\bf i}\hat f({\bf 0})}{ {\bf i}!} \xi^{\bf i}\Big) \Omega(\xi), \quad f\in {\mathcal S}.\end{equation} We call the linear operator $U_{\Omega, p}$ a {\em $p$-integrable Riesz potential associated with the homogeneous function $\Omega$}, or {\em integrable Riesz potential} for brevity, as \begin{equation} \label{fractionalderivativeomegap.def}U_{\Omega, p}=I_{\gamma, p} \quad {\rm if } \quad \Omega(\xi)=\|\xi\|^{-\gamma}.\end{equation} Define \begin{equation}\label{fractionalderivative.tm1.pf.eq3} U_{\Omega, p}^*f({\bf x}) = (2\pi)^{-d} \int_{{\mathbb{R}}^d}\Big(e^{i\langle {\bf x}, \xi\rangle}-\sum_{|{\bf i}|\le \gamma-d+d/p} \frac{(i{\bf x})^{\bf i}\xi^{\bf i}}{ {\bf i}!} \Big) {\Omega(-\xi)}\hat f(\xi) d\xi, \quad f\in {\mathcal S}. \end{equation} Then $U_{\Omega, p}^*$ is the adjoint operator of the integrable Riesz potential $U_{\Omega, p}$: \begin{equation}\label{fractionalderivative.tm1.pf.eq4} \langle U_{\Omega, p}f, g\rangle=\langle f, U_{\Omega, p}^*g\rangle \quad {\rm for\ all} \ f, g\in {\mathcal S}. \end{equation} If $\gamma$ satisfies $0<\gamma< d(1-1/p)$, then \begin{equation} U_{\Omega, p}f= J_\Omega f \quad {\rm for \ all} \ f\in {\mathcal S}.\end{equation} Hence in this case, it follows from Theorem \ref{maintheorem.iomega1} that $U_{\Omega, p}$ is dilation-invariant and translation-invariant, and a continuous extension of the linear operator $i_\Omega$ on the closed subspace ${\mathcal S}_\infty$ to the whole space ${\mathcal S}$. Moreover $U_{\Omega, p} f\in L^p$ and ${\mathcal F}( {U_{\Omega, p}f})\in L^q, 1\le q\le p/(p-1)$, for any Schwartz function $f$ by Theorem \ref{generalizedrieszomega1.tm} and the following estimate: $$ |{\mathcal F}({U_{\Omega, p}f})(\xi)|\le C \|\xi\|^{-\gamma} (1+\|\xi\|)^{\gamma-d-1}\ {\rm for \ all} \ \xi\in {\mathbb{R}}^d.$$ So from now on, we implicitly assume that $\gamma\ge d(1-1/p)$, unless mentioned otherwise. In the sequel, we investigate the properties of the $p$-integrable Riesz potential $U_{\Omega, p}$ associated with a homogeneous function $\Omega$, such as dilation-invariance and translation-variance (Theorem \ref{fractionalderivative.tm1}), $L^{p/(p-1)}$-integrability in the Fourier domain (Corollary \ref{fractionalderivative.cr1}), $L^{p}$-integrability in the spatial domain (Theorem \ref{iomegap.tm1} and Corollary \ref{iomegap.cor1}), composition and left-inverse property (Theorem \ref{compositionp.tm} and Corollary \ref{leftinversefractionalderivative.cr}), the uniqueness of dilation-invariant extension of the linear operator $i_\Omega$ from the closed subspace ${\mathcal S}_\infty$ to the whole space ${\mathcal S}$ with additional integrability in the spatial domain and in the Fourier domain (Theorems \ref{time2.tm} and \ref{iomega5.tm}). The above properties of the $p$-integrable Riesz potential associated with a homogeneous function will be used to prove Theorem \ref{integrablefractionalderivative.tm} in the last subsection. 
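To illustrate the definition \eqref{newfractionalderivative.def}, take $d=1$, $p=1$, $\gamma\in (1,2)$ and $\Omega(\xi)=|\xi|^{-\gamma}$. Then $d(1-1/p)=0$, the sum runs over $|{\bf i}|\le \gamma$, i.e., ${\bf i}\in\{0,1\}$, and
\begin{equation*}
{\mathcal F}(U_{\Omega, 1} f)(\xi)=\big(\hat f(\xi)-\hat f(0)-\hat f'(0)\xi\big)|\xi|^{-\gamma}, \quad f\in {\mathcal S}.
\end{equation*}
By Taylor's expansion, the term in parentheses is $O(|\xi|^2)$ near the origin, so the subtraction of the Taylor polynomial of $\hat f$ at the origin compensates the non-integrable singularity $|\xi|^{-\gamma}$; this is the mechanism behind the integrability properties established below.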
\subsection{Dilation-invariance, translation-variance and integrability in the Fourier domain} \begin{Tm}\label{fractionalderivative.tm1} Let $1\le p\le \infty, \gamma\ge d(1-1/p)$, $k_1$ be the integral part of $\gamma-d(1-1/p)$, $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{{\bf 0}\})$ be a nonzero homogeneous function of degree $-\gamma$, and let $U_{\Omega, p}$ be defined as in \eqref{newfractionalderivative.def}. Then the following statements hold. \begin{itemize} \item[{(i)}] $U_{\Omega,p}$ is dilation-invariant. \item[{(ii)}] $U_{\Omega, p}$ is not translation-invariant. \item[{(iii)}] If $\sup_{{\bf x}\in {\mathbb{R}}^d} |f({\bf x})| (1+\|{\bf x}\|)^{k_1+d+1+\epsilon}<\infty$ for some $\epsilon>0$, then there exists a positive constant $C$ independent of $f$ such that \begin{equation}\label{fractionalderivative.tm1.eq1} |{\mathcal F}({U_{\Omega, p} f})(\xi)|\le C \Big(\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+d+1+\epsilon} \Big) \|\xi\|^{k_1-\gamma+1} (1+\|\xi\|)^{-1}\end{equation} for all $\xi\in {\mathbb{R}}^d$. \item[{(iv)}] $U_{\Omega,p}$ is a continuous linear operator from ${\mathcal S}$ to ${\mathcal S}'$, and an extension of the operator $i_\Omega$ on the subspace ${\mathcal S}_\infty$ to the whole space ${\mathcal S}$. \end{itemize} \end{Tm} As a consequence of Theorem \ref{fractionalderivative.tm1}, we have the following result about the $L^{p/(p-1)}$-integrability of the Fourier transform of $U_{\Omega, p} f$ for $f\in {\mathcal S}$. \begin{Cr}\label{fractionalderivative.cr1} Let $1\le p\le \infty$ and $\gamma\ge d(1-1/p)$ satisfy either $p=1$, or $1<p\le \infty$ and $\gamma-d(1-1/p)\not\in {\mathbb{Z}}_+$; let $k_1$ be the integral part of $\gamma-d(1-1/p)$, $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{{\bf 0}\})$ be a homogeneous function of degree $-\gamma$, and let $U_{\Omega, p}$ be defined as in \eqref{newfractionalderivative.def}. Then the Fourier transform of $U_{\Omega, p} f$ belongs to $L^{p/(p-1)}$ for any $f\in {\mathcal S}$. \end{Cr} \begin{proof} [Proof of Theorem \ref{fractionalderivative.tm1}] {\em (i)}\quad Given any $t>0$ and $f\in {\mathcal S}$, \begin{equation*} {\mathcal F}(U_{\Omega, p} (\delta_t f))(\xi) = t^{-d} \Big(\hat f\big(\frac{\xi}{t}\big)-\sum_{|{\bf i}|\le \gamma-d+d/p} \frac{\partial^{\bf i} \hat f({\bf 0})}{{\bf i}!} \big(\frac{\xi}{t}\big)^{\bf i}\Big) \Omega(\xi) =t^{-d-\gamma} {\mathcal F}( {U_{\Omega, p} f})\big(\frac{\xi}{t}\big). \end{equation*} This proves the dilation-invariance of the linear operator $U_{\Omega, p}$. {\em (ii)}\quad Suppose, on the contrary, that $U_{\Omega,p}$ is translation-invariant. Then \begin{equation} \label{fractionalderivative.tm1.pf.eq5} \Omega(\xi) \sum_{|{\bf i}|\le \gamma-d+d/p} \frac{\partial^{\bf i} \widehat {\tau_{{\bf x}_0}f}({\bf 0})}{{\bf i}!} \xi^{\bf i}= \Omega(\xi) e^{-i\langle {\bf x}_0, \xi\rangle} \sum_{|{\bf i}|\le \gamma-d+d/p} \frac{\partial^{\bf i} \hat f({\bf 0})}{{\bf i}!} \xi^{\bf i},\quad \xi\in {\mathbb{R}}^d \end{equation} for all ${\bf x}_0\in {\mathbb{R}}^d$ and $f\in {\mathcal S}$. Note that the left-hand side of equation \eqref{fractionalderivative.tm1.pf.eq5} is a polynomial in ${\bf x}_0$ by \eqref{homogeneous2.lm.pf.eq11} while its right-hand side is a trigonometric function of ${\bf x}_0$. 
Hence both sides must be identically zero, which implies that \begin{equation}\Omega(\xi) \sum_{|{\bf i}|\le \gamma-d+d/p} \frac{\partial^{\bf i} \hat f({\bf 0})}{{\bf i}!} \xi^{\bf i}=0, \quad \xi\in {\mathbb{R}}^d \end{equation} for all $f\in {\mathcal S}$. Replacing $f$ in the above equation by the function $\psi_{\bf 0}$ in \eqref{homogeneous2.lm.pf.eq7} and using \eqref{homogeneous2.lm.pf.eq8} and the assumption $\gamma\ge d(1-1/p)$ leads to a contradiction. {\em (iii)}\quad By the assumption on the homogeneous function $\Omega$, $|\Omega(\xi)|\le C \|\xi\|^{-\gamma}$. Then for $\xi\in {\mathbb{R}}^d$ with $\|\xi\|\ge 1$, \begin{eqnarray*} |{\mathcal F}({U_{\Omega,p} f})(\xi)| & \le & C \Big(\|\hat f\|_\infty+\sum_{|{\bf i}|\le k_1} \|\partial^{\bf i}\hat f\|_\infty \|\xi\|^{|{\bf i}|}\Big) \|\xi\|^{-\gamma}\nonumber\\ & \le & C \Big(\sum_{|{\bf i}|\le k_1+1} \|\partial^{\bf i} \hat f\|_\infty\Big) \|\xi\|^{k_1-\gamma} \end{eqnarray*} by \eqref{newfractionalderivative.def}, and for $\xi\in {\mathbb{R}}^d$ with $\|\xi\|\le 1$, \begin{eqnarray*} |{\mathcal F}({U_{\Omega,p} f})(\xi)| & \le & C \Big(\sum_{|{\bf i}|\le k_1+1} \|\partial^{\bf i} \hat f\|_\infty\Big) \|\xi\|^{k_1-\gamma+1} \end{eqnarray*} by Taylor's expansion of $\hat f$ at the origin. Combining the above two estimates gives \begin{equation}\label{fractionalderivative.tm1.pf.eq1} |{\mathcal F}({U_{\Omega,p} f})(\xi)|\le C \Big(\sum_{|{\bf i}|\le k_1+1} \|\partial^{\bf i} \hat f\|_\infty\Big) \|\xi\|^{k_1-\gamma+1} (1+\|\xi\|)^{-1}, \quad \xi\in {\mathbb{R}}^d. \end{equation} Note that \begin{equation}\label{fractionalderivative.tm1.pf.eq2} \|\partial^{\bf i} \hat f\|_\infty\le C \int_{{\mathbb{R}}^d} |f({\bf x})| \|{\bf x}\|^{|{\bf i}|} d{\bf x}\le C \sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+d+1+\epsilon} \end{equation} for all ${\bf i}\in {\mathbb{Z}}_+^d$ with $|{\bf i}|\le k_1+1$. Then the desired estimate \eqref{fractionalderivative.tm1.eq1} follows from \eqref{fractionalderivative.tm1.pf.eq1} and \eqref{fractionalderivative.tm1.pf.eq2}. {\em (iv)}\quad By \eqref{newfractionalderivative.def} and the estimate \eqref{fractionalderivative.tm1.eq1} with $\epsilon=1$, the Fourier transform of $U_{\Omega, p} f$ is continuous on ${\mathbb{R}}^d\backslash \{{\bf 0}\}$, and satisfies $$\int_{{\mathbb{R}}^d} |{\mathcal F}({U_{\Omega,p} f})(\xi)| (1+\|\xi\|)^{\gamma-k_1-d-1} d\xi\le C \sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+d+2}.$$ Hence $U_{\Omega, p}$ is a continuous linear operator from ${\mathcal S}$ to ${\mathcal S}'$. For any $f\in {\mathcal S}_\infty$, $\partial^{\bf i}\hat f({\bf 0})=0$ for all ${\bf i}\in {\mathbb{Z}}_+^d$. Then ${\mathcal F}({U_{\Omega,p} f})={\mathcal F}({i_\Omega f})$ for all $f\in {\mathcal S}_\infty$. This shows that $U_{\Omega, p}, 1\le p\le \infty,$ is a continuous extension of the linear operator $i_\Omega$ from the subspace ${\mathcal S}_\infty\subset {\mathcal S}$ to the whole space ${\mathcal S}$. \end{proof} \subsection{Composition and left-inverse of the fractional Laplacian} Direct calculation leads to $$\sum_{|{\bf i}|\le \gamma-d(1-1/p)}\frac{\partial^{\bf i} (\xi^{\bf k}\hat f(\xi))|_{\xi={\bf 0}}}{{\bf i}!}\xi^{\bf i} = \sum_{|{\bf j}|\le \gamma-|{\bf k}|-d(1-1/p)} \frac{\partial^{\bf j} \hat f({\bf 0})}{{\bf j}!}\xi^{{\bf j}+{\bf k}}, \quad {\bf k}\in {\mathbb{Z}}_+^d$$ for any $\gamma\in {\mathbb{R}}, 1\le p\le \infty$ and $f\in {\mathcal S}$. 
This together with \eqref{newfractionalderivative.def} implies that \begin{equation} U_{\Omega, p} (\partial^{\bf k} f)= U_{\Omega_{\bf k}, p} f \quad {\rm for \ all} \ f\in {\mathcal S}\ {\rm and} \ {\bf k}\in {\mathbb{Z}}_+^d, \end{equation} where $\Omega_{\bf k}(\xi)= (i\xi)^{\bf k} \Omega(\xi)$ for ${\bf k}\in {\mathbb{Z}}_+^d$. In general, we have the following result about composition of integrable Riesz potentials. \begin{Tm}\label{compositionp.tm} Let $1\le p\le \infty$, let the real numbers $\gamma_1, \gamma_2$ satisfy $\gamma_1\ge d(1-1/p)$, let $-\gamma_2$ be larger than the integral part of $\gamma_1-d(1-1/p)$, and let $\Omega_1, \Omega_2\in C^\infty({\mathbb{R}}^d\backslash\{{\bf 0}\})$ be homogeneous of degree $-\gamma_1$ and $-\gamma_2$ respectively. Then \begin{equation}\label{compositionp.tm.eq1} U_{\Omega_1, p} (J_{\Omega_2} f)=J_{\Omega_1\Omega_2} f\quad {\rm for \ all} \ f\in {\mathcal S}. \end{equation} \end{Tm} As a consequence of Theorems \ref{composition.tm} and \ref{compositionp.tm}, we have the following result about the left-inverse of the fractional Laplacian $(-\triangle)^{\gamma/2}$. \begin{Cr} \label{leftinversefractionalderivative.cr} Let $1\le p\le \infty$ and $\gamma>0$ satisfy either $1<p\le \infty$, or $p=1$ and $\gamma\not\in {\mathbb{Z}}_+$, and let the linear operator $I_{\gamma, p}$ be defined as in \eqref{fractionalderivative.veryolddef}. Then $I_{\gamma,p}$ is a left-inverse of the fractional Laplacian $(-\triangle)^{\gamma/2}$, i.e., $I_{\gamma, p} (-\triangle)^{\gamma/2} f=f$ for all $f\in {\mathcal S}$. \end{Cr} \begin{proof} [\bf Proof of Theorem \ref{compositionp.tm}] Let $k_1$ be the integral part of $\gamma_1-d(1-1/p)$. Then $-\gamma_2>k_1$ by the assumption, and hence ${\mathcal F}(J_{\Omega_2} f)(\xi)=\Omega_2(\xi) \hat f(\xi)$ and $\partial^{\bf i} ({\mathcal F}(J_{\Omega_2} f)(\xi))|_{\xi={\bf 0}}=0$ for any ${\bf i}\in {\mathbb{Z}}_+^d$ with $|{\bf i}|\le k_1$ and any Schwartz function $f$. This implies that ${\mathcal F}(U_{\Omega_1, p}(J_{\Omega_2} f))(\xi)$ is equal to $$\Big(\widehat{J_{\Omega_2} f}(\xi)-\sum_{|{\bf i}|\le \gamma_1-d(1-1/p)} \frac{\partial^{\bf i} ({\mathcal F}(J_{\Omega_2} f)(\xi))|_{\xi={\bf 0}}}{{\bf i}!} \xi^{\bf i}\Big)\Omega_1(\xi),$$ which is the same as ${\mathcal F}(J_{\Omega_1\Omega_2}f)(\xi)$. Hence the equation \eqref{compositionp.tm.eq1} is established. \end{proof} \subsection{$L^p$-integrability in the spatial domain} If $\gamma\in (0, d(1-1/p))$, then it follows from \eqref{newfractionalderivative.def} and Theorem \ref{generalizedrieszomega1.tm} that $|U_{\Omega,p} f({\bf x})|\le C (1+\|{\bf x}\|)^{\gamma-d}, \ {\bf x}\in {\mathbb{R}}^d$ (hence $U_{\Omega, p}f\in L^p$) for any Schwartz function $f$. In this subsection, we provide a similar estimate for $U_{\Omega, p} f$ when $\gamma\ge d(1-1/p)$. \begin{Tm} \label{iomegap.tm1} Let $0<\epsilon<1, 1\le p\le \infty, \gamma\in [d(1-1/p), \infty)\backslash {\mathbb{Z}}$, $k_1$ be the integral part of $\gamma-d(1-1/p)$, and $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a homogeneous function of degree $-\gamma$. 
If \begin{equation} \label{iomegap.tm1.eq1} |f({\bf x})| \le C (1+\|{\bf x}\|)^{-(k_1+1+d+\epsilon)}, \quad {\bf x}\in {\mathbb{R}}^d\end{equation} then \begin{eqnarray}\label{iomegap.tm1.eq2} |U_{\Omega,p}f({\bf x})| & \le & C \Big(\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+1+d+\epsilon}\Big)\nonumber\\ & & \times \|{\bf x}\|^{\min(\gamma-k_1-d,0)} (1+\|{\bf x}\|)^{\max(\gamma-k_1-d,0)-1} \end{eqnarray} for all ${\bf x}\in {\mathbb R}^d$, and \begin{eqnarray}\label{iomegap.tm1.eq3} |U_{\Omega,p}f({\bf x})-U_{\Omega,p}f({\bf x}')| & \le & C \Big(\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+1+d+\epsilon}\Big)\|{\bf x}-{\bf x}'\|^\delta \nonumber\\ & & \times \|{\bf x}\|^{\min(\gamma-k_1-d-\delta,0)} (1+\|{\bf x}\|)^{\max(\gamma-k_1-d-\delta,0)-1} \end{eqnarray} for all ${\bf x}, {\bf x}'\in {\mathbb R}^d$ with $\|{\bf x}-{\bf x}'\|\le \|{\bf x}\|/4$, where $0<\delta<\min (|\gamma-k_1-d|, \epsilon)$. \end{Tm} As an easy consequence of Theorem \ref{iomegap.tm1}, we have \begin{Cr}\label{iomegap.cor1} Let $1\le p\le \infty, \gamma\ge d(1-1/p)$, and $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a homogeneous function of degree $-\gamma$. If both $\gamma$ and $\gamma-d(1-1/p)$ are not nonnegative integers, then $U_{\Omega,p} f$ is H\"older continuous on ${\mathbb{R}}^d\backslash \{\bf 0\}$ and belongs to $L^p$ for any Schwartz function $f$. \end{Cr} \begin{proof}[Proof of Theorem \ref{iomegap.tm1}] We investigate three cases to establish the estimates in \eqref{iomegap.tm1.eq2} and \eqref{iomegap.tm1.eq3}. {\em Case I: $k_1+1-\gamma<0$}.\quad Set $h_\xi(t)=\hat f(t\xi)$. Applying Taylor's expansion to the function $h_{\xi}$ gives \begin{eqnarray}\label{iomegap.tm1.pf.eq1} \hat f(\xi) & = & h_\xi(1)=\sum_{s=0}^{k_1} \frac{h_\xi^{(s)}(0)}{s!}+\frac{1}{k_1!}\int_0^1 h_\xi^{(k_1+1)}(t) (1-t)^{k_1} dt\nonumber\\ &= & \sum_{|{\bf i}|\le k_1} \frac{\partial^{\bf i} \hat f({\bf 0})}{{\bf i}!} \xi^{\bf i}+ (k_1+1) \sum_{|{\bf j}|=k_1+1} \frac{\xi^{\bf j}}{{\bf j}!} \int_0^1 \partial^{\bf j} \hat f(t\xi) (1-t)^{k_1} dt. \end{eqnarray} Hence \begin{equation}\label{iomegap.tm1.pf.eq2} \Big(\hat f(\xi)-\sum_{|{\bf i}|\le k_1} \frac{\partial^{\bf i} \hat f({\bf 0})}{{\bf i}!} \xi^{\bf i}\Big) \Omega(\xi)= \sum_{|{\bf j}|=k_1+1} \frac{1}{{\bf j}!}\Omega_{\bf j}(\xi) \widehat g_{\bf j}(\xi), \end{equation} where $\Omega_{\bf j}(\xi)=(i\xi)^{\bf j} \Omega(\xi)$ and \begin{equation}\label{iomegap.tm1.pf.eq3} g_{\bf j}({\bf x})=(k_1+1) \int_0^1 (1-t)^{k_1} (- {\bf x}/t)^{\bf j} f({\bf x}/t) t^{-d} dt\in L^1, \quad |{\bf j}|=k_1+1. \end{equation} Taking inverse Fourier transform at both sides of the equation \eqref{iomegap.tm1.pf.eq2} yields \begin{equation}\label{iomegap.tm1.pf.eq4} U_{\Omega, p} f({\bf x})=\sum_{|{\bf j}|=k_1+1}\frac{1}{{\bf j}!} \int_{{\mathbb{R}}^d} K_{\bf j}({\bf x}-{\bf y}) g_{\bf j}({\bf y}) d{\bf y}, \end{equation} where $K_{\bf j}, |{\bf j}|=k_1+1$, is the inverse Fourier transform of $\Omega_{\bf j}$. 
Therefore \begin{eqnarray}\label{iomegap.tm1.pf.eq5} |U_{\Omega, p} f({\bf x})| &\le & C \int_0^1\int_{{\mathbb{R}}^d} \|{\bf x}-{\bf y}\|^{\gamma-d-k_1-1} \|{\bf y}/t\|^{k_1+1} |f({\bf y}/t)| t^{-d} d{\bf y} dt\nonumber\\ & = & C \int_0^1\int_{{\mathbb{R}}^d} \|{\bf x}-t {\bf y}\|^{\gamma-d-k_1-1} \|{\bf y}\|^{k_1+1} |f({\bf y})| d{\bf y} dt\nonumber\\ &\le & C \Big (\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+1+d+\epsilon}\Big) \int_0^1 (t+\|{\bf x}\|)^{\gamma-d-k_1-1} dt\nonumber\\ &\le & C \Big (\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+1+d+\epsilon}\Big) \nonumber\\ & & \times \|{\bf x}\|^{\min(\gamma-d-k_1,0)} (1+\|{\bf x}\|)^{\max(\gamma-d-k_1,0)-1}, \end{eqnarray} where the first inequality holds because $K_{\bf j}\in C^\infty({\mathbb{R}}^d\backslash \{{\bf 0}\})$ is homogeneous of degree $\gamma-d-k_1-1\in (-d, 0)$ \cite[Theorems 7.1.16 and 7.1.18]{hormanderbook}, and the second inequality follows from \eqref{generalizedrieszomega1.tm.pf.eq3}. Similarly, \begin{eqnarray}\label{iomegap.tm1.pf.eq5+} & & |U_{\Omega, p} f({\bf x})-U_{\Omega, p} f({\bf x}')|\nonumber\\ &\le & C \sum_{|{\bf j}|=k_1+1} \int_{\|{\bf x}-{\bf y}\|\ge 2\|{\bf x}-{\bf x}'\|} \|{\bf x}-{\bf x}'\|^\delta \|{\bf x}-{\bf y}\|^{\gamma-d-k_1-1-\delta} |g_{\bf j}({\bf y})| d{\bf y}\nonumber\\ & & + C \sum_{|{\bf j}|=k_1+1} \int_{\|{\bf x}-{\bf y}\|\le 2\|{\bf x}-{\bf x}'\|} \|{\bf x}-{\bf y}\|^{\gamma-d-k_1-1} |g_{\bf j}({\bf y})| d{\bf y}\nonumber\\ & & + C \sum_{|{\bf j}|=k_1+1} \int_{\|{\bf x}-{\bf y}\|\le 2\|{\bf x}-{\bf x}'\|} \|{\bf x}'-{\bf y}\|^{\gamma-d-k_1-1} |g_{\bf j}({\bf y})| d{\bf y}\nonumber\\ &\le & C \Big (\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+1+d+\epsilon}\Big)\|{\bf x}-{\bf x}'\|^\delta\nonumber\\ & & \times \|{\bf x}\|^{\min(\gamma-d-k_1-\delta,0)} (1+\|{\bf x}\|)^{\max(\gamma-d-k_1-\delta,0)-1} \end{eqnarray} for all ${\bf x}, {\bf x}'\in {\mathbb{R}}^d$ with $\|{\bf x}-{\bf x}'\|\le \|{\bf x}\|/4$, where $\delta<\min(\epsilon, |\gamma-k_1-d|)$. Then the desired estimates \eqref{iomegap.tm1.eq2} and \eqref{iomegap.tm1.eq3} follow from \eqref{iomegap.tm1.pf.eq5} and \eqref{iomegap.tm1.pf.eq5+} for the case $k_1+1-\gamma<0$. {\em Case II: $k_1+1-\gamma>0$ and $k_1\ge 1$. }\quad Applying Taylor's expansion to the function $h_{\xi}(t)=\hat f(t\xi)$, we have \begin{equation*} \hat f(\xi) -\sum_{|{\bf i}|\le k_1} \frac{\partial^{\bf i} \hat f({\bf 0})}{{\bf i}!} \xi^{\bf i} = k_1 \sum_{|{\bf j}|=k_1} \frac{\xi^{\bf j}}{{\bf j}!} \int_0^1 \big(\partial^{\bf j} \hat f(t\xi)-\partial^{{\bf j}}\hat f({\bf 0})\big) (1-t)^{k_1-1} dt. \end{equation*} Multiplying both sides of the above equation by $\Omega(\xi)$ and then taking the inverse Fourier transform, we obtain \begin{equation}\label{iomegap.tm1.pf.eq8} U_{\Omega, p} f({\bf x})= \sum_{|{\bf j}|=k_1}\frac{1}{{\bf j}!} \Big(\int_{{\mathbb{R}}^d} K_{\bf j}({\bf x}-{\bf y}) g_{\bf j}({\bf y}) d{\bf y}-K_{\bf j}({\bf x})\int_{{\mathbb{R}}^d} g_{\bf j}({\bf y}) d{\bf y}\Big), \end{equation} where \begin{equation}\label{iomegap.tm1.pf.eq8b} g_{\bf j}({\bf x})=k_1 \int_0^1 (1-t)^{k_1-1} (- {\bf x}/t)^{\bf j} f({\bf x}/t) t^{-d} dt\in L^1, \quad |{\bf j}|=k_1. 
\end{equation} Recalling that $K_{\bf j}\in C^\infty({\mathbb{R}}^d\backslash \{{\bf 0}\}), |{\bf j}|=k_1$ are homogeneous of degree $\gamma-d-k_1\in (-d, 0)$, \begin{equation}\label{iomegap.tm1.pf.eq9} |\partial^{\bf i} K_{\bf j}({\bf x})|\le C \|{\bf x}\|^{\gamma-d-k_1-|{\bf i}|}, \quad |{\bf i}|\le 1. \end{equation} Combining \eqref{generalizedrieszomega1.tm.pf.eq3}, \eqref{iomegap.tm1.pf.eq8}, \eqref{iomegap.tm1.pf.eq8b} and \eqref{iomegap.tm1.pf.eq9}, we get \begin{eqnarray} \label{iomegap.tm1.pf.eq10} |U_{\Omega, p} f({\bf x})| \!\!& \le &\!\! C \sum_{|{\bf j}|=k_1} \int_0^1 \int_{{\mathbb{R}}^d} |K_{\bf j}({\bf x}-t{\bf y})-K_{\bf j}({\bf x})| \|{\bf y}\|^{k_1} |f({\bf y})| d{\bf y}\nonumber\\ \!\!& \le &\!\! C \Big(\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+d+1+\epsilon}\Big)\nonumber\\ \!\! & & \!\!\times \Big\{\int_{0}^1 \int_{\|{\bf y}\|\le \|{\bf x}\|/2} t\|{\bf y}\| \|{\bf x}\|^{\gamma-d-k_1-1} (1+\|{\bf y}\|)^{-d-1-\epsilon} d{\bf y} dt\nonumber\\ \!\! & &\!\! + (1+\|{\bf x}\|)^{-1} \int_0^1 \int_{\|{\bf y}\|\ge \|{\bf x}\|/2} \|{\bf x}-t{\bf y}\|^{\gamma-d-k_1} (1+\|{\bf y}\|)^{-d-\epsilon} d{\bf y} dt\nonumber\\ \!\! & &\!\! + \|{\bf x}\|^{\gamma-d-k_1}\int_0^1 \int_{\|{\bf y}\|\ge \|{\bf x}\|/2} (1+\|{\bf y}\|)^{-d-1-\epsilon} d{\bf y} dt\Big\}\nonumber\\ \!\! & \le & \!\! C\Big (\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+d+1+\epsilon}\Big)\nonumber\\ \!\!& & \!\! \times \|{\bf x}\|^{\min(\gamma-k_1-d, 0)} (1+\|{\bf x}\|)^{\max(\gamma-k_1-d, 0)-1}, \end{eqnarray} and \begin{eqnarray} \label{iomegap.tm1.pf.eq10+} & & |U_{\Omega, p} f({\bf x})-U_{\Omega, p} f({\bf x}') |\nonumber\\ \!\!& \le &\!\! C \sum_{|{\bf j}|=k_1} \int_0^1 \Big(\int_{\|t{\bf y}\|\le \|{\bf x}\|/4}+\int_{\|t{\bf y}\|\ge 4\|{\bf x}\|} +\int_{\|{\bf x}\|/4\le \|t{\bf y}\|\le 4\|{\bf x}\|} \Big) \nonumber\\ & & \quad |K_{\bf j}({\bf x}-t{\bf y})-K_{\bf j}({\bf x})- K_{\bf j}({\bf x}'-t{\bf y})+K_{\bf j}({\bf x}')| \|{\bf y}\|^{k_1} |f({\bf y})| d{\bf y}\nonumber\\ \!\!& \le &\!\! C \Big(\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+d+1+\epsilon}\Big)\sum_{|{\bf j}|=k_1} \Big\{ \|{\bf x}-{\bf x}'\|^\delta\nonumber\\ \!\! & & \!\!\times \int_{0}^1 \int_{\|t{\bf y}\|\le \|{\bf x}\|/4} t\|{\bf y}\| \|{\bf x}\|^{\gamma-d-k_1-1-\delta} (1+\|{\bf y}\|)^{-d-1-\epsilon} d{\bf y} dt + \|{\bf x}-{\bf x}'\|^\delta \nonumber\\ \!\! & &\!\! \times \int_0^1 \int_{t\|{\bf y}\|\ge 4\|{\bf x}\|}\big(\|{\bf x}\|^{\gamma-k_1-d-\delta} +\|{\bf y}\|^{\gamma-k_1-d-\delta}\big) (1+\|{\bf y}\|)^{-d-1-\epsilon} d{\bf y} dt\nonumber\\ \!\! & &\!\! + \int_0^1 \int_{\|{\bf x}\|/4\le \|t{\bf y}\|\le 4 \|{\bf x}\|} \Big( | K_{\bf j}({\bf x}-t{\bf y})- K_{\bf j}({\bf x}'-t{\bf y})|+|K_{\bf j}({\bf x})-K_{\bf j}({\bf x}')|\Big)\nonumber\\ & & (1+\|{\bf x}\|/t)^{-d-1-\epsilon} d{\bf y} dt\Big\}\nonumber\\ \qquad \!\! & \le & \!\! C\Big (\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{k_1+d+1+\epsilon}\Big)\|{\bf x}-{\bf x}'\|^\delta \|{\bf x}\|^{\gamma-k_1-d-\delta} (1+\|{\bf x}\|)^{-1}. \end{eqnarray} Then the desired estimates \eqref{iomegap.tm1.eq2} and \eqref{iomegap.tm1.eq3} are proved in the case that $k_1+1-\gamma>0$ and $k_1\ge 1$. {\em Case III: $k_1+1-\gamma>0$ and $k_1=0$.}\quad In this case, $\gamma\in (0, 1)$ and \begin{equation} U_{\Omega,p} f({\bf x})=\int_{{\mathbb{R}}^d}\big( K({\bf x}-{\bf y})-K({\bf x})\big) f({\bf y}) d{\bf y} \end{equation} where $K$ is the inverse Fourier transform of $\Omega(\xi)$. 
Then, by applying the argument used in establishing \eqref{iomegap.tm1.pf.eq10}, we have \begin{eqnarray} |U_{\Omega, p} f({\bf x})|\!\! & \le & \!\! C \Big(\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{d+1+\epsilon}\Big)\nonumber\\ \!\! & & \!\!\times \Big\{ \int_{\|{\bf y}\|\le \|{\bf x}\|/2} \|{\bf y}\| \|{\bf x}\|^{\gamma-d-1} (1+\|{\bf y}\|)^{-d-1-\epsilon} d{\bf y} \nonumber\\ \!\! & &\!\! + (1+\|{\bf x}\|)^{-1} \int_{\|{\bf y}\|\ge \|{\bf x}\|/2} \|{\bf x}-{\bf y}\|^{\gamma-d} (1+\|{\bf y}\|)^{-d-\epsilon} d{\bf y} \nonumber\\ \!\! & &\!\! + \|{\bf x}\|^{\gamma-d} \int_{\|{\bf y}\|\ge \|{\bf x}\|/2} (1+\|{\bf y}\|)^{-d-1-\epsilon} d{\bf y} \Big\}\nonumber\\ \!\! &\le & \!\! C\Big (\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{d+1+\epsilon}\Big) \|{\bf x}\|^{\gamma-d} (1+\|{\bf x}\|)^{-1}, \end{eqnarray} and \begin{eqnarray} & & |U_{\Omega, p} f({\bf x})-U_{\Omega, p} f({\bf x}')|\nonumber\\ & \le & \Big(\int_{\|{\bf y}\|\le \|{\bf x}\|/4}+\int_{\|{\bf y}\|\ge 4\|{\bf x}\|} +\int_{\|{\bf x}\|/4\le \|{\bf y}\|\le 4\|{\bf x}\|} \Big) \nonumber\\ & & \quad |K({\bf x}-{\bf y})-K({\bf x})-K({\bf x'}-{\bf y})+K({\bf x}')| |f({\bf y}) | d{\bf y}\nonumber\\ & \le & C\Big (\sup_{{\bf z}\in {\mathbb{R}}^d} |f({\bf z})| (1+\|{\bf z}\|)^{d+1+\epsilon}\Big) \|{\bf x}-{\bf x}'\|^\delta \|{\bf x}\|^{\gamma-d-\delta} (1+\|{\bf x}\|)^{-1}, \end{eqnarray} which yields the desired estimates \eqref{iomegap.tm1.eq2} and \eqref{iomegap.tm1.eq3} for $k_1+1-\gamma>0$ and $k_1=0$. \end{proof} \subsection{Unique dilation-invariant extension of the linear operator $i_\Omega$ with additional integrability in the spatial domain} We now show that $U_{\Omega, p}$ is the only dilation-invariant extension of the linear operator $i_\Omega$ from the subspace ${\mathcal S}_\infty$ to the whole space ${\mathcal S}$ such that its image is contained in $L^p$. \begin{Tm}\label{time2.tm} Let $1\le p\le \infty$, $\gamma>0$ have the property that both $\gamma$ and $\gamma-d(1-1/p)$ are not nonnegative integers, $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a nonzero homogeneous function of degree $-\gamma$, and the linear map $I$ from ${\mathcal S}$ to ${\mathcal S}'$ be a dilation-invariant extension of the linear operator $i_\Omega$ on ${\mathcal S}_\infty$. Then $If$ belongs to $L^p$ for any Schwartz function $f$ if and only if $I=U_{\Omega, p}$. \end{Tm} \begin{proof} The sufficiency follows from \eqref{fractionalderivative.tm1.pf.eq3} and Theorems \ref{generalizedriesz.tm} and \ref{generalizedrieszomega1.tm} for $\gamma<d(1-1/p)$, and from \eqref{fractionalderivative.tm1.pf.eq3}, Theorem \ref{fractionalderivative.tm1} and Corollary \ref{iomegap.cor1} for $\gamma\ge d(1-1/p)$. Now we prove the necessity. By the assumption on the linear operator $I$ from ${\mathcal S}$ to ${\mathcal S}'$, similar to the argument used in the proof of Lemma \ref{homogeneous1.lm}, we can find an integer $N$ and tempered distributions $H_{\bf i}, |{\bf i}|\le N$, such that \begin{equation}\label{time2.tm.pf.eq3} If=U_{\Omega, p} f+\sum_{|{\bf i}|\le N} \frac{\partial^{\bf i} \hat f({\bf 0})}{{\bf i}!} H_{\bf i} \quad {\rm for\ all} \ f\in {\mathcal S}. \end{equation} Replacing $f$ in \eqref{time2.tm.pf.eq3} by $\psi_{\bf j}$ in \eqref{homogeneous2.lm.pf.eq7} and using \eqref{homogeneous2.lm.pf.eq8} gives that $H_{\bf j}/{\bf j}!=I\psi_{\bf j}- U_{\Omega, p} \psi_{\bf j}$. Hence \begin{equation}\label{time2.tm.pf.eq4} H_{\bf j}\in L^p\end{equation} by Corollary \ref{iomegap.cor1} and the assumption on the linear map $I$. 
By \eqref{time2.tm.pf.eq3}, Theorem \ref{fractionalderivative.tm1} and the assumption on the linear operator $I$, $(I-U_{\Omega,p})(\delta_t f)=t^{-\gamma} \delta_t ((I-U_{\Omega,p})f)$ for all $f\in {\mathcal S}$. Hence $H_{\bf j}$ is homogeneous of order $\gamma-d-|{\bf j}|$ by Lemma \ref{homogeneous2.lm}. This together with \eqref{time2.tm.pf.eq4} implies that $H_{\bf j}=0$ for all ${\bf j}\in {\mathbb{Z}}_+^d$ with $|{\bf j}|\le N$. The desired conclusion $I=U_{\Omega, p}$ then follows. \end{proof} \subsection{Unique dilation-invariant extension of the linear operator $i_\Omega$ with additional integrability in the Fourier domain} In this subsection, we characterize all those dilation-invariant extensions $I$ of the linear operator $i_\Omega$ on the subspace ${\mathcal S}_\infty$ to the whole space ${\mathcal S}$ such that $\widehat{If}$ is $q$-integrable for any Schwartz function $f$. \begin{Tm}\label{iomega5.tm} Let $1\le q\le \infty, \gamma\in [d/q, \infty)$ and $\Omega\in C^\infty({\mathbb{R}}^d\backslash \{\bf 0\})$ be a nonzero homogeneous function of degree $-\gamma$, and the linear map $I$ from ${\mathcal S}$ to ${\mathcal S}'$ be a dilation-invariant extension of the linear operator $i_\Omega$ on ${\mathcal S}_\infty$. Then the following statements hold. \begin{itemize} \item[{(i)}] If $1\le q<\infty$, then the Fourier transform of $If$ belongs to $L^q$ for any Schwartz function $f$ if and only if $\gamma-d/q\not\in{\mathbb{Z}}_+$ and $I=U_{\Omega, q/(q-1)}$. \item[{(ii)}] If $q=\infty$ and $\gamma\not\in {\mathbb{Z}}_+$, then the Fourier transform of $If$ belongs to $L^\infty$ for any Schwartz function $f$ if and only if $I=U_{\Omega, 1}$. \item[{(iii)}] If $q=\infty$ and $\gamma\in {\mathbb{Z}}_+$, then the Fourier transform of $If$ belongs to $L^\infty$ for any Schwartz function $f$ if and only if \begin{equation}\label{iomega5.tm.eq1} \widehat{If}(\xi) = \widehat{U_{\Omega, 1}f}(\xi) +\sum_{|{\bf i}|=\gamma} \frac{\partial^{\bf i}\hat f({\bf 0})}{ {\bf i}!} g_{\bf i}(\xi) \end{equation} for some bounded homogeneous functions $g_{\bf i}, |{\bf i}|=\gamma$, of degree $0$. \end{itemize} \end{Tm} \begin{proof} {\em (i)} \quad The sufficiency follows from Theorem \ref{fractionalderivative.tm1} and Corollary \ref{fractionalderivative.cr1}. Now we prove the necessity. As every $q$-integrable function belongs to $K_1$, similar to the argument used in the proof of Lemma \ref{homogeneous1.lm}, we can find functions $g_{\bf i}\in K_1, |{\bf i}|\le N$, such that \begin{eqnarray}\label{iomega5.tm.pf.eq1} \widehat {If}(\xi) & = & {\mathcal F}(U_{\Omega, q/(q-1)}f)(\xi) + \sum_{|{\bf i}|\le N} \frac{\partial^{\bf i} \hat f({\bf 0})}{{\bf i}!} g_{\bf i}(\xi). \end{eqnarray} Let $\psi_{\bf j}, {\bf j}\in {\mathbb{Z}}_+^d$, be defined as in \eqref{homogeneous2.lm.pf.eq7}. Replacing $f$ by $\psi_{\bf j}$ with $|{\bf j}|\le N$ and using \eqref{homogeneous2.lm.pf.eq8} gives \begin{eqnarray}\label{iomega5.tm.pf.eq2} \widehat {I\psi_{\bf j}}(\xi) & = & \Big(\widehat{\psi_{\bf j}}(\xi) -\sum_{|{\bf i}|\le \gamma-d/q} \frac{\partial^{\bf i} \hat \psi_{\bf j} ({\bf 0})}{{\bf i}!} \xi^{\bf i}\Big)\Omega(\xi)+ g_{\bf j}(\xi)\nonumber\\ & = & \left\{\begin{array}{ll} \frac{\xi^{\bf j}}{{\bf j}!} (\phi(\xi)-1) \Omega(\xi)+ g_{\bf j}(\xi) &\quad {\rm if} \ |{\bf j}|\le \gamma-d/q,\\ \frac{\xi^{\bf j}}{{\bf j}!} \phi(\xi) \Omega(\xi)+g_{\bf j}(\xi) & \quad {\rm if}\ |{\bf j}|> \gamma-d/q. \end{array}\right. 
\end{eqnarray} Note that $\frac{\xi^{\bf j}}{{\bf j}!} (\phi(\xi)-1) \Omega(\xi)\in L^q$ when $|{\bf j}|< \gamma-d/q$, and $\frac{\xi^{\bf j}}{{\bf j}!} \phi(\xi) \Omega(\xi)\in L^q$ when $|{\bf j}|>\gamma-d/q$. This, together with \eqref{iomega5.tm.pf.eq2} and the assumption that $\widehat{I\psi_{\bf j}}\in L^q$, proves that \begin{equation} \label{iomega5.tm.pf.eq3} g_{\bf j}\in L^q\quad {\rm for\ all} \ {\bf j}\in {\mathbb{Z}}_+^d\quad {\rm with} \ \gamma-d/q\ne |{\bf j}|\le N. \end{equation} By the dilation-invariance of the linear map $I$, the functions $g_{\bf i}, |{\bf i}|\le N$, are homogeneous of degree $-\gamma+|{\bf i}|$, i.e., \begin{equation}\label{iomega5.tm.pf.eq4} g_{\bf i}(t\xi)= t^{-\gamma+|{\bf i}|} g_{\bf i}(\xi) \quad {\rm for \ all} \ t>0. \end{equation} Combining \eqref{iomega5.tm.pf.eq3} and \eqref{iomega5.tm.pf.eq4} proves that $g_{\bf j}=0$ for all ${\bf j}\in {\mathbb{Z}}_+^d$ with $\gamma-d/q\ne |{\bf j}|\le N$; when $\gamma-d/q\not\in {\mathbb{Z}}_+$, this gives the desired conclusion $\widehat {If}(\xi) = {\mathcal F}({U_{\Omega, q/(q-1)} f})(\xi)$ for all $f\in {\mathcal S}$. Now it suffices to prove that $\gamma-d/q\not\in {\mathbb{Z}}_+$. Suppose on the contrary that $\gamma-d/q\in {\mathbb{Z}}_+$. By \eqref{iomega5.tm.pf.eq2} and the assumption on the linear map $I$, we have $$\int_{\xi\not\in {\rm supp} \phi} |g_{\bf j}(\xi)-\xi^{\bf j} \Omega(\xi)/{\bf j}!|^q d\xi= \int_{\xi\not\in {\rm supp} \phi} |\widehat {I\psi_{\bf j}}(\xi)|^q d\xi <\infty$$ for all ${\bf j}\in {\mathbb{Z}}_+^d$ with $|{\bf j}|=\gamma-d/q$. This, together with \eqref{iomega5.tm.pf.eq4} and the fact that the support ${\rm supp} \phi$ of the function $\phi$ is a bounded set, implies that $g_{\bf j}(\xi)-\xi^{\bf j} \Omega(\xi)/{\bf j}!=0$ for all ${\bf j}\in {\mathbb{Z}}_+^d$ with $|{\bf j}|=\gamma-d/q$. By substituting the above equality for $g_{\bf j}$ into \eqref{iomega5.tm.pf.eq2} we obtain \begin{equation}\label{iomega5.tm.pf.eq5} \widehat{I\psi_{\bf j}}(\xi)= \phi(\xi) \xi^{\bf j} \Omega(\xi)/{\bf j}!\end{equation} for all ${\bf j}\in {\mathbb{Z}}_+^d$ with $|{\bf j}|=\gamma-d/q$. This leads to a contradiction, as $\widehat{I\psi_{\bf j}}(\xi)\in L^q$ by the assumption on the linear map $I$, and $\phi(\xi) \xi^{\bf j} \Omega(\xi)/{\bf j}!\not\in L^q$ by direct computation. \bigskip{\em (ii) } and {\em (iii)}\quad The sufficiency is true by \eqref{iomega5.tm.eq1} and Theorem \ref{fractionalderivative.tm1}, while the necessity follows from \eqref{iomega5.tm.pf.eq1} -- \eqref{iomega5.tm.pf.eq4}. \end{proof} \subsection{Proof of Theorem \ref{integrablefractionalderivative.tm}} The conclusions in Theorem \ref{integrablefractionalderivative.tm} follow easily from \eqref{generalized.tm.pf.eq1}, \eqref{fractionalderivativeomegap.def}, Theorem \ref{time2.tm} and Corollary \ref{leftinversefractionalderivative.cr}. \section{Sparse Stochastic Processes}\label{poisson.section} In this section, we will prove Theorem \ref{generalizedpoisson.tm} and fully characterize the generalized random process $P_\gamma w$, which is a solution of the stochastic partial differential equation \eqref{randompde.def}. In particular, we provide its characteristic functional and its pointwise evaluation. \subsection{Proof of Theorem \ref{generalizedpoisson.tm}} To prove Theorem \ref{generalizedpoisson.tm}, we recall the L\'evy continuity theorem and a fundamental theorem about the characteristic functional of a generalized random process. 
\begin{Lm}\label{levy.lm}{\rm (\cite{probabilitybook})}\ Let $\xi_k, k\ge 1$, be a sequence of random variables whose characteristic functions are denoted by $\mu_k(t)$. If $\lim_{k\to \infty} \mu_k(t)=\mu_{\infty}(t)$ for some continuous function $\mu_\infty(t)$ on the real line, then $\xi_k$ converges to a random variable $\xi_\infty$ in distribution whose characteristic function ${\bf E}(e^{-it \xi_\infty})$ is $\mu_\infty(t)$. \end{Lm} In the study of generalized random processes, the characteristic functional plays a similar role to the characteristic function of a random variable \cite{gelfandbook}. The idea is to formally specify a generalized random process $\Phi$ by its {\em characteristic functional} ${\mathcal Z}_\Phi$ given by \begin{equation} {\mathcal Z}_\Phi(f):={\bf E}(e^{-i\Phi(f)})=\int_{{\mathbb{R}}} e^{-ix} dP({ x}), \quad f\in {\mathcal D}, \end{equation} where $P(x)$ denotes the probability that $\Phi(f)<x$. For instance, we can show \cite{Unser2009} that the characteristic functional ${\mathcal Z}_w$ of the white Poisson noise \eqref{whitepoisson.def} is given by \begin{equation} {\mathcal Z}_w(f)=\exp\Big(\lambda \int_{{\mathbb{R}}^d}\int_{{\mathbb{R}}} \big(e^{-iaf({\bf x})}-1\big) dP(a) d{\bf x} \Big) , \quad f\in {\mathcal D}. \end{equation} The characteristic functional ${\mathcal Z}_\Phi$ of a generalized random process $\Phi$ is a functional from ${\mathcal D}$ to ${\mathbb C}$ that is continuous and positive-definite, and satisfies ${\mathcal Z}_\Phi(0)=1$. Here the {\em continuity} of a functional $L$ from ${\mathcal D}$ to ${\mathbb C}$ means that $\lim_{k\to \infty} L(f_k)=L(f)$ if $f_k\in {\mathcal D}$ tends to $f\in {\mathcal D}$ in the topology of the space ${\mathcal D}$, while a functional $L$ from ${\mathcal D}$ to ${\mathbb C}$ is said to be {\em positive-definite} if \begin{equation} \sum_{j,k=1}^n L(f_j-f_k) c_j \bar c_k\ge 0 \end{equation} for any $f_1, \ldots, f_n\in {\mathcal D}$ and any complex numbers $c_1, \ldots, c_n$. The remarkable aspect of the theory of generalized random processes is that specification of ${\mathcal Z}_\Phi$ is sufficient to define a process in a consistent and unambiguous way. This is stated in the fundamental Minlos-Bochner theorem. \begin{Tm}\label{gelfandbook.tm} {\rm (\cite{gelfandbook})}\ Let $L$ be a positive-definite continuous functional on ${\mathcal D}$ such that $L(0)=1$. Then there exists a generalized random process $\Phi$ whose characteristic functional is $L$. Moreover for any $f_1, \ldots, f_n\in {\mathcal D}$, we may take the positive measure $P({ x}_1, \ldots, { x}_n)$ as the joint distribution function of the random variables $\Phi(f_1), \ldots, \Phi(f_n)$, where the Fourier transform of the positive measure $P({x}_1, \ldots, {x}_n)$ is $L(y_1 f_1+\cdots+y_nf_n)$, i.e., $$L(y_1f_1+\cdots+y_nf_n)=\int_{{\mathbb{R}}^n} \exp(-i (x_1y_1+\cdots+x_ny_n)) dP(x_1, \ldots, x_n).$$ \end{Tm} We are now ready to prove Theorem \ref{generalizedpoisson.tm}. \begin{proof}[Proof of Theorem \ref{generalizedpoisson.tm}] Let $N\ge 1$ and $\varphi$ be a $C^\infty$ function supported in $B({\bf 0},2)$ and taking the value one in $B({\bf 0}, 1)$. For any $f\in {\mathcal D}$, define a sequence of random variables $\Phi_{\gamma, N}(f)$ associated with $f$ by \begin{equation}\label{generalizedpoisson.tm.pf.eq1} \Phi_{\gamma, N}(f):=\sum_{k} a_k \varphi({\bf x}_k/N) I_{\gamma, 1} f({\bf x}_k), \end{equation} where the $a_k$'s are i.i.d. 
random variables with probability distribution $P(a)$, and where the ${\bf x}_k$'s are random point locations in ${\mathbb{R}}^d$ which are mutually independent and follow a spatial Poisson distribution with Poisson parameter $\lambda>0$. We will show that $\Phi_{\gamma, N}, N\ge 1$, define a sequence of generalized random processes, whose limit $P_\gamma w(f):=\sum_{k} a_k I_{\gamma, 1}(f)({\bf x}_k)$ is a solution of the stochastic partial differential equation \eqref{randompde.def}. As $\varphi$ is a continuous function supported on $B({\bf 0}, 2)$, \begin{equation}\label{generalizedpoisson.tm.pf.eq2} \Phi_{\gamma, N}(f)= \sum_{{\bf x}_k\in B({\bf 0}, 2N)} a_k \varphi({\bf x}_k/N) I_{\gamma, 1} f({\bf x}_k). \end{equation} Recall that $I_{\gamma, 1} f$ is continuous on ${\mathbb{R}}^d\backslash \{\bf 0\}$ by Corollary \ref{iomegap.cor1}. Then the summation on the right-hand side of \eqref{generalizedpoisson.tm.pf.eq2} is well-defined whenever there are finitely many ${\bf x}_k$ in $B({\bf 0}, 2N)$ with none of them belonging to $B({\bf 0}, \epsilon)$ for some $\epsilon>0$. Note that the probability that at least one of the ${\bf x}_k$ lies in the small neighborhood $B({\bf 0}, \epsilon)$ is equal to $$\sum_{n=1}^\infty e^{-\lambda |B({\bf 0}, \epsilon)|} \frac{(\lambda |B({\bf 0}, \epsilon)|)^n}{ n!} =1-e^{-\lambda |B({\bf 0}, \epsilon)|}\to 0 \quad {\rm as} \ \epsilon\to 0.$$ We then conclude that $\Phi_{\gamma, N}(f)$ is well-defined and $\Phi_{\gamma, N}(f)<\infty$ with probability one. Denote the characteristic function of the random variable $\Phi_{\gamma, N}(f)$ by $E_{\gamma, N, f}(t)$: $$E_{\gamma, N, f}(t)= {\bf E}(e^{-it\Phi_{\gamma,N}(f)})={\bf E}(e^{-i\Phi_{\gamma, N}(tf)}).$$ Applying the same technique as in \cite[Appendix B]{tafti2009}, we can show that \begin{equation}\label{generalizedpoisson.tm.pf.eq3} E_{\gamma, N, f}(t)=\exp\Big(\int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}} \big(e^{-i a t \varphi({\bf x}/N) I_{\gamma, 1}f({\bf x})}-1\big) dP(a) d{\bf x}\Big). \end{equation} Moreover, the function $E_{\gamma, N, f}(t)$ is continuous in $t$ by the dominated convergence theorem, because $$\Big|e^{-i a t \varphi({\bf x}/N) I_{\gamma, 1}f({\bf x})}-1\Big|\le |a| |t| |I_{\gamma, 1}f({\bf x})|$$ and $$\int_{{\mathbb{R}}^d}\int_{{\mathbb{R}}} |a| |I_{\gamma, 1}f({\bf x})| dP(a) d{\bf x}= \Big(\int_{{\mathbb{R}}} |a| dP(a)\Big)\times \Big(\int_{{\mathbb{R}}^d} |I_{\gamma, 1}f({\bf x})| d{\bf x}\Big)<\infty$$ by Corollary \ref{iomegap.cor1} and the assumption on the distribution $P$. Clearly the random variable $\Phi_{\gamma, N}(f)$ is linear in $f\in {\mathcal D}$; i.e., \begin{equation}\label{generalizedpoisson.tm.pf.eq4} \Phi_{\gamma,N}(\alpha f+\beta g)=\alpha \Phi_{\gamma,N}(f)+\beta\Phi_{\gamma,N}(g) \quad\ {\rm for \ all}\ f, g\in {\mathcal D} \ {\rm and} \ \alpha, \beta\in {\mathbb{R}}. \end{equation} For any sequence of functions $f_k$ in ${\mathcal D}$ that converges to $f_\infty$ in the topology of ${\mathcal D}$, it follows from Theorem \ref{iomegap.tm1} and Corollary \ref{iomegap.cor1} that $\lim_{k\to \infty} \|I_{\gamma, 1} f_k-I_{\gamma, 1} f_\infty\|_1=0$. 
Therefore \begin{eqnarray}\label{generalizedpoisson.tm.pf.eq5} & & \Big|\int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}} \big(e^{-i a t\varphi({\bf x}/N) I_{\gamma, 1}f_k({\bf x})}-1\big) dP(a) d{\bf x}\nonumber\\ & & \quad - \int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}} \big(e^{-i a t \varphi({\bf x}/N) I_{\gamma, 1}f_\infty({\bf x})}-1\big) dP(a) d{\bf x}\Big|\nonumber\\ & \le & |t|\Big(\int_{{\mathbb{R}}} |a| dP(a)\Big)\Big( \int_{{\mathbb{R}}^d} \varphi({\bf x}/N) |I_{\gamma, 1}f_k({\bf x})-I_{\gamma, 1}f_\infty({\bf x})| d{\bf x}\Big)\nonumber\\ & \to & 0 \quad {\rm as} \ k\to \infty, \end{eqnarray} which implies that the characteristic function of $\Phi_{\gamma,N}(f_k)$ converges to the continuous characteristic function of $\Phi_{\gamma,N}(f_\infty)$. Hence the random variable $\Phi_{\gamma, N}(f_k)$ converges in distribution to $\Phi_{\gamma, N}(f_\infty)$ by Lemma \ref{levy.lm}, which in turn implies that $\Phi_{\gamma, N}$ is continuous on ${\mathcal D}$. Set \begin{equation}L_{\gamma, N}(f)=E_{\gamma, N, f}(1).\end{equation} For any sequence $c_l, 1\le l\le n$, of complex numbers and $f_l, 1\le l\le n$, of functions in ${\mathcal D}$, \begin{eqnarray} \label{generalizedpoisson.tm.pf.eq6} \sum_{1\le l, l'\le n} L_{\gamma, N}(f_l-f_{l'})c_l \overline{c_{l'}} & = & {\bf E}\Big(\sum_{l, l'=1}^n e^{-i \Phi_{\gamma,N}(f_l-f_{l'})} c_l\overline{c_{l'}}\Big)\nonumber\\ & =& {\bf E}\Big(\Big|\sum_{l=1}^n c_l e^{-i \Phi_{\gamma, N}(f_l)}\Big|^2\Big) \ge 0, \end{eqnarray} which implies that $L_{\gamma, N}$ is positive-definite. By Theorem \ref{gelfandbook.tm}, we conclude that $\Phi_{\gamma, N}$ defines a generalized random process with characteristic functional $L_{\gamma, N}$. Now we consider the limit of the above family of generalized random processes $\Phi_{\gamma, N}, N\ge 1$. By Corollary \ref{iomegap.cor1}, $I_{\gamma, 1} f$ is integrable for all $f\in {\mathcal D}$. Then \begin{equation} \label{generalizedpoisson.tm.pf.eq7} \lim_{N\to +\infty} E_{\gamma, N, f}(t)=\exp\Big(\int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}} (e^{-i a t I_{\gamma, 1}f({\bf x})}-1) dP(a) d{\bf x}\Big)=: E_{\gamma, f}(t). \end{equation} Clearly $E_{\gamma, f}(0)=1$ and $E_{\gamma, f}(t)$ is continuous in $t$ as $I_{\gamma,1}f$ is integrable. Therefore by Lemma \ref{levy.lm}, $\Phi_{\gamma, N}(f)$ converges in distribution to a random variable, which is denoted by $P_\gamma w(f):=\sum_{k} a_k I_{\gamma, 1} f({\bf x}_k)$. As $f\mapsto I_{\gamma, 1} f$ is a continuous map from ${\mathcal D}$ to $L^1$, $\lim_{k\to \infty} \|I_{\gamma,1} f_k-I_{\gamma,1} f_\infty\|_1=0$ whenever $f_k$ converges to $f_\infty$ in ${\mathcal D}$. Hence \begin{eqnarray}\label{generalizedpoisson.tm.pf.eq8} & & \Big|\int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}} \big(e^{-i a t I_{\gamma, 1}f_k({\bf x})}-1\big) dP(a) d{\bf x}\nonumber\\ & & \quad - \int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}} \big(e^{-i a t I_{\gamma, 1}f_\infty({\bf x})}-1\big) dP(a) d{\bf x}\Big|\nonumber\\ & \le & |t|\Big(\int_{{\mathbb{R}}} |a| dP(a)\Big)\Big( \int_{{\mathbb{R}}^d} |I_{\gamma, 1}f_k({\bf x})-I_{\gamma, 1}f_\infty({\bf x})| d{\bf x}\Big)\nonumber\\ & \to & 0 \quad {\rm as} \ k\to \infty, \end{eqnarray} which implies that the characteristic function of $P_\gamma w(f_k)$ converges to the characteristic function of $P_\gamma w(f_\infty)$ (which is also continuous), and hence $P_\gamma w(f_k)$ converges to $P_\gamma w(f_\infty)$ in distribution by Lemma \ref{levy.lm}. From the above argument, we see that $P_\gamma w(f)$ is continuous in $f\in {\mathcal D}$. Define $L_{\gamma}(f)=E_{\gamma, f}(1)$. 
From \eqref{generalizedpoisson.tm.pf.eq6} and \eqref{generalizedpoisson.tm.pf.eq7}, we see that \begin{equation} \sum_{1\le l, l'\le n} L_{\gamma}(f_l-f_{l'}) c_l\overline{c_{l'}}= \lim_{N\to \infty} \sum_{1\le l, l'\le n} L_{\gamma, N}(f_l-f_{l'}) c_l\overline{c_{l'}}\ge 0 \end{equation} for any sequence $c_l, 1\le l\le n$, of complex numbers and $f_l, 1\le l\le n$, of functions in ${\mathcal D}$. Therefore by Theorem \ref{gelfandbook.tm}, $P_\gamma w$ defines a generalized random process with its characteristic functional given by \begin{equation} {\mathcal Z}_{P_\gamma w}(f)= \exp\Big(\int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}} (e^{-i a I_{\gamma, 1}f({\bf x})}-1) dP(a) d{\bf x}\Big).\end{equation} \end{proof} \subsection{Pointwise evaluation} In this subsection, we consider the pointwise characterization of the generalized random process $P_\gamma w$. \begin{Tm}\label{pointpoisson.tm} Let $\gamma, \lambda, P(a), P_\gamma w$ be as in Theorem \ref{generalizedpoisson.tm}, and $I_{\gamma, 1}$ be defined as in \eqref{fractionalderivative.veryolddef}. Then \begin{equation} \label{pointpoisson.tm.eq3} P_{\gamma} w({\bf y}_0):=\lim_{N\to \infty} P_{\gamma} w (g_{N, {\bf y}_0}) \end{equation} is a random variable for every ${\bf y}_0\in {\mathbb{R}}^d$ whose characteristic function is given by \begin{equation} \label{pointpoisson.tm.eq4} {\bf E}(e^{-i t P_\gamma w({\bf y}_0)})= \exp\Big(\lambda \int_{{\mathbb{R}}}\int_{{\mathbb{R}}^d} \big(e^{-ia t H_{{\bf y}_0}({\bf x})}-1\big) d{\bf x} dP(a)\Big), \quad t\in {\mathbb{R}}, \end{equation} where $g\in {\mathcal D}$ satisfies $\int_{{\mathbb{R}}^d} g({\bf x}) d{\bf x}=1$, $g_{N, {\bf y}_0}({\bf x})=N^d g(N({\bf x}-{\bf y}_0))$, and \begin{equation}\label{pointpoisson.tm.eq5} \widehat{H_{{\bf y}_0}}(\xi)=\Big (e^{i\langle {\bf y}_0, \xi\rangle}-\sum_{|{\bf i}|\le \gamma} \frac{(i{\bf y}_0)^{\bf i} \xi^{\bf i}}{{\bf i}!}\Big) \|\xi\|^{-\gamma}. \end{equation} \end{Tm} An interpretation is that the random variable $P_{\gamma} w({\bf y}_0)$ in \eqref{pointpoisson.tm.eq3} and its characteristic function ${\bf E}(e^{-i t P_\gamma w({\bf y}_0)})$ in \eqref{pointpoisson.tm.eq4} correspond formally to setting $f=\delta(\cdot-{\bf y}_0)$ (the delta distribution) in \eqref{generalizedpoisson.tm.eq1} and \eqref{generalizedpoisson.tm.eq2}, respectively. To prove Theorem \ref{pointpoisson.tm}, we need a technical lemma. \begin{Lm}\label{generalizedpoisson.lm} Let $\gamma$ be a positive non-integer number, $g\in {\mathcal D}$ satisfy $\int_{{\mathbb{R}}^d} g({\bf x}) d{\bf x}=1$, and $H_{{\bf y}_0}$ be defined in \eqref{pointpoisson.tm.eq5}. Then \begin{equation}\label{generalizedpoisson.lm.eq1} \lim_{N\to \infty} \|I_{\gamma, 1} g_{N, {\bf y}_0}-H_{{\bf y}_0}\|_1=0 \end{equation} for all ${\bf y}_0\in {\mathbb{R}}^d$, where $g_{N,{\bf y}_0}({\bf x})= N^d g(N ({\bf x}-{\bf y}_0))$. \end{Lm} \begin{proof} Let $K_{\bf j}$ be the inverse Fourier transform of $(i\xi)^{\bf j} \|\xi\|^{-\gamma}$ and $k_1$ be the integral part of the positive non-integer number $\gamma$. Then from the argument in the proof of Theorem \ref{iomegap.tm1}, \begin{equation}\label{generalizedpoisson.lm.pf.eq1} H_{\bf y}({\bf x})=\left\{\begin{array}{ll} \sum_{|{\bf j}|=k_1} \frac{k_1}{{\bf j}!} \int_0^1 (K_{\bf j}({\bf x}-t{\bf y})-K_{\bf j}({\bf x})) (-{\bf y})^{\bf j} (1-t)^{k_1-1} dt & {\rm if} \ k_1\ge 1\\ K_{\bf 0}({\bf x}-{\bf y})-K_{\bf 0}({\bf x}) & {\rm if} \ k_1=0. \end{array}\right. 
\end{equation} Therefore for ${\bf y}_0\ne {\bf 0}$, \begin{eqnarray*} & & \|I_{\gamma, 1} g_{N, {\bf y}_0}-H_{{\bf y}_0}\|_1 \nonumber\\ & \le & C \sum_{|{\bf j}|=k_1} \int_{{\mathbb{R}}^d} \int_0^1 \int_{{\mathbb{R}}^d} | (K_{\bf j}({\bf x}-t{\bf y})-K_{\bf j}({\bf x})) {\bf y}^{\bf j} \nonumber\\ & &\quad - (K_{\bf j}({\bf x}-t{\bf y}_0)-K_{\bf j}({\bf x})) {\bf y}_0^{\bf j} | |g_{N, {\bf y}_0}({\bf y})| d{\bf y} dt d{\bf x}\nonumber\\ & \le & C \sum_{|{\bf j}|=k_1} \int_{{\mathbb{R}}^d} \int_0^1 \int_{{\mathbb{R}}^d} | K_{\bf j}({\bf x}-t{\bf y})-K_{\bf j}({\bf x}-t{\bf y}_0)| \| {\bf y}\|^{k_1} |g_{N, {\bf y}_0}({\bf y})| d{\bf y} dt d{\bf x}\nonumber\\ & & +C\sum_{|{\bf j}|=k_1} \int_{{\mathbb{R}}^d} \int_0^1 \int_{{\mathbb{R}}^d} |K_{\bf j}({\bf x}-t{\bf y}_0)-K_{\bf j}({\bf x})| |{\bf y}^{\bf j}- {\bf y}_0^{\bf j} | |g_{N, {\bf y}_0}({\bf y})| d{\bf y} dt d{\bf x}\nonumber\\ &\le & C \int_0^1 \int_{{\mathbb{R}}^d} (t\|{\bf y}-{\bf y}_0\|)^{\gamma-k_1} ( \| {\bf y}_0\|^{k_1} +\|{\bf y}-{\bf y}_0\|^{k_1}) |g_{N, {\bf y}_0}({\bf y})| d{\bf y} dt \nonumber\\ & & + C \int_0^1 \int_{{\mathbb{R}}^d} (t\|{\bf y}_0\|)^{\gamma-k_1} ( \| {\bf y}_0\|^{k_1-1}\|{\bf y}-{\bf y}_0\| +\|{\bf y}-{\bf y}_0\|^{k_1}) |g_{N, {\bf y}_0}({\bf y})| d{\bf y} dt \nonumber\\ &\to & 0 \quad {\rm as} \ N\to \infty \end{eqnarray*} if $k_1\ge 1$, and \begin{eqnarray*} & & \|I_{\gamma, 1} g_{N, {\bf y}_0}-H_{{\bf y}_0}\|_1 \nonumber\\ & \le & \int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}^d} | K_{\bf 0}({\bf x}-{\bf y})-K_{\bf 0}({\bf x}-{\bf y}_0)| |g_{N, {\bf y}_0}({\bf y})| d{\bf y} d{\bf x}\nonumber\\ & \le & \int_{{\mathbb{R}}^d} \Big(\int_{\|{\bf x}-{\bf y}\|\ge 2 \|{\bf y}-{\bf y}_0\|}| K_{\bf 0}({\bf x}-{\bf y})-K_{\bf 0}({\bf x}-{\bf y}_0)| d{\bf x}\nonumber\\ & & +\int_{\|{\bf x}-{\bf y}\|\le 2 \|{\bf y}-{\bf y}_0\|} | K_{\bf 0}({\bf x}-{\bf y})| + | K_{\bf 0}({\bf x}-{\bf y}_0)| d{\bf x} \Big) |g_{N, {\bf y}_0}({\bf y}) | d{\bf y} \nonumber\\ & \le & C N^d \int_{{\mathbb{R}}^d} \|{\bf y}-{\bf y}_0\|^\gamma |g(N({\bf y}-{\bf y}_0))| d{\bf y}\nonumber\\ & = & C N^{-\gamma} \int_{{\mathbb{R}}^d} \|{\bf z}\|^\gamma |g({\bf z})| d{\bf z}\to 0 \quad {\rm as} \ N\to \infty, \end{eqnarray*} if $k_1=0$. This proves \eqref{generalizedpoisson.lm.eq1} for ${\bf y}_0\ne {\bf 0}$. The limit in \eqref{generalizedpoisson.lm.eq1} for ${\bf y}_0={\bf 0}$ can be proved by using a similar argument, the details of which are omitted here. \end{proof} \begin{proof} [Proof of Theorem \ref{pointpoisson.tm}] By Lemma \ref{generalizedpoisson.lm} and the dominated convergence theorem, \begin{equation} \lim_{N\to \infty} \int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}} (e^{-i at I_{\gamma, 1} g_{N, {\bf y}_0}({\bf x})}-1) dP(a) d{\bf x}= \int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}} (e^{-i at H_{{\bf y}_0}({\bf x})}-1) dP(a) d{\bf x} \end{equation} for all $t\in {\mathbb{R}}$. Moreover as $H_{{\bf y}_0}$ is integrable from Corollary \ref{iomegap.cor1} and Lemma \ref{generalizedpoisson.lm}, the function $\int_{{\mathbb{R}}^d} \int_{{\mathbb{R}}} (e^{-i at H_{{\bf y}_0}({\bf x})}-1) dP(a) d{\bf x}$ is continuous in $t$. Therefore \eqref{pointpoisson.tm.eq3} and \eqref{pointpoisson.tm.eq4} follow from Lemma \ref{levy.lm}. \end{proof} \noindent{\bf Acknowledgement.} {\rm This work was done when the first named author was visiting Ecole Polytechnique Federale de Lausanne during his sabbatical leave. He would like to thank Professors Michael Unser and Martin Vetterli for their hospitality and fruitful discussions. }
\section{Introduction}\label{Sec:Introduction} Content traffic, which is the dominant form of traffic in data communication networks, is not uniformly distributed over the day. This makes caching an integral part of data networks in order to tackle the non-uniformity of traffic. Caching schemes consist of two phases for content delivery. In the first phase, called the placement phase, content is partly placed in caches close to users. This phase takes place during off-peak hours when the requests of users are still unknown. In the second phase, called the delivery phase, each user requests a file while having access to a cache of pre-fetched content. This phase takes place during peak hours when we need to minimize the load over the network. The information-theoretic study of a network of caches originated with the work of Maddah-Ali and Niesen~\cite{CentralizedCaching}. They considered a centralized multicast set-up where there is a server of files connected via a shared error-free link to a group of users, each equipped with a dedicated cache of equal size. They introduced a caching gain called global caching gain. This gain is in addition to local caching gain, which results from the fact that users have access to parts of their requested files. Global caching gain is achieved by simultaneously sending data to multiple users in the delivery phase via coded transmission over the shared link. The information-theoretic study of cache-aided networks has since been extended to address other scenarios which arise in practice, such as decentralized caching \cite{DecentralizedCaching}, where the identity or the number of users is not known in the placement phase; caching with non-uniform file popularity \cite{CachingNonuniformDemands}, where some of the files in the server are more popular than the others; and hierarchical caching~\cite{HierarchicalCodedCaching}, where there are multiple layers of caches. Also, while most existing works consider uncoded cache placement, where the cache of each user is populated by directly placing parts of the server files, it has been shown for some special cases that coded cache placement can outperform uncoded cache placement~\cite{CentralizedCaching, CachingWithCodedPlacement1, CachingWithCodedPlacement2, CachingWithCodedPlacement3}. \begin{figure}[t] \centering \includegraphics[width=0.46\textwidth]{FiguresUnequalCacheSize/SystemModel.pdf} \vskip-10pt \caption{System model with a server storing $N$ files of size $F$ bits connected through a shared error-free link to $K$ users. User~$i$ is equipped with a cache of size $M_iF$ bits where $M_i=\hat{M}$, $1\leq i\leq L$, and $M_i=M$, $L+1\leq i\leq K$, for some $\hat{M}>M$.} \label{Fig:SystemModel} \vskip-15pt \end{figure} \subsection{Existing Works and Contributions}\label{Sec:ExistingWorksandContributions} In this work, we address caching problems where there is a server connected through a shared error-free link to a group of users with caches of possibly different sizes. The objective is to minimize the load of worst-case demands over the shared link. Considering decentralized caching with unequal cache sizes, the placement phase is the same as in the equal-cache case, where a random part of each file is assigned to the cache of each user. The main challenge is to exploit all the coding opportunities in the delivery phase~\cite{DecentralizedUnequalCache1,DecentralizedUnequalCache2}.
However, considering centralized caching with unequal cache sizes, the challenge also involves designing the placement phase. For the two-user case, Cao et al.~\cite{CentralizedUnequalCache3} proposed an optimum caching scheme, and showed that coded cache placement outperforms uncoded placement. For a system with an arbitrary number of users, Saeedi Bidokhti et al.~\cite{CentralizedUnequalCache1} proposed a scheme with uncoded cache placement constructed based on memory sharing of the scheme for centralized caching with equal cache sizes~\cite{CentralizedCaching}. Also, Ibrahim et al.~\cite{CentralizedUnequalCache2}, assuming uncoded cache placement and linear coded delivery, formulated this problem as a linear optimisation problem in which the number of parameters grows exponentially with the number of users. As the number of users grows, the scheme by Saeedi Bidokhti et al.~\cite{CentralizedUnequalCache1} remains simple at the cost of performance, and the optimisation problem by Ibrahim et al.~\cite{CentralizedUnequalCache2} becomes intractable. In light of the above-mentioned issues, we propose a new caching scheme with uncoded cache placement for centralized caching with unequal cache sizes where there are two subgroups of users, one with a larger cache size than the other. Numerical evaluations suggest that our caching scheme outperforms the one proposed by Saeedi Bidokhti et al.~\cite{CentralizedUnequalCache1}. In comparison to the work by Ibrahim et al.~\cite{CentralizedUnequalCache2}, as our scheme is an explicit scheme, it does not have the complexity issue associated with solving an optimisation problem. Also, numerical evaluations suggest that our scheme performs within a multiplicative factor of 1.11 of the scheme by Ibrahim et al.~\cite{CentralizedUnequalCache2}. \section{System Model}\label{Section:SystemModel} We consider a centralized caching problem where there is a server storing $N$ independent files $W_\ell$, $\ell\in\mathcal{N}$, $\mathcal{N}=\{1,2,\ldots,N\}$, connected through a shared error-free link to $K$ cache-enabled users, as shown in Fig.~\ref{Fig:SystemModel}. We assume that the number of files in the server is at least as large as the number of users, i.e., $N\geq K$. Each file in the server is of size $F\in\mathbb{N}$ bits (where $\mathbb{N}$ is the set of natural numbers), and is uniformly distributed over the set $\mathcal{W}=\left\{1,2,\ldots,2^{F}\right\}$. User~$i$, $i\in\mathcal{K}$, $\mathcal{K}=\{1,2,\ldots,K\}$, is equipped with a cache of size $M_iF$ bits for some $M_i\in\mathbb{R}$, $0\leq M_i\leq N$, where $\mathbb{R}$ is the set of real numbers. The content of the cache of user~$i$ is denoted by $Z_i$. We represent all the cache sizes by the vector $\mathbf{M}=(M_1,M_2,\ldots,M_K)$. In this work, we assume that there are two subgroups of users, one with a larger cache size than the other, i.e., $M_i=\hat{M}$, $1\leq i \leq L$, and $M_i={M}$, $L+1\leq i \leq K$, for some $\hat{M}>M$. User~$i$ requests $W_{d_i}$ from the server where $d_i\in\mathcal{N}$. We represent the request of all the users by the vector $\mathbf{d}=(d_1,d_2,\ldots,d_K)$. User~$i$ needs to decode $W_{d_i}$ using $Z_i$, and the signal $X_\mathbf{d}$ transmitted by the server over the shared link. As mentioned earlier, each caching scheme consists of two phases, the placement phase and the delivery phase.
The placement phase consists of $K$ caching functions \begin{align*} \phi_i:\mathcal{W}^{N}\rightarrow \mathcal{Z}_i,\;\; i\in\mathcal{K}, \end{align*} where $\mathcal{Z}_i\hskip-2pt=\hskip-2pt\left\{\hskip-2pt 1,2,\ldots,2^{\left\lfloor M_iF \right\rfloor}\hskip-2pt\right\}$, i.e., $Z_i\hskip-2pt=\hskip-2pt\phi_i\left(\hskip-2pt W_1,W_2,\ldots,W_N\hskip-2pt\right)$. The delivery phase consists of $N^K$ encoding functions \begin{align*} \psi_{\mathbf{d}}:\mathcal{W}^{N}\rightarrow \mathcal{X}, \end{align*} where $\mathcal{X}=\left\{1,2,\ldots,2^{\left\lfloor RF \right\rfloor}\right\}$, i.e., \begin{align*} X_{\mathbf{d}}=\psi_{\mathbf{d}}\left(W_1,W_2,\ldots,W_N\right). \end{align*} We refer to $RF$ as the load of the transmission and $R$ as the rate of the transmission over the shared link. The delivery phase also consists of $KN^K$ decoding functions \begin{align*} \theta_{\mathbf{d},i}: \mathcal{Z}_i\times\mathcal{X}\rightarrow \mathcal{W},\;\;i\in\mathcal{K}, \end{align*} i.e., $\hat{W}_{\mathbf{d},i}=\theta_{\mathbf{d},i}(X_{\mathbf{d}},Z_i)$, where $\hat{W}_{\mathbf{d},i}$ is the decoded version of $W_{d_i}$ at user~$i$ when the demand vector is $\mathbf{d}$. The probability of error for the scheme is defined as \begin{align*} \underset{\mathbf{d}}{\max}\;\,\underset{i}{\max}\;P(\hat{W}_{\mathbf{d},i}\neq W_{d_i}). \end{align*} \begin{definition} For a given $\mathbf{M}$, we say that the rate $R$ is achievable if for every $\epsilon>0$ and large enough $F$, there exists a caching scheme with rate $R$ such that its probability of error is less than $\epsilon$. For a given $\mathbf{M}$, we also define $R^{\star}(\mathbf{M})$ as the infimum of all achievable rates. \end{definition} \section{Background}\label{Sec:Background} In this section, we first consider centralized caching with equal cache sizes, i.e., $M_i=M,\,\forall i$, and review the optimum scheme among those with uncoded placement~\cite{CentralizedCaching, OptimumCachingWithUnCodedPlacement}. We then review existing works on centralized caching with unequal cache sizes where there are more than two users~\cite{CentralizedUnequalCache1,CentralizedUnequalCache2}. \subsection{Equal Cache Sizes}\label{Sec:EqualCache} Here, we present the optimum caching scheme for centralized caching with equal cache sizes when the cache placement is uncoded, and $N\geq K$~\cite{CentralizedCaching}. In this scheme, a parameter denoted by $t$ is defined at the beginning as \begin{align*} t=\frac{KM}{N}. \end{align*} First, assume that $t$ is an integer. As $0\leq M\leq N$, we have $t\in\{0,1,2,\ldots,K\}$. In the placement phase, $W_\ell$, $\ell\in\mathcal{N}$, is divided into $\binom{K}{t}$ non-overlapping parts denoted by $W_{\ell,\mathcal{T}}$ where $\mathcal{T}\subseteq\mathcal{K}$ and $\left|\mathcal{T}\right|=t$ ($\left|\mathcal{T}\right|$ denotes the cardinality of the set $\mathcal{T}$). $W_{\ell,\mathcal{T}}$ is then placed in the cache of user $i$ if $i\in\mathcal{T}$. This means that the size of each part is $\frac{F}{\binom{K}{t}}$ bits, and we place $\binom{K-1}{t-1}$ parts from each file in the cache of user~$i$. Therefore, we satisfy the cache size constraint as we have \begin{align*} N\frac{\binom{K-1}{t-1}}{\binom{K}{t}}=M. \end{align*} In the delivery phase, the server transmits \begin{align*} X_{\mathbf{d},\mathcal{S}}=\underset{s\in\mathcal{S}}{\bigoplus} W_{d_s,\mathcal{S}\setminus s}, \end{align*} for every $\mathcal{S}\subseteq\mathcal{K}$ where $\left|\mathcal{S}\right|=t+1$.
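To make the placement and delivery concrete, the following is a minimal sketch (ours, not taken from~\cite{CentralizedCaching}) that enumerates the subfile placement and the coded transmissions for an integer $t$; a label $(\ell,\mathcal{T})$ stands for the subfile $W_{\ell,\mathcal{T}}$, and each transmission is represented symbolically by the list of subfiles that are XORed together. \begin{verbatim}
from itertools import combinations
from math import comb

def place(N, K, t):
    # Subfile (l, T) of file l is cached by user i exactly when i is in T.
    subsets = list(combinations(range(K), t))
    return {i: {(l, T) for l in range(N) for T in subsets if i in T}
            for i in range(K)}

def deliver(K, t, demands):
    # For each (t+1)-subset S of users, XOR the subfiles (d_s, S minus s).
    return [[(demands[s], tuple(u for u in S if u != s)) for s in S]
            for S in combinations(range(K), t + 1)]

# Example: N = K = 4, M = 1, so t = KM/N = 1; user i requests file i.
cache = place(4, 4, 1)
tx = deliver(4, 1, [0, 1, 2, 3])
print(len(cache[0]))                  # N*C(K-1, t-1) = 4 subfiles per user
print(len(tx), len(tx) / comb(4, 1))  # C(K, t+1) = 6 transmissions, rate 3/2
\end{verbatim} Each transmission is an XOR of $t+1$ subfiles, each of length $F/\binom{K}{t}$ bits.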
This results in the transmission rate of \begin{align*} R_{\text{eq}}(N,K,M)=\frac{\binom{K}{t+1}}{\binom{K}{t}}. \end{align*} This delivery scheme satisfies the demands of all the $K$ users~\cite{CentralizedCaching}. Now, assume that $t$ is not an integer. In this case, memory sharing is utilized where $t_\text{int}$ is defined as \begin{align*} t_\text{int}\triangleq\left\lfloor t \right\rfloor, \end{align*} and $\alpha$ is computed using the following equation \begin{align*} M=\frac{tN}{K}=\alpha\frac{t_\text{int}N}{K}+(1-\alpha)\frac{(t_\text{int}+1)N}{K}, \end{align*} where $0<\alpha\leq1$. Based on $\alpha$, the caching problem is divided into two independent problems. In the first one, the cache size is $\alpha\frac{t_\text{int}N}{K}F$, and we cache the first $\alpha F$ bits of the files, denoted by $W^{(\alpha)}_{\ell}$, $\ell\in\mathcal{N}$. In the delivery phase, the server transmits \begin{align}\label{Eq:Component1} X^{(\alpha)}_{\mathbf{d},\mathcal{S}_1}=\underset{s\in\mathcal{S}_1}{\bigoplus} W^{(\alpha)}_{d_s,\mathcal{S}_1\setminus s}, \end{align} for every $\mathcal{S}_1\subseteq\mathcal{K}$ where $\left|\mathcal{S}_1\right|=t_\text{int}+1$. In the second one, the cache size is $(1-\alpha)\frac{(t_\text{int}+1)N}{K}F$, and we cache the last $(1-\alpha)F$ bits of the files, denoted by $W^{(1-\alpha)}_{\ell}$, $\ell\in\mathcal{N}$. In the delivery phase, the server transmits \begin{align}\label{Eq:Component2} X^{(1-\alpha)}_{\mathbf{d},\mathcal{S}_2}=\underset{s\in\mathcal{S}_2}{\bigoplus} W^{(1-\alpha)}_{d_s,\mathcal{S}_2\setminus s}, \end{align} for every $\mathcal{S}_2\subseteq\mathcal{K}$ where $\left|\mathcal{S}_2\right|=t_\text{int}+2$. Consequently, the rate \begin{align}\label{Eq:Rate} R_{\text{eq}}(N,K,M)=\alpha \frac{\binom{K}{t_\text{int}+1}}{\binom{K}{t_\text{int}}}+(1-\alpha)\frac{\binom{K}{t_\text{int}+2}}{\binom{K}{t_\text{int}+1}}, \end{align} is achieved where $\binom{a}{b}$ is considered to be zero if $b>a$. \begin{figure}[t] \centering \includegraphics[width=0.3\textwidth]{FiguresUnequalCacheSize/UnequalExistingScheme1.pdf} \vskip-10pt \caption{An existing scheme for centralized caching with unequal cache sizes} \vskip-15pt \label{Fig:ExScheme1} \end{figure} \subsection{Unequal Cache Sizes}\label{Sec:ExistingWorks} Here, we present existing works on centralized caching with unequal cache sizes where there are more than two users. \subsubsection{Scheme~1~\cite{CentralizedUnequalCache1}}\label{Sec:ExistingScheme1} In this scheme, assuming without loss of generality that $M_1\geq M_2 \geq \cdots \geq M_K$, the problem is divided into $K$ caching problems. In problem $i$, $i\in\mathcal{K}$, there are two groups of users: the first group is composed of users 1 to $i$, all with equal cache size of $(M_i-M_{i+1})F$ bits; the second group is composed of users $i+1$ to $K$, all without cache. In problem $K$, $M_{K+1}$ is considered as zero, and there is only one group consisting of $K$ users all with equal cache size of $M_KF$ bits. In problem $i$, we only consider $\beta_iF$ bits of the files where $\beta_1+\beta_2+\cdots+\beta_K=1$. This scheme is schematically shown in Fig.~\ref{Fig:ExScheme1} for the three-user case. 
Based on the equal cache results, the transmission rate for caching problem~$i$ is \begin{align} R_i=\beta_i R_{\text{eq}}(N,i,\frac{M_i-M_{i+1}}{\beta_i})+\beta_i(K-i),\;i\in\mathcal{K}.\label{eq:existing1} \end{align} The first term on the right-hand side of~\eqref{eq:existing1} corresponds to the transmission rate for the first group of users, and the second term corresponds to the transmission rate for the second group of users, which are without cache in problem~$i$. Therefore, by optimising the sum rate over the parameters $(\beta_1,\beta_2,\ldots,\beta_K)$, we achieve the following transmission rate \begin{align}\label{Eq:existingwork1} R_{\text{ex1}}(N,K,\mathbf{M})=\underset{(\beta_1,\ldots,\beta_K):\sum_{i=1}^{K}\beta_i=1}{\min}\sum_{i=1}^{K}R_i. \end{align} \subsubsection{Scheme~2~\cite{CentralizedUnequalCache2}}\label{Sec:ExistingScheme2} In this scheme, the problem of centralized caching with unequal cache sizes is formulated as an optimisation problem where it is assumed that the cache placement is uncoded, and the delivery phase uses linear coding. To characterize all possible uncoded placement policies, the parameter $a_{\mathcal{S}}$, $\mathcal{S}\subseteq\mathcal{K}$, is defined where $a_{\mathcal{S}}F$ represents the length of ${W}_{\ell,\mathcal{S}}$ as the fraction of $W_\ell$ stored in the cache of users in $\mathcal{S}$. Hence, these parameters must satisfy \begin{align*} \sum_{\mathcal{S}\subseteq\mathcal{K}} a_{\mathcal{S}}=1, \end{align*} and \begin{align*} \sum_{\mathcal{S}\subseteq\mathcal{K}:i\in\mathcal{S}} a_{\mathcal{S}}\leq\frac{M_i}{N},\;i\in\mathcal{K}. \end{align*} In the delivery phase, the server transmits \begin{align*} X_{\mathbf{d},\mathcal{T}}=\bigoplus_{j\in\mathcal{T}}W_{d_j}^{\mathcal{T}}, \end{align*} to the users in $\mathcal{T}$ where $\mathcal{T}$ is a non-empty subset of $\mathcal{K}$. $W_{d_j}^{\mathcal{T}}$, which is a part of $W_{d_j}$, needs to be decoded at user~$j$, and cancelled by all the users in $\mathcal{T}\setminus\{j\}$. Therefore, $W_{d_j}^{\mathcal{T}}$ is constructed from subfiles ${W}_{d_j,\mathcal{S}}$ where $\mathcal{T}\setminus\{j\}\subseteq \mathcal S$ and $j\notin \mathcal{S}$. To characterize all possible linear delivery policies, two sets of parameters are defined: (i) $v_{\mathcal{T}}$ where $v_{\mathcal{T}}F$ represents the length of $W_{d_j}^{\mathcal{T}},\;\forall j\in\mathcal{T}$, and consequently $X_{\mathbf{d},\mathcal{T}}$. (ii) $u_{\mathcal{S}}^{\mathcal{T}}$ where $u_{\mathcal{S}}^{\mathcal{T}}F$ is the length of $W_{d_j,\mathcal{S}}^{\mathcal{T}}$ which is the fraction of ${W}_{d_j,\mathcal{S}}$ used in the construction of $W_{d_j}^{\mathcal{T}}$. In order to have a feasible delivery scheme, these parameters need to satisfy some conditions~\cite[equations (25)--(30)]{CentralizedUnequalCache2}. By considering $(\mathbf{a},\mathbf{u},\mathbf{v})$ as all the optimisation parameters, and $\mathcal{C}(N,K,\mathbf{M})$ as all the conditions that need to be met in both the placement and delivery phases, we achieve the following transmission rate \begin{align}\label{Eq:existingwork2} R_{\text{ex2}}(N,K,\mathbf{M})\hskip-2pt=\hskip-2pt\underset{\mathbf{d}}{\max}\hskip-2pt\left(\hskip-2pt\underset{(\mathbf{a},\mathbf{u},\mathbf{v}):\mathcal{C}(N,K,\mathbf{M})}{\min}\sum_{\mathcal{T}\subseteq\mathcal{K}:\left|\mathcal{T}\right|\neq 0} v_{\mathcal{T}}\hskip-2pt\right). \end{align} \section{Proposed Caching Scheme} In this section, we first provide some insights into our proposed scheme using an example.
We then propose a scheme for a system with two subgroups of users, one with a larger cache size than the other, i.e., $M_i=\hat{M}$, $1\leq i \leq L$, and $M_i={M}$, $L+1\leq i \leq K$, for some $\hat{M}>M$. \subsection{An Example} In our example, as shown in Fig.~\ref{Fig:AnExample}, we consider the case where the number of files in the server is four, denoted for simplicity by $(A,B,C,D)$, and the number of users is also four. The first three users have a cache of size $2F$ bits, and the fourth one has a cache of size $F$ bits. First, we ignore the extra cache available at the first three users, and use the equal-cache scheme. This divides each file into four parts, and places $(A_i, B_i, C_i, D_i)$, $i\in\{1,2,3,4\}$, in the cache of user~$i$. Therefore, assuming without loss of generality that users~1,~2,~3 and~4 request $A$, $B$, $C$, and $D$, respectively, the server needs to transmit $A_2\oplus B_1$, $A_3\oplus C_1$, $B_3\oplus C_2$, $A_4\oplus D_1$, $B_4\oplus D_2$ and $C_4\oplus D_3$, and we achieve the rate of $R=3/2$ by ignoring the extra cache available at the first three users. Now, to utilize the extra cache available at users~1,~2, and~3, we look at what is going to be transmitted when ignoring these extra caches, and fill the extra caches to reduce the load of the transmission. In particular, we reduce the load of the transmissions which are only of benefit to the users with a larger cache size (i.e., $A_2\oplus B_1$, $A_3\oplus C_1$, $B_3\oplus C_2$). To do this, we divide $A_i$, $i\in\{1,2,3\}$, into two equal parts, $A'_i$ and $A''_i$. We do the same for $B_i$, $C_i$, and $D_i$, $i\in\{1,2,3\}$. We then place $(A'_2, B'_2, C'_2, D'_2)$ and $(A'_3, B'_3, C'_3, D'_3)$ in the extra cache of user~1, $(A'_1, B'_1, C'_1, D'_1)$ and $(A''_3, B''_3, C''_3, D''_3)$ in the extra cache of user~2, and $(A''_1, B''_1, C''_1, D''_1)$ and $(A''_2, B''_2, C''_2, D''_2)$ in the extra cache of user~3. Therefore, considering the extra cache available at the first three users, instead of $A_2\oplus B_1$, $A_3\oplus C_1$, $B_3\oplus C_2$, we just need to transmit $A''_2\oplus B''_1\oplus C'_1$ and $A''_3\oplus B'_3\oplus C'_2$ to satisfy the demands of all users, and we achieve the rate $R=1$. Note that what we did in the second part is equivalent to using the equal-cache scheme for a system with a server storing four files of size $\frac{3}{4}F$ bits, i.e., $A^*=(A_1,A_2,A_3)$, $B^*=(B_1,B_2,B_3)$, $C^*=(C_1,C_2,C_3)$, and $D^*=(D_1,D_2,D_3)$, and with three users each with a cache of size $2F$ bits. This can be seen by defining $A^*_{12}=(A'_1,A'_2)$, $A^*_{13}=(A''_1,A'_3)$, and $A^*_{23}=(A''_2,A''_3)$ for $A^*$, and also similarly for $B^*$, $C^*$, and $D^*$. Then we can check that $(A^*_\mathcal{T},B^*_\mathcal{T},C^*_\mathcal{T},D^*_\mathcal{T})$, $\mathcal{T}\in\{\{12\},\{13\},\{23\}\}$, is in the cache of user~$i$, $i\in\{1,2,3\}$ if $i\in\mathcal{T}$. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{FiguresUnequalCacheSize/AnExample.pdf} \vskip-10pt \caption{An example for our proposed scheme} \vskip-12pt \label{Fig:AnExample} \end{figure}
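Before presenting the general scheme, the following minimal sketch (ours) verifies the bookkeeping of the load in the example above, under the normalization $F=1$. \begin{verbatim}
F = 1.0                  # normalized file size
part = F / 4             # equal-cache subfile size, e.g. the size of A_2
half = part / 2          # size of the split parts, e.g. A''_2
load_user4 = 3 * part    # A_4+D_1, B_4+D_2, C_4+D_3 are still transmitted
load_large = 2 * half    # A''_2+B''_1+C'_1 and A''_3+B'_3+C'_2
print(load_user4 + load_large)   # -> 1.0, i.e., the rate R = 1
\end{verbatim}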
\subsection{Scheme with Two Levels of Caches} In this subsection, we explain our proposed scheme for the system where the first $L$ users have a cache of size $\hat{M}F$ bits, and the last $K-L$ users have a cache of size $MF$ bits for some $M<\hat{M}$. \subsubsection{An incremental placement approach}\label{Sec:IncreasingCache} We first describe a concept which is used later in our proposed scheme for the unequal-cache problem. Suppose that we initially have a system with $N$ files, and $K$ users each having a cache of size $MF$ bits. We use the equal-cache scheme described in Section~\ref{Sec:EqualCache} to fill the caches. We later increase the cache size of \textit{each} user by $(M'-M)F$ bits for some $M'>M$. The problem is that we are not allowed to change the content of the first $MF$ bits that we have already filled, but we want to fill the additional cache in such a way that the overall cache has the same content placement as the scheme described in Section~\ref{Sec:EqualCache} for the new system with $N$ files, and $K$ users each having a cache of size $M'F$ bits. We present our solution when $M=\frac{tN}{K}$ and $M'=\frac{(t+1)N}{K}$ for some integer $t$. The solution can be easily extended to an arbitrary $M$ and $M'$. In the cache placement for the system with the parameters $(N,K,M)$, we divide $W_\ell$, $\ell\in\mathcal{N}$, into $\binom{K}{t}$ subfiles denoted by $W_{\ell,\mathcal{T}}$, and place the ones with $i\in\mathcal{T}$ in the cache of user~$i$. This means that we put $\binom{K-1}{t-1}$ subfiles of $W_\ell$ in the cache of each user. After increasing the cache of each user to $M'F$ bits, we further divide each subfile into $(K-t)$ parts denoted by $W_{\ell,\mathcal{T},j}$, $j\in\mathcal{K}\setminus\mathcal{T}$, and place $W_{\ell,\mathcal{T},j}$ in the cache of user~$j$. This adds $W_{\ell,\mathcal{T},j}$, $j\notin\mathcal{T}$, to the cache of user~$j$ while keeping the existing content of the first $MF$ bits of user $j$, i.e., $W_{\ell,\mathcal{T},i}$, $j\in\mathcal{T}$, $i\in\mathcal{K}\setminus\mathcal{T}$. This means that we add \begin{align*} N\frac{\binom{K-1}{t}}{\binom{K}{t}(K-t)}F=\frac{N}{K}F=(M'-M)F\;\; \text{bits}, \end{align*} to the cache of each user, which satisfies the cache size constraint. Our cache placement for the system with the parameters $(N,K,M')$ becomes the same as the one described in Section~\ref{Sec:EqualCache} by merging all the parts $W_{\ell,\mathcal{T},j}$ which have the same $\mathcal{T}'=\mathcal{T}\cup\{j\}$ into a single subfile $W_{\ell,\mathcal{T}'}$, where $|\mathcal{T}'|=t+1$.
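The refinement step above can be sketched as follows (a minimal illustration of ours, with subfiles again represented by labels rather than bits). \begin{verbatim}
from itertools import combinations

def refine(N, K, t):
    # Split each subfile W[l, T], |T| = t, into K - t parts W[l, T, j],
    # j not in T; part (l, T, j) goes into the added cache of user j.
    added = {j: set() for j in range(K)}
    for l in range(N):
        for T in combinations(range(K), t):
            for j in range(K):
                if j not in T:
                    added[j].add((l, T, j))
    return added

# Example: N = K = 4 and t = 1: each user gains N*C(K-1, t) = 12 parts,
# each of size F/(C(K, t)*(K - t)) = F/12, i.e., (N/K)F = F bits in total.
added = refine(4, 4, 1)
print(len(added[0]))   # -> 12
\end{verbatim} Merging the labels $(\ell,\mathcal{T},j)$ by $\mathcal{T}\cup\{j\}$ recovers the equal-cache placement for cache size $M'F$.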
\subsubsection{Proposed Scheme} We here present our proposed scheme for the system where $M_i=\hat{M}$, $i\in\mathcal{L}$, $\mathcal{L}=\{1,2,\ldots,L\}$, and $M_i={M}$, $i\in\mathcal{K}\setminus\mathcal{L}$, for some $M<\hat{M}$. Our placement phase is composed of two stages. In the first stage, we ignore the extra cache available at the first $L$ users, and use the equal-cache placement for the system with the parameters $(N,K,M)$. Hence, at the end of this stage, we can achieve the rate in~\eqref{Eq:Rate} by transmitting $X^{(\alpha)}_{\mathbf{d},\mathcal{S}_1}$, defined in~\eqref{Eq:Component1}, for any $\mathcal{S}_1\subseteq\mathcal{K}$ where $|\mathcal{S}_1|=t_\text{int}+1$, and $X^{(1-\alpha)}_{\mathbf{d},\mathcal{S}_2}$, defined in~\eqref{Eq:Component2}, for any $\mathcal{S}_2\subseteq\mathcal{K}$ where $|\mathcal{S}_2|=t_\text{int}+2$. In the second stage of our placement phase, we fill the extra cache available at the first $L$ users by looking at what is going to be transmitted when these extra caches are ignored. To do so, we try to reduce the load of the transmissions which are intended only for the users with a larger cache size, i.e., $X^{(\alpha)}_{\mathbf{d},\mathcal{S}_1}$ for any $\mathcal{S}_1\subseteq\mathcal{L}$ ($|\mathcal{S}_1|=t_\text{int}+1$), and $X^{(1-\alpha)}_{\mathbf{d},\mathcal{S}_2}$ for any $\mathcal{S}_2\subseteq\mathcal{L}$ ($|\mathcal{S}_2|=t_\text{int}+2$). These transmissions are constructed from the subfiles $W^{(\alpha)}_{\ell,\mathcal{T}_1}$, $\mathcal{T}_1\subseteq\mathcal{L}$, $|\mathcal{T}_1|=t_\text{int}$, and $W^{(1-\alpha)}_{\ell,\mathcal{T}_2}$, $\mathcal{T}_2\subseteq\mathcal{L}$, $|\mathcal{T}_2|=t_\text{int}+1$. These subfiles occupy \begin{align} \frac{\binom{L-1}{t_\text{int}-1}}{\binom{K}{t_\text{int}}}N\alpha F\hspace{-3pt}+\hspace{-3pt}\frac{\binom{L-1}{t_\text{int}}}{\binom{K}{t_\text{int}+1}}N(1-\alpha) F\;\; \text{bits}, \end{align} of the cache of each of the first $L$ users, and the sum-length of these subfiles for any $\ell\in\mathcal{N}$ is \begin{align*} F'\triangleq \frac{\binom{L}{t_\text{int}}}{\binom{K}{t_\text{int}}}\alpha F+\frac{\binom{L}{t_\text{int}+1}}{\binom{K}{t_\text{int}+1}}(1-\alpha) F\;\;\text{bits}. \end{align*} Considering our aim in designing the second stage of our placement phase, we again use the equal-cache placement for the subfiles $W^{(\alpha)}_{\ell,\mathcal{T}_1}$, $\mathcal{T}_1\subseteq\mathcal{L}$, $|\mathcal{T}_1|=t_\text{int}$, and $W^{(1-\alpha)}_{\ell,\mathcal{T}_2}$, $\mathcal{T}_2\subseteq\mathcal{L}$, $|\mathcal{T}_2|=t_\text{int}+1$, while considering the extra cache available at the first $L$ users. This means that we use the equal-cache scheme for a system with $N$ files of size $F'$ bits, and $L$ users each having a cache of size $M'F'$ bits where \begin{align}\label{Eq:CacheSize2} M'\hspace{-2pt}F'\triangleq\hspace{-3pt}\frac{\binom{L-1}{t_\text{int}-1}}{\binom{K}{t_\text{int}}}N\alpha F\hspace{-3pt}+\hspace{-3pt}\frac{\binom{L-1}{t_\text{int}}}{\binom{K}{t_\text{int}+1}}N(1-\alpha) F\hspace{-3pt}+\hspace{-3pt}(\hat{M}-M){F}. \end{align} Note that we are not allowed to change what we have already placed in the cache of the first $L$ users in the first stage. Otherwise, we cannot assume that, from the delivery phase when ignoring the extra caches, the transmissions $X^{(\alpha)}_{\mathbf{d},\mathcal{S}_1}$ where $\mathcal{S}_1=\mathcal{T}_1\cup\{j\}$, $|\mathcal{T}_1|=t_\text{int}$, $\mathcal{T}_1\subseteq \mathcal{L}$, $j\in\mathcal{K}\setminus\mathcal{L}$, and $X^{(1-\alpha)}_{\mathbf{d},\mathcal{S}_2}$ where $\mathcal{S}_2=\mathcal{T}_2\cup\{j\}$, $|\mathcal{T}_2|=t_\text{int}+1$, $\mathcal{T}_2\subseteq \mathcal{L}$, $j\in\mathcal{K}\setminus\mathcal{L}$, can still be decoded by the target users. Therefore, we employ our proposed solution in Section~\ref{Sec:IncreasingCache} when using the equal-cache scheme for the second time. Two scenarios can happen in the second stage. \textit{Scenario~$1$} where $M'\leq N$: In this scenario, we achieve the rate \begin{align*} R_\text{ueq}(N,K,L,\hat{M},M)\hspace{-3pt}=\hspace{-3pt}R_\text{eq}(N,K,M)\hspace{-3pt}-\hspace{-3pt}R'\hspace{-3pt}+\hspace{-3pt}R_{\text{eq}}(N,L,M')\frac{F'}{F}, \end{align*} where \begin{align*} R'= \alpha \frac{\binom{L}{t_\text{int}+1}}{\binom{K}{t_\text{int}}}+(1-\alpha)\frac{\binom{L}{t_\text{int}+2}}{\binom{K}{t_\text{int}+1}}. \end{align*} $R'F$ is the load of the transmissions intended only for the users with a larger cache size if we ignore their extra caches (or equivalently if we only utilize the first stage of our placement phase). $R_\text{eq}(N,L,M')F'$ is the new load of the transmissions intended only for the users with a larger cache size at the end of the second stage.
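As a sanity check, the following minimal sketch (ours) evaluates the Scenario~1 rate and reproduces the rate $R=1$ of the four-user example given earlier, which corresponds to $N=4$, $K=4$, $L=3$, $\hat{M}=2$, $M=1$. \begin{verbatim}
from math import comb, floor

def C(n, k):
    # Binomial coefficient, with C(n, k) = 0 if k < 0 or k > n.
    return comb(n, k) if 0 <= k <= n else 0

def R_eq(N, K, M):
    # Memory-sharing rate of the equal-cache scheme.
    if M >= N:
        return 0.0
    t = K * M / N
    ti = floor(t)
    a = ti + 1 - t
    return a * C(K, ti + 1) / C(K, ti) + (1 - a) * C(K, ti + 2) / C(K, ti + 1)

def R_ueq(N, K, L, Mh, M):
    # Scenario 1 (M' <= N) of the proposed two-level scheme.
    t = K * M / N
    ti = floor(t)
    a = ti + 1 - t
    Fp = a * C(L, ti) / C(K, ti) + (1 - a) * C(L, ti + 1) / C(K, ti + 1)
    Mp = (a * N * C(L - 1, ti - 1) / C(K, ti)
          + (1 - a) * N * C(L - 1, ti) / C(K, ti + 1) + (Mh - M)) / Fp
    Rp = a * C(L, ti + 1) / C(K, ti) + (1 - a) * C(L, ti + 2) / C(K, ti + 1)
    assert Mp <= N, "Scenario 2 (memory sharing with Mh = N) applies instead"
    return R_eq(N, K, M) - Rp + R_eq(N, L, Mp) * Fp

print(R_ueq(N=4, K=4, L=3, Mh=2, M=1))   # -> 1.0 (up to rounding)
\end{verbatim}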
\textit{Scenario~$2$} where $M'> N$: In this scenario, we also use memory sharing between the case with $\hat{M}=\Phi$, where \begin{align*} \Phi\triangleq M-\frac{\binom{L-1}{t_\text{int}-1}}{\binom{K}{t_\text{int}}}N\alpha-\frac{\binom{L-1}{t_\text{int}}}{\binom{K}{t_\text{int}+1}}N(1-\alpha)+N\frac{F'}{F}, \end{align*} and the case with $\hat{M}=N$. In the system with $\hat{M}=\Phi$, according to~\eqref{Eq:CacheSize2}, we have $M'=N$, and we achieve the rate $R_\text{eq}(N,K,M)-R'$. In the system with $\hat{M}=N$, we can simply just remove the first $L$ users as they can cache the whole files in the server, and we achieve the rate $R_{\text{eq}}(N,K-L,M)$. Therefore, in this scenario, we achieve the rate \begin{align*} R_\text{ueq}(N,K,L,\hat{M},M)=&\gamma (R_\text{eq}(N,K,M)-R')\\ &\hskip25pt+(1-\gamma)R_{\text{eq}}(N,K-L,M), \end{align*} where $0\leq\gamma\leq1$ is calculated using $\hat{M}=\gamma \Phi+(1-\gamma)N$. \section{Comparison with existing works} In this section, we present our numerical results comparing our proposed scheme with the existing works, described in Section~\ref{Sec:ExistingWorks}. Our numerical results, characterizing the trade-off between the worst-case transmission rate and cache size for systems with two levels of cache sizes, suggest that our scheme outperforms the scheme by Saeedi Bidokhti et al.~\cite{CentralizedUnequalCache1}. Considering the work by Ibrahim et al.~\cite{CentralizedUnequalCache2}, as the complexity of the solution grows exponentially with the number of users, we implemented that work for systems with up to four users. Our numerical evaluations suggest that our scheme performs within a multiplicative factor of 1.11 of that scheme, i.e., $1\leq\frac{R_\text{ueq}}{R_{\text{ex2}}}\leq1.11$. As an example, this comparison is shown in Fig.~\ref{Fig:Comparison} for a four-user system with the parameters $N=10$, $K=4$, $M_1=M_2=3M_3=3M_4$. For these parameters, our scheme performs as well as the work by Ibrahim et al.~\cite{CentralizedUnequalCache2} without needing to solve an optimisation problem to obtain the scheme. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{FiguresUnequalCacheSize/Comparison.pdf} \vskip-15pt \caption{Comparing the worst-case transmission rate of the proposed scheme with the existing ones for the system with $N=10$, $K=4$, $M_1=M_2=3M_3=3M_4$.} \vskip-15pt \label{Fig:Comparison} \end{figure} \vspace{-5pt} \section{Conclusion} We addressed the problem of centralized caching with unequal cache sizes. We proposed an explicit scheme for the system with a server of files connected through a shared error-free link to a group of users where one subgroup is equipped with a larger cache size than the other. Numerical results comparing our scheme with existing works showed that our scheme improves upon the existing explicit scheme by having a lower worst-case transmission rate over the shared link. Numerical results also showed that our scheme achieves within a multiplicative factor of 1.11 of the optimal worst-case transmission rate for schemes with uncoded placement and linear coded delivery, without needing to solve a complex optimisation problem.
\vspace{-5pt} \bibliographystyle{IEEEtran} \section{Introduction}\label{Sec:Introduction} Content traffic, which is the dominant form of traffic in data communication networks, is not uniformly distributed over the day. This makes caching an integral part of data networks in order to tackle the non-uniformity of traffic. Caching schemes consist of two phases for content delivery. In the first phase, called the placement phase, content is partly placed in caches close to users. This phase takes place during off-peak hours when the requests of users are still unknown. In the second phase, called the delivery phase, each user requests a file while having access to a cache of pre-fetched content. This phase takes place during peak hours when we need to minimize the load over the network. The information-theoretic study of a network of caches originated with the work of Maddah-Ali and Niesen~\cite{CentralizedCaching}. They considered a centralized multicast set-up where there is a server of files connected via a shared error-free link to a group of users, each equipped with a dedicated cache of equal size. They introduced a caching gain called global caching gain. This gain is in addition to local caching gain, which is the result of the fact that users have access to part of their requested files. Global caching gain is achieved by simultaneously sending data to multiple users in the delivery phase via coded transmission over the shared link. The information-theoretic study of cache-aided networks has then been extended to address other scenarios which arise in practice such as decentralized caching \cite{DecentralizedCaching}, where the identity or the number of users is not clear in the placement phase; caching with non-uniform file popularity \cite{CachingNonuniformDemands}, where some of the files in the server are more popular than the others; and hierarchical caching~\cite{HierarchicalCodedCaching}, where there are multiple layers of caches. Also, while most of existing works consider uncoded cache placement, where the cache of each user is populated by directly placing parts of the server files, it has been shown for some special cases that coded cache placement can outperform uncoded cache placement~\cite{CentralizedCaching, CachingWithCodedPlacement1, CachingWithCodedPlacement2, CachingWithCodedPlacement3}. \begin{figure}[t] \centering \includegraphics[width=0.46\textwidth]{FiguresUnequalCacheSize/SystemModel.pdf} \vskip-10pt \caption{System model with a server storing $N$ files of size $F$ bits connected through a shared error-free link to $K$ users. User~$i$ is equipped with a cache of size $M_iF$ bits where $M_i=\hat{M}$, $1\leq i\leq L$, and $M_i=M$, $L+1\leq i\leq K$, for some $\hat{M}>M$.} \label{Fig:SystemModel} \vskip-15pt \end{figure} \subsection{Existing works and Contributions}\label{Sec:ExistingWorksandContributions} In this work, we address caching problems where there is a server connected through a shared error-free link to a group of users with caches of possibly different sizes. The objective is to minimize the load of worst-case demands over the shared link. Considering decentralized caching with unequal cache sizes, the placement phase is the same as the one for the equal-cache case where randomly part of each file is assigned to the cache of each user. The main challenge is to exploit all the coding opportunities in the delivery phase~\cite{DecentralizedUnequalCache1,DecentralizedUnequalCache2}. 
However, considering centralized caching with unequal cache sizes, the challenge also involves designing the placement phase. For the two-user case, Cao et al.~\cite{CentralizedUnequalCache3} proposed an optimum caching scheme, and showed that coded cache placement outperforms uncoded. For a system with an arbitrary number of users, Saeedi Bidokhti et al.~\cite{CentralizedUnequalCache1} proposed a scheme with uncoded cache placement constructed based on the memory sharing of the scheme for centralized caching with equal cache sizes~\cite{CentralizedCaching}. Also, Ibrahim et al.~\cite{CentralizedUnequalCache2}, assuming uncoded cache placement and linear coded delivery, formulated this problem as a linear optimisation problem in which the number of parameters grows exponentially with the number of users. As the number of users grows, the scheme by Saeedi Bidokhti et al.~\cite{CentralizedUnequalCache1} remains simple at the cost of performance, and the optimisation problem by Ibrahim et al.~\cite{CentralizedUnequalCache2} becomes intractable. In the light of the above mentioned issues, we propose a new caching scheme with uncoded cache placement for centralized caching with unequal cache sizes where there are two subgroups of users, one with a larger cache size than the other. Our caching scheme outperforms the caching scheme proposed by Saeedi Bidokhti et al.~\cite{CentralizedUnequalCache1} suggested by numerical evaluations. In comparison to the work by Ibrahim et al.~\cite{CentralizedUnequalCache2}, as our scheme is an explicit scheme, it does not have the complexity issue associated with solving an optimisation problem. Also, our scheme performs within a multiplicative factor of 1.11 from the scheme by Ibrahim et al.~\cite{CentralizedUnequalCache2} suggested by numerical evaluations. \section{System Model}\label{Section:SystemModel} We consider a centralized caching problem where there is a server storing $N$ independent files $W_\ell$, $\ell\in\mathcal{N}$, $\mathcal{N}=\{1,2,\ldots,N\}$, connected through a shared error-free link to $K$ cache-enabled users, as shown in Fig.~\ref{Fig:SystemModel}. We assume that the number of files in the server is at least as many as the number of users, i.e., $N\geq K$. Each file in the server is of size $F\in\mathbb{N}$ bits (where $\mathbb{N}$ is the set of natural numbers), and is uniformly distributed over the set $\mathcal{W}=\left\{1,2,\ldots,2^{F}\right\}$. User~$i$, $i\in\mathcal{K}$, $\mathcal{K}=\{1,2,\ldots,K\}$, is equipped with a cache of size $M_iF$ bits for some $M_i\in\mathbb{R}$, $0\leq M_i\leq N$, where $\mathbb{R}$ is the set of real numbers. The content of the cache of user~$i$ is denoted by $Z_i$. We represent all the cache sizes by the vector $\mathbf{M}=(M_1,M_2,\ldots,M_K)$. In this work, we assume that there are two subgroups of users, one with a larger cache size than the other, i.e., $M_i=\hat{M}$, $1\leq i \leq L$, and $M_i={M}$, $L+1\leq i \leq K$, for some $\hat{M}>M$. User~$i$ requests $W_{d_i}$ from the server where $d_i\in\mathcal{N}$. We represent the request of all the users by the vector $\mathbf{d}=(d_1,d_2,\ldots,d_K)$. User~$i$ needs to decode $W_{d_i}$ using $Z_i$, and the signal $X_\mathbf{d}$ transmitted by the server over the shared link. As mentioned earlier, each caching scheme consists of two phases, the placement phase and the delivery phase. 
The placement phase consists of $K$ caching functions \begin{align*} \phi_i:\mathcal{W}^{N}\rightarrow \mathcal{Z}_i,\;\; i\in\mathcal{K}, \end{align*} where $\mathcal{Z}_i\hskip-2pt=\hskip-2pt\left\{\hskip-2pt 1,2,\ldots,2^{\left\lfloor M_iF \right\rfloor}\hskip-2pt\right\}$, i.e., $Z_i\hskip-2pt=\hskip-2pt\phi_i\left(\hskip-2pt W_1,W_2,\ldots,W_N\hskip-2pt\right)$. The delivery phase consists of $N^K$ encoding functions \begin{align*} \psi_{\mathbf{d}}:\mathcal{W}^{N}\rightarrow \mathcal{X}, \end{align*} where $\mathcal{X}=\left\{1,2,\ldots,2^{\left\lfloor RF \right\rfloor}\right\}$, i.e., \begin{align*} X_{\mathbf{d}}=\psi_{\mathbf{d}}\left(W_1,W_2,\ldots,W_N\right). \end{align*} We refer to $RF$ as the load of the transmission and $R$ as the rate of the transmission over the shared link. The delivery phase consists of also $KN^K$ decoding functions \begin{align*} \theta_{\mathbf{d},i}: \mathcal{Z}_i\times\mathcal{X}\rightarrow \mathcal{W},\;\;i\in\mathcal{K}, \end{align*} i.e., $\hat{W}_{\mathbf{d},i}=\theta_{\mathbf{d},i}(X_{\mathbf{d}},Z_i)$, where $\hat{W}_{\mathbf{d},i}$ is the decoded version of $W_{d_i}$ at user~$i$ when the demand vector is $\mathbf{d}$. The probability of error for the scheme is defined as \begin{align*} \underset{\mathbf{d}}{\max}\;\,\underset{i}{\max}\;P(\hat{W}_{\mathbf{d},i}\neq W_{d_i}). \end{align*} \begin{definition} For a given $\mathbf{M}$, we say that the rate $R$ is achievable if for every $\epsilon>0$ and large enough $F$, there exists a caching scheme with rate $R$ such that its probability of error is less than $\epsilon$. For a given $\mathbf{M}$, we also define $R^{\star}(\mathbf{M})$ as the infimum of all achievable rates. \end{definition} \section{Background}\label{Sec:Background} In this section, we first consider centralized caching with equal cache sizes, i.e., $M_i=M,\,\forall i$, and review the optimum scheme among those with uncoded placement~\cite{CentralizedCaching, OptimumCachingWithUnCodedPlacement}. We then review existing works on centralized caching with unequal cache sizes where there are more than two users~\cite{CentralizedUnequalCache1,CentralizedUnequalCache2}. \subsection{Equal Cache Sizes}\label{Sec:EqualCache} Here, we present the optimum caching scheme for centralized caching with equal cache sizes when the cache placement is uncoded, and $N\geq K$~\cite{CentralizedCaching}. In this scheme, a parameter denoted by $t$ is defined at the beginning as \begin{align*} t=\frac{KM}{N}. \end{align*} First, assume that $t$ is an integer. As $0\leq M\leq N$, we have $t\in\{0,1,2,\ldots,K\}$. In the placement phase, $W_\ell$, $\ell\in\mathcal{N}$, is divided into $\binom{K}{t}$ non-overlapping parts denoted by $W_{\ell,\mathcal{T}}$ where $\mathcal{T}\subseteq\mathcal{K}$ and $\left|\mathcal{T}\right|=t$ ($\left|\mathcal{T}\right|$ denotes the cardinality of the set $\mathcal{T}$). $W_{\ell,\mathcal{T}}$ is then placed in the cache of user $i$ if $i\in\mathcal{T}$. This means that the size of each part is $\frac{F}{\binom{k}{t}}$ bits, and we place $\binom{K-1}{t-1}$ parts from each file in the cache of user~$i$. Therefore, we satisfy the cache size constraint as we have \begin{align*} N\frac{\binom{K-1}{t-1}}{\binom{K}{t}}=M. \end{align*} In the delivery phase, the server transmits \begin{align*} X_{\mathbf{d},\mathcal{S}}=\underset{s\in\mathcal{S}}{\bigoplus} W_{d_s,\mathcal{S}\setminus s}, \end{align*} for every $\mathcal{S}\subseteq\mathcal{K}$ where $\left|\mathcal{S}\right|=t+1$. 
This results in the transmission rate of \begin{align*} R_{\text{eq}}(N,K,M)=\frac{\binom{K}{t+1}}{\binom{K}{t}}. \end{align*} This delivery scheme satisfies the demands of all the $K$ users~\cite{CentralizedCaching}. Now, assume that $t$ is not an integer. In this case, memory sharing is utilized where $t_\text{int}$ is defined as \begin{align*} t_\text{int}\triangleq\left\lfloor t \right\rfloor, \end{align*} and $\alpha$ is computed using the following equation \begin{align*} M=\frac{tN}{K}=\alpha\frac{t_\text{int}N}{K}+(1-\alpha)\frac{(t_\text{int}+1)N}{K}, \end{align*} where $0<\alpha\leq1$. Based on $\alpha$, the caching problem is divided into two independent problems. In the first one, the cache size is $\alpha\frac{t_\text{int}N}{K}F$, and we cache the first $\alpha F$ bits of the files, denoted by $W^{(\alpha)}_{\ell}$, $\ell\in\mathcal{N}$. In the delivery phase, the server transmits \begin{align}\label{Eq:Component1} X^{(\alpha)}_{\mathbf{d},\mathcal{S}_1}=\underset{s\in\mathcal{S}_1}{\bigoplus} W^{(\alpha)}_{d_s,\mathcal{S}_1\setminus s}, \end{align} for every $\mathcal{S}_1\subseteq\mathcal{K}$ where $\left|\mathcal{S}_1\right|=t_\text{int}+1$. In the second one, the cache size is $(1-\alpha)\frac{(t_\text{int}+1)N}{K}F$, and we cache the last $(1-\alpha)F$ bits of the files, denoted by $W^{(1-\alpha)}_{\ell}$, $\ell\in\mathcal{N}$. In the delivery phase, the server transmits \begin{align}\label{Eq:Component2} X^{(1-\alpha)}_{\mathbf{d},\mathcal{S}_2}=\underset{s\in\mathcal{S}_2}{\bigoplus} W^{(1-\alpha)}_{d_s,\mathcal{S}_2\setminus s}, \end{align} for every $\mathcal{S}_2\subseteq\mathcal{K}$ where $\left|\mathcal{S}_2\right|=t_\text{int}+2$. Consequently, the rate \begin{align}\label{Eq:Rate} R_{\text{eq}}(N,K,M)=\alpha \frac{\binom{K}{t_\text{int}+1}}{\binom{K}{t_\text{int}}}+(1-\alpha)\frac{\binom{K}{t_\text{int}+2}}{\binom{K}{t_\text{int}+1}}, \end{align} is achieved where $\binom{a}{b}$ is considered to be zero if $b>a$. \begin{figure}[t] \centering \includegraphics[width=0.3\textwidth]{FiguresUnequalCacheSize/UnequalExistingScheme1.pdf} \vskip-10pt \caption{An existing scheme for centralized caching with unequal cache sizes} \vskip-15pt \label{Fig:ExScheme1} \end{figure} \subsection{Unequal Cache Sizes}\label{Sec:ExistingWorks} Here, we present existing works on centralized caching with unequal cache sizes where there are more than two users. \subsubsection{Scheme~1~\cite{CentralizedUnequalCache1}}\label{Sec:ExistingScheme1} In this scheme, assuming without loss of generality that $M_1\geq M_2 \geq \cdots \geq M_K$, the problem is divided into $K$ caching problems. In problem $i$, $i\in\mathcal{K}$, there are two groups of users: the first group is composed of users 1 to $i$, all with equal cache size of $(M_i-M_{i+1})F$ bits; the second group is composed of users $i+1$ to $K$, all without cache. In problem $K$, $M_{K+1}$ is considered as zero, and there is only one group consisting of $K$ users all with equal cache size of $M_KF$ bits. In problem $i$, we only consider $\beta_iF$ bits of the files where $\beta_1+\beta_2+\cdots+\beta_K=1$. This scheme is schematically shown in Fig.~\ref{Fig:ExScheme1} for the three-user case. 
Based on the equal cache results, the transmission rate for caching problem~$i$ is \begin{align} R_i=\beta_i R_{\text{eq}}(N,i,\frac{M_i-M_{i+1}}{\beta_i})+\beta_i(K-i),\;i\in\mathcal{K}.\label{eq:existing1} \end{align} The first term on the right-hand side of~\eqref{eq:existing1} corresponds to the transmission rate for the first groups of users, and the second term corresponds to the transmission rate for the second group of users, which are without cache in problem~$i$. Therefore, by optimising the sum rate over the parameters $(\beta_1,\beta_2,\ldots,\beta_K)$, we achieve the following transmission rate \begin{align}\label{Eq:existingwork1} R_{\text{ex1}}(N,K,\mathbf{M})=\underset{(\beta_1,\ldots,\beta_K):\sum_{i=1}^{K}\beta_i=1}{\min}\sum_{i=1}^{K}R_i. \end{align} \subsubsection{Scheme~2~\cite{CentralizedUnequalCache2}}\label{Sec:ExistingScheme2} In this scheme, the problem of centralized caching with unequal cache sizes is formulated as an optimisation problem where it is assumed that the cache placement is uncoded, and the delivery phase uses linear coding. To characterize all possible uncoded placement policies, the parameter $a_{\mathcal{S}}$, $\mathcal{S}\subseteq\mathcal{K}$, is defined where $a_{\mathcal{S}}F$ represents the length of ${W}_{\ell,\mathcal{S}}$ as the fraction of $W_\ell$ stored in the cache of users in $\mathcal{S}$. Hence, these parameters must satisfy \begin{align*} \sum_{\mathcal{S}\subseteq\mathcal{K}} a_{\mathcal{S}}=1, \end{align*} and \begin{align*} \sum_{\mathcal{S}\subseteq\mathcal{K}:i\in\mathcal{S}} a_{\mathcal{S}}\leq\frac{M_i}{N},\;i\in\mathcal{K}. \end{align*} In the delivery phase, the server transmits \begin{align*} X_{\mathbf{d},\mathcal{T}}=\bigoplus_{j\in\mathcal{T}}W_{d_j}^{\mathcal{T}}, \end{align*} to the users in $\mathcal{T}$ where $\mathcal{T}$ is a non-empty subset of $\mathcal{K}$. $W_{d_j}^{\mathcal{T}}$, which is a part of $W_{d_j}$, needs to be decoded at user~$j$, and cancelled by all the users in $\mathcal{T}\setminus\{j\}$. Therefore, $W_{d_j}^{\mathcal{T}}$ is constructed from subfiles ${W}_{d_j,\mathcal{S}}$ where $\mathcal{T}\setminus\{j\}\subseteq \mathcal S$ and $j\notin \mathcal{S}$. To characterize all possible linear delivery policies, two sets of parameters are defined: (i) $v_{\mathcal{T}}$ where $v_{\mathcal{T}}F$ represents the length of $W_{d_j}^{\mathcal{T}},\;\forall j\in\mathcal{T}$, and consequently $X_{\mathbf{d},\mathcal{T}}$. (ii) $u_{\mathcal{S}}^{\mathcal{T}}$ where $u_{\mathcal{S}}^{\mathcal{T}}F$ is the length of $W_{d_j,\mathcal{S}}^{\mathcal{T}}$ which is the fraction of ${W}_{d_j,\mathcal{S}}$ used in the construction $W_{d_j}^{\mathcal{T}}$. In order to have a feasible delivery scheme, these parameters need to satisfy some conditions~\cite[equations (25)--(30)]{CentralizedUnequalCache2}. By considering $(\mathbf{a},\mathbf{u},\mathbf{v})$ as all the optimisation parameters, and $\mathcal{C}(N,K,\mathbf{M})$ as all the conditions that need to be met in the both placement and delivery phases, we achieve the following transmission rate \begin{align}\label{Eq:existingwork2} R_{\text{ex2}}(N,K,\mathbf{M})\hskip-2pt=\hskip-2pt\underset{\mathbf{d}}{\max}\hskip-2pt\left(\hskip-2pt\underset{(\mathbf{a},\mathbf{u},\mathbf{v}):\mathcal{C}(N,K,\mathbf{M})}{\min}\sum_{\mathcal{T}\in\mathcal{K}:\left|\mathcal{T}\right|\neq 0} v_{\mathcal{T}}\hskip-2pt\right). \end{align} \section{Proposed Caching Scheme} In this section, we first provide some insights into our proposed scheme using an example. 
We then propose a scheme for a system with two subgroups of users, one with a larger cache size than the other, i.e., $M_i=\hat{M}$, $1\leq i \leq L$, and $M_i={M}$, $L+1\leq i \leq K$, for some $\hat{M}>M$. \subsection{An Example} In our example, as shown in Fig.~\ref{Fig:AnExample}, we consider the case where the number of files in the server is four, denoted for simplicity by $(A,B,C,D)$, and the number of users is also four. The first three users have a cache of size $2F$ bits, and the forth one has a cache of size $F$ bits. First, we ignore the extra cache available at the first three users, and use the equal-cache scheme. This divides each file into four parts, and places $(A_i, B_i, C_i, D_i)$, $i\in\{1,2,3,4\}$, in the cache of user~$i$. Therefore, assuming without loss of generality that users~1,~2,~3 and~4 request $A$, $B$, $C$ , and $D$ respectively, the server needs to transmit $A_2\oplus B_1$, $A_3\oplus C_1$, $B_3\oplus C_2$, $A_4\oplus D_1$, $B_4\oplus D_2$ and $C_4\oplus D_3$, and we achieve the rate of $R=3/2$ by ignoring the extra cache available at the first three users. Now, to utilize the extra cache available at users~1,~2, and~3, we look at what is going to be transmitted when ignoring these extra caches, and fill the extra caches to reduce the load of the transmission. In particular, we reduce the load of the transmissions which are only of benefit to the users with a larger cache size (i.e., $A_2\oplus B_1$, $A_3\oplus C_1$, $B_3\oplus C_2$). To do this, we divide $A_i$, $i\in\{1,2,3\}$ into two equal parts, $A'_i$ and $A''_i$. We do the same for $B_i$, $C_i$, and $D_i$, $i\in\{1,2,3\}$. We then place $(A'_2, B'_2, C'_2, D'_2)$ and $(A'_3, B'_3, C'_3, D'_3)$ in the extra cache of user~1, $(A'_1, B'_1, C'_1, D'_1)$ and $(A''_3, B''_3, C''_3, D''_3)$ in the extra cache of user~2, and $(A''_1, B''_1, C''_1, D''_1)$ and $(A''_2, B''_2, C''_2, D''_2)$ in the extra cache of user~3. Therefore, considering the extra cache available at the first three users, instead of $A_2\oplus B_1$, $A_3\oplus C_1$, $B_3\oplus C_2$, we just need to transmit $A''_2\oplus B''_1\oplus C'_1 $, and $A''_3\oplus B'_3\oplus C'_2$ to satisfy the demands of all users, and we achieve the rate $R=1$. Note that what we did in the second part is equivalent to using the equal-cache scheme for a system with a server storing four files of size $\frac{3}{4}F$ bits, i.e., $A^*=(A_1,A_2,A_3)$, $B^*=(B_1,B_2,B_3)$, $C^*=(C_1,C_2,C_3)$, and $D^*=(D_1,D_2,D_3)$, and with three users each with a cache of size $2F$ bits. This can be seen by defining $A^*_{12}=(A'_1,A'_2)$, $A^*_{13}=(A''_1,A'_3)$, and $A^*_{23}=(A''_2,A''_3)$ for $A^*$, and also similarly for $B^*$, $C^*$, and $D^*$. Then we can check that $(A^*_\mathcal{T},B^*_\mathcal{T},C^*_\mathcal{T},D^*_\mathcal{T})$, $\mathcal{T}\in\{\{12\},\{13\},\{23\}\}$, is in the cache of user~$i$, $i\in\{1,2,3\}$ if $i\in\mathcal{T}$. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{FiguresUnequalCacheSize/AnExample.pdf} \vskip-10pt \caption{An example for our proposed scheme} \vskip-12pt \label{Fig:AnExample} \end{figure} \subsection{Scheme with Two Levels of Caches} In this subsection, we explain our proposed scheme for the system where the first $L$ users have a cache of size $\hat{M}F$ bits, and the last $K-L$ users have a cache of size $MF$ bits for some $M<\hat{M}$. 
\subsubsection{An incremental placement approach}\label{Sec:IncreasingCache} We first describe a concept which is used later in our proposed scheme for the unequal-cache problem. Suppose that we initially have a system with $N$ files, and $K$ users each having a cache of size $MF$ bits. We use the equal-cache scheme described in Section~\ref{Sec:EqualCache} to fill the caches. We later increase the cache size of \textit{each} user by $(M'-M)F$ bits for some $M'>M$. The problem is that we are not allowed to change the content of the first $MF$ bits that we have already filled, but we want to fill the additional cache in such a way that the overall cache has the same content placement as the scheme described in Section~\ref{Sec:EqualCache} for the new system with $N$ files, and $K$ users each having a cache of size $M'F$ bits. We present our solution when $M=\frac{tN}{K}$ and $M'=\frac{(t+1)N}{K}$ for some integer $t$. The solution can be easily extended to an arbitrary $M$ and $M'$. In the cache placement for the system with the parameters $(N,K,M)$, we divide $W_\ell$, $\ell\in\mathcal{N}$, into $\binom{K}{t}$ subfiles denoted by $W_{\ell,\mathcal{T}}$, and place the ones with $i\in\mathcal{T}$ in the cache of user~$i$. This means that we put $\binom{K-1}{t-1}$ subfiles of $W_\ell$ in the cache of each user. After increasing the cache of each user to $M'F$ bits, we further divide each subfile into $(K-t)$ parts denoted by $W_{\ell,\mathcal{T},j}$, $j\in\mathcal{K}\setminus\mathcal{T}$, and place $W_{\ell,\mathcal{T},j}$ in the cache of user~$j$. This adds $W_{\ell,\mathcal{T},j}$, $j\notin\mathcal{T}$, to the cache of user~$j$ while keeping the existing content of the first $MF$ bits of user $j$, i.e., $W_{\ell,\mathcal{T},i}$ $j\in\mathcal{T}$, $i\in\mathcal{K}\setminus\mathcal{T}$. This means that we add \begin{align*} N\frac{\binom{K-1}{t}}{\binom{K}{t}(K-t)}F=\frac{N}{K}F=(M'-M)F\;\; \text{bits}, \end{align*} to the cache of each user which satisfies the cache size constraint. Our cache placement for the system with the parameters $(N,K,M')$ becomes the same as the one described in Section~\ref{Sec:EqualCache} by merging all the parts $W_{\ell,\mathcal{T},j}$ which have the same $\mathcal{T}'=\mathcal{T}\cup\{j\}$ as a single subfile $W_{\ell,\mathcal{T}'}$, where $|\mathcal{T}'|=t+1$. \subsubsection{Proposed Scheme} We here present our proposed scheme for the system where $M_i=\hat{M}$, $i\in\mathcal{L}$, $\mathcal{L}=\{1,2,\ldots,L\}$, and $M_i={M}$, $i\in\mathcal{K}\setminus\mathcal{L}$, for some $M<\hat{M}$. Our placement phase is composed of two stages. In the first stage, we ignore the extra cache available at the first $L$ users, and use the equal-cache placement for the system with the parameters $(N,K,M)$. Hence, at the end of this stage, we can achieve the rate in~\eqref{Eq:Rate} by transmitting $X^{(\alpha)}_{\mathbf{d},\mathcal{S}_1}$, defined in~\eqref{Eq:Component1}, for any $\mathcal{S}_1\subseteq\mathcal{K}$ where $|\mathcal{S}_1|=t_\text{int}+1$, and $X^{(1-\alpha)}_{\mathbf{d},\mathcal{S}_2}$, defined in~\eqref{Eq:Component2}, for any $\mathcal{S}_2\subseteq\mathcal{K}$ where $|\mathcal{S}_2|=t_\text{int}+2$. In the second stage of our placement phase, we fill the extra cache available at the first $L$ users by looking at what are going to be transmitted when ignoring these extra caches. 
To do so, we try to reduce the load of the transmissions which are intended only for the users with a larger cache size, i.e., $X^{(\alpha)}_{\mathbf{d},\mathcal{S}_1}$for any $\mathcal{S}_1\subseteq\mathcal{L}$ ($|\mathcal{S}_1|=t_\text{int}+1$), and $X^{(1-\alpha)}_{\mathbf{d},\mathcal{S}_2}$ for any $\mathcal{S}_2\subseteq\mathcal{L}$ ($|\mathcal{S}_2|=t_\text{int}+2$). These transmissions are constructed from the subfiles $W^{(\alpha)}_{\ell,\mathcal{T}_1}$, $\mathcal{T}_1\subseteq\mathcal{L}$, $|\mathcal{T}_1|=t_\text{int}$, and $W^{(1-\alpha)}_{\ell,\mathcal{T}_2}$, $\mathcal{T}_2\subseteq\mathcal{L}$, $|\mathcal{T}_2|=t_\text{int}+1$. These subfiles occupy \begin{align} \frac{\binom{L-1}{t_\text{int}-1}}{\binom{K}{t_\text{int}}}N\alpha F\hspace{-3pt}+\hspace{-3pt}\frac{\binom{L-1}{t_\text{int}}}{\binom{K}{t_\text{int}+1}}N(1-\alpha) F\;\; \text{bits}, \end{align} of each user's cache, and the sum-length of these subfiles for any $\ell\in\mathcal{N}$ is \begin{align*} F'\triangleq \frac{\binom{L}{t_\text{int}}}{\binom{K}{t_\text{int}}}\alpha F+\frac{\binom{L}{t_\text{int}+1}}{\binom{K}{t_\text{int}+1}}(1-\alpha) F\;\;\text{bits}. \end{align*} Considering our aim in designing the second stage of our placement phase, we again use the equal-cache placement for the subfiles $W^{(\alpha)}_{\ell,\mathcal{T}_1}$, $\mathcal{T}_1\subseteq\mathcal{L}$, $|\mathcal{T}_1|=t_\text{int}$, and $W^{(1-\alpha)}_{\ell,\mathcal{T}_2}$, $\mathcal{T}_2\subseteq\mathcal{L}$ $|\mathcal{T}_2|=t_\text{int}+1$ while considering the extra cache available at the first $L$ users. This means that we use the equal-cache scheme for a system with $N$ files of size $F'$ bits, and $L$ users each having a cache of size $M'F'$ bits where \begin{align}\label{Eq:CacheSize2} M'\hspace{-2pt}F'\triangleq\hspace{-3pt}\frac{\binom{L-1}{t_\text{int}-1}}{\binom{K}{t_\text{int}}}N\alpha F\hspace{-3pt}+\hspace{-3pt}\frac{\binom{L-1}{t_\text{int}}}{\binom{K}{t_\text{int}+1}}N(1-\alpha) F\hspace{-3pt}+\hspace{-3pt}(\hat{M}-M){F}. \end{align} Note that we are not allowed to change what we have already placed in the cache of the first $L$ users in the first stage. Otherwise, we cannot assume that, from the delivery phase when ignoring the extra caches, the transmissions $X^{(\alpha)}_{\mathbf{d},\mathcal{S}_1}$ where $\mathcal{S}_1=\mathcal{T}_1\cup\{j\}$, $|\mathcal{T}_1|=t_\text{int}$, $\mathcal{T}_1\subseteq \mathcal{L}$, $j\in\mathcal{K}\setminus\mathcal{L}$, and $X^{(1-\alpha)}_{\mathbf{d},\mathcal{S}_2}$ where $\mathcal{S}_2=\mathcal{T}_2\cup\{j\}$, $|\mathcal{T}_2|=t_\text{int}+1$, $\mathcal{T}_2\subseteq \mathcal{L}$, $j\in\mathcal{K}\setminus\mathcal{L}$, can still be decoded by target users. Therefore, we employ our proposed solution in Section~\ref{Sec:IncreasingCache} for using the equal-cache scheme for the second time. Two scenarios can happen in the second stage. \textit{Scenario~$1$} where $M'\leq N$: In this scenario, we achieve the rate \begin{align*} R_\text{ueq}(N,K,L,\hat{M},M)\hspace{-3pt}=\hspace{-3pt}R_\text{eq}(N,K,M)\hspace{-3pt}-\hspace{-3pt}R'\hspace{-3pt}+\hspace{-3pt}R_{\text{eq}}(N,L,M')\frac{F'}{F}, \end{align*} where \begin{align*} R'= \alpha \frac{\binom{L}{t_\text{int}+1}}{\binom{K}{t_\text{int}}}+(1-\alpha)\frac{\binom{L}{t_\text{int}+2}}{\binom{K}{t_\text{int}+1}}. \end{align*} $R'F$ is the load of the transmissions intended only for the users with a larger cache size if we ignore their extra caches (or equivalently if we just utilize the first stage of our placement phase). 
$R_\text{eq}(N,L,M')F'$ is the new load of the transmissions intended only for the users with a larger cache size at the end of the second stage. \textit{Scenario~$2$} where $M'> N$: In this scenario, we use memory sharing between the case with $\hat{M}=\Phi$, where \begin{align*} \Phi\triangleq M-\frac{\binom{L-1}{t_\text{int}-1}}{\binom{K}{t_\text{int}}}N\alpha-\frac{\binom{L-1}{t_\text{int}}}{\binom{K}{t_\text{int}+1}}N(1-\alpha)+N\frac{F'}{F}, \end{align*} and the case with $\hat{M}=N$. In the system with $\hat{M}=\Phi$, according to~\eqref{Eq:CacheSize2}, we have $M'=N$, and we achieve the rate $R_\text{eq}(N,K,M)-R'$. In the system with $\hat{M}=N$, we can simply remove the first $L$ users, as they can cache all the files in the server, and we achieve the rate $R_{\text{eq}}(N,K-L,M)$. Therefore, in this scenario, we achieve the rate \begin{align*} R_\text{ueq}(N,K,L,\hat{M},M)=&\gamma (R_\text{eq}(N,K,M)-R')\\ &\hskip25pt+(1-\gamma)R_{\text{eq}}(N,K-L,M), \end{align*} where $0\leq\gamma\leq1$ is determined by $\hat{M}=\gamma \Phi+(1-\gamma)N$. \section{Comparison with existing works} In this section, we present numerical results comparing our proposed scheme with the existing works described in Section~\ref{Sec:ExistingWorks}. Our numerical results, characterizing the trade-off between the worst-case transmission rate and cache size for systems with two levels of cache sizes, suggest that our scheme outperforms the scheme by Saeedi Bidokhti et al.~\cite{CentralizedUnequalCache1}. Considering the work by Ibrahim et al.~\cite{CentralizedUnequalCache2}, as the complexity of that solution grows exponentially with the number of users, we implemented it for systems with up to four users. Our numerical evaluations suggest that our scheme performs within a multiplicative factor of $1.11$ of that scheme, i.e., $1\leq\frac{R_\text{ueq}}{R_{\text{ex2}}}\leq1.11$. As an example, this comparison is shown in Fig.~\ref{Fig:Comparison} for a four-user system with the parameters $N=10$, $K=4$, $M_1=M_2=3M_3=3M_4$. For these parameters, our scheme performs as well as the work by Ibrahim et al.~\cite{CentralizedUnequalCache2} without needing to solve an optimisation problem to obtain the scheme. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{FiguresUnequalCacheSize/Comparison.pdf} \vskip-15pt \caption{Comparing the worst-case transmission rate of the proposed scheme with the existing ones for the system with $N=10$, $K=4$, $M_1=M_2=3M_3=3M_4$.} \vskip-15pt \label{Fig:Comparison} \end{figure} \vspace{-5pt} \section{Conclusion} We addressed the problem of centralized caching with unequal cache sizes. We proposed an explicit scheme for a system with a server of files connected through a shared error-free link to a group of users, where one subgroup is equipped with a larger cache size than the other. Numerical results comparing our scheme with existing works showed that our scheme improves upon the existing explicit scheme by achieving a lower worst-case transmission rate over the shared link. Numerical results also showed that our scheme achieves within a multiplicative factor of $1.11$ of the optimal worst-case transmission rate for schemes with uncoded placement and linear coded delivery, without needing to solve a complex optimisation problem. \vspace{-5pt} \bibliographystyle{IEEEtran}
\section{Introduction} In this paper, we consider the following transport equation defined on the sphere in spherical coordinates \cite{flyer2007transport,fornberg2011stabilization} \begin{equation}\label{Eq-1} \dfrac{\partial u}{\partial t}+\bm{v}\cdot\nabla u=0, \end{equation} where $u$ is the scalar quantity being transported and $\bm{v}=\bm{v}(\lambda,\theta,t)=(v_{1}(\lambda,\theta,t),v_{2}(\lambda,\theta,t))^{T}$ is the velocity field. Here, $\lambda$ denotes the longitude and $\theta$ the latitude, measured from the equator \cite{flyer2007transport}. Furthermore, $\nabla$ is the gradient operator on the surface of the unit sphere in spherical coordinates, defined as \cite{flyer2007transport,fornberg2011stabilization} \begin{equation}\label{Eq-2} \nabla:=\left(\dfrac{1}{\cos(\theta)} \dfrac{\partial}{\partial \lambda}, \dfrac{\partial}{\partial \theta}\right)^{T}. \end{equation} This operator is singular at the north and south poles, i.e., at $\theta=\pm \dfrac{\pi}{2}$. The transport equation has various applications, such as modeling transport in layered magnetic materials \cite{maclaren2000first}, spin valves \cite{krems2007boltzmann}, ocean surface modeling \cite{bender1996modification}, numerical weather prediction \cite{cotter2012mixed}, and modeling oil weathering and transport in sea ice \cite{afenyo2016modeling}. Moreover, transport processes are of particular importance in atmospheric modeling \cite{nair2010class}. Two families of standard tests are considered for this problem: solid-body rotation and deformational flow \cite{nair2010class}. A benchmark transport test on the sphere, the solid-body rotation of a cosine bell along a great-circle trajectory, was introduced in \cite{williamson1992standard}. Another test, introduced and studied in \cite{nair2008moving}, is a deformational flow (vortex). In Cartesian geometry, two important deformational tests with analytic solutions are the ``Smolarkiewicz test'' \cite{smolarkiewicz1982multi} and the ``Doswell vortex'' \cite{doswell1984kinematic} (see also \cite{staniforth1987comments}). Besides, LeVeque introduced a deformational test in which the flow trajectories are much more complex \cite{leveque1996high}. In \cite{nair2010class}, the authors followed \cite{leveque1996high} and constructed different deformational tests in Cartesian and spherical geometries, some of which are non-divergent flows while the others are divergent. As noted in \cite{flyer2007transport}, geophysical fluid motions on all scales are dominated by the advection process. Therefore, computing accurate numerical solutions of the transport equation plays a particularly important role.
In recent years, diverse numerical methods have been developed to solve the transport equation on the sphere, such as high-order finite volume methods (FVMs) \cite{cheruvu2007spectral,chen2008shallow,zerroukat2004slice}, continuous and discontinuous Galerkin (DG) methods \cite{giraldo2000lagrange,nair2005discontinuous,taylor2007mass}, radial basis functions (RBFs) in spherical coordinates \cite{flyer2007transport}, radial basis functions \cite{flyer2009radial}, adaptive mesh refinement techniques \cite{jablonowski2006block, lauter2007parallel,st2008comparison}, the conservative semi-Lagrangian multi-tracer transport scheme \cite{lauritzen2012standard}, stabilized RBF-generated finite difference (RBF-FD) techniques in spherical coordinates (obtained by adding an artificial hyperviscosity) \cite{fornberg2011stabilization}, global, local, and partition of unity RBF methods combined with the semi-Lagrangian approach in Cartesian coordinates \cite{shankar2018mesh}, and a higher-order compatible finite element scheme for the nonlinear rotating shallow water equations on the sphere \cite{shipton2018higher}. In this article, we employ two meshless techniques, namely the generalized moving least squares (GMLS) and moving kriging least squares (MKLS) approximations, on the sphere in spherical coordinates for discretizing the spatial variables of the transport equation (\ref{Eq-1}). The GMLS technique on subdomains of $\mathbb{R}^d$ was first introduced by Mirzaei and his co-workers \cite{mirzaei2012generalized}. The generalized moving least squares reproducing kernel approach was also introduced and analyzed on $\Omega \subset \mathbb{R}^d$ \cite{salehi2013generalized}. Recently, the GMLS technique on the sphere was introduced and analyzed by Mirzaei in Cartesian coordinates \cite{mirzaei2017direct}. Here, we approximate the term $\nabla u$ in Eq. (\ref{Eq-1}) via GMLS in spherical coordinates. Besides, we develop an MKLS interpolation on the sphere that approximates $\nabla u$ in spherical coordinates. This technique was first introduced in \cite{gu2003moving} for subdomains of $\mathbb{R}^d$ and had not previously been considered on the sphere. In contrast to the GMLS approximation, the MKLS method satisfies the Kronecker delta property \cite{gu2003moving}. On the other hand, it depends on a parameter that plays a role similar to the shape parameter of an RBF interpolation. As discussed in \cite{mirzaei2017direct,schaback2017error}, these methods can be considered as ``generalized finite differences'', in which the differential operators involved in a PDE such as Eq. (\ref{Eq-1}) are approximated at each scattered data point on a local sub-domain. The main advantage of the GMLS and MKLS approximations developed here is that, since the two techniques do not depend on any background mesh or triangulation, they are straightforward to implement for the transport equation on the sphere. In this work, we apply them to a transport equation on the unit sphere using two different sets of points; the proposed methods can just as easily be employed to numerically solve various model equations defined on the sphere in different scientific problems. The temporal variable of Eq. (\ref{Eq-1}) is discretized by a second-order backward differentiation formula (BDF) \cite{li2018second,li2017second}. The remainder of this manuscript is organized as follows. In Section \ref{Sec-2}, the time variable of Eq.
(\ref{Eq-1}) is discretized by a second-order backward differentiation formula. In Section \ref{Sec-3}, the generalized moving least squares approximation in spherical coordinates is presented, together with its application to the advection operator in Eq. (\ref{Eq-1}). In Section \ref{Sec-4}, a new approximation, namely moving kriging least squares, is introduced on the sphere, and the advection operator of an unknown solution is approximated using this technique. In Section \ref{Sec-5}, we obtain the fully discrete scheme of the transport equation (\ref{Eq-1}) defined on the unit sphere from the time and spatial discretizations proposed here. Numerical simulations are reported in Section \ref{Sec-6} for three test problems that have been studied in the literature. Finally, concluding remarks are given in Section \ref{Sec-7}. \section{The time discretization}\label{Sec-2} In this section, we apply a second-order BDF for discretizing the time variable of Eq. (\ref{Eq-1}) \cite{li2018second,li2017second}. For this purpose, the time interval $[0,T]$ is divided uniformly into $M$ sub-intervals such that $T=M\Delta t$, where $\Delta t$ indicates the time step. By defining $t_n:=n\Delta t$ and $u^{n}:=u(t_n)$, a second-order BDF for the transport equation (\ref{Eq-1}) can be written as follows \begin{equation}\label{BDF-1} \dfrac{{3{u^{n + 1}} - 4{u^n} + {u^{n - 1}}}}{{2\Delta t}} + \dfrac{{v_1^{n + 1}}}{{\cos (\theta )}}\dfrac{{\partial {u^{n + 1}}}}{{\partial \lambda }} + v_2^{n + 1} \dfrac{{\partial {u^{n + 1}}}}{{\partial \theta }} = 0,\,\,\,\,\,\,n = 1,2,...,M - 1, \end{equation} where $v_1^{n + 1}$ and $v_2^{n + 1}$ are the components of the velocity field at $t=t_{n+1}$. For the first time step, we use a first-order backward (implicit Euler) step as follows \begin{equation}\label{BDF-2} \dfrac{{{u^1} - {u^0}}}{{\Delta t}} + \dfrac{{v_1^1}}{{\cos (\theta )}}\dfrac{{\partial {u^1}}}{{\partial \lambda }} + v_2^1\frac{{\partial {u^1}}}{{\partial \theta }} = 0, \end{equation} where $v_1^{1}$ and $v_2^{1}$ are the components of the velocity field at $t=t_{1}$. Reformulating Eq. (\ref{BDF-1}), we have \begin{equation}\label{BDF-3} 3{u^{n + 1}} + 2\Delta t\left( {\dfrac{{v_1^{n + 1}}}{{\cos (\theta )}}\dfrac {{\partial {u^{n + 1}}}}{{\partial \lambda }} + v_2^{n + 1}\dfrac{{\partial {u^{n + 1}}}} {{\partial \theta }}} \right) = 4{u^n} - {u^{n - 1}},\,\,\,\,\,\,n = 1,2,...,M - 1. \end{equation} Besides, Eq. (\ref{BDF-2}) can be rewritten in the following form \begin{equation}\label{BDF-4} {u^1} + \Delta t\left( {\dfrac{{v_1^1}}{{\cos (\theta )}} \dfrac{{\partial {u^1}}}{{\partial \lambda }} + v_2^1\dfrac{{\partial {u^1}}}{{\partial \theta }}} \right) = {u^0}. \end{equation} In what follows, we come back to Eqs. (\ref{BDF-3}) and (\ref{BDF-4}) to derive their fully discrete schemes. \section{The GMLS formulation for the advection operator}\label{Sec-3} As mentioned earlier, the GMLS approximation on the sphere was introduced by Mirzaei in his recent work \cite{mirzaei2017direct}. Here, we derive the advection operator (\ref{Eq-2}) applied to a given function $u$ in spherical coordinates via the GMLS approximation. Assume that $u \in C^{m+1}(\mathbb{S}^2)$ is a function defined on the unit sphere $\mathbb{S}^2$, where $\mathbb{S}^2=\{(x,y,z) \in \mathbb{R}^3\,|\, x^2+y^2+z^2=1 \}$. Besides, consider a set of $N$ points $X=\{\bm{x}_{1},\bm{x}_{2},...,\bm{x}_{N}\}$ on $\mathbb{S}^{2}$.
The approximation of $u$ by GMLS can be written as \cite{mirzaei2017direct} \begin{equation}\label{GMLS-1} u(\bm{x}) \approx \overline{u(\bm{x})}=\displaystyle \sum_{j \in I(\bm{x})} a_{j}(\bm{x})u_{j},\,\,\,\, \bm{x} \in \mathbb{S}^{2}, \end{equation} where $u_{j}$ is the value of $u$ at the point $\bm{x}_{j}$, $j \in I(\bm{x})$. Here, $I(\bm{x})$ is the set of indices of the scattered points of $X$ defined as \cite{mirzaei2017direct,wendland2001moving} $$I(\bm{x}):=\{j \in \{1,2,...,N\}: dist(\bm{x},\bm{x}_{j})< \delta\},$$ i.e., the indices of the centers contained in the spherical cap of radius $\delta>0$ around $\bm{x} \in \mathbb{S}^{2}$, where $dist(\bm{x},\bm{x}_{j})$ represents the geodesic distance between $\bm{x}$ and $\bm{x}_{j}$. In spherical coordinates, $\bm{x}:=(x,y,z)^{T} \in \mathbb{S}^2$ can be written as follows, see, e.g., \cite{flyer2007transport} \begin{equation}\label{GMLS-2} x=\cos(\lambda)\cos(\theta),\,\,\,\, y=\sin(\lambda)\cos(\theta),\,\,\,\, z=\sin(\theta). \end{equation} In Eq. (\ref{GMLS-1}), the $a_{j}(\bm{x})$, $j=1,2,...,|I(\bm{x})|$, are constructed for each point $\bm{x} \in \mathbb{S}^2$ in the following vector form \cite{mirzaei2017direct} \begin{equation}\label{GMLS-3} \bm{a}^{\star}(\bm{x})=W(\bm{x})P(\bm{x})\Big(P^{T}(\bm{x})W(\bm{x})P(\bm{x})\Big)^{-1}Y(\bm{x}) =:\left[a_1(\bm{x}),\cdots, a_{|I(\bm{x})|}(\bm{x}) \right]^{T}. \end{equation} Here, $Y$ is the vector of spherical harmonics of degree at most $m$ \cite{atkinson2012spherical,mirzaei2017direct,wendland2001moving}, and $P(\bm{x}) \in \mathbb{R}^{|I(\bm{x})| \times N(3,m)}$ is a matrix whose rows contain the vector of spherical harmonics evaluated at the points $\bm{x}_{j}$, $j \in I(\bm{x})$. $W(\bm{x})$ represents a diagonal matrix of size $|I(\bm{x})| \times |I(\bm{x})|$ with the elements \begin{equation}\label{GMLS-4} w(\bm{x},\bm{y}):=\phi \Big(\dfrac{dist(\bm{x},\bm{y})}{\delta}\Big),~~~~~~ \bm{x}, \bm{y} \in \mathbb{S}^{2}, \end{equation} on its diagonal, where $\phi:[0,\infty) \rightarrow [0,\infty)$ satisfies $\phi(r)>0$ for $r \in [0,1/2]$ and $\phi(r)=0$ for $r\geq 1$ \cite{mirzaei2017direct,wendland2001moving}; the function $w$ with $\delta >0$ is known as the weight function. As mentioned in \cite{mirzaei2017direct,wendland2001moving}, different weight functions of the form (\ref{GMLS-4}) can be considered in this approximation. In this article, the following weight function is used in the GMLS approximation \cite{fasshauer2007meshfree,wendland2004scattered} \[\phi (r) = \left\{ \begin{array}{ll} {(1 - r)^4}(4r + 1), & 0 \le r \le 1,\\ 0, & r> 1, \end{array} \right.\] where $r={dist(\bm{x},\bm{y})}/{\delta}$ and $\bm{x},\bm{y} \in \mathbb{S}^2$. If in Eq. (\ref{GMLS-1}) we consider the spherical gradient, i.e., $\nabla_{\mathbb{S}^2}:=\nabla_{0}$, we obtain \cite{mirzaei2017direct} \begin{equation}\label{GMLS-5} \nabla_{0}u(\bm{x}) \approx \overline{\nabla_{0}u(\bm{x})}=\displaystyle \sum_{j \in I(\bm{x})} a_{j,\nabla_{0}}(\bm{x})u_{j},\,\,\,\, \bm{x} \in \mathbb{S}^{2}, \end{equation} where \begin{equation}\label{GMLS-6} \bm{a}^{\star}_{\nabla_{0}}(\bm{x})=W(\bm{x})P(\bm{x})\Big(P^{T}(\bm{x})W(\bm{x})P(\bm{x})\Big)^{-1}{\nabla_{0}}(Y(\bm{x})), \end{equation} in which $\nabla_{0}$ acts only on the vector of spherical harmonics of degree at most $m$, i.e., $Y(\bm{x})$, $\bm{x} \in \mathbb{S}^2$.
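For concreteness, the weight vectors in Eqs. (\ref{GMLS-3}) and (\ref{GMLS-6}) can be assembled numerically as in the following minimal sketch (given in Python/NumPy purely for illustration; the helpers \texttt{basis}, returning the values of the $N(3,m)$ spherical harmonics at given points, and \texttt{basis\_grad}, returning their surface gradient $\nabla_{0}Y$ at a point, are assumed to be available and are not part of the original formulation):
\begin{verbatim}
import numpy as np

def wendland(r):
    # Weight phi(r) = (1 - r)^4 (4 r + 1) for 0 <= r <= 1, zero otherwise.
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

def gmls_weights(x, X, delta, basis, basis_grad):
    # x: evaluation point on S^2 (unit 3-vector); X: (N, 3) centers.
    # basis(pts) -> (len(pts), Q) matrix of spherical harmonics;
    # basis_grad(x) -> (3, Q) surface gradient of the harmonics at x.
    gdist = np.arccos(np.clip(X @ x, -1.0, 1.0))   # geodesic distances
    I = np.flatnonzero(gdist < delta)              # stencil I(x)
    W = np.diag(wendland(gdist[I] / delta))
    P = basis(X[I])                                # |I(x)| x Q
    rhs = np.vstack([basis(x[None, :]), basis_grad(x)]).T  # [Y, grad Y]
    G = np.linalg.solve(P.T @ W @ P, rhs)
    A = W @ P @ G         # first column: a*(x) of (GMLS-3); remaining
    return I, A[:, 0], A[:, 1:]  # columns: a*_{grad_0}(x) of (GMLS-6)
\end{verbatim}
The three returned gradient columns are the Cartesian components of $\bm{a}^{\star}_{\nabla_{0}}(\bm{x})$; their conversion to the spherical components needed in Eq. (\ref{Eq-2}) is carried out next.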
Due to \cite[Lemma 3.1]{mirzaei2017direct} or \cite[Definition 3.3]{mirzaei2017direct}, it is easy to compute the surface gradient in spherical coordinates. Therefore, the partial derivatives of $a_{j}(\bm{x})$ with respect to $\lambda$ and $\theta$ can be written as follows \begin{align}\label{GMLS-7}\nonumber \dfrac{{\partial {a_j}(\bm{x})}}{{\partial \lambda }} &= {{{\left( {{\nabla _0}} \right)}_1}{a_j}(\bm{x})} \dfrac{{\partial x}}{{\partial \lambda }} + {{{\left( {{\nabla _0}} \right)}_2}{a_j}(\bm{x})} \dfrac{{\partial y}}{{\partial \lambda }} + {{{\left( {{\nabla _0}} \right)}_3}{a_j}(\bm{x})} \dfrac{{\partial z}}{{\partial \lambda }}\\\nonumber \\ &= {{{\left( {{\nabla _0}} \right)}_1}{a_j}(\bm{x})} \left( { - \sin (\lambda )\cos (\theta )} \right) + {{{\left( {{\nabla _0}} \right)}_2}{a_j}(\bm{x})} \left( {\cos (\lambda )\cos (\theta )} \right), \end{align} \begin{align}\label{GMLS-8}\nonumber \dfrac{{\partial {a_j}(\bm{x})}}{{\partial \theta }} &= {{{ {{\left(\nabla _0\right)}}}_1}{a_j}(\bm{x})}\dfrac{{\partial x}}{{\partial \theta }} + {{{\left( {{\nabla _0}} \right)}_2}{a_j}(\bm{x})} \dfrac{{\partial y}}{{\partial \theta }} + {{{\left( {{\nabla _0}} \right)}_3}{a_j}(\bm{x})}\dfrac{{\partial z}}{{\partial \theta }}\\\nonumber \\ &= {{{\left( {{\nabla _0}} \right)}_1}{a_j}(\bm{x})}\left( { - \cos (\lambda )\sin (\theta )} \right) +{{{\left( {{\nabla _0}} \right)}_2}{a_j}(\bm{x})} \left ( { - \sin (\lambda )\sin (\theta )} \right) + {{{\left( {{\nabla _0}} \right)}_3}{a_j}(\bm{x})} ( {\cos (\theta )}), \end{align} where $(\nabla_{0})_{1}$, $(\nabla_{0})_{2}$ and $(\nabla_{0})_{3}$ are the components of $\nabla_{0}$. Inserting Eqs. (\ref{GMLS-7}) and (\ref{GMLS-8}) into Eq. (\ref{Eq-2}) yields \begin{equation}\label{GMLS-9} \nabla a_{j}(\bm{x})= \left( \dfrac{1}{\cos(\theta)}\dfrac{\partial a_{j}(\bm{x})}{\partial \lambda}, \dfrac{\partial a_{j}(\bm{x})}{\partial \theta} \right)^{T}:=\left(G_{\lambda},G_{\theta}\right)^{T}, \end{equation} where \begin{align*} G_{\lambda}&={{{\left( {{\nabla _0}} \right)}_1}{a_j}(\bm{x})} \left( { - \sin (\lambda )} \right) + {{{\left( {{\nabla _0}} \right)}_2}{a_j}(\bm{x})} \left( {\cos (\lambda)} \right),\\ \\ G_{\theta}&= {{{\left( {{\nabla _0}} \right)}_1}{a_j}(\bm{x})}\left( { - \cos (\lambda )\sin (\theta )} \right) +{{{\left( {{\nabla _0}} \right)}_2}{a_j}(\bm{x})} \left ( { - \sin (\lambda )\sin (\theta )} \right) + {{{\left( {{\nabla _0}} \right)}_3}{a_j}(\bm{x})} ( {\cos (\theta )}). \end{align*} \section{The MKLS formulation for the advection operator}\label{Sec-4} Our goal in this part is to introduce a new approximation, namely MKLS, on the unit sphere. Previously, this technique had only been given on subdomains of $\mathbb{R}^{d}$ \cite{gu2003moving}. Here, we first introduce the methodology of the MKLS technique on the sphere, and then approximate $\nabla u$ for a given function $u$ defined on the unit sphere in spherical coordinates using this approach. We suppose that $u \in C^{m+1}(\mathbb{S}^2)$ is a function defined on $\mathbb{S}^2$. Also, we consider a set of $N$ points on the unit sphere. The approximation $\overline{u}$ of the function $u$ is given by \begin{equation}\label{1-MK} u({\bm{x}}) \approx \overline{u(\bm{x})} = {\bm{Y}^T}({\bm{x}}){\bm{a}}({\bm{x}}) + Z({\bm{x}}),\,\,\,\,\,\,\,{\bm{x}} \in \mathbb{S}^{2}, \end{equation} where $\bm{Y}(\bm{x})$ and $\bm{a} (\bm{x})$ are the vector of spherical harmonics of degree at most $m$ and the vector of unknown coefficients, respectively.
Also, $Z(\bm{x})$ represents the realization of a stochastic process with mean zero, variance $\sigma^{2}$ and non-zero covariance. The matrix form of the covariance can be written as \begin{equation}\label{Matrix-cov} {\mathop{\rm cov}} \left\{ {Z({{\bm{x}}_i}),Z({{\bm{x}}_j})} \right\} = {\sigma ^2} {\bm{R}}\left[ {R({{\bm{x}}_i},{{\bm{x}}_j})} \right],\,\,\,\,\,\,i,j = 1,2,...,N, \end{equation} where ${\bm{R}}\left[ {R({{\bm{x}}_i},{{\bm{x}}_j})} \right]$ and ${R({{\bm{x}}_i},{{\bm{x}}_j})}$ are called the correlation matrix and the correlation function between any pair of points located at $\bm{x}_{i}$ and $\bm{x}_{j}$ on $\mathbb{S}^2$, respectively. The following Gaussian function can be chosen as the correlation function \begin{equation}\label{correlation} R({{\bm{x}}_i},{{\bm{x}}_j}) = e^{ - c\, dist(\bm{x}_{i},\bm{x}_{j})^2}, \end{equation} where $c >0$ denotes the value of the correlation parameter, which can affect the approximate solution. Following \cite{gu2003moving}, Eq. (\ref{1-MK}) can be written as \begin{equation}\label{2-MK} \overline{u(\bm{x})} = {\bm{Y}^T}({\bm{x}}){\left( {{{\bm{P}}^T}{{\bm{R}}^{ - 1}}{\bm{P}}} \right)^{ - 1}}{{\bm{P}}^T}{{\bm{R}}^{ - 1}}{\bm{u}} + {\bm{r}^T}({\bm{x}}){{\bm{R}}^{ - 1}}\left( {{\bf{I}} - {\bm{P}}{{\left( {{{\bm{P}}^T}{{\bm{R}}^{ - 1}}{\bm{P}}} \right)}^{ - 1}}{{\bm{P}}^T}{{\bm{R}}^{ - 1}}} \right)\bm{u},\,\,\,\,\,\,\,\,\,\,{\bm{x}} \in \mathbb{S}^2, \end{equation} where the vector $r^{T}(\bm{x})=[R(\bm{x}_{1},\bm{x})\,\, R(\bm{x}_{2},\bm{x}) ... R(\bm{x}_{|I(\bm{x}_{c})|},\bm{x})]$ and $\bm{x}_{c}$ is the evaluation point on $\mathbb{S}^2$. From Eq. (\ref{2-MK}), the shape functions of the MKLS approximation on the unit sphere are given by \begin{align}\label{4-MK}\nonumber {\bm{a}^{T}}{({\bm{x}})} :=& {\bm{Y}^T}({\bm{x}}){\left( {{{\bm{P}}^T}{{\bm{R}}^{ - 1}}{\bm{P}}} \right)^{ - 1}}{{\bm{P}}^T}{{\bm{R}}^{ - 1}} + {\bm{r}^T}({\bm{x}}){{\bm{R}}^{ - 1}}\left( {{\bf{I}} - {\bm{P}} {{\left( {{{\bm{P}}^T}{{\bm{R}}^{ - 1}}{\bm{P}}} \right)}^{ - 1}}{{\bm{P}}^T}{{\bm{R}}^{ - 1}}} \right)\\\nonumber \\ =& \left[ {{a_1}({\bm{x}})\,\,\,\,\,{a_2}({\bm{x}})\,\,...\,\,\,{a_{|I({{\bm{x}}_c})|}}({\bm{x}})} \right]. \end{align} The approximation of a function $u$ can then be given as \begin{equation}\label{5-MK} \overline{u(\bm{x})}=\displaystyle \sum_{j \in I(\bm{x})} a_{j}(\bm{x})u_{j},\,\,\,\, \bm{x} \in \mathbb{S}^2, \end{equation} where the $a_{j}(\bm{x})$, $j \in I(\bm{x})$, are obtained from (\ref{4-MK}), and $u_{j}$ is the value of $u$ at the point $\bm{x}_{j}$. As shown in \cite{gu2003moving}, the shape functions of the MKLS approximation constructed on $\Omega \subset \mathbb{R}^d$ satisfy Kronecker's delta property. This property is retained by the shape functions (\ref{4-MK}) obtained on the unit sphere. Now, we approximate the surface gradient operator $\nabla u$ using the MKLS method.
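Before differentiating, we note that the shape functions in Eq. (\ref{4-MK}) can be assembled directly; a minimal Python/NumPy sketch is given below (illustrative only; \texttt{basis} is the same hypothetical spherical-harmonic helper assumed in the GMLS sketch, and the stencil is selected exactly as in GMLS):
\begin{verbatim}
import numpy as np

def mkls_shape_functions(x, X, c, delta, basis):
    # Gaussian correlation R(x_i, x_j) = exp(-c dist(x_i, x_j)^2),
    # with dist the geodesic distance on the unit sphere.
    gdist = np.arccos(np.clip(X @ x, -1.0, 1.0))
    I = np.flatnonzero(gdist < delta)              # stencil I(x_c)
    XI = X[I]
    D = np.arccos(np.clip(XI @ XI.T, -1.0, 1.0))   # pairwise distances
    R = np.exp(-c * D**2)                          # correlation matrix
    r = np.exp(-c * gdist[I]**2)                   # vector r(x)
    P = basis(XI)                                  # |I(x_c)| x Q
    Y = basis(x[None, :]).ravel()                  # Y(x)
    RinvP = np.linalg.solve(R, P)                  # R^{-1} P
    M = np.linalg.solve(P.T @ RinvP, RinvP.T)      # (P^T R^-1 P)^-1 P^T R^-1
    a = Y @ M + np.linalg.solve(R, r) @ (np.eye(len(I)) - P @ M)
    return I, a               # shape functions a_j(x) of Eq. (4-MK)
\end{verbatim}
The surface-gradient weights then follow by differentiating $\bm{Y}^{T}(\bm{x})$ and $\bm{r}^{T}(\bm{x})$ in (\ref{4-MK}), as described next.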
The partial derivatives of $a_{j}(\bm{x})$ with respect to $\lambda$ and $\theta$ can be obtained as \begin{align}\label{6-MK}\nonumber \dfrac{{\partial {a_j}(\bm{x})}}{{\partial \lambda }} &= {{{\left( {{\nabla _0}} \right)}_1}{a_j}(\bm{x})} \dfrac{{\partial x}}{{\partial \lambda }} + {{{\left( {{\nabla _0}} \right)}_2}{a_j}(\bm{x})} \dfrac{{\partial y}}{{\partial \lambda }} + {{{\left( {{\nabla _0}} \right)}_3}{a_j}(\bm{x})} \dfrac{{\partial z}}{{\partial \lambda }}\\\nonumber \\ &= {{{\left( {{\nabla _0}} \right)}_1}{a_j}(\bm{x})} \left( { - \sin (\lambda )\cos (\theta )} \right) + {{{\left( {{\nabla _0}} \right)}_2}{a_j}(\bm{x})} \left( {\cos (\lambda )\cos (\theta )} \right), \end{align} and \begin{align}\label{7-MK}\nonumber \dfrac{{\partial {a_j}(\bm{x})}}{{\partial \theta }} &= {{{ {{\left(\nabla _0\right)}}}_1}{a_j}(\bm{x})}\dfrac{{\partial x}}{{\partial \theta }} + {{{\left( {{\nabla _0}} \right)}_2}{a_j}(\bm{x})} \dfrac{{\partial y}}{{\partial \theta }} + {{{\left( {{\nabla _0}} \right)}_3}{a_j}(\bm{x})}\dfrac{{\partial z}}{{\partial \theta }}\\\nonumber \\ &= {{{\left( {{\nabla _0}} \right)}_1}{a_j}(\bm{x})}\left( { - \cos (\lambda )\sin (\theta )} \right) +{{{\left( {{\nabla _0}} \right)}_2}{a_j}(\bm{x})} \left ( { - \sin (\lambda )\sin (\theta )} \right) + {{{\left( {{\nabla _0}} \right)}_3}{a_j}(\bm{x})} ({\cos (\theta )}), \end{align} where ${\left( {{\nabla _0}} \right)}_1$, ${\left( {{\nabla _0}} \right)}_2$ and ${\left( {{\nabla _0}} \right)}_3$ act on the vector functions $\bm{Y}^{T}(\bm{x})$ and ${\bm{r}^T}({\bm{x}})$ according to Eq. (\ref{4-MK}). The partial derivatives of $\bm{Y}^{T}(\bm{x})$ with respect to $\lambda$ and $\theta$ can be obtained similarly to the GMLS approximation described in the previous section. On the other hand, since ${\bm{r}^T}({\bm{x}})$ is a radial function, its partial derivatives with respect to $\lambda$ and $\theta$ can be computed in the same way as in \cite{flyer2007transport}; $\nabla a_{j}(\bm{x})$ can then be computed at each point $\bm{x} \in \mathbb{S}^2$. \section{The fully discrete scheme}\label{Sec-5} In this section, we apply the two approximations given in Sections \ref{Sec-3} and \ref{Sec-4} to discretize the spatial variables of the semi-discretized equations (\ref{BDF-3}) and (\ref{BDF-4}). We consider $N$ points $X=\{\bm{x}_{1},\bm{x}_{2},...,\bm{x}_{N}\}$ on the unit sphere in spherical coordinates, and approximate the solution $u^{n+1}$ at each point $\bm{x} \in \mathbb{S}^2$ by \begin{equation}\label{full-1} u^{n+1}(\bm{x})\approx \displaystyle \sum_{j \in I(\bm{x})} a_{j}(\bm{x})u^{n+1}_{j}, \end{equation} where $a_{j}(\bm{x})$ can be chosen from (\ref{GMLS-1}) or (\ref{5-MK}), and $n=0,1,...,M-1$. The surface gradient $\nabla u^{n+1}$ can be approximated by \begin{equation}\label{full-2} \nabla u^{n+1}(\bm{x})\approx \displaystyle \sum_{j \in I(\bm{x})}\nabla a_{j}(\bm{x})u^{n+1}_{j}, \end{equation} where the $\nabla a_{j}(\bm{x})$ are defined by the GMLS or MKLS approximation. Substituting Eqs. (\ref{full-1}) and (\ref{full-2}) into Eq. (\ref{BDF-4}) at each point $\bm{x}_{i}$ for $n=0$ gives \begin{equation}\label{full-3} \sum_{j \in I(\bm{x}_{i})} a_{j}(\bm{x}_{i})u^{1}_{j}+\Delta t \,\bm{v}^{1}\cdot\displaystyle \sum_{j \in I(\bm{x}_{i})}\nabla a_{j}(\bm{x}_{i})u^{1}_{j}=\sum_{j \in I(\bm{x}_{i})} a_{j}(\bm{x}_{i})u^{0}_{j}, \end{equation} where $i=1,2,...,N$. Substituting Eqs. (\ref{full-1}) and (\ref{full-2}) into Eq.
(\ref{BDF-3}) yields \begin{equation}\label{full-4} 3\sum_{j \in I(\bm{x}_{i})} a_{j}(\bm{x}_{i})u^{n+1}_{j}+2\Delta t \,\bm{v}^{n+1}\cdot\displaystyle \sum_{j \in I(\bm{x}_{i})}\nabla a_{j}(\bm{x}_{i})u^{n+1}_{j}=4\sum_{j \in I(\bm{x}_{i})} a_{j}(\bm{x}_{i})u^{n}_{j}- \sum_{j \in I(\bm{x}_{i})} a_{j}(\bm{x}_{i})u^{n-1}_{j}, \end{equation} for $i=1,2,...,N$ and $n=1,2,...,M-1$. The matrix form of Eq. (\ref{full-3}) can be written as \begin{equation}\label{full-5} A_{X}U^{1}_{X}+\Delta t \left(({v}^{1}_{1}.*B^1_{X})U^{1}_{X}+({v}^{1}_{2}.*B^2_{X})U^{1}_{X}\right)=A_{X}U^{0}_{X}, \end{equation} where $A_{X}$, $B^1_{X}$ and $B^2_{X}$ are the global matrices \[{A_X} = {\left[ {{a_j}({\bm{x}_i})} \right] _{1 \le i \le N,1 \le j \le N }},\,\,\,\,\,\,\,\,\,\,\,\, {B^1_X} = {\left[ \dfrac{1}{\cos(\theta_{i})}\dfrac{ \partial a_{j}({\bm{x}_i})}{\partial \lambda} \right]_{1 \le i \le N,1 \le j \le N }},\] \[{B^2_X} = {\left[\dfrac{ \partial a_{j}({\bm{x}_i})}{\partial \theta} \right]_{1 \le i \le N,1 \le j \le N }}.\] $U^{0}_{X}$ and $U^{1}_{X}$ are the vectors of the approximation at $t=t_{0}$ and $t=t_{1}$, respectively, and ${v}^{1}_{1}$ and $v^1_{2}$ are the vectors of the velocity field at the $N$ points at $t=t_{1}$. In MATLAB notation, $.*$ denotes the pointwise product that scales each row of the matrix $B^1_{X}$ (respectively $B^2_{X}$) by the corresponding entry of ${v}^{1}_{1}$ (respectively ${v}^{1}_{2}$). Similarly, Eq. (\ref{full-4}) can be represented in the following matrix form \begin{equation}\label{full-6} 3A_{X}U^{n+1}_{X}+2\Delta t\left(({v}^{n+1}_{1}.*B^1_{X})U^{n+1}_{X}+({v}^{n+1}_{2}.*B^2_{X})U^{n+1}_{X}\right)=4A_{X}U^{n}_{X}-A_{X}U^{n-1}_{X}, \end{equation} where $U^{n-1}_{X}$, $U^{n}_{X}$ and $U^{n+1}_{X}$ are the vectors of the approximation at $t=t_{n-1}$, $t=t_{n}$ and $t=t_{n+1}$, respectively. To solve the linear systems of algebraic equations obtained here, i.e., (\ref{full-5}) and (\ref{full-6}), an iterative algorithm, namely the biconjugate gradient stabilized (BiCGSTAB) method with a zero-fill incomplete lower-upper (ILU) preconditioner, is employed. In the literature \cite{lehto2017radial}, this method has been shown to efficiently solve the linear systems generated by a third-order semi-implicit backward differentiation formula combined with a meshless technique for the solution of reaction-diffusion equations on surfaces. It should also be noted that this algorithm is intended for linear systems with large sparse coefficient matrices \cite{lehto2017radial}. With the approximations presented here, the final coefficient matrices in Eqs. (\ref{full-5}) and (\ref{full-6}) are sparse, and thus the BiCGSTAB algorithm can be applied without any difficulty, as we will observe in the next section. \begin{figure}[ht] \centering \includegraphics[width=16cm,height=12cm]{Fig24.pdf} \vspace{-2cm} \caption{Set of PTS points distributed on the sphere, with an example of a spherical cap of radius $\delta$.} \label{fig-0} \end{figure} \section{Numerical results}\label{Sec-6} In this section, in order to investigate the ability of the proposed methods, we use three standard tests that have been proposed in the literature \cite{fornberg2011stabilization,nair1999cascade,nair2010class,shankar2018mesh}.
The first case is the well-known ``solid-body rotation of a cosine bell'' \cite{shankar2018mesh}, the second is called ``vortex roll-up'' \cite{fornberg2011stabilization,nair1999cascade}, and the last is a ``deformational flow'' \cite{nair2010class}. The numerical results reported here use quasi-uniformly distributed point sets $X$, known as minimum energy (ME) \cite{flyer2007transport,womersley2003interpolation} and phyllotaxis spiral (PTS) \cite{shankar2018mesh} points, whose fill distance $h$ is proportional to $N^{-1/2}$, where $N$ is the number of points distributed on the unit sphere. For the reader's convenience, the procedure of the presented numerical methods is summarized in Algorithm \ref{Algorithm1} and Algorithm \ref{Algorithm2}. \begin{algorithm} \caption{Computational algorithm of the GMLS (or MKLS) approximation}\label{Algorithm1} \begin{algorithmic}\vspace{0.15cm} \State \textbf{Input:} data points on $\mathbb{S}^2$, $X=[\bm{x}_{1},\bm{x}_{2},...,\bm{x}_{N}]^{T} \in \mathbb{R}^{N \times 3}$; $\delta>0$, the radius of the local spherical cap; \vspace{0.15cm}\State \textbf{Output:} matrices $A_{X}$, $B^{1}_{X}$, $B^{2}_{X}$; \vspace{0.15cm}\For {$i=1,2,...,N$} \vspace{0.15cm}\State Construct $I(\bm{x}_{i})$ from $X$; \vspace{0.15cm}\State Compute the local matrix $P^{T}WP$ from the points in $I(\bm{x}_{i})$; \vspace{0.15cm}\State Compute the vectors $Y(\bm{x}_{i})$ and $\nabla_{0}Y(\bm{x}_{i})$ appearing in Eqs. (\ref{GMLS-3}) and (\ref{GMLS-6}); \vspace{0.15cm}\State Compute the vectors $\bm{a}^{\star}(\bm{x}_{i})$ and $\bm{a}^{\star}_{\nabla_{0}}(\bm{x}_{i})$ from Eqs. (\ref{GMLS-3}) and (\ref{GMLS-6}) (or, for the MKLS approximation, compute Eq. (\ref{4-MK}) and the components of $\bm{a}^{\star}_{\nabla_{0}}(\bm{x}_{i})$); \vspace{0.15cm}\State $A_{X}(i,:) \gets \bm{a}^{\star}(\bm{x}_{i})$; \vspace{0.15cm}\State $B^{1}_{X}(i,:) \gets$ the first component of $\nabla \bm{a}^{\star}(\bm{x}_{i})$ in Eq. (\ref{GMLS-9}); \vspace{0.15cm}\State $B^{2}_{X}(i,:) \gets$ the second component of $\nabla \bm{a}^{\star}(\bm{x}_{i})$ in Eq. (\ref{GMLS-9}); \vspace{0.15cm} \EndFor \vspace{0.15cm} \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Computational algorithm for solving the transport equation on $\mathbb{S}^2$}\label{Algorithm2} \begin{algorithmic} \vspace{0.15cm} \State \textbf{Input:} data points on $\mathbb{S}^2$, $X=[\bm{x}_{1},\bm{x}_{2},...,\bm{x}_{N}]^{T} \in \mathbb{R}^{N \times 3}$; $\delta>0$, the radius of the local spherical cap; the final time $T$; the time step $\Delta t$; \vspace{0.15cm}\State \textbf{Output:} approximate solution $U_{X}$ at the final time; \vspace{0.15cm}\State Call $A_{X},B^{1}_{X},B^{2}_{X}$ from Algorithm \ref{Algorithm1}; \vspace{0.15cm}\State Set the initial condition $U^{0}_{X}=\{u(\bm{x}_{i},0)\}_{i=1}^{N}$, $t=0$, $m=0$; \vspace{0.15cm} \State Set $m=1$, $t=m \Delta t$ and compute the velocity vector $\bm{v}$ at this time; \vspace{0.15cm} \State Apply the zero-fill incomplete lower-upper (ILU) preconditioner to Eq. (\ref{full-5}); \vspace{0.15cm} \State Find $U^1_{X}$ by solving the linear system (\ref{full-5}); \While {$t \leq T$} \vspace{0.15cm} \State Set $m=m+1$, $t=m \Delta t$ and compute the velocity vector $\bm{v}$ at this time; \vspace{0.15cm} \State Apply the zero-fill incomplete lower-upper (ILU) preconditioner to Eq. (\ref{full-6}); \vspace{0.15cm} \State Find $U^m_{X}$ by solving the linear system (\ref{full-6}); \EndWhile \vspace{0.15cm} \end{algorithmic} \end{algorithm} Figure \ref{fig-0} illustrates $2500$ PTS points with a spherical cap.
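As a complement to Algorithm \ref{Algorithm2}, the per-step linear solve of Eq. (\ref{full-6}) can be sketched as follows with SciPy's sparse solvers (an illustrative fragment only, not the MATLAB code used in this work; the sparse matrices \texttt{A}, \texttt{B1}, \texttt{B2} are assumed to come from Algorithm \ref{Algorithm1}, and \texttt{v1}, \texttt{v2} are the velocity samples at the current time level):
\begin{verbatim}
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def bdf2_step(A, B1, B2, v1, v2, U_n, U_nm1, dt):
    # One step of Eq. (full-6): scale the rows of B1 and B2 by the
    # velocity samples (the MATLAB ".*" of the text), assemble the
    # sparse system and solve it by ILU-preconditioned BiCGSTAB.
    C = (3.0 * A + 2.0 * dt * (sp.diags(v1) @ B1
                               + sp.diags(v2) @ B2)).tocsc()
    rhs = A @ (4.0 * U_n - U_nm1)
    ilu = spla.spilu(C, fill_factor=1.0)  # approximates zero-fill ILU
    M = spla.LinearOperator(C.shape, ilu.solve)
    U_np1, info = spla.bicgstab(C, rhs, x0=U_n, M=M)
    assert info == 0, "BiCGSTAB did not converge"
    return U_np1
\end{verbatim}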
The $\ell_{2}$ norm is computed to show the accuracy of the two proposed meshless methods; it is defined as follows \begin{equation}\label{Integration-1} \left(\displaystyle \int_{\mathbb{S}^2}[f(\bm{x})]^{2}d\bm{x}\right)^{\frac{1}{2}} \approx \left(\displaystyle \dfrac{4 \pi}{N}\sum_{j=1}^{N} [f(\bm{\eta}_{j})]^{2}\right)^{\frac{1}{2}}:=\|f\|_{\ell_{2}}, \end{equation} where $\{\bm{\eta}_{1},\bm{\eta}_{2},...,\bm{\eta}_{N}\}$ is a set of $N$ spherical $t$-design points on the unit sphere \cite{atkinson2012spherical,womersley2003interpolation}. All simulations presented here are run on a $2.2$ GHz Intel Core i7-2670QM CPU with $8$ GB of RAM, and all self-developed codes are written in MATLAB (version 2017a) in standard double precision. \begin{figure}[t!] \centering \includegraphics[width=6.25cm,height=5.25cm]{Fig1.png}\hspace{1cm} \includegraphics[width=6.25cm,height=5.25cm]{Fig2.png}\vspace{1cm} \includegraphics[width=6.25cm,height=5.25cm]{Fig3.png} \hspace{1cm} \includegraphics[width=6.25cm,height=5.5cm]{Fig4.png} \caption{The 3D simulation of one full revolution of the bell over the sphere via the GMLS approximation for the solid-body rotation test (given in Subsection \ref{61}) at $t=T/8$ (top left), $t=T/4$ (top right), $t=T/2$ (bottom left), and $t=T$ (bottom right).} \label{fig-1} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=5.8cm,height=5.25cm]{Fig5.png}\hspace{1cm} \includegraphics[width=6.05cm,height=5.25cm]{Fig6.png}\vspace{1cm} \includegraphics[width=6.05cm,height=5.25cm]{Fig7.png}\hspace{1cm} \includegraphics[width=6.05cm,height=5.25cm]{Fig8.png} \caption{The 3D simulation of one full revolution of the bell over the sphere via the MKLS approximation for the solid-body rotation test (given in Subsection \ref{61}) at $t=T/8$ (top left), $t=T/4$ (top right), $t=T/2$ (bottom left), and $t=T$ (bottom right).} \label{fig-2} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=8.25cm,height=5.5cm]{Fig25.png}\hspace{1cm} \includegraphics[width=8.25cm,height=5.5cm]{Fig26.png} \caption{The CPU time used for constructing all required matrices in Algorithm \ref{Algorithm1} in the GMLS and MKLS approximations for different values of $N$: PTS points (left) and ME points (right).} \label{fig-2-1} \end{figure} \subsection{Solid-body rotation of a cosine bell test}\label{61} As the first standard test, we consider the transport equation (\ref{Eq-1}) on the unit sphere with the following velocity field \cite{williamson1992standard,shankar2018mesh} $$v_{1}(\lambda,\theta)=\sin(\theta)\sin(\lambda)\sin(\alpha)-\cos(\theta)\cos(\alpha),\,\,\,\,\,\, v_{2}(\lambda,\theta)=\cos(\lambda)\sin(\alpha),$$ where $-\pi \leq \lambda \leq \pi$ and $-\pi/2 \leq \theta \leq \pi/2$. Here, we have chosen $\alpha=\pi/2$, which advects the initial condition directly over the north and south poles \cite{shankar2018mesh}. The initial condition for this test is \cite{shankar2018mesh} \begin{equation} \label{Initial-1} u(\lambda ,\theta,t=0 ) = \left\{ \begin{array}{ll} \dfrac{1}{2}\left( {1 + \cos \left( {\dfrac{{\pi r}}{{{R_b}}}} \right)} \right), & r < {R_b},\\ 0, & r \ge {R_b}, \end{array} \right. \end{equation} where $r=\arccos(\cos(\theta)\cos(\lambda))$ and $R_{b}=\frac{1}{2}$. This example illustrates one full revolution of the bell over the sphere, completed at $T=2 \pi$.
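Both the error measure (\ref{Integration-1}) and this test's setup are straightforward to transcribe; the following minimal sketch (illustrative Python, vectorized over arrays of longitudes and latitudes) evaluates them:
\begin{verbatim}
import numpy as np

def l2_norm(f_vals):
    # Discrete l2 norm of (Integration-1), with f sampled at N
    # spherical t-design points.
    return np.sqrt(4.0 * np.pi / f_vals.size * np.sum(f_vals**2))

def solid_body_velocity(lam, th, alpha=np.pi / 2):
    # Velocity field of the solid-body rotation test; alpha = pi/2
    # advects the bell directly over the poles.
    v1 = np.sin(th) * np.sin(lam) * np.sin(alpha) \
         - np.cos(th) * np.cos(alpha)
    v2 = np.cos(lam) * np.sin(alpha)
    return v1, v2

def cosine_bell(lam, th, Rb=0.5):
    # Initial condition (Initial-1): a cosine bell of radius Rb, with
    # r the great-circle distance from (lambda, theta) = (0, 0).
    r = np.arccos(np.cos(th) * np.cos(lam))
    return np.where(r < Rb, 0.5 * (1.0 + np.cos(np.pi * r / Rb)), 0.0)
\end{verbatim}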
To simulate this process via the meshless methods presented here, we fix $N=19600$ PTS points, $\delta=12h$ with $h=N^{-1/2}$, and $\Delta t=T/1000$. Also, the constant parameter in the MKLS method is set experimentally to $c=20/h$ \cite{gu2003moving}. Figures \ref{fig-1} and \ref{fig-2} show the numerical solutions of $u$ at the time levels $t=T/8,~T/4,~T/2$ and $t=T$ using the GMLS and MKLS approximations. The results obtained via the two methods are in good agreement with those reported in the literature \cite{williamson1992standard,shankar2018mesh}. In Figure \ref{fig-2-1}, the CPU time used for constructing all required matrices in Algorithm \ref{Algorithm1} is given for both approximations and different values of $N$. Table \ref{Table1-1} shows the CPU time used by the BiCGSTAB method in both techniques for different values of $N$ during the above simulations. In Tables \ref{Table-1} and \ref{Table-2}, the $\ell_{2}$ errors are computed for different values of $N$ via the two techniques, respectively. As can be observed in the results, the GMLS and MKLS approximations have almost the same accuracy in solving this example. \begin{table} \begin{center} \begin{tabular}{lllllllllllllllllll} \hline $\textbf{Method}$&&&&$N$ &&&&$\textbf{CPU time}\,(s)$ \\ \hline \vspace{0.1cm} $\textbf{GMLS}$&&&&$1600$&&&& $3.21$ \\ \vspace{0.1cm} &&&&$6400$&&&& $10.66$ \\ \vspace{0.1cm} &&&&$19600$&&&& $28.73$ \\ \hline \vspace{0.1cm}$\textbf{MKLS}$&&&&$1600$&&&& $3.26$ \\ &&&&$6400$&&&& $10.96$ \\\vspace{0.1cm} &&&&$19600$&&&& $26.50$ \\ \hline \end{tabular} \caption{The CPU time used by the BiCGSTAB method \\for different values of $N$ for the first test.}\label{Table1-1} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{llllllllllllll} \hline &&\multicolumn{2}{l}{PTS}&&\multicolumn{2}{l}{ME} \\ \cline{3-4} \cline{6-7} $N$&&$\ell_{2}$ &&&& $\ell_{2}$ \\ \hline \vspace{0.1cm}$400$ && $2.59\e-1$ &&&& $2.53\e-1$ \\ \vspace{0.1cm} $1600$ && $1.72\e-1$ &&&& $1.71\e-1$ \\ \vspace{0.1cm} $6400$ && $4.66\e-2$ &&&& $4.64\e-2$ \\ \vspace{0.1cm} $16641$ && $2.05\e-2$ &&&& $2.06\e-2$ \\ \hline \end{tabular} \caption{The $\ell_{2}$-error for different values of $N$ \\ in the GMLS approximation for the first test problem.}\label{Table-1} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{llllllllllllll} \hline &&\multicolumn{2}{l}{PTS}&&\multicolumn{2}{l}{ME} \\ \cline{3-4} \cline{6-7} $N$&&$\ell_{2}$ &&&& $\ell_{2}$ \\ \hline \vspace{0.1cm} $400$ && $2.68\e-1$ &&&& $2.55\e-1$ \\ \vspace{0.1cm} $1600$ && $2.47\e-1$ &&&& $2.43\e-1$ \\ \vspace{0.1cm} $6400$ && $9.53\e-2$ &&&& $8.73\e-2$ \\ \vspace{0.1cm} $16641$ && $2.36\e-2$ &&&& $1.97\e-2$ \\ \hline \end{tabular} \caption{The $\ell_{2}$-error for different values of $N$ \\ in the MKLS approximation for the first test.}\label{Table-2} \end{center} \end{table} \begin{figure}[t!] \centering \includegraphics[width=6.5cm,height=5.25cm]{Fig9.png}\hspace{1.5cm} \includegraphics[width=6.5cm,height=5.25cm]{Fig10.png} \includegraphics[width=6.5cm,height=5.25cm]{Fig11.png} \caption{The 3D simulation of the vortex roll-up via the GMLS approximation for the second test (given in Subsection \ref{62}) at $t=3$ (top left), $t=6$ (top right), and $t=9$ (bottom).} \label{fig-3} \end{figure} \subsection{Vortex roll-up test}\label{62} As the second standard test for the transport equation on the unit sphere, we consider the vortex roll-up test case, a deformational flow that models idealized cyclogenesis \cite{fornberg2011stabilization,nair1999cascade}.
We solve the transport equation (\ref{Eq-1}) with the following velocity field \cite{fornberg2011stabilization,nair1999cascade} $$v_{1}(\lambda,\theta)=\omega(\theta)\cos(\theta),\,\,\,\,\,\,v_{2}(\lambda,\theta)=0,$$ where \[\omega \left( \theta \right) = \left\{ \begin{array}{ll} \dfrac{{3\sqrt 3 }}{{2\rho \left( \theta \right)}}\,\operatorname{sech}^{2} \left( {\rho \left( \theta \right)} \right)\tanh \left( {\rho \left( \theta \right)} \right), & \rho \left( \theta \right) \ne 0,\\ 0, & \rho \left( \theta \right) = 0, \end{array} \right.\] with $\rho(\theta)=\rho_{0}\cos(\theta)$, where $\rho_{0}$ controls the radial extent of the vortex \cite{fornberg2011stabilization,nair1999cascade}. The analytical solution of this test is given by \cite{fornberg2011stabilization,nair1999cascade} $$u(\lambda,\theta,t)=1-\tanh\left( \dfrac{\rho(\theta)}{\zeta} \sin(\lambda-\omega(\theta)t)\right),\,\,\,\, t\geq 0.$$ For the simulations reported here, we have chosen $\rho_{0}=3$ and $\zeta=5$, as considered previously in \cite{fornberg2011stabilization}. All required parameters of the two approximations are chosen as in the previous test, with $\Delta t=1/1000$. In Figures \ref{fig-3} and \ref{fig-4}, we show the numerical solutions of Eq. (\ref{Eq-1}) on the unit sphere via $N=19600$ PTS points at the time levels $t=3,6$ and $t=9$ obtained by the GMLS and MKLS approximations. The results obtained in this test agree with those reported in \cite{fornberg2011stabilization}. In Table \ref{Table1-2}, the CPU time used by both techniques for different values of $N$ during the simulations is given. In Tables \ref{Table-3} and \ref{Table-4}, the $\ell_{2}$ errors at $T=3$ are reported for different values of $N$ and $\Delta t=T/1000$ via the GMLS and MKLS approximations, respectively. The results given here are in good agreement with those reported in \cite{fornberg2011stabilization}, and both approximations again show almost the same accuracy. \begin{figure}[t!]
\centering \includegraphics[width=6.5cm,height=5.25cm]{Fig12.png}\hspace{1.5cm} \includegraphics[width=6.5cm,height=5.25cm]{Fig13.png} \includegraphics[width=6.5cm,height=5.25cm]{Fig14.png} \caption{The 3D simulation of the vortex roll-up via the MKLS approximation for the second test (given in Subsection \ref{62}) at $t=3$ (top left), $t=6$ (top right), and $t=9$ (bottom).} \label{fig-4} \end{figure} \begin{table} \begin{center} \begin{tabular}{lllllllllllllllllll} \hline $\textbf{Method}$&&&&$N$ &&&&$\textbf{CPU time}\,(s)$ \\ \hline \vspace{0.1cm} $\textbf{GMLS}$&&&&$1600$&&&& $28.50$ \\ \vspace{0.1cm} &&&&$6400$&&&& $108.86$ \\ \vspace{0.1cm} &&&&$19600$&&&& $256.68$ \\ \hline \vspace{0.1cm} $\textbf{MKLS}$&&&&$1600$&&&& $20.94$ \\ \vspace{0.1cm} &&&&$6400$&&&& $68.64$ \\ \vspace{0.1cm} &&&&$19600$&&&& $178.37$ \\ \hline \end{tabular} \caption{The CPU time used by the BiCGSTAB method \\ for different values of $N$ for the second test problem.}\label{Table1-2} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{llllllllllllll} \hline &&\multicolumn{2}{l}{PTS}&&\multicolumn{2}{l}{ME} \\ \cline{3-4} \cline{6-7} $N$&&$\ell_{2}$ &&&& $\ell_{2}$ \\ \hline \vspace{0.1cm} $400$ && $2.25\e-2$ &&&& $2.09\e-2$ \\ \vspace{0.1cm} $1600$ && $3.51\e-3$ &&&& $3.46\e-3$ \\ \vspace{0.1cm} $6400$ && $5.22\e-4$ &&&& $7.78\e-4$ \\ \vspace{0.1cm} $16641$ && $1.97\e-4$ &&&& $1.92\e-4$ \\ \hline \end{tabular} \caption{The $\ell_{2}$-error for different values of $N$ \\ in the GMLS approximation for the second test.}\label{Table-3} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{llllllllllllll} \hline &&\multicolumn{2}{l}{PTS}&&\multicolumn{2}{l}{ME} \\ \cline{3-4} \cline{6-7} $N$&&$\ell_{2}$ &&&& $\ell_{2}$ \\ \hline \vspace{0.1cm}$400$ && $4.05\e-2$ &&&& $4.17\e-2$ \\ \vspace{0.1cm} $1600$ && $1.41\e-2$ &&&& $1.31\e-2$ \\ \vspace{0.1cm} $6400$ && $3.59\e-3$ &&&& $1.75\e-3$ \\ \vspace{0.1cm} $16641$ && $7.48\e-4$ &&&& $2.10\e-4$ \\ \hline \end{tabular} \caption{The $\ell_{2}$-error for different values of $N$ \\ in the MKLS approximation for the second test.}\label{Table-4} \end{center} \end{table} \subsection{Deformational flow test}\label{63} In this part, we consider the following test, known as a deformational flow \cite{nair2010class}. We consider the transport equation (\ref{Eq-1}) with the velocity field $$v_{1}(\lambda,\theta,t)=2\sin^2(\lambda)\sin(2\theta)\cos(\pi t/T),\,\,\,\,\,\, v_{2}(\lambda,\theta,t)=2\sin(2\lambda)\cos(\theta)\cos(\pi t/T),$$ which is a non-divergent flow \cite{nair2010class}.
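A direct transcription of this time-dependent velocity field (again an illustrative Python sketch; $T$ denotes the period of the flow) reads:
\begin{verbatim}
import numpy as np

def deformational_velocity(lam, th, t, T=5.0):
    # Non-divergent deformational flow (Subsection 6.3); the cos(pi t/T)
    # factor reverses the flow halfway through the simulation.
    s = np.cos(np.pi * t / T)
    v1 = 2.0 * np.sin(lam)**2 * np.sin(2.0 * th) * s
    v2 = 2.0 * np.sin(2.0 * lam) * np.cos(th) * s
    return v1, v2
\end{verbatim}
Because of this reversal, the exact solution returns to the initial state at $t=T$, which makes the test convenient for measuring errors.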
The initial condition for this test is given as follows \cite{nair2010class} \[u(\lambda ,\theta,t=0 ) = \left\{ \begin{array}{ll} 0.1 + 0.9{u_1}(\lambda ,\theta ), & {r_1}(\lambda ,\theta ) < r,\\ 0.1 + 0.9{u_2}(\lambda ,\theta ), & {r_2}(\lambda ,\theta ) < r,\\ 0.1, & \text{otherwise}, \end{array} \right.\] where \[{u_1}(\lambda ,\theta ) = \dfrac{1}{2}\left( {1 + \cos \left( {\frac{{\pi {r_1}(\lambda ,\theta )}}{r}} \right)} \right),\,\,\,\,\,\,\,\,\,\,\,\,\,{u_2}(\lambda ,\theta ) = \dfrac{1}{2}\left( {1 + \cos \left( {\frac{{\pi {r_2}(\lambda ,\theta )}}{r}} \right)} \right),\,\,\,\] and \[\begin{array}{l} {r_1}(\lambda ,\theta ) = \arccos \left( {\sin ({\theta _1})\,\sin (\theta ) + \cos ({\theta _1})\,\cos (\theta )\cos (\lambda - {\lambda _1})} \right)\,,\\\\ {r_2}(\lambda ,\theta ) = \arccos \left( {\sin ({\theta _2})\,\sin (\theta ) + \cos ({\theta _2})\,\cos (\theta ) \cos (\lambda - {\lambda _2})} \right). \end{array}\] \begin{figure}[t!] \centering \includegraphics[width=6.5cm,height=5.25cm]{Fig15.png} \caption{The initial condition of the deformational flow in the third test (given in Subsection \ref{63}).} \label{fig-4-1} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=6.5cm,height=5.25cm]{Fig16.png}\hspace{1cm} \includegraphics[width=6.5cm,height=5.25cm]{Fig17.png}\vspace{1cm} \includegraphics[width=6.5cm,height=5.25cm]{Fig18.png}\hspace{1cm} \includegraphics[width=6.5cm,height=5.25cm]{Fig19.png} \caption{The 3D simulation of the deformational flow via the GMLS approximation for the third test (given in Subsection \ref{63}) at $t=T/4$ (top left), $t=T/2$ (top right), $t=9T/2$ (bottom left), and $t=T$ (bottom right).} \label{fig-5} \end{figure} In the above formulations, $(\lambda_{1},\theta_{1})=(5\pi/6,0)$ and $(\lambda_{2},\theta_{2})=(7\pi/6,0)$ are the centers of the two cosine bells \cite{nair2010class}. In this test, the flow field is deformed until $t=2.5$ and then returns to its initial position (see Figure \ref{fig-4-1}) at $T=5$ \cite{nair2010class}. In Figure \ref{fig-5}, the numerical solution of $u$ at the time levels $t=T/4,T/2,9T/2$ and $t=T$ via the GMLS approximation is shown, where $T=5$, $\Delta t=1/400$ and $N=6400$ ME points are used. Furthermore, the same simulations are shown in Figure \ref{fig-6} for the MKLS approximation. \begin{figure}[t!] \centering \includegraphics[width=6.5cm,height=5.25cm]{Fig20.png}\hspace{1cm} \includegraphics[width=6.5cm,height=5.25cm]{Fig21.png}\vspace{1cm} \includegraphics[width=6.5cm,height=5.25cm]{Fig22.png}\hspace{1cm} \includegraphics[width=6.5cm,height=5.25cm]{Fig23.png} \caption{The 3D simulation of the deformational flow via the MKLS approximation for the third test (given in Subsection \ref{63}) at $t=T/4$ (top left), $t=T/2$ (top right), $t=9T/2$ (bottom left), and $t=T$ (bottom right).} \label{fig-6} \end{figure} As observed and expected in these figures, the two cosine bells of the initial condition (Figure \ref{fig-4-1}) are deformed at $t=T/2$ and return to their initial positions at $t=T$. In Table \ref{Table1-3}, the CPU time used by the GMLS and MKLS approximations for different values of $N$ during the simulations is shown. In Table \ref{Table-5}, the $\ell_{2}$ errors of the GMLS approximation using PTS and ME points on the unit sphere are given.
In Table \ref{Table-6}, the $\ell_{2}$ errors obtained from the implementation of the MKLS technique with the considered point sets are reported, using different time steps: $\Delta t=1/100$ for $N=400$, $\Delta t=1/200$ for $N=1600$, $\Delta t=1/400$ for $N=6400$, and $\Delta t=1/800$ for $N=16641$. Also in these tables, the accuracy of both approximations is almost the same. \begin{table} \begin{center} \begin{tabular}{lllllllllllllllllll} \hline $\textbf{Method}$&&&&$N$ &&&&$\textbf{CPU time}\,(s)$ \\ \hline \vspace{0.1cm}$\textbf{GMLS}$&&&&$1600$&&&& $5.98$ \\ \vspace{0.1cm} &&&&$6400$&&&& $19.28$ \\ \vspace{0.1cm} &&&&$10000$&&&& $28.11$ \\ \hline \vspace{0.1cm} $\textbf{MKLS}$&&&&$1600$&&&& $5.58$ \\ \vspace{0.1cm} &&&&$6400$&&&& $19.25$ \\ \vspace{0.1cm} &&&&$10000$&&&& $26.67$ \\ \hline \end{tabular} \caption{The CPU time used by the BiCGSTAB method\\ for different values of $N$ for the third test.}\label{Table1-3} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{llllllllllllll} \hline &&\multicolumn{2}{l}{PTS}&&\multicolumn{2}{l}{ME} \\ \cline{3-4} \cline{6-7} $N$&&$\ell_{2}$ &&&& $\ell_{2}$ \\ \hline \vspace{0.1cm} $400$ && $3.34\e-3$ &&&& $3.43\e-3$ \\ \vspace{0.1cm} $1600$ && $1.11\e-3$ &&&& $1.15\e-3$ \\ \vspace{0.1cm} $6400$ && $2.76\e-4$ &&&& $3.96\e-4$ \\ \vspace{0.1cm} $16641$ && $8.36\e-5$ &&&& $1.11\e-4$ \\ \hline \end{tabular} \caption{The $\ell_{2}$-error for different values of $N$ \\ in the GMLS approximation for the third test.}\label{Table-5} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{llllllllllllll} \hline &&\multicolumn{2}{l}{PTS}&&\multicolumn{2}{l}{ME} \\ \cline{3-4} \cline{6-7} $N$&&$\ell_{2}$ &&&& $\ell_{2}$ \\ \hline \vspace{0.1cm} $400$ && $1.45\e-3$ &&&& $1.46\e-3$ \\ \vspace{0.1cm} $1600$ && $8.75\e-4$ &&&& $8.81\e-4$ \\ \vspace{0.1cm} $6400$ && $2.53\e-4$ &&&& $2.55\e-4$ \\ \vspace{0.1cm} $16641$ && $2.00\e-4$ &&&& $1.51\e-4$ \\ \hline \end{tabular} \caption{The $\ell_{2}$-error for different values of $N$ \\ in the MKLS approximation for the third test.}\label{Table-6} \end{center} \end{table} \subsection{Comparison between the two proposed approximations and other methods} In this section, we compare the GMLS and MKLS approximations with other numerical methods that have been applied to solve the transport equation on the sphere in the literature: the CSLAM method \cite{lauritzen2012standard}, the DG method \cite{nair2010class}, the RBF-FD technique \cite{fornberg2011stabilization}, local and global RBF approaches \cite{shankar2018mesh}, and the RBF-PU method \cite{shankar2018mesh}. We use the deformational flow test for the cosine bell, which is given in Subsection \ref{63}. A comparison between all mentioned methods is given in Table \ref{Table-7} in terms of the degrees of freedom (DOF, the number of unknown coefficients of each method) and the time step ($\Delta t$), by computing the relative $\ell_{2}$ error. The errors reported here for the GMLS and MKLS approximations are computed via PTS points on $\mathbb{S}^2$ using formula (\ref{Integration-1}), which approximates the $\ell_{2}$ error. It should also be noted that, in \cite{shankar2018mesh}, the sixth-order kernel-based meshfree quadrature method \cite{fuselier2014kernel} has been used for computing the surface integral. Besides, the results reported here for the CSLAM, DG, RBF-FD, local RBF, global RBF, and RBF-PU methods are taken from \cite[Table 2, Subsection 4.4]{shankar2018mesh}.
\begin{table} \begin{center} \begin{tabular}{llllllllllllllllllllllllllll} \hline \cline{3-4} \cline{6-7} \textbf{Method}&&&$\Delta t$ &&&& $\textbf{DOF}$&&&& \textbf{Relative} $\ell_{2}$ \textbf{error} \\ \hline \vspace{0.1cm} \textbf{GMLS} &&& $5/2400$ &&&& $6400$ &&&& $2.84\e-4$ \\ \vspace{0.1cm} \textbf{MKLS} &&& $5/2400$ &&&& $6400$ &&&& $2.81\e-4$ \\ \vspace{0.1cm} \textbf{CSLAM} \cite{lauritzen2012standard} &&& $5/240$ &&&& $86400$ &&&& $6.00\e-3$ \\ \vspace{0.1cm} \textbf{DG}, $p=3$ \cite{nair2010class} &&& $5/2400$ &&&& $38400$ &&&& $1.39\e-2$ \\ \vspace{0.1cm} \textbf{RBF-FD}, $n=84$ \cite{fornberg2011stabilization} &&& $5/900$ &&&& $23042$ &&&& $1.17\e-2$ \\ \vspace{0.1cm} \textbf{Local RBF}, $n=84$ \cite{shankar2018mesh} &&& $5/35$ &&&& $23042$ &&&& $3.45\e-3$ \\ \vspace{0.1cm} \textbf{RBF-PU}, $n=84$ \cite{shankar2018mesh}&&& $5/35$ &&&& $23042$ &&&& $3.63\e-3$ \\ \vspace{0.1cm} \textbf{Global RBF} \cite{shankar2018mesh}&&& $5/45$ &&&& $15129$ &&&& $5.10\e-3$ \\ \hline \end{tabular} \caption{A comparison between the presented approximations and other numerical methods for the deformational flow test (cosine bell) in terms of DOF and time steps. CSLAM \cite{lauritzen2012standard} is based on a cubed-sphere grid. DG is the discontinuous Galerkin scheme \cite{nair2010class} with degree $p=3$ polynomials (fourth-order accurate) on the cubed-sphere grid. RBF-FD is the mesh-free Eulerian scheme \cite{fornberg2011stabilization} with $n=84$ points in each local domain. The semi-Lagrangian local RBF, RBF-PU, and global RBF methods are those applied in \cite{shankar2018mesh}.}\label{Table-7} \end{center} \end{table} \newpage \section{Concluding remarks}\label{Sec-7} In this paper, two techniques, namely GMLS and MKLS, have been applied to approximate the spatial variables of a transport equation on the sphere in spherical coordinates. The time variable of the model is discretized by a second-order backward differentiation formula. The resulting fully discrete scheme requires the solution of a linear system of algebraic equations per time step, which is solved efficiently by the BiCGSTAB method with a zero-fill ILU preconditioner. To demonstrate the ability of the proposed approaches, we solved three important test cases, namely solid-body rotation, vortex roll-up, and deformational flow, which are standard examples in the numerical climate modeling community. Neither of the developed techniques depends on a background mesh or triangulation, which makes the solution of the transport equation on the sphere easy to implement. Owing to this feature, we obtained the numerical results using two different point distributions on the sphere, i.e., PTS and ME points. Furthermore, the pole singularities appearing in this equation are avoided in the differentiation matrices thanks to the applied approximations. As formulated in Algorithms \ref{Algorithm1} and \ref{Algorithm2}, the implementation of both approximations consists of two main parts: the construction of the required matrices of Eqs. (\ref{full-5}) and (\ref{full-6}) (Algorithm \ref{Algorithm1}), and the solution of the obtained fully discrete scheme (Algorithm \ref{Algorithm2}). As mentioned before, all implementations are done in MATLAB by writing routines according to the presented algorithms. The results and simulations reported here show that both methods have the same accuracy, but the MKLS approximation depends on a constant parameter, which should be controlled experimentally.
Besides, as shown in the first test, the GMLS approximation uses less CPU time than the MKLS approximation for constructing all matrices in Algorithm \ref{Algorithm1}. We also compared the GMLS and MKLS approximations with other methods in the literature, i.e., CSLAM, DG, RBF-FD, local RBF, global RBF, and RBF-PU, for the deformational flow test in terms of time steps and DOF. From these comparisons, we can observe that the accuracy of the GMLS and MKLS approximations with a smaller number of points (DOF) is better than that of the other methods. According to the results and discussions in this paper, the GMLS and MKLS approaches can easily be applied to solve mathematical models in spherical geometries. \bibliographystyle{elsarticle-num}
\section{Participants Demographics} \label{appendix:demographics} In this section, we provide more details regarding the participants' demographics. As Fig. \ref{fig:ages} shows, most participants were between 18 and 34 years old. There were no major differences in gender distribution between the four conditions (Fig.~\ref{fig:femals}). \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{figures/results/demographic/ages.png} \caption{The number of participants in each age group per condition. The bars show, from left to right: ``18-24'', ``25-34'', ``35-44'', ``45-54'', ``55-64'' and ``65 or older''. The categories ``17 or younger'' and ``do not want to specify'' were never selected.} \label{fig:ages} \end{figure} \begin{figure}[ht] \centering \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/demographic/number_females.png} \caption{Number of female participants per condition.} \label{fig:femals} \end{minipage} \begin{minipage}{0.45\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/demographic/Attitude_towards_AI.png} \caption{The average attitude towards AI, rated on a 5-point Likert scale.} \label{fig:attitude_AI} \end{minipage} \end{figure} We verified that participants in the different conditions did not differ much in their AI experience and views, or in their Pacman experience. To this end, we asked them when they played Pacman for the last time (1=``never'', 2=``more than 5 years ago'', 3=``less than 5 years ago'', 4=``less than 1 year ago''). Across all four conditions, the median group was 2: ``I played Pacman more than 5 years ago''. A comparison is shown in Fig. \ref{fig:expierience_Pamcan}. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{figures/results/demographic/pacman_experience.png} \caption{The Pacman experience across all conditions, where the bars depict when the participants played Pacman for the last time. From left to right, the bars represent: ``never'', ``more than 5 years ago'', ``less than 5 years ago'' and ``less than 1 year ago''.} \label{fig:expierience_Pamcan} \end{figure} For the AI experience, we adapted a description of AI from Zhang et al.~\cite{zhang2019artificial} and Russell \cite{russell2016artificial} to ``The following questions ask about Artificial Intelligence (AI). Colloquially, the term `artificial intelligence' is often used to describe machines (or computers) that mimic `cognitive' functions that humans associate with the human mind, such as `learning' and `problem solving'. AI agents are already able to perform some complex tasks better than the median human (today).
Examples of such intelligent agents are search engines, chatbots, chessbots and voice assistants.'' After that, every participant who stated that they had AI experience (104 across all conditions) had to select one or more of the following items: \begin{itemize} \item 1: I know AI from the media. \item 2: I use AI technology in my private life. \item 3: I use AI technology in my work. \item 4: I took at least one AI related course. \item 5: I do research on AI related topics. \item Other: \end{itemize} The last free-form option was used exactly once and read ``work on MTurk''. The distribution of the other items for each condition is shown in Fig.~\ref{fig:experience_XAI}. To measure the participants' attitude towards AI, we adapted a question from Zhang et al.~\cite{zhang2019artificial} and asked them to rate their answer to the question ``Suppose that AI agents would achieve high-level performance in more areas one day. How positive or negative do you expect the overall impact of such AI agents to be on humanity in the long run?'' on a 5-point Likert scale from ``Extremely negative'' to ``Extremely positive''. The results are shown in Fig.~\ref{fig:attitude_AI}. \begin{figure}[ht] \centering \small \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/demographic/AiExperience_random.png} \emph{R} \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/demographic/AiExperience_highlights.png} \emph{H} \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/demographic/AiExperience_randomLRP.png} \emph{R+S} \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/demographic/AiExperience_highlightsLRP.png} \emph{H+S} \end{minipage} \caption{Distribution of the chosen AI experience items for each condition. The x-axis depicts the items described above.} \label{fig:experience_XAI} \end{figure} \clearpage \section{Supplementary Results} \label{appendix:results} In this section, we present additional information about the results of the study that goes beyond the main hypotheses we explored and described in the paper. \paragraph{Confidence, time and pauses} To investigate whether participants were confident in their decisions, we asked them to rate the confidence in each of their selections (item selection in the retrospection task{} and agent selection in the agent comparison task{}) on a 7-point Likert scale. The results across each task are shown in Fig.~\ref{fig:confidences}. \begin{figure}[ht] \centering \small \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/appendix/retroConfAvg.png} (a) retrospection task \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/appendix/TrustConfAvg.png} (b) agent comparison task \end{minipage} \caption{The average confidence that participants in each condition had in their answers during each task.} \label{fig:confidences} \end{figure} To evaluate whether participants were especially diligent or effective during the tasks, we measured the time that each participant stayed on each page of the survey and calculated the average time per task (each task consists of three pages). Furthermore, we kept track of each time a video was paused, as described in section \ref{sec:analysis}.
The average completion times of participants and the average number of pauses are shown in Fig.~\ref{fig:times} and \ref{fig:pauses}, respectively (shown as boxplots due to the presence of several outliers that strongly affect the mean values). \begin{figure}[ht] \centering \small \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/appendix/retroTimeAvg.png} (a) retrospection task \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/appendix/trustTimeAvg.png} (b) agent comparison task \end{minipage} \caption{The average time taken by participants in each condition per agent analysis (a) and comparison of agent pairs (b).} \label{fig:times} \end{figure} \begin{figure}[ht] \centering \small \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/appendix/retroClicksAvg.png} (a) retrospection task \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/appendix/TrustClicksAvg.png} (b) agent comparison task \end{minipage} \caption{The average number of times that participants in each condition paused the videos during each agent analysis (a) and comparison of agent pairs (b).} \label{fig:pauses} \end{figure} Fig.~\ref{fig:confidences}~(a) shows that participants in condition \emph{H} were slightly more confident on average in their analysis of the agents. This is also reflected in the shorter time per analysis (Fig.~\ref{fig:times}~(a)) and the smaller number of pauses (Fig.~\ref{fig:pauses}~(a)). Apart from this, there are no obvious differences between the average confidence, time and pause values for each task (Fig.~\ref{fig:confidences} to \ref{fig:pauses}). \paragraph{Participants' justifications} \label{appendix:justification} As described in section \ref{sec:analysis}, an independent coder identified different concepts inside the participants' justifications. Figures~\ref{fig:gameplay_justifications} and \ref{fig:heatmap_justifications} show the average number of mentions of \emph{gameplay} and of \emph{saliency maps} in the different tasks, across the different conditions. As discussed in section \ref{sec:results}, most participants mainly based their justifications on the agents' gameplay (Fig.~\ref{fig:gameplay_justifications}) and, in the saliency conditions, participants seldom mentioned the saliency maps in their justifications (see Fig.~\ref{fig:heatmap_justifications}). Finally, Fig.~\ref{fig:unjustified_justifications} shows that participants in condition \emph{H}{} gave more unjustified explanations in the retrospection task{}. However, this observation did not recur in the agent comparison task{}.
\begin{figure}[ht] \centering \small \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/text/GAMEPLAY_total.png} (a) retrospection task \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/text/trust_GAMEPLAY_total.png} (b) agent comparison task \end{minipage} \caption{Comparison of how often the participants referenced the agents' \textbf{gameplay} in their justifications for their answers.} \label{fig:gameplay_justifications} \end{figure} \begin{figure}[ht] \centering \small \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/text/HEATMAP_total.png} (a) retrospection task \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/text/trust_HEATMAP_total.png} (b) agent comparison task \end{minipage} \caption{Comparison of how often the participants referenced the green highlighting of the LRP-argmax \textbf{saliency maps} in their justifications for their answers.} \label{fig:heatmap_justifications} \end{figure} \begin{figure}[ht] \centering \small \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/text/UNJUSTIFIED_total.png} (a) retrospection task \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/text/trust_UNJUSTIFIED_total.png} (b) agent comparison task \end{minipage} \caption{Comparison of how often the participants' justifications contained \textbf{unjustified} arguments.} \label{fig:unjustified_justifications} \end{figure} \section{Evaluation of the Retrospection Task} \label{appendix:scoring_functions} As described in section \ref{sec:analysis}, we evaluated participants' scores in the object selection part of the retrospection task with a simple scoring function based on predefined answers by two of the authors involved in the training of the agents. Here, we assign a score of $1$ to each object that is connected to the agents' specific goal and their source of information (Pacman's position for all agents), $-1$ to each object that was not related to the agents' reward function and $-0.5$ to objects that were related to the reward but on which the agent did not focus. The specific scores are shown in Table~\ref{tb:scoring_object_selection}. \begin{table}[ht] \centering \begin{tabular}{|c|C|C|C|} \hline selected object & \emph{Power pill agent}{} & \emph{Regular agent}{} & \emph{Fear-ghosts agent}{} \\ \hline ``Pacman'' & 1 & 1 & 1 \\ \hline ``normal pill'' & -1 & -0.5 & -0.5 \\ \hline ``power pill'' & 1 & -0.5 & -0.5 \\ \hline ``normal ghost'' & -1 & -0.5 & 1 \\ \hline ``blue ghost'' & -1 & 1 & 1 \\ \hline ``cherry'' & -1 & -0.5 & -0.5 \\ \hline \end{tabular} \caption{The scores assigned to each selectable object, per agent, in the object selection part of the retrospection task.} \label{tb:scoring_object_selection} \end{table} For the free-form answers to the question ``Please describe the strategy of the AI agent'', an independent coder identified various, not mutually exclusive, concepts contained in the participants' answers.
We aggregated these concepts into the following 16 groups, where the coder used 'G' for ghosts, 'PP' for power pills and 'NP' for normal pills: \begin{enumerate} \item \emph{eating power pills}: ``eating PP'', ``eating as many PP as possible'', ``eat PP when ghosts are near'', ``prioritizing PP'', ``prioritizing PP to eat ghosts'', ``prioritizing PP , but not eat ghosts'', ``eat PP to get points'' \item \emph{ignore power pills}: ``do not care about PP'' \item \emph{eat normal pills}: ``eat NP to get points'', ``eating NP'', ``eating as many NP as possible'', ``prioritizing NP'', ``clearing the stage'' \item \emph{ignore normal pills}: ``do not care about NP'', ``focus on areas wihtout [sic] NP'' \item \emph{avoid ghosts}: ``avoiding G'', ``avoiding G strongly'', ``wait for G to go away'', ``outmanoveuring G'', ``hiding from G'', ``mislead ghosts'', ``avoids being eaten / caught'', ``avoiding to lose / staying alive'', ``stays away from danger'' \item \emph{move towards ghosts}: ``being close to G'', ``trying to eat G NON blue'', ``(easily) caught by G'', ``easily caught by G'' \item \emph{ignore ghosts}: ``do not care about G'' \item \emph{making ghosts blue}: ``making G blue'' \item \emph{eat blue ghosts}: ``being close to blue G'', ``eating as many G as possible'', ``eat blue G to get points'', ``chasing/going for G'', ``eating the blue G'', ``eating to jail many G'' (jailing since the ghosts move back to jail after being eaten), ``prioritizing PP to eat ghosts'' \item \emph{avoid blue ghosts}: ``avoiding blue G'' \item \emph{ignore blue ghosts}: ``do not care about blue G'', ``prioritizing PP , but not eat ghosts'' \item \emph{eat cherry}: ``prioritizing cherry'', ``eat cherry to get points'', ``going for cherry'', ``eating cherry'' \item \emph{ignore cherry}: ``do not care about cherry'' \item \emph{random movement}: ``moving randomly'', ``move all over map'', ``switching directions /back\&forth'', ``not moving / being stuck'', ``sticking to walls / outside'', ``confused'', ``without strategy /random'', ``not planning ahead'', ``switching directions'' \item \emph{focus on Pacman}: ``focus on PM'', ``focus on whats in front of/around PM'', ``stuck to itself'' \item \emph{staying in corners}: ``staying in corners'' \end{enumerate} These groups are used to define a simple scoring function. Depending on the agent, each group could either be positive, neutral or negative. Positive groups contain concepts that are in line with the predefined descriptions of the agents' strategies by two of the authors involved in the training. Neutral groups consist of correct observations, which are byproducts of the agent's strategy, and negative groups contain concepts that go against the agent's strategy. Each positive group contained in an answer increased the participant's score by $1$ and each negative group decreased it by $1$. Here, we define a group to be ``contained in an answer'' if at least one concept of this group was included in the answer. Neutral groups did not affect the score.
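To make this scoring concrete, the following minimal Python sketch illustrates it (the function and variable names are hypothetical and the snippet is an illustration, not the evaluation script we used; the actual positive and neutral groups per agent are listed below, and every group that is neither positive nor neutral counts as negative):
\begin{verbatim}
def score_answer(groups_in_answer, positive, neutral):
    # groups_in_answer: set of the 16 group labels found in one answer
    score = 0
    for group in groups_in_answer:
        if group in positive:
            score += 1   # in line with the agent's predefined strategy
        elif group in neutral:
            pass         # correct by-product, does not affect the score
        else:
            score -= 1   # goes against the agent's strategy
    return score

# e.g., for the power pill agent (groups as listed below):
positive = {"eat power pill", "ignore normal pill", "ignore ghosts",
            "ignore blue ghost", "ignore cherry", "focus on Pacman",
            "staying in corners"}
neutral = {"eat normal pill", "making ghosts blue"}
\end{verbatim}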
\emph{Power pill agent}{}: \begin{itemize} \item \emph{positive}: ``eat power pill'', ``ignore normal pill'', ``ignore ghosts'', ``ignore blue ghost'', ``ignore cherry'', ``focus on Pacman'', ``staying in corners'' \item \emph{neutral}: ``eat normal pill'', ``making ghosts blue'' \end{itemize} \emph{Regular agent}{}: \begin{itemize} \item \emph{positive}: ``ignore cherry'', ``focus on Pacman'', ``making ghosts blue'', ``eat blue ghost'' \item \emph{neutral}: ``eat normal pill'', ``eat power pill'', ``ignore ghosts'' \end{itemize} \emph{Fear-ghosts agent}{}: \begin{itemize} \item \emph{positive}: ``avoid ghost'', ``focus on Pacman'', ``making ghosts blue'', ``eat blue ghost'', ``ignore cherry'' \item \emph{neutral}: ``eat normal pill'', ``eat power pill'' \end{itemize} \clearpage \section{Questionnaire} \label{appendix:questionnaire} In this section, we provide the complete questionnaire used in the study. On the first page, the participants were asked to provide personal information: \includegraphics[width=0.9\linewidth]{figures/survey/1.PNG} \includegraphics[width=0.9\linewidth]{figures/survey/2.PNG} \includegraphics[width=0.9\linewidth]{figures/survey/3.PNG} \clearpage Information about Pacman: \includegraphics[width=\linewidth]{figures/survey/4.PNG} \clearpage This quiz tests whether the participants understood the information about Pacman. Participants were sent back to the previous page if they got an answer wrong. \includegraphics[width=\linewidth]{figures/survey/5.PNG} \includegraphics[width=\linewidth]{figures/survey/6.PNG} \clearpage Additional information about the provided explainable AI methods. The information about saliency maps was only displayed if the participant was in one of the saliency conditions. \includegraphics[width=\linewidth]{figures/survey/7.PNG} \clearpage This quiz tests whether the participants understood the information about the provided explainable AI methods. Participants were sent back to the previous page if they got an answer wrong. \includegraphics[width=\linewidth]{figures/survey/8.PNG} \clearpage This is the retrospection task{} that was repeated for each of the three agents in a randomized order: \includegraphics[width=\linewidth]{figures/survey/9.PNG} \includegraphics[width=\linewidth]{figures/survey/10.PNG} \includegraphics[width=\linewidth]{figures/survey/11.PNG} \clearpage After all three agents, the participants were asked about their satisfaction: \includegraphics[width=\linewidth]{figures/survey/12.PNG} \clearpage This is the agent comparison task{} that was repeated for each combination of the three agents in a randomized order: \includegraphics[width=\linewidth]{figures/survey/13.PNG} \includegraphics[width=\linewidth]{figures/survey/14.PNG} \clearpage After all three comparisons, the participants were asked about their satisfaction again: \includegraphics[width=\linewidth]{figures/survey/15.PNG} \section{Saliency Maps} \label{sec:argmax} In this section, we describe the local explanation method which we use in our combined local and global explanation approach. While the development of the local explanation method is not the focus of this paper, we include the details of the approach for completeness. We revisit the foundations of Layer-wise Relevance Propagation (LRP) and show how to use it on the original DQN. Then we describe our previously published $argmax$-rule, an adjustment to this algorithm, which generates more selective saliency maps and which we use in this work.
In addition to some previously published illustrations of the selectivity of the $argmax$-rule, we implemented new sanity checks for our saliency maps and report their results. \subsection{Foundations} \label{sec:argmax_foundations} LRP does not describe a specific algorithm but a concept which can be applied to any classifier $f$ that fulfills the following two requirements. First, $f$ has to be decomposable into several layers of computation where each layer can be modeled as a vector of real-valued functions. Second, the first layer has to be the input $x$ of the classifier containing, for example, the input pixels of an image, and the last layer has to be the real-valued prediction of the classifier $f(x)$. Any DRL agent fulfills those requirements if we only consider the output value that corresponds to the action we want to analyze. For a given input $x$, the goal of any method following the LRP concept is to assign relevance values $R_{j}^{l}$ to each computational unit $j$ of each layer of computation $l$, in such a way that $R_{j}^{l}$ measures the local contribution of the unit $j$ to the prediction $f(x)$. A method of calculating those relevance values $R_{j}^{l}$ is said to follow the LRP concept if it sets the relevance value of the output unit to be the prediction $f(x)$ and calculates all other relevance values by defining \begin{align}\label{ErsteLRPGleichung} R_{j}^{l} := \sum_{k \in \{j \text{ is input for neuron } k\}} R_{j \leftarrow k}^{l,l+1}, \end{align} for \textbf{messages} $R_{j \leftarrow k}^{l,l+1}$, such that \begin{align}\label{ZweiteLRPGleichung} R_{k}^{l+1} = \sum_{j \in \{j \text{ is input for neuron } k\}} R_{j \leftarrow k}^{l,l+1}. \end{align} In this way, an LRP variant is determined by choosing the messages $R_{j \leftarrow k}^{l,l+1}$. Through equation \ref{ErsteLRPGleichung} it is then possible to calculate all relevance values $R_{j}^{l}$ in a backward pass, starting from the prediction $f(x)$ and going towards the input layer. Furthermore, equation \ref{ZweiteLRPGleichung} gives rise to \begin{align*} \sum_{k} R_{k}^{l+1} & = \sum_{k} \sum_{j \in \{j \text{ is input for neuron } k\}} R_{j \leftarrow k}^{l,l+1} \\ & = \sum_{j} \sum_{k \in \{j \text{ is input for neuron } k\}} R_{j \leftarrow k}^{l,l+1} = \sum_{j} R_{j}^{l}. \end{align*} This ensures that the relevance values of each layer $l$ are a linear decomposition of the prediction \begin{align*} f(x)= \dots = \sum_{j = 1}^{dim(l)} R_{j}^{l} = \dots = \sum_{j = 1}^{dim(input)} R_{j}^{input}. \end{align*} Such a linear decomposition is easier to interpret than the original classifier because we can think of positive values $R_{j}^{l}$ as contributing evidence in favor of the decision of the classifier and of negative relevance values as contributing evidence against it. To use LRP on a DQN agent we first have to look at its network architecture. The DQN $f$, as introduced by Mnih et al. \cite{Mnih15}, consists of three convolutional layers $\operatorname{\textit{conv}}_{1},...,\operatorname{\textit{conv}}_{3}$ followed by two fully connected layers $\operatorname{\textit{fc}}_{1}$ and $\operatorname{\textit{fc}}_{2}$. For an input $x$ we write $\operatorname{\textit{fc}}_{i}(x)$ and $\operatorname{\textit{conv}}_{i}(x)$ for the output of the layers $\operatorname{\textit{fc}}_{i}$ and $\operatorname{\textit{conv}}_{i}$, respectively, during the forward pass that calculates $f(x)$. In this notation, the Q-values (i.e., the output of the whole DQN) are $\operatorname{\textit{fc}}_{2}(x)$.
Following the LRP notation, we denote the relevance value of the $j$-th neuron in the layer $l$ with $R_{j}^{l}$. As described above, we have to define messages $R_{j \leftarrow k}^{l,l+1}$ for any two consecutive layers $l,l+1$ to determine an LRP variant. For now, we assume that $l+1$ is one of the fully connected layers $\operatorname{\textit{fc}}_{i}$. The convolutional case works analogously and will be covered in more detail in the next subsection. $R_{j \leftarrow k}^{l,l+1}$ should measure the contribution of the $j$-th neuron of $\operatorname{\textit{fc}}_{i-1}$ to the $k$-th neuron of $\operatorname{\textit{fc}}_{i}$, therefore we have to look at the calculation of $\operatorname{\textit{fc}}_{i}(x)_{k}$. The fully connected layer $\operatorname{\textit{fc}}_{i}$ uses a weight matrix $W_{i}$, a bias vector $b_{i}$ and an activation function $\sigma_{i}$ as parameters for its output. Let $W_{i}^{k}$ be the $k$-th row of $W_{i}$ and $b_{i}^{k}$ the $k$-th entry of $b_{i}$. Then the activation of the $k$-th neuron in $\operatorname{\textit{fc}}_{i}(x)$ is \begin{align*} \sigma_{i}(W_{i}^{k} \cdot \operatorname{\textit{fc}}_{i-1}(x) + b_{i}^{k} ), \end{align*} where $\cdot$ denotes the dot product and $\operatorname{\textit{fc}}_{0}$ is the flattened output of $\operatorname{\textit{conv}}_{3}$. Usually, the ReLU function $\sigma(x)=\max(0,x)$ is used as the activation function $\sigma_{i}$ in the DQN architecture. Bach et al. \cite{bach2015lrp} argue that any monotonically increasing function $\sigma$ with $\sigma(0)=0$, like the ReLU function, conserves the relevance of the dot product $W_{i}^{k} \cdot \operatorname{\textit{fc}}_{i-1}(x)$. Newer LRP variants, like the one used by Montavon et al. \cite{montavon18}, also omit the bias when defining $R_{j \leftarrow k}^{l,l+1}$. With those two assumptions, the relevance of each neuron of $\operatorname{\textit{fc}}_{i-1}$ to $\operatorname{\textit{fc}}_{i}(x)_{k}$ is the same as their contribution to the dot product $W_{i}^{k} \cdot \operatorname{\textit{fc}}_{i-1}(x) = \sum_{j} w_{jk}\operatorname{\textit{fc}}_{i-1}(x)_{j}$. This is a linear decomposition, so we can use $w_{jk}\operatorname{\textit{fc}}_{i-1}(x)_{j}$ to measure the contribution of the $j$-th neuron of $\operatorname{\textit{fc}}_{i-1}$. Since we want to find the parts of the input that contributed evidence in favor of the decision of the DQN agent, we restrict ourselves to the positive parts of that sum. That is, we set \begin{align*} z_{jk}^{+} \coloneqq \begin{cases} w_{jk}\operatorname{\textit{fc}}_{i-1}(x)_{j} & \text{if } w_{jk}\operatorname{\textit{fc}}_{i-1}(x)_{j} > 0\\ 0 & \text{if } w_{jk}\operatorname{\textit{fc}}_{i-1}(x)_{j} \leq 0\\ \end{cases}. \end{align*} With this, we define the messages as $R_{j \leftarrow k}^{l,l+1} \coloneqq \frac{z_{jk}^{+}}{\sum_{j'} z_{j'k}^{+}} R_{k}^{l+1}$. This method is called the $z^{+}$-rule (without bias) and satisfies the LRP equation \ref{ZweiteLRPGleichung}. \subsection{An argmax approach to LRP} \label{chap:argmax} \input{argmax_tikz.tex} In this subsection, we introduce our adjustment to the LRP variant called the $z^{+}$-rule, which we revisited in the previous subsection. Recent work \cite{iyer2018transparency,goel2018} indicates that DRL agents focus on certain objects within the visual input. With our approach, we aim to generate saliency maps that reflect this property by focusing on the most relevant parts of the input instead of giving too many details.
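Since the $argmax$ variant described below keeps the $z^{+}$-rule messages for the fully connected layers, the following NumPy sketch of one such backward step may help to fix ideas (shapes and names are hypothetical; this is an illustration, not the iNNvestigate-based implementation we used):
\begin{verbatim}
import numpy as np

def zplus_backward(W, a, R_next):
    # W: (out, in) weight matrix, a: (in,) activations fc_{i-1}(x),
    # R_next: (out,) relevance values of layer fc_i.
    Z = np.maximum(W * a[None, :], 0.0)   # z_{jk}^{+} contributions
    denom = Z.sum(axis=1, keepdims=True)  # sum of z_{jk}^{+} per neuron k
    denom[denom == 0.0] = 1.0             # guard against division by zero
    # messages z_{jk}^{+} / denom_k * R_k, summed over k, give R_j:
    return (Z / denom * R_next[:, None]).sum(axis=0)
\end{verbatim}
By construction, the returned relevance values sum to (approximately) the same total as \texttt{R\_next}, illustrating the conservation property of the LRP concept.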
To achieve this selectivity, we propose to use an $argmax$ function to find the most contributing neurons in each convolutional layer. This idea is inspired by Mopuri et al.~\cite{CNNFixations}, who generated visualizations for neural networks solely based on the positions of neurons that provide evidence in favor of the prediction. During this process, they follow only the most contributing neurons in each convolutional layer. Our method adds relevance values to the positions of those neurons and therefore expands the approach of Mopuri et al. by an additional dimension of information. Since those relevance values follow the LRP concept, they also possess its advantageous properties, like conservation of the prediction value. As we have seen in the foundations section \ref{sec:argmax_foundations}, an LRP method is defined by its messages $R_{j\leftarrow k}^{l,l+1}$, which propagate the relevance from a layer $l+1$ to the preceding layer $l$. If $l+1$ is a fully connected layer $\operatorname{\textit{fc}}_{i}$ of the DQN (see section \ref{sec:argmax_foundations} for our notation of the DQN architecture), we use the same messages that are used in the $z^{+}$-rule. In the case that $l$ and $l+1$ are convolutional layers $\operatorname{\textit{conv}}_{i-1}$ and $\operatorname{\textit{conv}}_{i}$, we propose new messages based on the $argmax$ function. To define those messages, we analyze how the activation of a neuron $\operatorname{\textit{conv}}_{i}(x)_{k}$ was calculated during the forward pass. Let $W$ and $A$ denote, respectively, the weight kernel and the part of $\operatorname{\textit{conv}}_{i-1}(x)$ that were used to calculate $\operatorname{\textit{conv}}_{i}(x)_{k}$ during the forward pass. If we write $W$ and $A$ in appropriate vector form, we get \begin{align*} \operatorname{\textit{conv}}_{i}(x)_{k} = \sigma(\sum_{j} w_{j}a_{j} + b ), \end{align*} where $\sigma$ denotes the activation function of $\operatorname{\textit{conv}}_{i}$ and $b$ the bias corresponding to $W$. Analogously to the $z^{+}$-rule, we assume that the activation function and the bias can be neglected when determining the relevance values of the inputs $a_{j}$. We propose to use an $argmax$ function to find the most relevant input neurons by defining the messages in the following way: \begin{align*} R_{j\leftarrow k}^{l,l+1} \coloneqq \begin{cases} R_{k}^{l+1} & \text{if } j = \operatorname{argmax}_{j'}\{ w_{j'}a_{j'} \}\\ 0 & \text{otherwise}.\\ \end{cases} \end{align*} This definition satisfies the LRP condition given by equation \ref{ZweiteLRPGleichung} because the only non-vanishing summand of the sum \begin{align*} \sum_{j \in \{j \text{ is input for neuron } k\}} R_{j \leftarrow k}^{l,l+1} \end{align*} is $R_{k}^{l+1}$. If we use the same $argmax$ approach to propagate relevance values from $\operatorname{\textit{conv}}_{1}$ to the input $\operatorname{\textit{conv}}_{0}$, then we get very sparse saliency maps where only a few neurons are highlighted. If we highlight the entire areas of the input $\operatorname{\textit{conv}}_{0}$ that were used to calculate relevant neurons of $\operatorname{\textit{conv}}_{1}$, then we lose information about the relevance values inside those areas. Therefore, we draw inspiration from the guided Grad-CAM approach introduced in \cite{selvaraju2016grad-cam}.
Guided Grad-CAM uses one thorough relevance analysis for the neurons of the last convolutional layer to get relevant areas for the specific prediction and another thorough relevance calculation for the input pixels to get fine-granular relevance values inside those areas. We have already performed a thorough analysis of the neurons of the last convolutional layer by using the $z^{+}$-rule on the fully connected layers. By following the most relevant neurons through the convolutional layers, we keep track of the input areas that contributed the most to those values. Mimicking the second thorough analysis of the Guided Grad-CAM approach, we propose to use the $z^{+}$-rule to propagate relevance values from $\operatorname{\textit{conv}}_{1}$ to $\operatorname{\textit{conv}}_{0}$. This generates fine-granular relevance values inside the areas identified by following the most contributing neurons and ensures that those relevance values follow the LRP concept. Figure \ref{fig:arg_z} visualizes the differences between our $argmax$ approach and the $z^{+}$-rule. An implementation of our algorithm that builds on the iNNvestigate framework \cite{alber2018innvestigate} can be found here: \url{https://github.com/HuTobias/LRP_argmax}. \subsection{Illustration of the Selectivity of the argmax-rule} \label{chap:results} In order to verify that our $argmax$ approach, described in section \ref{sec:argmax}, creates more selective saliency maps than the $z^{+}$-rule (see section \ref{sec:argmax_foundations}), we tested our approach on three different Atari 2600 games. For all games, we trained an agent using the DQN implementation of the OpenAI baselines framework \cite{baselines2017}. The results of all experiments are shown in our previous work \cite{huber2019enhancing}. We review the Pacman results here, since we use this game in the user study evaluating our combined explanation approach. In the game Pacman, the player has to navigate through a maze and collect pellets while avoiding enemy ghosts. Because this game contains many important objects and gives the agent a huge variety of possible strategies, DQN agents struggle in this environment and perform worse than the average human player (see \cite{Mnih15}). Explainable AI methods are especially desirable in environments like this, where the agent is struggling, because they help us to understand where the agent had difficulties. The saliency maps created with the $z^{+}$-rule (Fig.~\ref{fig:focus_grob}) reflect the complexity of Pacman by showing that the agent tries to look at nearly all of the objects in the game. This information might be helpful to optimize the DRL agent, but it also distracts from the areas which influenced the agent's decision the most. Figure \ref{fig:focus_grob} shows that the saliency map created by the $argmax$ approach is more focused on the vicinity of the agent and makes it clearer what the agent is focusing on the most. Figure \ref{fig:focus_grob} also illustrates that a fine-granular saliency map in the vicinity of the agent is necessary to see that the agent will most likely decide on moving to the right as its next action. \begin{figure} \centering \includegraphics[width=0.3\linewidth]{figures/argmax/Pacman_raw.png} \includegraphics[width=0.3\linewidth]{figures/argmax/Pacman_z.png} \includegraphics[width=0.3\linewidth]{figures/argmax/Pacman_arg.png} \caption{The left image shows a screen of Pacman. The player (green circle) has to collect pellets (blue area) while avoiding ghosts (red circles).
The saliency map created for this game-state by the $z^{+}$-rule (middle) highlights a huge area as relevant, while our $argmax$ approach (right) focuses on the vicinity of the player.} \label{fig:focus_grob} \end{figure} \subsection{Sanity Checks} \label{sec:sanity_checks} It is not yet possible to verify whether a saliency map algorithm perfectly reflects what a model learned. However, a basic prerequisite for this is that the saliency maps depend on the weights learned by the model. To verify this, Adebayo et al.~\cite{adebayo2018sanity} proposed sanity checks that cascadingly randomize each layer of the network, starting with the output layer. If the saliency maps depend on the learned weights, then this will lead to increasingly different visualizations. Sixt et al.~\cite{sixt2020} applied the sanity checks to several LRP variants, but they have never been used on our $argmax$-rule. Therefore, we implemented the sanity checks\footnote{The code we used for the sanity checks can be found here: \url{https://github.com/HuTobias/HIGHLIGHTS-LRP/tree/master/sanity_checks}} for our $argmax$-rule and tested it on the regular Pacman agents described in section \ref{sec:study_design}. An example of these tests for a single state is shown in Fig.~\ref{fig:sanity_vis}. \begin{figure}[ht] \centering \begin{minipage}{0.15\linewidth} \centering \includegraphics[width=\linewidth]{figures/argmax/sanity/raw_argmax.png} original \end{minipage} \begin{minipage}{0.15\linewidth} \centering \includegraphics[width=\linewidth]{figures/argmax/sanity/fc2.png} $\operatorname{\textit{fc}}_2$ \end{minipage} \begin{minipage}{0.15\linewidth} \centering \includegraphics[width=\linewidth]{figures/argmax/sanity/fc1.png} $\operatorname{\textit{fc}}_1$ \end{minipage} \begin{minipage}{0.15\linewidth} \centering \includegraphics[width=\linewidth]{figures/argmax/sanity/conv3.png} $\operatorname{\textit{conv}}_3$ \end{minipage} \begin{minipage}{0.15\linewidth} \centering \includegraphics[width=\linewidth]{figures/argmax/sanity/conv2.png} $\operatorname{\textit{conv}}_2$ \end{minipage} \begin{minipage}{0.15\linewidth} \centering \includegraphics[width=\linewidth]{figures/argmax/sanity/conv1.png} $\operatorname{\textit{conv}}_1$ \end{minipage} \caption{Example of how the LRP-argmax saliency maps change when the network's layers are randomized cascadingly, beginning with the output layer $\operatorname{\textit{fc}}_{2}$.} \label{fig:sanity_vis} \end{figure} To measure how similar two saliency maps are, we use three different metrics proposed by Adebayo et al.~\cite{adebayo2018sanity}: Spearman rank correlation, structural similarity (ssim) and Pearson correlation of the histogram of gradients. To account for a possible change of sign in the saliency maps, we adopt an approach by Sixt et al.~\cite{sixt2020} and use the maximum similarity of the original and the inverted saliency map. That means that for two saliency maps $S,S^{'} \in \mathbb{R}^{m \times n \times c}$ and a similarity measure $sim: \mathbb{R}^{m \times n \times c} \times \mathbb{R}^{m \times n \times c} \rightarrow \mathbb{R}$, we calculate the actual similarity with \begin{equation} \max (sim(S,S^{'}),sim(\mathbf{1}-S,S^{'})) \end{equation} where $\mathbf{1} \in \mathbb{R}^{m \times n \times c}$ denotes the array filled with ones. Fig.~\ref{fig:sanity_graphs} shows the average similarities per randomized layer for a gameplay stream of 1000 states.
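As an illustration, the following minimal sketch implements this inversion-aware comparison (assuming saliency maps scaled to $[0,1]$; shown here with the Spearman rank correlation from SciPy, the other two metrics are plugged in the same way):
\begin{verbatim}
from scipy.stats import spearmanr

def inversion_aware_similarity(S, S_prime, sim):
    # max(sim(S, S'), sim(1 - S, S')) as in the equation above
    return max(sim(S, S_prime), sim(1.0 - S, S_prime))

spearman = lambda A, B: spearmanr(A.ravel(), B.ravel()).correlation
# similarity = inversion_aware_similarity(S, S_prime, spearman)
\end{verbatim}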
\begin{figure}[ht] \centering \includegraphics[width=0.3\linewidth]{figures/argmax/sanity/spearman.png} \includegraphics[width=0.3\linewidth]{figures/argmax/sanity/pearson.png} \includegraphics[width=0.3\linewidth]{figures/argmax/sanity/ssim.png} \caption{The average similarities between saliency maps for the fully trained agent and agents where the layers have been randomized cascadingly, starting with the last layer $\operatorname{\textit{fc}}_{2}$. The values are based on a stream of 1000 actions in the Atari 2600 Pacman game.} \label{fig:sanity_graphs} \end{figure} The relatively high values for the structural similarity (ssim) can be explained by the large number of coinciding zeros in all saliency maps. Apart from that, we see the same trends already observed by Sixt et al.~\cite{sixt2020} and Adebayo et al.~\cite{adebayo2018sanity}: the saliency maps do depend on the learned weights, but the fully connected layers are not sufficiently analyzed. As a consequence, the saliency maps are not class discriminatory. However, class discriminatory saliency maps often come with other drawbacks, like being noisy \cite{sixt2020} or not analyzing all layers \cite{selvaraju2016grad-cam}. \section{Conclusion} \label{sec:conclusion} This work is a first step toward the development of combined explanation methods for reinforcement learning (RL) agents that provide users with both global information regarding the agent's strategy and local information regarding its decision-making in specific world-states. To this end, we present a joint global and local explanation method, building on our prior work on strategy summaries (HIGHLIGHTS-DIV) and on generating saliency maps for deep RL agents (LRP-argmax). This method is easily adaptable to other global and local algorithms. To evaluate this combined global and local explanation method, as well as the contribution of each explanation type, we conducted a user study. In this study, we examined participants' mental models through a retrospection task{} and used an agent comparison task{} to investigate whether their trust was appropriate given the agents' capabilities. Regarding the usefulness of \emph{global strategy summaries}, our results show that HIGHLIGHTS-DIV summaries (1) help to establish appropriate trust in agents based on neural networks (extending prior results about classic RL agents \cite{amir18highlights}) and (2) improve participants' mental models of those agents. The evaluation of \emph{local explanations} in the form of LRP saliency maps reveals strengths as well as weaknesses. On the one hand, our analysis shows that reinforcement learning comes with additional usability challenges not present in previously evaluated image classification tasks. First, presenting saliency maps on videos instead of static images \cite{anderson2019mere-mortals,alqaraawi2020evaluating} overwhelms users with a lot of information in a short amount of time and increases the risk of overlooking crucial information. Second, compared to more intuitive image classification tasks \cite{alqaraawi2020evaluating,selvaraju2016grad-cam}, the average user lacks the experience to correctly infer how the highlighted regions affect the agent's long-term sequential decision-making. On the other hand, the results indicate that saliency maps have the potential to (1) extend users' mental models beyond strategy summaries by providing insight into what information the agent used and (2) improve users' ability to choose the better agent even with random summaries.
Taken together, the results support a combination of local and global explanations, since participants in the combined explanation condition received the highest scores during our survey. However, our evaluation suggests that simply highlighting pixels that are relevant for the agent's decision is insufficient for RL agents and that more work is needed to increase the accessibility of saliency maps. \paragraph{Acknowledgements} This work was partially funded by a J.P. Morgan AI Faculty Research Award and the Deutsche Forschungsgemeinschaft (DFG) under project DEEP (Grant Number 392401413). We thank Otto Grothe for his help with analyzing the participants' textual responses. \section{Discussion \& Future Work} \label{sec:discussion} With the increasing use of RL agents in high-stakes domains, there is a growing need to develop and understand methods for describing the behavior of these agents to their human users. In this paper, we explored the combination of global information describing agent behavior, in the form of strategy summaries, with local information in the form of saliency maps. To this end, we augmented HIGHLIGHTS-DIV~\cite{amir18highlights} summaries, which select important and diverse states (adapted to DQN agents), with saliency maps generated using the LRP-argmax algorithm~\cite{huber2019enhancing}. We implemented the combined approach in the Atari Pacman environment, and evaluated the separate and joint benefits of showing users global and local information about the agent. We used two types of tasks: a retrospection task{} about the agent's strategy and an agent comparison task{}. \paragraph{Strategy summarization} The results of this study reinforce our prior findings~\cite{amir18highlights} showing that summaries generated by HIGHLIGHTS-DIV lead to significantly improved performance of participants in the agent comparison task{} compared to random summaries, and show that this result generalizes to RL agents based on neural networks. Furthermore, they show that HIGHLIGHTS-DIV summaries were more useful for analyzing agent strategies and were preferred by participants. Overall, in our study, the choice of states that are shown to participants was more important than the inclusion of local explanations in the form of saliency maps. \paragraph{Limitations of saliency maps} With respect to the addition of saliency maps, we found mixed results. In contrast to previous studies about saliency maps for image classification tasks, which found weak positive effects for saliency maps \cite{alqaraawi2020evaluating,selvaraju2016grad-cam}, there were no significant differences between the saliency and non-saliency conditions in our study. When examining participants' answer justifications, we observed that most participants did not mention utilizing the saliency maps, which may provide a partial explanation for their lack of contribution to participants' performance. Especially in the agent comparison task{}, participants seldom mentioned the saliency maps even though there was a marginally significant difference between the performance of participants in condition \emph{R}{} and in condition \emph{R+S}{}. Participants' comments also reflect their dissatisfaction with saliency maps, e.g., ``I do not believe that the green highlighting was useful or relevant'' and ``The green highlights didn't seem to help much''. This suggests that saliency maps in their current form may not be accessible enough to the average user.
Based on the comments from the participants and the in-depth feedback we received in pilot studies, we note some possible accessibility barriers. First, when saliency maps are shown as part of a video, it may be difficult for users to keep track of the agent's attention, compared to displays of static saliency maps, as done in previous user studies \cite{selvaraju2016grad-cam,anderson2019mere-mortals,alqaraawi2020evaluating}. For instance, one participant reported that ``[i]t wasn't so easy to see the green area, it needed to be bigger or more prominent to be of more use.'' We tried to take measures against this by using a selective saliency map generation algorithm (LRP-argmax) and interpolating between selected saliency maps to reduce the amount of information, as well as allowing participants to pause the video at any time. However, this does not seem to be enough. Second, participants were not accustomed to interpreting saliency maps, which can be non-intuitive to non-experts. One participant even commented that ``[he/she] feel[s] as though this came with somewhat of a learning curve''. In our pilot studies, we noticed that people who were familiar with reinforcement learning or deep learning could more easily interpret saliency maps than those who were not. For example, some participants said that they thought the agent was good when its attention was spread to different areas because they inferred it considered more information, while in fact the agent was attending to different regions because it had not yet learned what the important information is. Similarly, one study participant commented: ``...I don't know if I would prefer an AI that `looked' around more at the board, or focused more in a small area to accomplish a task''. It is possible that prior studies which used saliency maps for interpreting image classification~\cite{alqaraawi2020evaluating,selvaraju2016grad-cam} did not encounter this problem due to the more intuitive nature of the task. Interpreting a visual highlighting for image classification only requires identifying objects that contributed to the classification, while in RL there is an added layer of complexity, as interpretation also requires making inferences regarding how the highlighted regions affect the agent's long-term sequential decision-making policy. Finally, while the sanity checks reported in Section \ref{sec:sanity_checks} showed that our saliency maps do reflect what the network learned, they were also found to be indifferent to specific actions. Since prior studies have shown that users find class discriminatory explanations more useful for understanding agents' decisions~\cite{goudet2018functioanlcausal,LopezPaz2017causalsignals,byrne2019humanreasoning}, the lack of discrimination between certain actions can be detrimental to the usefulness of saliency maps. \paragraph{Potential of saliency maps} Regarding the potential of saliency maps, we made encouraging observations. Even though saliency maps did not significantly increase participants' scores in the simple object selection part of the retrospection task{}, they did result in improved scores in the textual strategy description. The difference between our HIGHLIGHTS-DIV conditions \emph{H+S}{} and \emph{H}{} is similar to the one observed by Anderson et al. \cite{anderson2019mere-mortals} (p=0.086 compared to our p=0.088), who also evaluated participants' mental models for RL agents utilizing a strategy description task.
The poor result of our random condition \emph{R+S}{} can be explained by the fact that Anderson et al. implicitly chose meaningful states, which we only did with our global explanation method in the HIGHLIGHTS-DIV conditions. A possible reason for the difference between the object selection and the strategy description sub-tasks is the higher complexity of strategy description. It requires participants to not only identify the correct objects but also to describe how they are used. Under this assumption, the increased performance of participants in condition \emph{H+S}{} suggests that saliency maps were useful for putting the objects in the correct context. For example, participants' textual descriptions showed that, while the non-saliency groups knew that Pacman is important (most likely because it is important to them as players), they did not identify it as a central source of information for the agent. Second, we observed in the agent comparison task{} that saliency maps alone improved participants' ability to place appropriate trust in different agents when comparing conditions \emph{R}{} and \emph{R+S}{}. There, performance was comparable to the performance of participants in the HIGHLIGHTS-DIV conditions, \emph{H}{} and \emph{H+S}{}. This indicates that there is valuable information for this kind of task within saliency maps. The lack of improvement of condition \emph{H+S}{} compared to \emph{H}{} might be explained by the accessibility issues of saliency maps mentioned earlier. When presented with strategy summaries, participants may have had less reason to rely on the non-intuitive saliency maps. \paragraph{Combination of local and global explanations} It is important to note that the positive effects of saliency maps in the retrospection task{} were only visible in the HIGHLIGHTS-DIV conditions \emph{H}{} and \emph{H+S}{}, reinforcing our claim that the choice of states is crucial for explaining RL agents. Therefore, even if the limitations of saliency maps mentioned above are addressed, the potential benefits might only become visible, and would likely be reinforced, in combination with strategy summarization techniques. We note that studies that evaluate local explanations typically implicitly make a global decision about which states to present local explanations for \cite{anderson2019mere-mortals,madumal2019explainable}. Our results suggest that this implicit choice may have a substantial impact on participants' understanding of agent behavior. In the retrospection task{}, we observed that local explanations in the form of saliency maps were useful for identifying what objects the agent attends to (see Fig.~\ref{fig:retro_pacman}), while strategy summaries were more useful for identifying the agent's goals (see Fig.~\ref{fig:retro_select_goal}). This was reflected by participants' utterances such as ``The agent seemed to be paying attention to the area directly in front of it and partly to the areas directly to each side.'' and ``Pacman wanted those ghosts! His goal was to move as fast as he could towards them.'', and suggests that the two approaches are indeed complementary. The local saliency maps contribute to users' understanding of the agent's \emph{attention}, as they reflect the information the agent attends to, while strategy summaries contribute to users' understanding of the agent's \emph{intentions}, as they reflect how the agent acts.
Taken together, our results suggest that there is potential for a combined explanation framework in the future, if the accessibility issues of saliency maps are addressed. \paragraph{Study limitations} Our study has several limitations. First, we used a single domain in our user study. However, other recent work has used strategy summaries similar in spirit to HIGHLIGHTS-DIV in another domain~\cite{sequeira2019interestingness}, and several works have used saliency maps in other domains (e.g., several Atari games including Pong and Space Invaders were used by Greydanus et al.~\cite{greydanus2018}). Second, while our combined explanation approach is easily adaptable to other global explanation methods which choose an informative subset of states, and local methods that highlight relevant information in those states, our study only explored one combination of a particular global explanation method and a particular local explanation method. We chose the HIGHLIGHTS-DIV summary method since strategy summary approaches that are based on policy reconstruction require making various assumptions about people's computational models, which differ depending on context~\cite{lage2019exploring}. We chose saliency maps as a local method both because they are visual and can thus be integrated with a visual summary, and because other methods typically require additional models or assumptions (e.g., causal explanations~\cite{madumal2019explainable} require a causal graph of the domain). The specific choice of the LRP-argmax algorithm was motivated by its selectivity, which reduces the amount of information that participants have to process. The accessibility problems of saliency maps we identified were mainly related to the presentation of the information. This indicates that simply highlighting how relevant parts of the input are for the prediction of an agent would be insufficient even when based on other saliency map algorithms. \paragraph{Future work} There are several directions we intend to explore in future work. First, as discussed earlier, there is a need to make saliency maps more understandable to users. To this end, we plan to augment saliency maps with textual explanations that help users interpret the information correctly, similar to how Rabold et al.~\cite{rabold2019enriching} did with LIME explanations. Specifically, we aim to train a machine learning model on descriptions written by domain experts confronted with the combination of HIGHLIGHTS-DIV and saliency maps presented in this work. Furthermore, we plan to build on our previous work \cite{weitz2019doyou} and explore the presentation of those textual explanations through virtual agents. Second, we plan to explore interaction approaches that involve the user in the process, e.g., by only showing local information when the user asks for it, as we did in the context of cooperative annotation \cite{baur2020explainable}. This could reduce cognitive load while increasing the user's attention to the local information when it is needed. Finally, to verify that our results generalize beyond simulated environments, we would like to conduct user studies in real-world domains such as healthcare. Explainability is crucial in AI systems deployed in the medical field (e.g., pain classification~\cite{weitz2019deep}) since possible errors could lead to dire consequences.
RL methods face additional challenges and requirements in the healthcare domain, where random exploration of the state space is not possible and evaluation is challenging~\cite{gottesman2019guidelines,gottesman2018evaluating}, making explanation methods even more important. In recent work, we have begun exploring the use of strategy summaries in healthcare using an HIV simulator~\cite{lage2019exploring}, and intend to further explore this direction. \section{Strategy Summarization} \label{sec:highlights} This section describes the strategy summarization approach to global explanations, and the HIGHLIGHTS algorithm and its extension HIGHLIGHTS-DIV, which we developed and evaluated in prior work~\cite{amir18highlights}. Our formalization of the summarization problem assumes that the agent uses a Markov Decision Process (MDP), where $A$ is the set of actions available to the agent, $S$ is the set of states, $R$: $S \times A \rightarrow \mathbb{R}$ is the reward function, which maps each state and action to a reward, and $Tr$ is the transition probability function, i.e., $Tr(s', a, s)$ defines the probability of reaching state $s'$ when taking action $a$ in state $s$. The agent has a policy $\pi$, which specifies which action to take in each of the states. We formalize the problem of summarizing an agent's behavior as follows: from execution traces of an agent, choose a set $T = \langle t_{1},...,t_{k} \rangle$ of trajectories to include in the summary, where each trajectory is composed of a sequence of $l$ consecutive states and the actions taken in those states $\langle(s_{i},a_{i}),...,(s_{i+l-1},a_{i+l-1})\rangle$. We consider trajectories rather than single states because seeing what action was taken by the agent in a specific state might not be meaningful without a broader context (e.g., watching a self-driving car for one second will not reveal much useful information). Because people cannot feasibly review the behavior of an agent in all possible states, we assume a limited budget $k$ for the size of the summary, such that $|T| = k$. This budget limits the amount of time and cognitive effort that a person needs to invest in reviewing the agent's behavior. There are several factors that could be considered when deciding which states to include in a summary, such as the effect of taking a different action in that state, the diversity of the states that are included in the summary and the frequency at which states are likely to be encountered by the agent. The approach we describe here focuses on the first factor, which we refer to as the ``importance'' of a state. Intuitively, a good summary should provide a person reviewing the summary with a sense of the agent's behavior in states that the person considers important (e.g., when making a mistake would be very costly). The importance of states included in the summary could substantially affect the ability of a person to assess an agent's capabilities. For example, imagine a summary of a self-driving car that only shows the car driving on a highway with no interruptions. This summary would provide people with very little understanding of how the car might act in other, more important, scenarios (e.g., when another car drives into its lane, when there is road construction). In contrast, a summary showing the self-driving car in a range of more interesting situations (e.g., overtaking another car, braking when a person enters the road) would convey more useful information to people reviewing it.
\subsection{The ``Highlights'' Algorithm} \label{sec:alg} The HIGHLIGHTS algorithm generates a summary of an agent's behavior from simulations of the agent in an online manner. It uses the notion of state \emph{importance}~\cite{torrey2013teaching} to decide which states to include in the summary. Intuitively, a state is considered important if taking a wrong action in that state can lead to a significant decrease in future rewards, as determined by the agent's Q-values. Formally, the importance of a state, denoted $I(s)$, is defined as: \begin{equation} \label{eq:importance} I(s)=\max\limits_{a}Q^{\pi}_{(s,a)}-\min\limits_{a}Q^{\pi}_{(s,a)} \vspace{-0.1cm} \end{equation} This measure has been shown to be useful for choosing teaching opportunities in the context of student-teacher reinforcement learning~\cite{torrey2013teaching,amir2016interactive}. Before providing a detailed pseudo-code of the algorithm, we describe its operation at a high level. HIGHLIGHTS generates a summary that includes trajectories that capture the most important states that an agent encountered in a given number of simulations. To do so, at each step it evaluates the importance of the state and adds it to the summary if its importance value is greater than the minimal value currently represented in the summary (replacing the minimal-importance state). To provide more context to the user, for each such state HIGHLIGHTS also extracts a trajectory of states neighboring it and the actions taken in those states. A pseudo-code of the HIGHLIGHTS algorithm is given in Algorithm~\ref{alg:highlights}. Table~\ref{tb:parameters} summarizes the parameters of the algorithm. HIGHLIGHTS takes as input the policy of the agent $\pi$, which is used to determine the agent's actions in the simulation and the state importance values, the budget for the number of trajectories to include in the summary ($k$), and the length of each trajectory surrounding a state ($l$). Each such trajectory includes both states preceding the important state and states that were encountered immediately after it. The number of subsequent states to include is determined by the $statesAfter$ parameter (the number of preceding states can be derived from this parameter and $l$). We also specify the number of simulations that can be run ($numSimulations$), and the minimal ``break'' interval between trajectories ($intervalSize$), which is used to prevent overlaps between trajectories. HIGHLIGHTS outputs a summary of the agent's behavior, which is a set of trajectories ($T$).
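As a small illustration, the importance measure of Equation~\ref{eq:importance} amounts to a one-liner over the agent's Q-values (a sketch with hypothetical names, assuming the Q-values of a state are available as an array):
\begin{verbatim}
import numpy as np

def compute_importance(q_values):
    # I(s) = max_a Q(s, a) - min_a Q(s, a)
    q = np.asarray(q_values)
    return float(q.max() - q.min())
\end{verbatim}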
\begin{table}[ht] \centering \small \resizebox{0.85\columnwidth}{!}{% \begin{tabular}{|p{2.5cm}|p{7cm}|} \hline \textbf{Parameter} & \textbf{Description (value used in experiments)} \\ \hline $k$ & Summary budget, i.e., number of trajectories (5) \\ \hline $l$ & Length of each trajectory (40) \\ \hline $numSimulations$ & The number of simulations run by HIGHLIGHTS (50) \\ \hline $intervalSize$ & Minimal number of states between two trajectories in the summary (50) \\ \hline $statesAfter$ & Number of states following $s$ to include in the trajectory (10) \\ \hline \end{tabular}} \caption{Parameters of the HIGHLIGHTS algorithm and the values assigned to them in the experiments reported in~\cite{amir18highlights} (in parentheses).} \label{tb:parameters} \end{table} The algorithm maintains two data structures: $T$ is a priority queue (line 2), which will eventually hold the trajectories chosen for the summary; $t$ is a list of state-action pairs (line 3), which holds the current trajectory the agent encounters. The procedure runs simulations of the agent acting in the domain. At each step of the simulation, the agent takes an action based on its policy and advances to a new state (line 8). That state-action pair is added to the current trajectory (line 11). If the current trajectory reached its maximal length, the oldest state in the trajectory is removed (lines 9-10). HIGHLIGHTS computes the importance of $s$ based on the Q-values of the agent itself, as defined in Equation~\ref{eq:importance} (line 14). If a sufficient number of states were encountered since the last trajectory was added to the summary, state $s$ will be considered for the summary (the $c==0$ condition in line 17). State $s$ will be added to the summary if one of two conditions hold: either the size of the current summary is smaller than the summary size budget, or the importance of $s$ is greater than the minimal importance value of a state currently represented in the summary (line 17). If one of these conditions holds, a trajectory corresponding to $s$ will be added to the summary. The representation of a trajectory in the summary (a $summaryTrajectory$ object) consists of the set of state-action pairs in the trajectory (which will be presented in the summary), and the importance value $I_{s}$ based on which the trajectory was added (such that it could be compared with the importance of states encountered later). This object ($st$) is initialized with the importance value (line 20) and is added to the summary (line 21), replacing the trajectory with minimal importance if the summary reached the budget limit (lines 18-19). Because the trajectory will also include states that follow $s$, the final set of state-action pairs in the trajectory is updated later (lines 15-16). Last, we set the state counter $c$ to the interval size, such that the immediate states following $s$ will not be considered for the summary. At the end of each simulation, the number of runs is incremented (line 24). The algorithm terminates when it reaches the specified number of simulations.
\begin{algorithm} \SetAlFnt{\small\sf} \DontPrintSemicolon \KwIn{$\pi, k, l, numSimulations, intervalSize, statesAfter$} \KwOut{$T$} $runs = 0$ \\ $T \leftarrow PriorityQueue(k, importanceComparator)$ \\ $t \leftarrow$ empty list \\ $c = 0$ \\ \While {$(runs < numSimulations)$} { $sim = InitializeSimulation()$ \\ \While {$(!sim.ended())$} { $(s,a) \leftarrow sim.advanceState(\pi)$ \\ \If{$(|t| == l)$} { $t.remove()$ } $t.add((s,a))$ \\ \If {$(c>0)$} { $c = c-1$ } $I_{s} \leftarrow computeImportance(\pi,s)$ \\ \If{$(intervalSize - c == statesAfter)$} { $lastSummaryTrajectory.setTrajectory(t)$ \\ } \If{$((|T|<k$ or $I_{s} > minImportance(T))$ and $c==0)$ } { \If{$|T|==k$} { $T.pop()$ } $st\leftarrow$ new $summaryTrajectory(I_{s})$ \\ $T.add(st)$ \\ $lastSummaryTrajectory \leftarrow st$ \\ $c = intervalSize$ \\ } } $runs = runs+1$ } \caption{The HIGHLIGHTS algorithm. } \label{alg:highlights} \end{algorithm} Originally, HIGHLIGHTS was implemented as an online algorithm because it is less costly, both in terms of runtime and in terms of memory usage. In addition, such an algorithm can be incorporated into the agent's own learning process without additional cost. In this paper, we adapt the algorithm to work offline, as described in Section~\ref{sec:implementation}. \subsection{Considering State Diversity: the HIGHLIGHTS-DIV algorithm} \label{sec:algDiv} Because HIGHLIGHTS considers the importance of states in isolation when deciding whether to add them to the summary, the produced summary might include trajectories that are similar to each other. This could happen in domains in which the most important scenarios tend to be similar to each other. To mitigate this problem, we developed a simple extension to the HIGHLIGHTS algorithm, which we call HIGHLIGHTS-DIV. Similarly to HIGHLIGHTS, this algorithm also determines which states to include in the summary based on their importance. However, it also attempts to avoid including sets of very similar states in the summary, thus potentially utilizing the summary budget more effectively. HIGHLIGHTS-DIV takes into consideration the diversity of states in the following way: when evaluating a state $s$, it first identifies the state most similar to $s$ that is currently included in the summary\footnote{We assume that a distance metric to compare states can be defined. This can be done in many domains, e.g., by computing Euclidean distance if states are represented by feature vectors.}, denoted $s'$. Then, instead of comparing the importance of a state to the minimal importance value that is currently included in the summary, HIGHLIGHTS-DIV compares $I_{s}$ to $I_{s'}$. If $I_{s}$ is greater than $I_{s'}$, the trajectory which includes $s'$ in the summary will be replaced with the current trajectory (which includes $s$). This approach allows less important states to remain represented in the summary (because they will not be compared to some of the more important states that differ from them), potentially increasing the diversity of trajectories in the summary and thus conveying more information to users. \subsection{Empirical evaluation of HIGHLIGHTS and HIGHLIGHTS-DIV} We summarize the main results of the study conducted in our previous work, which demonstrated the usefulness of HIGHLIGHTS and HIGHLIGHTS-DIV summaries. For complete details of the study design and its results see Amir \& Amir~\cite{amir18highlights}.
The performance of the basic HIGHLIGHTS algorithm was compared with that of two baselines: (1) random summaries generated by sampling $k$ trajectories uniformly from the agent's execution trace, and (2) summaries generated from the first $k$ trajectories the agent encounters. The task used in the study was identifying the agent that performs better in pairwise comparisons, based on the summaries. Three Ms. Pacman agents of varying quality were trained: a high-quality, a medium-quality and a low-quality agent. This was achieved by varying the number of training episodes. In the first experiment, 40 participants recruited from Amazon Mechanical Turk (23 female, mean age = 35.35, STD = 10.4) were asked to make the pairwise agent comparisons based on summaries generated by either the basic HIGHLIGHTS algorithm or one of the two baselines (Random or First). The study used a within-subject design, such that each participant completed nine comparison tasks showing all combinations of pairs of agents and the summary method (e.g., comparing the high-quality agent to the low-quality agent based on the HIGHLIGHTS summary). In the second experiment, 48 additional participants (25 female, mean age=36, STD=11.6) performed the same task, but this time summaries were generated either by HIGHLIGHTS-DIV, basic HIGHLIGHTS or the random baseline (since the ``first'' baseline led to the worst performance in the first experiment). In both experiments, participants were incentivized to answer correctly as they received a bonus payment depending on their performance. Results aggregated from both experiments are shown in Figure~\ref{fig:highlights_study}. Both HIGHLIGHTS and HIGHLIGHTS-DIV summaries led to significantly improved performance of participants compared to the baselines. HIGHLIGHTS-DIV further led to improved performance compared to HIGHLIGHTS, especially when comparing the medium-quality agent with the high-quality agent, which was the hardest comparison to make as their actual performance did not differ by much. Participants also expressed a subjective preference for HIGHLIGHTS summaries over the baselines. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figures/highlightsExp12agg_all.png} \caption{Correctness rates of participants (aggregated from both experiments) in choosing the better performing agent. The x-axis shows the three different pairwise agent comparison tasks (low-quality agent vs. medium-quality agent, etc.). In all cases, HIGHLIGHTS and HIGHLIGHTS-DIV outperformed the two baselines. HIGHLIGHTS-DIV led to significant improvement over HIGHLIGHTS only in the Medium vs. High agent comparison, which was the most difficult comparison as the two agents were most similar to each other in performance.} \label{fig:highlights_study} \end{figure} \section{Integrating Local and Global Information} \label{sec:implementation} In this section, we describe our integration of the local LRP-argmax saliency maps described in Section \ref{sec:argmax} into the global HIGHLIGHTS-DIV summaries described in Section \ref{sec:algDiv}. To this end, we describe how the agents were trained, what adjustments we made to HIGHLIGHTS-DIV for the deep reinforcement learning algorithm we used, how we generated saliency maps and how the combined information is displayed. \paragraph{Agent training} To evaluate our combined explanation approach, we trained several Pacman agents using the OpenAI baselines \cite{baselines2017} implementation of the DQN-algorithm \cite{mnih2015human}.
The network architecture used in this implementation is described in Section~\ref{sec:argmax_foundations}. The environment we use is the Atari 2600 game MsPacman included in the Arcade Learning Environment (ALE) \cite{bellemare13arcade}, which we refer to in this work as Pacman for simplicity. For each step in a game, the input state consists of the last four frames (the raw pixel values of a single screen of the game) $f_{1}$ to $f_{4}$. Each frame $f_{i}$ is converted to greyscale and scaled down to $84\times84$ pixels. The frames are then stacked to enable the agent to see temporal differences (i.e. movement). The agent chooses an action only every four frames, and this action is repeated during those four frames. In Pacman, the agent has nine different actions to choose from, which correspond to the meaningful actions that can be achieved with an Atari 2600 controller (do nothing, up, down, left, right, up-left, up-right, down-left, down-right). The reward is based on the ALE \cite{bellemare13arcade} reward function, which uses the increase of the in-game score from the beginning of the four frames of a state to the end of those frames. The final reward functions we used are detailed in Section \ref{sec:study_design}, where we describe the agents used in the empirical evaluation. \paragraph{Generating gameplay streams and saliency maps} Since deep neural networks increase the time that the agent needs for each prediction, and the LRP analysis of each decision requires additional computation time, we recorded a stream of $10,000$ steps for each agent and used them to create our summaries. These streams also increase the reproducibility of our experiments\footnote{Since the streams are fairly large we did not upload them. They are available upon request from the authors.}. We computed the average in-game score of each trained agent over the entire stream. This allows us to objectively say which agent achieved the most points during the simulations used for our summaries and therefore gives us a ground truth for the agent comparison task (see Section \ref{sec:study_design}). Since the Atari 2600 version of Pacman does not respond to input for the first 250 frames (empirically tested) after the game starts, we exclude those frames from the streams. Furthermore, we force the agent to repeat the `do nothing' action for a random number of steps between $0$ and $30$ before it is allowed to choose actions based on its policy. This method introduces randomness into the deterministic Pacman game and is also used during training by the DQN algorithm \cite{mnih2015human,baselines2017}. Saliency maps are created using the LRP-argmax algorithm described in Section~\ref{sec:argmax}. \paragraph{Adjustments to HIGHLIGHTS-DIV} For the summaries, we make several adjustments to the HIGHLIGHTS-DIV algorithm described in Section~\ref{sec:algDiv}, to adapt it to the DQN settings. First, we change the way importance is calculated. Instead of using Equation~\ref{eq:importance}, which calculates the importance by comparing the highest with the lowest Q-value, we use the difference between the highest and second highest Q-values.
Let $a^{*} = \operatorname*{arg\,max}_{a}Q^{\pi}_{(s,a)}$ denote the highest-valued action; the modified importance can then be written as: \begin{equation} \label{eq:second_importance} I(s)=\max\limits_{a}Q^{\pi}_{(s,a)}-\max\limits_{a \neq a^{*}}Q^{\pi}_{(s,a)} \end{equation} While examining the gap between the best and worst actions worked well in a simpler Pacman environment in which there were only four possible actions, it did not generalize well to the Atari environment, where there is a larger number of actions. One possible explanation for this is that some of the nine actions of the Pacman environment overlap. For example, ``left'' and ``up-left'' can be used interchangeably in many states. Therefore the agent might ignore some of the actions completely. To verify this, we examined the frequency of choosing each action, and found that two of the three agents we trained were clearly biased against certain actions.\footnote{The results can be seen in \url{https://github.com/HuTobias/HIGHLIGHTS-LRP/tree/master/action_checks}} Therefore, some Q-values are largely uninformed by exploration and might have arbitrarily low values, making the worst Q-value non-informative. For the diversity computation in HIGHLIGHTS-DIV, we use Euclidean distance over the raw $84\times84\times4$ input states. Since we pre-generated a stream of $10,000$ states, we implement an offline version of HIGHLIGHTS-DIV that selects the states for the summary retrospectively from the generated stream. The procedure begins by sorting the states based on their importance values, and adding them to the summary according to this ordering. To reduce the number of overlapping trajectories, we compare each new state with all states in the current summary and their corresponding context states (this makes the offline version equivalent to the HIGHLIGHTS-DIV variant). To find a suitable threshold that determines when a state is too similar to the states that were already selected for the summary, we randomly pick a subset of $1,000$ states from the recorded stream and calculate the similarity between each pair of states in this set. Then, we set the threshold to be a percentile of the distribution of those similarity values. We empirically found (by manually examining a sample of states) that setting the threshold to the $3\%$ percentile led to no obvious duplicate trajectories for any of the agents. \paragraph{Video generation} The videos we generate from the states chosen by the summary show $30$ frames per second. To emphasize that the demonstrations show different trajectories, they are separated by a black screen that appears for $1$ second (inspired by the fade-out effect used in~\cite{sequeira2019interestingness}). To prevent the users from using the in-game score to gauge how good an agent is, we mask the bottom half of the screen with black pixels. In pilot studies, participants complained that the videos were flickering too much. One of the reasons for this is that the Atari 2600 implementation of Pacman does not show every object in every frame, to save computing power. Since we showed all frames in sequence, these objects appeared to blink, which distracted the viewers. To combat this problem, we do not display the current frame $f_{i}$ alone. Instead we display $\max(f_{i},f_{i-1})$, the pixel-wise maximum of the current frame $f_{i}$ and the preceding frame $f_{i-1}$. While this introduces some artifacts (e.g. red pellets showing through blue ghosts) it considerably reduces the flickering.
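A minimal sketch of this de-flickering step (our own illustration; the helper name is hypothetical and the frames are assumed to be held as NumPy arrays):
\begin{verbatim}
import numpy as np

def deflicker(frames):
    # Replace each frame by the pixel-wise maximum of itself
    # and its predecessor, so objects that are rendered only
    # every other frame remain visible.  `frames` is a list
    # of HxWxC uint8 arrays.
    out = [frames[0]]
    for prev, cur in zip(frames, frames[1:]):
        out.append(np.maximum(prev, cur))
    return out
\end{verbatim}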
Another measure we take against this flickering is to interpolate between the different saliency maps instead of showing a completely different saliency map for each frame. Let $f_{1}$ to $f_{4}$ be the four frames of an input state and let $s_{1}$ to $s_{4}$ be the saliency maps for each of these frames that analyze the agent's decision in this state. For $i<4$ the action that Pacman will take after frame $f_{i}$ is not related to the saliency map $s_{i}$, since the agent only decides on a new action every four frames and is still repeating the action that it decided on based on the last state (composed of the four frames before $f_{1}$). Therefore we show the saliency map $s_{4}$ over the frame $f_{4}$, and for the other frames ($i<4$) we interpolate between the last shown saliency map and $s_{4}$. Before this interpolation we normalize the saliency maps to have a maximum of $1$ and a minimum of $0$. We do this jointly over all four saliency maps $s_{1},\ldots,s_{4}$ of a state, to avoid losing information that might be carried in the relative magnitude of relevance values between the frames. Finally, we add the interpolated saliency maps to the green channel of the original screen frame. Our complete implementation can be found at \url{https://github.com/HuTobias/HIGHLIGHTS-LRP}. \section{Introduction} \label{sec:introduction} The maturing of artificial intelligence (AI) methods has led to the introduction of intelligent systems in areas such as healthcare and transportation~\cite{stone2016artificial}. Since these systems are used by people in such high-stakes domains, it is crucial for users to be able to understand and anticipate their behavior. For instance, a driver of an autonomous vehicle will need to anticipate situations in which the car fails and hands over control to her, while a clinician will need to understand the treatment regime recommended by an agent to determine whether it aligns with the patient's preferences. The recognition of the importance of human understanding of agents' behavior, together with the complexity of current AI systems, has led to a growing interest in developing ``explainable AI'' methods~\cite{doshi2017roadmap,gunning2017explainable,aha2017ijcai}. The idea of making AI systems explainable is itself not new, and has been discussed since the early days of expert systems~\cite{swartout1983xplain,chandrasekaran1989explaining}. However, state-of-the-art AI algorithms use more complex representations and algorithms (e.g., deep neural networks), making them harder to interpret. For example, in contrast to classical agent planning approaches such as the belief-desire-intention (BDI) framework~\cite{rao1995bdi} in which the goals of the agent are explicitly defined, current agents often use policies trained using complex reward functions and feature representations that are difficult for people to understand. In this paper, we focus on the problem of describing and explaining the behavior of agents operating in sequential decision-making settings, which are trained in a deep reinforcement learning framework. In particular, we explore the usefulness of \emph{global} and \emph{local} post-hoc explanations~\cite{molnar2019interpretable} of agent behavior. Global explanations describe the overall policy of the agent, that is, the actions it takes in different regions of the state space. An example of such global explanations are strategy summaries~\cite{amir2019summarizing}, which show demonstrations of the agent's behavior in a carefully selected set of world states.
Local explanations, in contrast, aim to explain specific decisions made by the agent. For instance, saliency maps are used to show users what information the agent is attending to~\cite{greydanus2018}. We explore the combination of global and local information describing agent policies. The motivation for integrating the two approaches is their complementary nature: while local explanations can help users understand what information the agent attends to in specific situations, they do not provide any information about its behavior in different contexts. This is reinforced by a previous study conducted by Alqaraawi et al.~\cite{alqaraawi2020evaluating}, who evaluated local explanations and concluded that instance-level explanations alone are not sufficient and should be augmented with global information. Similarly, while demonstrating what actions the agent takes in a wide range of scenarios can provide users with a sense of the overall strategy of the agent, it does not provide any explanations as to what information the agent was considering when choosing how to act in a certain situation. To examine the benefits of these two complementary approaches and their relative usefulness, we integrate strategy summaries with saliency maps. Specifically, we adapt the HIGHLIGHTS-DIV algorithm for generating strategy summaries from our previous work~\cite{amir18highlights} such that it can be applied to deep learning settings, and integrate it with saliency maps that are generated based on Layer-Wise Relevance Propagation (LRP) (using a method we previously published in~\cite{huber2019enhancing}). We combine these two approaches by overlaying the summaries generated by HIGHLIGHTS-DIV with saliency maps showing what the agent attends to. We evaluate this combination of global and local explanations in a user study in which we explore both the benefits of HIGHLIGHTS-DIV summaries and the benefits of adding saliency maps to strategy summaries. Specifically, we compare random summaries and HIGHLIGHTS-DIV summaries, both with and without the addition of saliency maps. Study participants complete two types of tasks requiring the analysis of different agents trained to play the game of Pacman: an agent comparison task{} in which they compare the performance of two agents, and a retrospection task, in which they reflect on an agent's strategy. We chose those tasks to investigate whether the users trusted the right agent and to evaluate their mental models of the agents, respectively. Our results show that participants who were shown HIGHLIGHTS-DIV summaries performed better on both tasks compared to participants who were shown random summaries, and were also more satisfied with HIGHLIGHTS-DIV summaries. We find mixed results with respect to the benefits of adding saliency maps to summaries, which improved participants' ability to identify some aspects of agents' strategies, but in most cases did not lead to improved performance. The paper makes the following contributions: \begin{itemize} \item It demonstrates that the HIGHLIGHTS-DIV algorithm, which had so far only been used with classical reinforcement learning agents, can be applied to deep reinforcement learning agents with slight adjustments. \item It proposes a joint local and global explanation approach for RL agents by integrating LRP saliency maps and HIGHLIGHTS-DIV summaries.
\item It evaluates the combination of global and local summaries in a user study, demonstrating the benefits of HIGHLIGHTS-DIV summaries and the potential benefits and limitations of local explanations based on saliency maps. \end{itemize} The remainder of this article is structured as follows: Section \ref{sec:related_work} reviews prior work on explainable intelligent agents, and Sections \ref{sec:argmax} and \ref{sec:highlights} describe our previous works on local and global explanations, respectively. Section \ref{sec:implementation} details our combined implementation of those two methods, including the adaptation of HIGHLIGHTS-DIV to deep reinforcement learning. We describe the empirical evaluation we conducted in Section \ref{sec:study_design}, and its results are summarized in Section \ref{sec:results}. Finally, we discuss the results of the study and future directions in Section \ref{sec:discussion}, and conclude in Section \ref{sec:conclusion}. \section{Related Work} \label{sec:related_work} In this section, we review related works on explainable AI. We begin with a short review of global and local explanation methods for machine learning models, elaborating on the use of saliency maps, which we also make use of. We then discuss in more depth prior works on global and local explanations of policies of agents operating in sequential decision-making settings such as RL agents. \paragraph{Global and local methods for interpretable machine learning} Broadly, our work relates to the problem of interpretable machine learning, that is, explanations for the decisions of prediction models~\cite{doshi2017roadmap}. Only a few interpretable machine learning approaches provide global explanations, e.g., by showing examples of a set of instances and specifying how they were classified~\cite{ribeiro2016should,kim2016examples} or by generating prototypical images that maximize the activation of specific neurons \cite{simonyan13dicn}. The majority of methods focus on local explanations that explain single decisions of the model. To this end, various methods to measure the relevance of a part of the input for the model's decision have been proposed. For visual input, this information is often displayed as saliency maps that highlight how relevant each pixel is for a particular decision of the agent. Since the input for the Atari agents we use in this study is visual, we will use the term saliency map method even when the same algorithm can also be applied to non-visual input data. Gradient-based saliency map generation methods \cite{simonyan13dicn,springenberg14guided-backprop,sundararajan2017axiomatic,selvaraju2016grad-cam} utilize the derivative with respect to the input to estimate how much a small change in this input's value would change the prediction. Occlusion-based methods~\cite{zeiler14deconv,ribeiro2016should,sixt2020restricting} occlude areas inside the input and measure how much this changes the model's prediction. The idea behind this is to introduce uncertainty to the occluded area and to see how much the model is influenced by the loss of information in that area. Occlusion-based methods often come with the advantage of being independent of the model's structure, but with the drawback of not being as precise as some model-specific methods.
In contrast to the aforementioned methods for generating saliency maps, Bach et al. \cite{bach2015lrp} proposed Layer-wise Relevance Propagation (LRP), which uses the intermediate activations of the neurons during the forward pass to estimate the contribution of each input pixel to the prediction. Common to all of these interpretable ML approaches is that they focus on one-shot decisions. Thus, they do not fully address the problem of explaining behavior in sequential decision-making settings, where the agent takes actions, earns rewards and affects the state of the world. \paragraph{Local explanations of agent behavior} Several approaches have been introduced for explaining specific decisions in the context of Markov Decision Processes (MDP). Some works attempt to provide justifications for a policy~\cite{khan2009minimal,khan2011automatically,dodson2011natural} by making statements about particular action choices (e.g. an action was chosen because it will lead to a state that has higher value with higher probability). Others provide causal explanations by integrating a causal structure of the domain~\cite{seegebarth2012making,vanderwaa2018contrastive}. Krarup et al.~\cite{krarup2019model} propose methods for generating contrastive explanations to explain action choices. In this paper, we focus on the use of saliency maps for local explanations. Several works have implemented saliency maps in the context of Deep Reinforcement Learning (DRL). Because many DRL algorithms utilize CNNs, the methods covered in the previous paragraph can be applied to those algorithms directly. Zahavy et al. \cite{zahavy2016graying} and Wang et al. \cite{wang2015dueling}, for example, used gradient-based saliency maps on traditional and Dueling Deep Q-Network (DQN) algorithms. Greydanus et al.~\cite{greydanus2018} and Iyer et al.~\cite{iyer2018transparency} propose novel occlusion-based algorithms, where Greydanus et al. use Gaussian blur instead of complete occlusion and Iyer et al. utilize template matching to identify objects in the input. This allows them to train a new agent on this additional information and then selectively occlude those objects. Lapuschkin et al.~\cite{lapuschkin2019} used LRP to visualize the classical DQN architecture. In this paper, we use a more selective LRP variant which we tested on RL agents in our previous work \cite{huber2019enhancing} (see Section \ref{sec:argmax}). \paragraph{Global explanations of agent behavior} Several global explanation methods describing what actions an agent takes in different states have been proposed. Hayes et al.~\cite{hayes2017improving} developed a system that allows users to ``debug'' an agent's strategy by querying its decisions in situations specified by the user. In contrast to this approach, strategy summarization methods select a set of important states to share with the user, such that the user does not need to query the agent with respect to specific states. We note that the two approaches are complementary. Booth et al.~\cite{booth2019evaluating} compile logical formulas that specify when certain behaviors occur, e.g., by stating for which region of the state space an agent will perform a particular action. However, this approach requires a state representation that is understandable to the user, which may not be the case in many complex domains, especially when DRL is used.
Our work takes the approach of summarizing agent policies (which we refer to as ``strategy summaries'') by demonstrating the behavior of an agent in a subset of world states which are considered important by the agent~\cite{amir2019summarizing,amir2018agent}. Several methods have been proposed for selecting the subset of demonstrations to present in a summary. Some methods choose states that best enable the reconstruction of the original policy, using computational models such as inverse reinforcement learning (inferring the agent's reward function) or imitation learning (constructing a mapping from states to agents' actions)~\cite{huang2017enabling,lage2019exploring}. An alternative approach uses heuristics for identifying ``interesting'' situations. The HIGHLIGHTS-DIV algorithm we utilize falls into this category, as it selects states based on the distribution of Q-values of different actions. We chose to use this approach since it does not make any assumptions about people's reasoning, is computationally simpler and was shown to improve users' understanding of agent behavior. Similar approaches have been developed in parallel~\cite{huang2018establishing,sequeira2019interestingness}, varying in the specific formulation of the interestingness criteria used to determine which states to include in the summary. Another recent line of work explored the problem of generating plans that are more understandable to people~\cite{kulkarni2019explicable,chakraborti2019plan,cashmore2019towards}. The idea underlying this approach is that by having a model of human plans in a domain, it is possible to generate plans that achieve the desired goal while being as consistent as possible with people's mental models. However, in contrast to the strategy summarization approach, these approaches have only considered goal-based plans for short-term tasks. Furthermore, they require a model of how people plan in the domain, which might not always be feasible to obtain. \paragraph{Evaluation of explanation methods for RL agents} Some recent user studies examined the use of saliency maps and strategy summaries to explain the behavior of RL agents to people. Alqaraawi et al.~\cite{alqaraawi2020evaluating} and Selvaraju et al.~\cite{selvaraju2016grad-cam} found that participants who saw saliency maps were able to predict the decision of an image classification model better than participants who did not see them. However, the participants were still only correct in about 60\% of the cases, and Alqaraawi et al. proposed to look beyond instance-level explanations in the future. For actual RL agents, Iyer et al.~\cite{iyer2018transparency} and Anderson et al.~\cite{anderson2019mere-mortals} also used an action prediction task to evaluate saliency maps but found no clear advantage of saliency maps. In addition to the prediction task, Anderson et al. used a retrospection task{} to get an even better understanding of participants' mental models and, in addition to saliency maps, investigated reward decomposition \cite{erwig2018explaining} and a combination of both methods. Here, they found significant positive effects for reward decomposition and the combined approach, and a marginally significant (p = 0.086) effect in favor of saliency maps. Strategy summaries have been evaluated using several different tasks. Huang et al.~\cite{huang2017enabling} and Lage et al.~\cite{lage2019exploring} asked participants to predict what actions an agent would take based on summaries optimized for policy reconstruction.
Their results show that summary methods that better match people's computational models lead to improved action prediction, but that people may use different models in different contexts. Summaries generated by a variety of interestingness criteria were shown to improve people's ability to identify regions of the state space in which an agent spends more time and regions of the state space in which an agent requires additional training~\cite{sequeira2019interestingness}. Importance-based summaries (e.g. HIGHLIGHTS-DIV) were shown to improve people's ability to identify the better performing agent in an agent comparison task{}~\cite{amir18highlights} and their ability to decide whether to trust an agent in specific world states~\cite{huang2018establishing}. In sum, this work extends the existing state of the art in explanations of RL agents, by proposing an integrated global and local explanation method, which enhances HIGHLIGHTS-DIV summaries (global) with LRP saliency maps (local), and conducting a user study to examine the joint and separate contributions of the local and global information to people's understanding of the behavior of RL agents. \section{Results} \label{sec:results} In this section, we report the results of our study. We first describe the characteristics of the participant population with respect to their AI experience, attitude towards AI and Pacman experience. Then we assess the main hypotheses (H1--H4; results summarized in Table~\ref{tb:p_values}) and further provide a descriptive analysis of additional variables such as participants' confidence and an analysis of mistakes. \paragraph{AI and Pacman experience} We verified that participants in the different conditions did not differ much in their AI experience and views, or in their experience with the game Pacman. To this end we asked them when they last played Pacman, and across all four conditions the majority of participants answered: `I played Pacman more than 5 years ago'. After receiving a short description of what AI is (using a formulation based on Russell~\cite{russell2016artificial}), 104 participants stated that they had experience with AI. The exact kind of experience ranged from `I know AI from the media' (78 participants) to `I do research on AI related topics' (14 participants). On average the users had a positive attitude towards AI (mean of $3.95$ on a 5-point Likert scale). There were no meaningful differences between the conditions (see \ref{appendix:demographics} for more details). \paragraph{(H1) Participants shown HIGHLIGHTS-DIV summaries performed better than participants shown random summaries} Participants' correctness rates for the agent comparison task{} are shown in Figure~\ref{fig:total_score}(b). These results support H1, which states that HIGHLIGHTS-DIV summaries will lead to improved performance in both the agent comparison task{} and the retrospection task{}. The exact definition of performance per task is described in more detail in Section \ref{sec:analysis}. Specifically, in the agent comparison task{} we find that participants in condition \emph{H}{} significantly outperformed participants in condition \emph{R}{} (\emph{H}{}: mean=2.1, 95\% CI=[1.83, 2.33], \emph{R}: mean=1.63, 95\% CI=[1.34, 1.91], Mann-Whitney test U=334.5, $p=0.014$, $r_{rb}${}=0.3)\footnote{Here 95\% CI is the 95\% confidence interval and $r_{rb}${} is the rank-biserial correlation.}.
While participants in the \emph{H+S}{} condition achieved higher mean correctness rates than participants in the \emph{R+S}{} condition, this difference is not statistically significant (\emph{H+S}: mean=0.71, 95\% CI=[0.6, 0.82], \emph{R+S}: mean=0.65, 95\% CI=[0.54, 0.75], Mann-Whitney test U=391, $p=0.180$, $r_{rb}${}=0.13). Similarly, participants' average explanation satisfaction ratings, shown in Fig.~\ref{fig:total_satisfaction}(b), indicate that participants in condition \emph{H}{} were more satisfied with the videos they received than the other participants. However, this difference is not significant (see Table \ref{tb:p_values}). \begin{table} \begin{tabular}{l | l | C |C |C |C} \textbf{Task} & \textbf{Variable} & \multicolumn{2}{l|}{ \parbox{0.25\linewidth}{\textbf{Effect of strategy summarization:}}} & \multicolumn{2}{l}{\parbox{0.25\linewidth}{\textbf{Effect of saliency maps:}}} \\ & & \emph{H}{} > \emph{R}{} & \emph{H+S}{} > \emph{R+S}{} & \emph{R+S}{} > \emph{R}{} & \emph{H+S}{} > \emph{H}{} \\ \hline \multirow{3}{*}{\parbox{0.17\linewidth}{retrospection task{}}} & score & 0.008^{*} & 3.3e-05^{*} & 0.965 & 0.514 \\ & satisfaction & 0.021^{*} & 0.035^{*} & 0.677 & 0.710 \\ & text score & & & & 0.088^{\dagger} \\ \hline \multirow{2}{*}{\parbox{0.17\linewidth}{agent comparison task{}}} & score & 0.014^{*} & 0.180 & 0.062^{\dagger} & 0.307\\ & satisfaction & 0.147 & 0.235 & 0.627 & 0.833 \\ \end{tabular} \caption{Summary of all significance tests (calculated with Mann-Whitney tests). The $^{*}$ denotes statistically significant differences and $^{\dagger}$ denotes a p-value $<0.1$.} \label{tb:p_values} \end{table} \begin{figure} \small \centering \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/retroScoreTotal.png} (a) Total score (summed over all three agents) for the object selection in the retrospection task{}. The scoring system is described in Section \ref{sec:analysis}. \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/TrustPercentCorrect.png} (b) Number of correct agent selections in the agent comparison task{} (out of three selections). \end{minipage} \caption{Comparison of participants' average performance in each task, by condition. Participants in the HIGHLIGHTS conditions \emph{H}{} and \emph{H+S}{} outperformed the random conditions \emph{R}{} and \emph{R+S}{}. Saliency maps only had a slight positive effect when added to random summaries in the agent comparison task{}.} \label{fig:total_score} \end{figure} \begin{figure} \centering \small \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/explSatisfactionRetroAvg.png} (a) Participants' satisfaction in the retrospection task{} averaged over all explanation satisfaction questions. \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/explSatisfactionTrustAvg.png} (b) Participants' satisfaction in the agent comparison task{} averaged over all explanation satisfaction questions. \end{minipage} \caption{Comparison of participants' average explanation satisfaction in each task, by condition. Each participant rated their agreement with several statements adapted from the explanation satisfaction questions proposed by Hoffman et al.~\cite{hoffman2018metrics} on a 5-point Likert scale (see Section \ref{sec:main_tasks}).
Participants' final ratings were averaged over all those ratings, with the ratings of the negative statements reversed. Overall, participants in the HIGHLIGHTS conditions \emph{H}{} and \emph{H+S}{} rated the explanations highest.} \label{fig:total_satisfaction} \end{figure} With respect to participants' performance during the retrospection task{}, we find even stronger results (Fig.~\ref{fig:total_score}(a)) than in the agent comparison task{}, further supporting H1. Here too, participants in condition \emph{H}{} obtained a higher score in the object selection sub-task than participants in condition \emph{R}{} (\emph{H}: mean=2.5, 95\% CI=[1.89, 3.03], \emph{R}: mean=1.5, 95\% CI=[0.92, 2.06], Mann-Whitney test U=346.5, $p=0.008$, $r_{rb}${}=0.34) and participants in the \emph{H+S}{} condition received a higher score than participants in the \emph{R+S}{} condition (\emph{H+S}: mean=2.55, 95\% CI=[2.02, 3.06], \emph{R+S}: mean=0.73, 95\% CI=[0.13, 1.31], Mann-Whitney test U=206.5, $p=0.00003$, $r_{rb}${}=0.58). We found analogous significant differences in participants' explanation satisfaction during the retrospection task{} (Fig.~\ref{fig:total_satisfaction}(a)). Here, participants in condition \emph{H}{} were more satisfied than participants in condition \emph{R}{} (\emph{H}: mean=3.63, 95\% CI=[3.35, 3.88], \emph{R}: mean=3.17, 95\% CI=[2.82, 3.5], Mann-Whitney test U=373.0, $p=0.021$, $r_{rb}${}=0.29) and participants in the \emph{H+S}{} condition were more satisfied than participants in the \emph{R+S}{} condition (\emph{H+S}: mean=3.52, 95\% CI=[3.25, 3.78], \emph{R+S}: mean=3.12, 95\% CI=[2.81, 3.43], Mann-Whitney test U=364.5, $p=0.035$, $r_{rb}${}=0.27). \paragraph{(H2) Adding saliency maps improved performance in some areas depending on the task} There were no significant differences supporting our second hypothesis H2, which predicted that adding saliency maps would improve participants' performance in both tasks. Nevertheless, we report two positive effects of saliency maps that are only marginally\footnote{In accordance with convention (Vogt et al.~\cite{vogt2005dictionary}), we use \emph{marginally significant} to describe $0.05 \leq p < 0.1$} significant and which might guide future research in this area. For the agent comparison task{}, we find that the saliency maps only improved performance when added to random summaries (\emph{R}: mean=0.54, 95\% CI=[0.45, 0.64], \emph{R+S}: mean=0.65, 95\% CI=[0.54, 0.75], Mann-Whitney test U=390.5, $p=0.062$, $r_{rb}${}=0.21). Fig.~\ref{fig:total_score}(a) shows that the saliency maps did not help participants identify the most important objects in the retrospection task{}. However, the summative content analysis of participants' textual descriptions of the agents' strategies, shown in Fig.~\ref{fig:text_score}, indicates that saliency maps helped participants to correctly describe how the agents use those objects. The descriptions of the agents' strategies written by participants in condition \emph{H+S}{} received a higher score than the ones by participants in condition \emph{H}{} (\emph{H}{}: mean=1.50, 95\% CI=[0.97, 2.0], \emph{H+S}{}: mean=2.13, 95\% CI=[1.55, 2.71], Mann-Whitney test U=400, $p=0.088$, $r_{rb}${}=0.195).
\paragraph{(H3 + H4) The effect of the summary generation method was greater than that of adding saliency maps} We hypothesized that the summary generation method would affect the performance of participants more than the addition of saliency maps in the agent comparison task{} (H3), and that the saliency maps would have a greater effect than the summary method in the retrospection task{} (H4). The study results support H3: we found that participants shown HIGHLIGHTS-DIV summaries significantly outperformed participants shown random summaries in the agent comparison task{}, while adding saliency maps only improved performance for the random summaries, and to a lesser extent. For selecting the most important objects for the agent's strategy in the retrospection task{}, the addition of saliency maps did not improve performance, while HIGHLIGHTS-DIV summaries did improve performance compared to the random summaries. Therefore we reject H4, even though the results shown in Fig.~\ref{fig:text_score} indicate that saliency maps improved the textual descriptions of the agent's strategy written by participants in \emph{H+S}{} compared to \emph{H}{}. \begin{figure} \centering \begin{minipage}{0.6\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/text/score_total.png} \end{minipage} \caption{Participants' total score for their textual descriptions of the agents' strategies during the retrospection task{} (summed over all three agents). The scoring function is described in Section \ref{sec:analysis}. The descriptions of participants in the HIGHLIGHTS-DIV conditions \emph{H}{} and \emph{H+S}{} received a higher score than those of participants in the random conditions. The addition of saliency maps (\emph{H+S}{}) slightly improved this effect further.} \label{fig:text_score} \end{figure} \begin{figure} \centering \footnotesize \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/retroPacmanTotal.png} (a) Selections of Pacman during the object selection per condition. \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/text/focus_on_Pacman_total.png} (b) Mentions of Pacman's vicinity in the descriptions of the agents' strategies per condition. \end{minipage} \caption{The average number of times that participants correctly selected Pacman during the object selection (a), or referred to its vicinity in their textual descriptions (b) of the agents' strategies (summed over all three agents). The results indicate that saliency maps help the participants to identify what information the agents use.} \label{fig:retro_pacman} \end{figure} In line with Hypothesis H4.1, Fig.~\ref{fig:retro_pacman} indicates that the improvement of the descriptions of the agents' strategies mainly stems from participants in the saliency groups \emph{R+S}{} and \emph{H+S}{} identifying that the agent mostly paid attention to the vicinity of Pacman. This effect was not as strong in the object selection question, since it did not capture the participants' reasoning. Sub-Hypothesis H4.2 stated that strategy summarization would help participants identify the goals of the agents. The results shown in Fig.~\ref{fig:retro_select_goal} support this hypothesis, since participants in the HIGHLIGHTS-DIV conditions \emph{H}{} and \emph{H+S}{} identified the correct goals of the agent more often.
\begin{figure} \centering \footnotesize \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/retroGoalTotal.png} (a) Selections of the agent's specific goals in the object selection, per condition. \end{minipage} \begin{minipage}{0.48\linewidth} \centering \includegraphics[width=\linewidth]{figures/results/text/goal_total.png} (b) Mentions of the agent's specific goals in the strategy descriptions, per condition. \end{minipage} \caption{The number of times that participants identified the agent's specific goal in the object selection (a) and strategy description (b) components of the retrospection task{}. The results are in line with Hypothesis H4.2 that strategy summarization helps to identify the agents' goals.} \label{fig:retro_select_goal} \end{figure} \paragraph{Participants' Justifications} Across all groups, most participants mainly based their justifications on the agents' gameplay (Fig.~\ref{fig:gameplay_justifications}). In the saliency conditions, most participants did not mention the saliency maps in their justifications. On average, less than one out of three justifications in \emph{H+S}{} and in \emph{R+S}{} referred to the green highlighting during the retrospection task{}, and during the agent comparison task{} even fewer participants mentioned it (see Fig.~\ref{fig:heatmap_justifications} for more details). Another interesting point we found in participants' justifications during the retrospection task{} is that participants in \emph{H}{} gave more unjustified explanations than any other condition (\emph{H}{}: mean=0.66, compared to the second highest condition \emph{R+S}{}: mean=0.38). This observation did not recur in the agent comparison task{}, but it might be worth investigating further in future work. The values for all conditions can be seen in Fig.~\ref{fig:unjustified_justifications}. \paragraph{Participants' confidence and viewing dynamics} In addition to the main metrics used in our study, we further measured participants' confidence (and in particular whether they were more confident when they answered correctly), and their viewing dynamics of the summaries (time and number of pauses). However, apart from a slight positive effect for the participants in condition \emph{H}{}, there were no interesting differences in the three aforementioned variables (see Fig. \ref{fig:confidences} to \ref{fig:pauses} and~\ref{appendix:results} for additional details). \section{Empirical Evaluation} \label{sec:study_design} To evaluate our hypothesis that there is benefit to combining global and local explanations of RL agents, we conducted a user study. In this study, participants were asked to compare different agents and to reflect on the strategies of agents based on the information they were shown. We next describe in detail the study design, the specific hypotheses we tested, and the metrics we used to evaluate the results. \subsection{Study Design} \paragraph{Empirical domain} We used the Atari game Pacman for our experiments (see Section \ref{sec:implementation} for the specific implementation). Atari games are a common benchmark for state-of-the-art reinforcement learning algorithms \cite{bellemare13arcade,baselines2017,Mnih15,wang2015dueling} and for testing explanation methods for those algorithms \cite{amir18highlights,greydanus2018,huber2019enhancing,lapuschkin2019,weitkamp2019}. We chose Pacman since it is not as reaction-based as some other Atari games (e.g.
Breakout or Enduro) and allows the RL agents to develop different strategies. Furthermore, no additional domain knowledge is necessary to understand Pacman and the rules are not too complicated. This enables us to conduct a study with a wide range of participants by simply explaining the rules at the beginning of the study. In the game, Pacman obtains points by eating food pellets while navigating through a maze and escaping ghosts. There are two types of pellets: regular pills, for which Pacman receives 10 points, and power pills, which are worth 50 points and also turn the ghosts blue, making them edible by Pacman. Pacman receives 200, 400, 800 and 1600 points for the first, second, third and fourth ghost it eats in succession. At random intervals cherries spawn and move through the labyrinth. Eating a cherry gives 100 points. To evaluate participants' ability to differentiate between alternative agents and analyze their strategies, we trained agents that behave qualitatively differently. To this end, we modified the reward function used for training (an approach similar to that used by Sequeira et al.~\cite{sequeira2019interestingness}), resulting in three types of agents. As mentioned in Section \ref{sec:implementation}, we based all of those reward functions on the default ALE~\cite{bellemare13arcade} reward function, which measures the increase in in-game score (as described above) between the first and last frame of a state. \begin{itemize} \item \emph{Regular agent}: This agent was trained using the default reward function of the ALE\footnote{To remove unnecessary magnitude we divided the rewards by a factor of 10, such that a regular pill gives a reward of 1.}. \item \emph{Power pill agent}: This agent was trained using a reward function that only assigned positive rewards to eating power pills\footnote{We achieved this by only giving the agent a reward if the increase in score was between 50 and 99. The range is necessary since Pacman is forced to eat at least one regular pill directly before it eats a power pill.}. \item \emph{Fear-ghosts agent}: This agent used the default ALE reward function but was given an additional negative reward of $-100$ when being eaten by ghosts, causing it to fear ghosts more strongly (a fear that the other agents only learn implicitly, through the lack of future rewards after being eaten). \end{itemize} Each agent was trained for 5 million steps with the algorithm described in Section \ref{sec:implementation}. At the end of this training period, the best-performing policy was restored. \paragraph{Experimental conditions} To evaluate the potential benefits of integrating global and local explanations, and their relative importance, we assigned participants to four different conditions (summarized in Table~\ref{tb:conditions}). The first two conditions included only global information, while the remaining two conditions integrated local explanations as well: \begin{itemize} \item \textbf{Random Summaries (\emph{R})}: In this condition, participants were shown summaries that were generated by randomly selecting state-action pairs from the streams of the Pacman agents playing the game. We note that since each state had the same probability of being chosen, in practice states that are encountered more frequently are more likely to be included. Hence, this is equivalent to selecting states based on the likelihood of encountering them.
To ensure that the randomly generated summary was not, by chance, particularly good or particularly bad, we generated 10 different random summaries and randomly assigned them to participants in this condition. \item \textbf{HIGHLIGHTS-DIV summaries (\emph{H})}: In this condition, participants were shown summaries generated by the HIGHLIGHTS-DIV algorithm. The specific implementation of this algorithm and the parameters we used for diversity are described in Section \ref{sec:implementation}. \item \textbf{Random Summaries+Saliency (\emph{R+S})}: These summaries included the same states as those shown in the \emph{R}{} summaries, but each image was overlaid with a saliency map generated by the LRP-argmax algorithm described in Section \ref{sec:argmax}. \item \textbf{HIGHLIGHTS-DIV summaries+Saliency (\emph{H+S})}: These summaries included the same states as those shown in the \emph{H}{} summaries, where each image was overlaid with a saliency map generated by the LRP-argmax algorithm described in Section \ref{sec:argmax}. \end{itemize} \begin{table} \begin{tabular}{| l |c | c |} \hline & `Random' summaries & HIGHLIGHTS-DIV \\ \hline No saliency maps & \emph{R}{} & \emph{H}{} \\ \hline LRP saliency maps & \emph{R+S}{} & \emph{H+S}{} \\ \hline \end{tabular} \caption{The four study conditions.} \label{tb:conditions} \end{table} We used a budget of $k=5$ for the summaries. That is, each summary included 5 base states chosen either randomly or by HIGHLIGHTS-DIV, where for each state we included a surrounding context window of 10 states that occurred right before and after the chosen state, and an interval size of 10 states to prevent directly successive states in the summary. The video creation and saliency map overlay process is described in detail in Section \ref{sec:implementation}. All video summaries used in the study are available online\footnote{\url{https://github.com/HuTobias/HIGHLIGHTS-LRP/tree/master/Survey_videos}}. We note that we did not include a condition that shows only local explanations, since by definition a local explanation is given for a specific state, forcing us to make some choice about which states to show (which means making a global decision). However, the \emph{R+S}{} condition simulates a scenario where local explanations are shown for randomly selected states. \paragraph{Participants} We recruited participants through Amazon Mechanical Turk ($N=134$, the majority of participants were between the ages of 25 and 44, 47 females). Participation was limited to people from the US, UK, or Canada (to ensure a sufficient English level) with a task approval rate greater than 97\%. Since saliency maps are not designed for color-blind people, participants were also asked whether they were color blind and were excluded from participating if they were. \paragraph{Procedure} Participants were first asked to answer demographic questions (age, gender) and questions regarding their experience with Pacman and their views on AI. Then, they were shown a tutorial explaining the rules of the game Pacman and were asked to play the game to familiarize themselves with it. To verify that participants understood the rules, they were asked to complete a quiz, and were only allowed to proceed with the survey after answering all questions correctly. After completing the quiz, they were given information and another quiz regarding the Pacman agent video summaries. In conditions \emph{R+S}{} and \emph{H+S}{}, this also included an explanation and a quiz about saliency maps.
Then, they proceeded to the main experimental tasks. See \ref{appendix:questionnaire} for the complete questionnaire. Participants were compensated as follows: they received a \$4 base payment, and an additional bonus of 10 cents for each correct answer. The study protocol was approved by the Institutional Review Board at the Technion. \paragraph{Main tasks} \label{sec:main_tasks} We aimed to investigate three aspects related to the participants in the study: (1) the mental model of the participant about the agent, (2) participants' ability to assess agents' performance (appropriate trust), and (3) participants' satisfaction with respect to the explanations presented. \textbf{Task 1: Eliciting Mental Models through Retrospection.} By mental model, we understand the cognitive representation that the participant has of a complex model~\cite{halasz1983mental, norman2014some}, in our case, the agent. Humans automatically form mental models of agents based on their behavior~\cite{anjomshoae2019explainable}. These mental models help users understand and explain an agent's behavior. The examination of participants' mental models and their correctness helps to verify whether explainable AI has been successfully applied \cite{Rutjes2019AIHCI,arrieta2020XAIconcepts}. To evaluate which mental models participants have formed about the agent's behavior, we designed a \textbf{retrospection task{}}. Here we used a task reflection method inspired by prior studies~\cite{anderson2019mere-mortals,sequeira2019interestingness}, which is recommended by Hoffman et al.~\cite{hoffman2018metrics}. This task asked the participants to analyze the behavior of the three different AI agents, \emph{Regular agent}, \emph{Power pill agent}{} and \emph{Fear-ghosts agent}. The ordering of the agents was randomized. Specifically, participants were shown the video summary (according to the condition they were assigned to), and were asked to briefly describe the strategy of the AI agent (textual), and to select up to 3 objects that they thought were most important to the strategy of the agent (the possible objects were Pacman, power pills, normal pills, ghosts, blue ghosts and cherries). They were also asked how confident they were in their responses, and to justify their reasoning. Figure~\ref{fig:retro_task} shows a sketch of a retrospection task. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{figures/RetroTask.PNG} \caption{A sketch of the retrospection task{}: participants were asked to analyze the behavior of each agent by providing a textual description of its strategy and identifying the objects that are most important to its decision-making. The full task can be seen in \ref{appendix:questionnaire}.} \label{fig:retro_task} \end{figure} \textbf{Task 2: Measuring Appropriate Trust through Agent Comparison.} We use the term appropriate trust based on the work of Lee and See~\cite{lee2004trust}, who present a conceptual `trust in automation' framework. They define appropriate trust as a well-calibrated trust that matches the true capabilities of a technical system. We measure appropriate trust using an \textbf{agent comparison task{}}. Here, the participants were shown summaries of two of the three agents at a time, and were asked to indicate which agent performs better in the Pacman game (similar to tasks used in~\cite{amir18highlights,selvaraju2016grad-cam}). They thus made three comparisons (\emph{Regular agent}{} Vs. \emph{Power pill agent}, \emph{Regular agent}{} Vs.
\emph{Fear-ghosts agent} and \emph{Power pill agent}{} Vs. \emph{Fear-ghosts agent}). We do not ask the participants directly about their trust in the two agents shown. Instead, the participants have to choose one of the two agents that they would like to play on their behalf (see Figure \ref{fig:trust_task}). This implicit question reveals which agent participants consider more reliable and qualified for the task. As in the retrospection task, they were asked to indicate their level of confidence and to provide a textual justification for their decision. The ordering of the three agent comparisons was randomized. \begin{figure}[ht] \centering \includegraphics[width=0.8\linewidth]{figures/TrustTask.PNG} \caption{A sketch of the agent comparison task{}: participants were asked to choose which agent they would like to play on their behalf (i.e., identify the better performing agent) according to the two summary videos. The full task can be seen in \ref{appendix:questionnaire}.} \label{fig:trust_task} \end{figure} \textbf{Explanation satisfaction questions.} Miller~\cite{miller2018explanation,miller2017explainable} argues that the end users' impressions about the agent should be queried and included in the evaluations of explainable AI methods. This would ensure that the developed explanation methods are comprehensible not only to ML experts but also to end users. We address this concern in our study by measuring participants' subjective satisfaction. To this end, we used \textbf{explanation satisfaction questions} adapted from the questionnaire proposed by Hoffman et al.~\cite{hoffman2018metrics}. We did this separately for the retrospection task{} (immediately after completing the three retrospection tasks) and for the agent comparison task{} (after completing the three comparisons), as we hypothesized there may be differences in the usefulness of the summaries for these two different types of tasks. Specifically, participants were asked the following questions using a 5-point Likert scale: \begin{enumerate} \item From watching the videos of the AI agents, I got an idea of the agents' strategies. \item The videos showing the AI agents play contain sufficient detail about the agents' behavior. \item The videos showing the AI agents play contain irrelevant details. \item The videos showing the AI agents play were useful for \emph{the task}. (only shown in groups \emph{R}{} and \emph{H}) \item The gameplay scenarios shown in the videos were useful for \emph{the task}. (only shown in groups \emph{R+S}{} and \emph{H+S}) \item The green highlighting in the videos was useful for \emph{the task}. (only shown in groups \emph{R+S}{} and \emph{H+S}) \end{enumerate} We substituted \emph{the task} with either \emph{analyzing the agents' behavior} or \emph{choosing the agent that performs better}, depending on the task they had just completed. \subsection{Hypotheses} \label{sec:hypotheses} Overall, we hypothesized that HIGHLIGHTS-DIV summaries will be more useful than random summaries in both the retrospection and agent comparison tasks, and that adding saliency maps will further improve participants' performance. More specifically, we state the following hypotheses: \begin{itemize} \item H1: For both tasks, participants shown summaries generated by HIGHLIGHTS-DIV will perform better than participants shown randomly generated summaries.
That is, performance in \emph{H}{} will be better than performance in \emph{R}{} and performance in \emph{H+S}{} will be better than performance in \emph{R+S}. We expect HIGHLIGHTS-DIV summaries to be more useful as they demonstrate the agent's behavior in more meaningful states, which should help both in identifying which agent performs better (in line with prior findings~\cite{amir18highlights,huang2018establishing}), as well as in determining whether an agent is capable of performing well in certain scenarios~\cite{huang2018establishing}. We expect similar effects in terms of participants' explanation satisfaction in each task. \item H2: For both tasks, adding saliency maps will improve participants' performance and satisfaction. That is, we expect the performance in \emph{R+S}{} will be better than in \emph{R}{} and similarly that performance in \emph{H+S}{} will be better than in \emph{H}. Here, too, we expect similar effects in terms of participants' explanation satisfaction in each task. We expect this to be the case as the saliency maps allow people to see not only what actions the agent chooses, but also what information it attends to. Previous studies also found positive effects of saliency maps on participants' mental models \cite{anderson2019mere-mortals,alqaraawi2020evaluating} and on their ability to choose the better performing prediction model \cite{selvaraju2016grad-cam}. \item H3: The effect of the summary generation method on satisfaction and performance will be greater than that of the inclusion of saliency maps in the agent comparison task. That is, we expect that global information will be more crucial for identifying the better performing agent, as it explicitly demonstrates how the agents act. \item H4: The effect of adding saliency maps on satisfaction and performance will be stronger than that of the summary generation method in the retrospection task. Since saliency maps explicitly show what information the agent attends to, we hypothesize it will contribute more to identifying the agent's strategy. However, this is complicated by the fact that random summaries might not include interesting scenarios, making saliency maps less helpful in this case. Therefore, our more specific hypotheses are: \begin{itemize} \item H4.1: Participants in the saliency conditions will be more likely to identify Pacman, the main source of information for our agents, as an important object. \item H4.2: Participants in the HIGHLIGHTS conditions will be more likely to identify objects that relate to agent goals, such as power pills and blue ghosts. Therefore, they will also more accurately describe the agents' strategies. \end{itemize} \end{itemize} \subsection{Analysis} \label{sec:analysis} We analyze the main hypotheses using the non-parametric Mann-Whitney test~\cite{mcknight2010mann}, as our dependent variables are not normally distributed. We report effect sizes using rank biserial correlation~\cite{tomczak2014need}. Additionally, we report the mean values and the 95\% confidence interval (CI) computed using the bootstrap method. In all plots the error bars correspond to the 95\% confidence intervals. To make sure that the participants involved in our analysis did in fact watch the videos of the agents, we recorded whether they clicked play on each video, in addition to how often each video was paused. We deliberately did not force participants to watch the videos: a forcing mechanism would not have filtered out inattentive participants, who would simply have pressed play to satisfy it. Since we saw from the raw data that some participants only stopped watching videos after the retrospection task{}, we checked each task separately. As a heuristic to measure how attentively a user watched the videos of a task, we took the sum of pauses of the videos in this task, where watching a video until the end was recorded as a pause and not clicking play was counted as $-1$ pause. Based on this heuristic we removed from the retrospection task{} all participants who did not have at least three pauses (5 participants) and from the agent comparison task{} all participants who did not have at least six pauses (11 participants). The number of necessary pauses in each task is equal to the number of videos in that task.
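In code, this attention filter amounts to the following minimal sketch (names are ours):
\begin{verbatim}
def attentive(pause_counts, n_videos):
    """pause_counts holds one tally per video of a task: watching a
    video to its end counts as one pause, and never pressing play
    enters as -1.  A participant is kept only if the total reaches
    the number of videos in the task (3 for retrospection, 6 for
    agent comparison)."""
    return sum(pause_counts) >= n_videos
\end{verbatim}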
For evaluating the retrospection task we use a scoring system, where two of the authors involved in the training of the agents assigned a score to each item for each agent before the study started (see \ref{appendix:scoring_functions} for details). For example, for the \emph{Power pill agent}{}, which was only rewarded when it ate a power pill, selecting the power pill or Pacman increased the score by $1$ point and including any other item reduced the score by $1$ point. Furthermore, selecting more than three items resulted in a score of zero, since the participants were told to select a maximum of three items.
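As an illustration, the rule for the \emph{Power pill agent}{} can be written as follows (a minimal sketch; the authoritative per-agent scores are those listed in \ref{appendix:scoring_functions}):
\begin{verbatim}
def item_score(selected, relevant, max_items=3):
    """+1 for each selected object the agent actually relies on,
    -1 for any other selection; choosing more than max_items
    objects yields zero by design."""
    if len(selected) > max_items:
        return 0
    return sum(1 if s in relevant else -1 for s in selected)

# For the Power pill agent only Pacman and the power pills matter:
item_score({"pacman", "power pill"}, {"pacman", "power pill"})  # -> 2
\end{verbatim}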
Inspired by Anderson et al.~\cite{anderson2019mere-mortals} we use summative content analysis \cite{hsieh2005content_analysis} to evaluate participants' textual responses. An independent coder (not one of the authors) classified responses to the questions ``Please briefly describe the strategy of the AI agent shown in the video above'' in the retrospection task{}, and the question ``Please briefly explain how you came to your selection'' in both the retrospection task{} and the agent comparison task{}. Each question was asked three times (once for each agent description or agent comparison), resulting in $402$ answers per question. For the first question, the coder identified $67$ different concepts in the answers. For example, the answer ``The strategy of this Pacman agents seems to be to mainly avoid the ghosts as it eats the normal pills on the screen. Although it can be seen eating a power pill, the clip still does not show Pacman seeking out and eating the ghosts'' was coded to ``prioritizing normal pills'', ``avoiding ghosts'' and ``do not care about blue ghosts''. We aggregated those concepts into $16$ groups by combining similar concepts like ``eating normal pills'' and ``prioritizing normal pills''. To evaluate the correctness of participants' answers we implemented a simple scoring system. For each agent and for each answer group, we decided whether it is correct, irrelevant or wrong, based on predefined `ground-truth' answers that two of the authors, who were involved in the training of the agents, wrote for each agent before the study started. The exact groups and their assigned scores can be found in \ref{appendix:scoring_functions} and the open-sourced code. The answers to the second question, regarding participants' justifications of their responses, were classified into six categories (the answer could be based on the game rules, the saliency maps, the gameplay, participants' interpretation, and two categories for unjustified or unrelated justifications which we grouped into one ``unjustified'' category) and an additional seventh category for the agent comparison task{}, which encoded that the user could not decide between the two agents and guessed. We note that the classifications assigned by the coder are not mutually exclusive.
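For reference, the test, the effect size, and the confidence intervals described at the beginning of this subsection can be computed along the following lines (a minimal sketch using SciPy and NumPy; names are ours):
\begin{verbatim}
import numpy as np
from scipy.stats import mannwhitneyu

def compare_groups(x, y, n_boot=10000, seed=0):
    """Mann-Whitney U test between two conditions, with the
    rank-biserial correlation r = 1 - 2U/(n1*n2) as effect size
    and bootstrap 95% CIs for the group means."""
    u, p = mannwhitneyu(x, y, alternative="two-sided")
    r = 1.0 - 2.0 * u / (len(x) * len(y))
    rng = np.random.default_rng(seed)
    def ci(v):
        v = np.asarray(v)
        means = [rng.choice(v, size=len(v)).mean() for _ in range(n_boot)]
        return np.percentile(means, [2.5, 97.5])
    return {"U": u, "p": p, "r": r, "ci_x": ci(x), "ci_y": ci(y)}
\end{verbatim}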
\section{Introduction} Two-dimensional (2D) ferromagnets with strong spin-orbit interaction (SOI) possess a variety of spin magnetic properties relevant for spintronics, including anomalous Hall effect~\cite{Nagaosa10}, spin Hall effect in ferromagnets~\cite{Sinova15}, quantum anomalous Hall effect~\cite{Yu10, Wang15, Hou19}, anisotropic magnetoresistance, and planar Hall effects of various origins~\cite{Scharf16, Taskin17, Zheng20, Rao21}. The time-reversal symmetry breaking accompanied by the spin-momentum locking of the 2D states leads to the antisymmetric spin filtering~\cite{Streda03} and causes spin-orbit torques~\cite{Manchon19} and spin swapping~\cite{Saidaoui16}. Although these problems are, in principle, theoretically accessible with {\it ab initio} methods~\cite{Gradhand12, Lowitzer11, Freimuth14}, effective models~\cite{Streda03, Yu10, Wang15, Scharf16, Wang16, Hou19, Zheng20, Thalmeier20, Rao21} are indispensable, as they provide a greater freedom of modeling. The majority of studies have addressed crystal surfaces of sufficiently thick films, in which the interaction between the two 2D systems on the opposite surfaces can be neglected. However, at metallic surfaces the spin-orbit-split surface states often energetically overlap with the bulk bands, which reduces the lifetime of the 2D carriers. This draws the attention to ultra-thin films, in which the interaction between the two surfaces cannot be neglected, and calls for the development of effective \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ models capable of describing such systems. So far in the \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ modeling of the Rashba-split or topological surface states the interaction between the states at the opposite surfaces has been described by a tunneling parameter having the same structure as the Zeeman interaction~\cite{Yu10, Wang15, Hou19, Thalmeier20, Rao21}, which is sufficient to mimic the structural gap and, consequently, a finite effective mass. However, in real materials the interplay between the structural splitting and spin polarization due to SOI may be rather complicated, with a nonuniform spin density distribution at each of the surfaces~\cite{Shikin13}, which is neglected in the simplified models. Here, we develop a relativistic effective \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ model that includes SOI, magnetic exchange interaction, and the spatial overlap between the 2D states---the precursors of the surface states---at the {\it ab initio} level. We apply the model to the study of the effect of the in-plane magnetization on the band structure of centrosymmetric films of noble metals and three-dimensional (3D) topological insulators. Our proof-of-principle calculation shows that the structural gap presents new advantages for spin manipulation and scattering-channel engineering at the nanoscale. We start with an {\it ab initio} relativistic \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ theory~\cite{Nechaev_PRBR_2016, Nechaev_PRB_2018, Nechaev_PRB_2019, Nechaev_PRB_2020} that generates effective Hamiltonians of a desired size and provides a reliable treatment of spin. This enables a predictive analysis of the effect of exchange magnetic interaction in accord with experimental observations~\cite{Susanne2019, Usachov_PRL_2020}. We will consider one representative of each class of materials: a nineteen-layer Au(111) film and a five-quintuple-layer Sb$_2$Te$_3$ film. 
A four-band Hamiltonian generated for these films is presented in a surface-resolved basis so that the resulting \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ model can be easily decomposed into two copies of a Rashba or Dirac electronic system and their interaction accurately described up to third order in $\mathbf{k}$. We consider the simultaneous action of the SOI and an in-plane exchange field on the 2D states localized at the opposite surfaces and reveal that the interaction between the states plays a crucial role in the behavior of spin and restricts the scattering phase space for these states. In the absence of the exchange field, the SOI splits the dispersion of the surface-state precursor into two branches, with each branch giving rise to a closed constant energy contour at each of the surfaces. However, in the presence of the exchange field, for certain energies, one contour may be torn between the two surfaces, i.e., an open arc occurs at one surface, and its counterpart with the opposite group velocity is at the other one. The resulting very specific shape and spatial spin structure of the constant energy contours constrain the large-angle scattering so that it is necessarily accompanied by a jump to the opposite surface. \section{Computational details} The \textit{ab initio} band structure of the films is obtained in the repeated-slab model with the extended linear augmented plane waves method ~\cite{Krasovskii_PRB_1997} using the full potential scheme of Ref.~\cite{Krasovskii_PRB_1999} within the local density approximation (LDA). The spin-orbit interaction was treated by a second variation method~\cite{Koelling_1977}. The noble metal and topological insulator films are represented, respectively, by the bulk-truncated centrosymmetric nineteen-layer slab of Au(111) and five-quintuple-layer (QL) slab of Sb$_2$Te$_3$ (both films have space group $P\bar{3}m1$, no.~164). For Sb$_2$Te$_3$, the experimental crystal lattice parameters were taken from Ref.~\cite{Wyckoff_RWG} with the LDA relaxed atomic positions of Ref.~\cite{Nechaev_PRB_2015_SBTE}. The experimental lattice parameter of gold was taken from Ref.~\cite{Maeland_CJP_1964}. The films are thick enough to simulate the classical Rashba or Dirac surface state, but, at the same time, there is a tangible splitting of the surface state at $\bar{\Gamma}$: being an eigenfunction of a centrosymmetric slab Hamiltonian, the surface state is represented by two doubly degenerate slab levels $E_1$ and $E_2$ separated by a structural gap of a few meV, $\Delta=E_2-E_1$. This means that at $\bar{\Gamma}$ the Rashba or Dirac surface states form two Kramers-degenerate pairs with the spinor wave functions $|\Psi_{1\mu}\rangle$ and $|\Psi_{2\mu}\rangle$, Fig.~\ref{fig1}. \section{Minimal effective model} We start with a four-band \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ model, choosing the \textit{ab initio} spinors $|\Psi_{1\mu}\rangle$ and $|\Psi_{2\mu}\rangle$ as the basis functions, where the subscript $\mu=\uparrow$ or $\downarrow$ indicates the sign of the {\it on-site} expectation value of the $z$ component $\widehat{J}_z$ of the total angular momentum $\widehat{\mathbf{J}} = \widehat{\mathbf{L}} + \widehat{\mathbf{S}}$~\cite{Nechaev_PRBR_2016, Nechaev_PRB_2020}. 
With this basis set, we derive a four-band \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ Hamiltonian $H_{\rtm{\mathbf{kp}}}$ from the \textit{ab initio} relativistic \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ perturbation expansion carried out around the $\Gamma$ point up to the third order in $\mathbf{k}$~\cite{Nechaev_PRB_2020}. Next, we transfer to the new basis $|\Phi^{\pm}_{\mu}\rangle =\frac{1}{\sqrt{2}} \left[|\Psi_{1\mu}\rangle \pm |\Psi_{2\mu}\rangle\right]$~\cite{Nechaev_PRB_2018}, in which the four-band Hamiltonian reads \begin{equation}\label{HamFilm4x4} H_{\rtm{\mathbf{kp}}}\longrightarrow H^{\rtm{Film}}_{\rtm{\mathbf{kp}}}=\left( \begin{array}{cc} H_{\rtm{Surf}}^{+} & H_{\rtm{int}} \\ H^{\dag}_{\rtm{int}} & H_{\rtm{Surf}}^{-} \end{array} \right). \end{equation} Here, $H_{\rtm{Surf}}^{\pm}=[\epsilon+Mk^2]\rtm{\mathbb{I}}_{2\times2}\pm H_{\rtm{R}}$ and the interaction term $H_{\rtm{int}}=[\Delta\epsilon+\Delta Mk^2+i\Delta W(k_+^3+k_-^3)]\rtm{\mathbb{I}}_{2\times2}$ with $k=\sqrt{k_x^2+k_y^2}$, $k_{\pm}=k_x\pm ik_y$, and $\widehat{\mathbf{x}}$ being the direction $\bar{\Gamma}$-$\bar{M}$. The well-known $2\times2$ Rashba term \begin{equation}\label{Ham_rash} H_{\rtm{R}}=\left( \begin{array}{cc} iW (k_+^3-k_-^3) & i\alpha k_- \\ -i\alpha k_+ & -iW(k_+^3-k_-^3) \end{array} \right), \end{equation} is responsible for the out-of-plane and in-plane spin structure typical of hexagonal surfaces, see, e.g, Ref.~\cite{Nechaev_PRB_2019} and references therein. In Eq.~(\ref{Ham_rash}), the second-order-corrected Rashba parameter is $\alpha=\alpha^{(1)}+\alpha^{(3)}k^2$. In our theory, a reliable treatment of spin~\cite{Nechaev_PRB_2018, Susanne2019, Usachov_PRL_2020} is realized by means of the spin matrix \begin{equation}\label{SpinFilm4x4} \rtm{\mathbf{S}}_{\rtm{\mathbf{kp}}}\longrightarrow \rtm{\mathbf{S}}^{\rtm{Film}}_{\rtm{\mathbf{kp}}}=\left( \begin{array}{cc} \rtm{\mathbf{S}} & \widetilde{\rtm{\mathbf{S}}} \\ \widetilde{\rtm{\mathbf{S}}} & \rtm{\mathbf{S}} \end{array} \right), \end{equation} where $\rtm{\mathbf{S}}=(s^{\shortparallel}\bm{\sigma}_{\shortparallel}, s^{z}\sigma_z)$ and $\widetilde{\rtm{\mathbf{S}}}=(\Delta s^{\shortparallel}\bm{\sigma}_{\shortparallel}, \Delta s^{z}\sigma_z)$. The elements of the spin matrix $[\rtm{\mathbf{S}}^{\rtm{Film}}_{\rtm{\mathbf{kp}}}]^{\mu\tau}_{\nu\chi} = \langle\Phi^{\tau}_{\mu}|\bm{\sigma}|\Phi^{\chi}_{\nu}\rangle$, where $\tau$ and $\chi$ are $+$ or $-$, enter the expression for the spin expectation value \begin{equation}\label{modelRealSpin} \langle \mathbf{S}_{\mathbf{k}\lambda}\rangle = \frac{1}{2} \langle \widetilde{\Phi}^{\lambda}_{\mathbf{k}}|\bm{\sigma}|\widetilde{\Phi}^{\lambda}_{\mathbf{k}}\rangle = \frac{1}{2}\sum\limits_{\mu\tau \nu\chi} C_{{\mathbf{k}}\mu\tau}^{\lambda\ast}C_{{\mathbf{k}}\nu\chi}^{\lambda} \left[\rtm{\mathbf{S}}^{\rtm{Film}}_{\rtm{\mathbf{kp}}}\right]^{\mu\tau}_{\nu\chi} \end{equation} in the model state $|\widetilde{\Phi}^{\lambda}_{\mathbf{k}}\rangle = \sum\limits_{\mu\tau}C_{\mathbf{k}\mu\tau}^{\lambda} |\Phi^{\tau}_{\mu}\rangle$ of the reduced Hilbert space of the Hamiltonian~(\ref{HamFilm4x4}). The four-dimensional vectors $\mathbf{C}^{\lambda}_{\mathbf{k}}$ diagonalize this Hamiltonian $H^{\rtm{Film}}_{\rtm{\mathbf{kp}}} \mathbf{C}^{\lambda}_{\mathbf{k}} = E^{\lambda}_{\mathbf{k}} \mathbf{C}^{\lambda}_{\mathbf{k}}$. Below, for simplicity, the in-plane component of the spin $\langle \mathbf{S}_{\mathbf{k}\lambda}\rangle$ will be referred to as $\mathbf{S}_{\shortparallel}$. 
\begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{Fig_1.png} \caption{Band structure of the nineteen-layer Au(111) film (a) and the five-QL Sb$_2$Te$_3$ film (b) by the four-band \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ Hamiltonian shown by black lines for the exchange parameter $\bm{\mathcal{J}} = 0$ and by fat bands for $\bm{\mathcal{J}} = \mathcal{J} (\widehat{\mathbf{x}} + \widehat{\mathbf{y}}) / \sqrt{2}$ with $\mathcal{J}=30$~meV for $\mathbf{k} = k_{\bot} (\widehat{\mathbf{x}} - \widehat{\mathbf{y}})/ \sqrt{2}$ perpendicular to the magnetization. The sign of the in-plane spin projection perpendicular to $\mathbf{k}$ is shown by color: Bright colors (red and blue) are used for the upper surface and pale colors for the lower surface. In graphs (a) and (b), the gray numbers mark the basis states $|\Phi_{1}\rangle$ and $|\Phi_{2}\rangle$, while the red ones label the energies $\mathcal{E}_1$, $\mathcal{E}_2$, $\mathcal{E}_3$, and $\mathcal{E}_4$ at which the constant energy contours shown in Figs.~\ref{fig2} and \ref{fig3} are calculated.} \label{fig1} \end{figure} \begin{table}[b] \caption{\label{tab:table1} Parameters of the Hamiltonian~(\ref{HamFilm4x4}) for the Au(111) nineteen-layer slab with the lattice parameter $a=5.4495$~a.u. and the Sb$_2$Te$_3$ five-QL slab with $a=8.0312$~a.u. in Rydberg atomic units (except $\epsilon$ and $\Delta \epsilon$ given in eV).} \begin{ruledtabular} \begin{tabular}{ldd} & \multicolumn{1}{c}{Au(111) 19L} & \multicolumn{1}{c}{Sb$_2$Te$_3$ 5QL} \\ \hline $\epsilon$ & -0.490 & -0.093 \\ $\alpha^{(1)}$ & -0.134 & -0.271 \\ $\alpha^{(3)}$ & 10.14 & -33.33 \\ $W$ & 0.10 & -52.71 \\ $M$ & 5.06 & 7.37 \\ $s^{\shortparallel}$ & 0.98 & 0.62 \\ $s^{z}$ & 0.96 & 0.25 \\ \hline $\Delta \epsilon$ & -0.006 & -0.007 \\ $\Delta M$ & -0.07 & -0.80 \\ $\Delta W$ & -0.06 & -1.71 \\ $\Delta s^{\shortparallel}$ & 0.00 & 0.00 \\ $\Delta s^{z}$ & 0.00 & -0.01 \end{tabular} \end{ruledtabular} \end{table} The microscopically obtained parameters in Eqs.~(\ref{HamFilm4x4}) and (\ref{SpinFilm4x4}) are listed in Table~\ref{tab:table1}. The eigenvalues of the Hamiltonian~(\ref{HamFilm4x4}) obtained with these parameters are shown in Fig.~\ref{fig1} by black solid lines. The spectra are represented by doubly degenerate bands with the characteristic Rashba- or Dirac-like behavior and exhibit the structural gap at $\bar{\Gamma}$ due to the coupling between two copies of the Rashba or Dirac electronic systems residing at the opposite surfaces of a film. Note that our \textit{ab initio} relativistic \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ theory provides an accurate description of this coupling by the term $H_{\rtm{int}}$ up to the third order in $\mathbf{k}$. For further purposes, we define the depth parameter for a given model state $|\widetilde{\Phi}^{\lambda}_{\mathbf{k}}\rangle$ as \begin{equation}\label{Depth_film} D^{\lambda}_{\mathbf{k}} = \sum\limits_{\mu}\left(|C_{\mathbf{k}\mu+}^{\lambda}|^2 - |C_{\mathbf{k}\mu-}^{\lambda}|^2\right). \end{equation} This parameter varies from $-1$ (the upper film surface) to $1$ (the lower film surface) and adds a new dimension to our analysis to characterize the spatial localization of the model states in the films.
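Since all quantities entering the Hamiltonian~(\ref{HamFilm4x4}) and the depth parameter~(\ref{Depth_film}) are fixed by Table~\ref{tab:table1}, the zero-field spectra of Fig.~\ref{fig1} can be reproduced by a direct numerical diagonalization. The following is a minimal sketch in Python/NumPy for the Au(111) parameters (we convert $\epsilon$ and $\Delta\epsilon$ from eV to Ry and measure $k$ in inverse Bohr radii; the script is ours, not part of the \textit{ab initio} machinery):
\begin{verbatim}
import numpy as np

# Au(111) 19-layer parameters of Table I (Rydberg atomic units;
# eps and d_eps are converted here from eV, 1 Ry = 13.606 eV).
eps, d_eps = -0.490 / 13.606, -0.006 / 13.606
a1, a3, W, M = -0.134, 10.14, 0.10, 5.06
dM, dW = -0.07, -0.06
I2 = np.eye(2)

def hamiltonian(kx, ky):
    """Four-band film Hamiltonian in the basis (Phi+_up, Phi+_dn,
    Phi-_up, Phi-_dn); the x axis points along Gamma-M."""
    k2 = kx**2 + ky**2
    kp, km = kx + 1j*ky, kx - 1j*ky
    alpha = a1 + a3*k2                  # corrected Rashba parameter
    HR = np.array([[1j*W*(kp**3 - km**3), 1j*alpha*km],
                   [-1j*alpha*kp, -1j*W*(kp**3 - km**3)]])
    Hp, Hm = (eps + M*k2)*I2 + HR, (eps + M*k2)*I2 - HR
    Hint = (d_eps + dM*k2 + 1j*dW*(kp**3 + km**3))*I2
    return np.block([[Hp, Hint], [Hint.conj().T, Hm]])

def bands_and_depth(kx, ky):
    """Eigenvalues and the depth parameter D for each eigenstate:
    weight on the |Phi+> pair minus weight on the |Phi-> pair."""
    E, C = np.linalg.eigh(hamiltonian(kx, ky))
    D = (abs(C[:2, :])**2).sum(axis=0) - (abs(C[2:, :])**2).sum(axis=0)
    return E, D
\end{verbatim}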
Now, we include the exchange field by adding the exchange term $H_{\rtm{EX}} = -\bm{\mathcal{J}} \cdot \rtm{\mathbf{S}}_{\rtm{\mathbf{kp}}}$, arriving at the magnetic Hamiltonian $H_{\rtm{\mathbf{kp}}} + H_{\rtm{EX}}$. Here $\bm{\mathcal{J}}=J_{\rtm{ex}}\mathbf{M}$ is a tunable parameter allowing for a magnetic exchange interaction of strength $J_{\rtm{ex}}$ with a magnetization $\mathbf{M}$, which arises when, e.g., the film is brought into contact with a magnetic material or contains ferromagnetically ordered magnetic (spin) moments of doped transition-metal atoms. In the present study, we consider an in-plane magnetization with $\bm{\mathcal{J}} = \mathcal{J} (\widehat{\mathbf{x}} + \widehat{\mathbf{y}}) / \sqrt{2}$ and $\mathcal{J}=30$~meV. For the films we study, the in-plane components of the non-diagonal block $\widetilde{\rtm{\mathbf{S}}}$ of the spin matrix~(\ref{SpinFilm4x4}) are zero, and, therefore, the effect of the in-plane exchange field is described by the block diagonal matrix $H_{\rtm{EX}}$ whose elements are added to the elements of $H_{\rtm{Surf}}^{\pm}$. This implies that in both the non-magnetic and magnetic phases the interaction between the surfaces is exclusively due to the term $H_{\rtm{int}}$, whose parameters are rather small for the chosen thicknesses of the films, Table~\ref{tab:table1}. With increasing film thickness, these parameters become negligible, and we arrive at two uncoupled surfaces. Each of the surfaces is described by the spin matrix $\rtm{\mathbf{S}}$ and the Hamiltonian $H_{\rtm{Surf}}^{+}$ or $H_{\rtm{Surf}}^{-}$, which is in accord with the form of the two-band Hamiltonian constructed in Ref.~\cite{Fu_PRL_2009} by considering the $C_{3v}$ crystal symmetry and time-reversal symmetry only. Because it comes directly from our fully {\it ab initio} \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ perturbation approach, this Hamiltonian is the same both for the Dirac surface state of topologically nontrivial insulators and for the so-called Rashba-split surface state of noble metals. This can be viewed as an {\it ab initio}\ confirmation of the applicability of the two-band Hamiltonian of Ref.~\cite{Fu_PRL_2009} to topological surface states and the Rashba Hamiltonian of the semiconductor quantum well physics~\cite{Rashba_FTT_1959, Rashba_JETPL_1984} to the trivial surface states first suggested by LaShell \textit{et al.}~\cite{LaShell_PRL_1996}. Additionally, we note that for the Au(111) film the spin parameters $s^{\shortparallel}$ and $s^{z}$ are almost unity, see Table~\ref{tab:table1}, and in this case one may associate the Pauli matrices generally used to represent the Hamiltonian with the observable spin. Furthermore, Hamiltonian~(\ref{HamFilm4x4}) is an instructive \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ model for studying a phenomenon known as a hidden Rashba effect~\cite{Zhang_hidden_2014}, which here is associated with two inversion-symmetry related copies of the Rashba or Dirac system. One copy is described by $H_{\rtm{Surf}}^{+}$, and its inversion partner is represented by $H_{\rtm{Surf}}^{-}$. However, although the \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ model for the isolated copies is well known, their interaction has hitherto not been accurately treated within a relativistic \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ theory.
Our \textit{ab initio} \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ perturbation expansion yields the interaction term $H_{\rtm{int}}$ up to third order in $\mathbf{k}$ in accord with the order of the spin-orbit term $H_{\rtm{R}}$ accounting for the splitting and spin structure of the Rashba or Dirac state. \section{In-plane exchange field effect} \begin{figure*}[tbp] \centering \includegraphics[width=\textwidth]{Fig_2.png} \caption{Depth- and spin-resolved constant energy contours for the nineteen-layer Au(111) film (a) and the five-QL Sb$_2$Te$_3$ film (b) under the in-plane exchange field at the energies marked by red numbers in Fig.~\ref{fig1}. The in-plane spin is represented by arrows of the color changing from orange to teal according to the increase of the depth from the upper to the lower surface. The depth resolution manifests itself in the stretching of the contours (solid and dotted black lines) in the depth dimension determined by the contributions of the surface-related basis states $|\Phi^{\pm}\rangle$ to the model state of the four-band \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ Hamiltonian, Eq.~(\ref{Depth_film}). Gray dashed lines are the projection of the contours onto the surface planes. In graphs showing the contours at $\mathcal{E}=\mathcal{E}_1$, yellow circles are the contours for a semi-infinite film under the field.} \label{fig2} \end{figure*} The band structures of the Au(111) and Sb$_2$Te$_3$ films in the in-plane exchange field with $\mathcal{J}=30$~meV are shown in Fig.~\ref{fig1} by fat bands highlighting the sign of the in-plane spin projection onto the field direction for $\mathbf{k}$ perpendicular to $\bm{\mathcal{J}}$. As seen in the figure, due to the exchange interaction the states with the in-plane spin $\mathbf{S}_{\shortparallel}$ co-directional with the field (blue-shade bands) tend to decrease their energy, while the states with $\mathbf{S}_{\shortparallel}$ opposite to the field (red-shade bands) acquire higher energy. The resulting red- and blue-shade bands resemble an ordinary Zeeman splitting of scalar-relativistic doubly degenerate bands, i.e., the zero-field band structures (black lines) shifted, respectively, up and down in energy. To understand the effect of the SOI-induced spin structure, one should take into account the depth-localization of the states according to the $D^{\lambda}_{\mathbf{k}}$ parameter~(\ref{Depth_film}) given by the color shade. In this case, one can clearly distinguish two pairs of the branches localized at opposite surfaces---the bright red and blue branches of the upper surface and the pale ones of the lower surface. Each pair demonstrates the well-known modifications of the classical Rashba or Dirac states by an in-plane magnetic exchange field with the crossing points shifted away from $\bar{\Gamma}$ (black points in Fig.~\ref{fig1}). As seen in Fig.~\ref{fig1}, these crossing points are on opposite sides of $\bar{\Gamma}$ for the opposite surfaces due to the different sign of the SOI term $H_{\rtm{R}}$ in $H_{\rtm{Surf}}^{\pm}$. In the presence of the field, the structural gap due to the coupling does not disappear, and at $\bar{\Gamma}$ it breaks each of the spin-split branches, Fig.~\ref{fig1}. Owing to the gap at $\bar{\Gamma}$, for $\mathbf{k} \perp \mathbf{M}$ there is an energy interval where a branch of the split state at the upper or lower surface loses its counterpart with the opposite group velocity.
In the Rashba system, this is the branch of the same color, implying the same in-plane spin projection, while in the Dirac system the lost counterpart is the branch of the other color, i.e., with the flipped $\mathbf{S}_{\shortparallel}$. In order to examine the structural-gap effect over the whole $(k_x,k_y)$ plane, we calculate spin-resolved constant energy contours (CECs) of the surface-state precursors around $\bar{\Gamma}$ at the energies indicated in Fig.~\ref{fig1}. The depth-resolved contours are presented in Fig.~\ref{fig2} as 3D curves, the vertical dimension being the depth defined by Eq.~(\ref{Depth_film}). As seen in the figure, these 3D CECs are strongly bent towards the vertical for $\mathbf{k}$ close to the field direction (the momentum polar angle $\varphi_{\mathbf{k}}\sim\pi/4$ and $\sim5\pi/4$), so one half of the contour lies on the upper and the other on the lower surface. Over the 3D CEC fragments that pass through the film the surface states are largely hybridized. In the 2D projections of the 3D CECs onto the surface (dashed gray lines in Fig.~\ref{fig2}), the large hybridization manifests itself as the avoided crossing between the contours of the uncoupled surfaces, see red arrows for the case~1 in Figs.~\ref{fig2}(a) and \ref{fig2}(b). Indeed, a pair of the 2D projections (hereafter, 2D contours) can be easily recognized to be the exchange-split contour, which is doubly degenerate in a nonmagnetic film. [For Au(111), there are two pairs: two inner and two outer 2D CECs.] In such a pair, one 3D CEC (solid black line) has two flat arcs that lie on the opposite surfaces and project onto one closed 2D CEC, and the arcs of the other one (dotted black line) project onto the other 2D CEC of the pair. In the plane, the arcs are disconnected around the avoided-crossing points, contrary to the case of the uncoupled surfaces, see yellow circles in Fig.~\ref{fig2}. In some cases, in the flat arcs of the 3D CECs the familiar surface spin structure can be easily recognized (orange or teal arrows in Fig.~\ref{fig2}). For example, in the case 1 ($\mathcal{E}=\mathcal{E}_1$), each surface exhibits a pattern typical of a Rashba or Dirac system in an external in-plane exchange field: The exchange interaction shifts the contours perpendicular to the field and distorts the SOI-induced spin-momentum locking, causing anisotropy in the transport properties of the films. (Note a much greater impact of the exchange field on the in-plane spin structure of the Rashba than of the Dirac system.) Concerning the spin behavior along the whole 3D CECs, note that each pair of the 2D contours contains a dotted-line 3D CEC with the spin $\mathbf{S}_{\shortparallel}$ rotating twice by $2\pi$ along the contour in the $(k_x,k_y)$ plane (double spin winding) and a solid-line 3D CEC along which $\mathbf{S}_{\shortparallel}$ merely deviates from the field direction without a complete $2\pi$ rotation (zero spin winding). Further, we will refer to these 3D CECs as a non-trivial and trivial CEC, respectively. We focus now on the CECs at lower energies (numbered by 2, 3, and 4) and start with the Au(111) film, Fig.~\ref{fig2}(a). As seen in Fig.~\ref{fig1}, for $\mathcal{E}=\mathcal{E}_2$ the red branch loses its counterpart with the opposite group velocity, so the trivial (solid line) 3D contour of the inner pair disappears. As a consequence, at each surface a certain $\mathbf{k}$-sector becomes unavailable for the elastic scattering.
For $\mathcal{E}=\mathcal{E}_3$, only the blue branches are left, Fig.~\ref{fig1}(a), with one pair of the 3D CECs. Note that the dotted-line contour becomes trivial, with $\mathbf{S}_{\shortparallel}$ mostly perpendicular to the field direction. At the same time, the trivial solid-line CEC is characterized by $\mathbf{S}_{\shortparallel}$ gravitating towards $\bm{\mathcal{J}}$, Fig.~\ref{fig2}(a). This substantially reduces the phase space for scattering transitions that may occur at one film surface. Finally, for $\mathcal{E}=\mathcal{E}_4$ there remains only one trivial 3D CEC with a sizable spin projection onto the field direction, which practically forbids any large-angle scattering at a given film surface, thereby causing the electrons to ``leak'' through the interior of the film to the opposite surface. In the five-QL Sb$_2$Te$_3$ film, at $\mathcal{E}=\mathcal{E}_2$ there is only one CEC, which is characterized by the double winding of the in-plane vector $\mathbf{S}_{\shortparallel}$. For the flat CEC arcs, the spin structure is very similar to a Dirac surface state with the spin-momentum locking only slightly affected by the field. The influence of the field increases in the interior of the film, where $\mathbf{S}_{\shortparallel}$ is mainly opposite to the magnetization. The presence of only one 3D CEC reduces by half the phase space at each surface, thereby making large-angle scattering possible only by means of a ``leakage'' to the opposite surface. For $\mathcal{E}=\mathcal{E}_3$, we have two 3D CECs, but now they lie almost entirely on the opposite surfaces of the film as if the surfaces were uncoupled. Moreover, both CECs have the helical in-plane spin structure typical of the Dirac surface state with the single winding of the in-plane spin. This means that around the energy of the Dirac points located away from $\bar{\Gamma}$, see black points in Fig.~\ref{fig1}(b), there is an energy interval for which the band structure of the thin film in the in-plane exchange field is independent of whether or not the structural gap exists. In our case, this interval covers the gap of the zero-field spectrum of the film, thereby causing a drastic change of the film properties by turning on the field. Finally, for $\mathcal{E}=\mathcal{E}_4$, again, there is only one CEC with the spin structure of inverted helicity with respect to that of case~2, with $\mathbf{S}_{\shortparallel}$ mostly directed along the field in the interior of the film. Note also a more pronounced impact of the field on the in-plane spin structure here. \begin{figure}[tbp] \centering \includegraphics[width=\columnwidth]{Fig_3.png} \caption{Spin-resolved depth-integrated (``top view'') contours of the five-QL Sb$_2$Te$_3$ film at the energies marked in Fig.~\ref{fig1}(b). The expectation value of $S_z$ is shown by red ($S_z>0$) and blue ($S_z<0$) fat segments. Orange and teal circles in graph~1 and shadings in graphs~2--4 highlight the CECs segments related to the upper and lower surface, respectively.} \label{fig3} \end{figure} We now turn to the spin $z$ polarization of the CECs of the five-QL Sb$_2$Te$_3$ film under the in-plane external exchange field. [In the Au(111) film the $S_z$ expectation value along the CECs is negligibly small at the energies we consider.] Figure~\ref{fig3} shows a ``top view'' of the spin-$z$ resolved CECs; the largest spin-$z$ projection along the CECs ($|S_z|\sim0.02$) is seen to occur at $\mathcal{E}=\mathcal{E}_1$.
In this case, the $S_z$ pattern with the specific motif inherent in the hexagonal structures resembles the pattern of an ordinary Dirac surface state. In Fig.~\ref{fig3}, the orange and teal circles are a guide for the eye to highlight the CECs segments related to the upper and lower surface, respectively. These circles are the CECs of a film with the uncoupled surfaces, and, therefore, they differ from the actual 2D projections of the 3D CECs in that they cross along the field direction. For lower energies ($\mathcal{E}=\mathcal{E}_2$, $\mathcal{E}_3$, and $\mathcal{E}_4$), this motif completely disappears in the $S_z$ pattern. Instead, there appear two rather large blue and red parts with a sizable $S_z$ on each surface-related sector of the 3D CECs (the pale orange and pale teal shading in Fig.~\ref{fig3}), while in the interior of the film (between the shaded areas) $S_z$ is negligible. For these energies, the CECs are characterized by a substantially smaller $S_z$ than in the $\mathcal{E}=\mathcal{E}_1$ case and by a slight field-induced imbalance between spin-up and spin-down (especially for $\mathcal{E}=\mathcal{E}_3$ and $\mathcal{E}_4$). The $\mathcal{E}=\mathcal{E}_3$ case demonstrates that, although the $\mathbf{S}_{\shortparallel}$ pattern is similar to that of a Dirac surface state, the $S_z$ distribution is quite different. In all the cases, the $S_z$ distribution along the CECs gives rise to scattering transition constraints additional to those imposed by the in-plane spin. \section{Conclusions} To summarize, we have developed an effective \hbox{$\mathbf{k}\cdot\mathbf{p}$}\ model in the Hilbert space of four basis states comprising both interacting surfaces of a thin film with strong spin-orbit coupling. We have applied the model to the description of the influence of the in-plane exchange field on the energy-momentum dispersion and spin-momentum locking of the surface-state precursors in a nineteen-layer Au(111) film and in a five-quintuple-layer Sb$_2$Te$_3$ film. The interaction between the states at the opposite surfaces makes the exchange-induced modifications strongly energy dependent: In the simplest case of energies well above the Rashba or Dirac point, a tangible difference from non-interacting surfaces is found only for \textbf{k} parallel to the field, where an avoided crossing appears between the constant energy contours of the surface states localized at the opposite surfaces. In the surface-resolved representation, this manifests itself in the partition of a contour at a given surface into two disconnected flat arcs. At lower energies, where the interaction between the surfaces leads to the structural gaps in the surface-state dispersion, one of the arcs disappears, so the large-angle scattering needs to be accompanied by a jump to the opposite surface through the interior of the film. Furthermore, we have located the energy intervals (one in the gold film and two in the topological-insulator film) where each surface hosts one arc only, so the large-angle electron scattering between the film surfaces is the only possible channel. This suggests a way to control the spin-selective transport properties of the films by manipulating the in-plane exchange field. \begin{acknowledgments} We acknowledge funding from the Department of Education of the Basque Government (Grant No.~IT1164-19) and the Spanish Ministry of Science, Innovation, and Universities (Project No.~PID2019-105488GB-I00). \end{acknowledgments}
\section{Introduction} The demand for increased capacity in cellular networks continues to grow, which is driving the deployment of spectrally efficient small cells \cite{4623708,6768783,6171992,anpalagan_bennis_vannithamby_2015}. While the deployment of small cells leads to significant capacity gains over macrocell-only systems, the proximity of small cell base stations (BSs) to one another can cause severe interference between them. This interference must be managed carefully to maximize the overall network capacity. Thus, powerful interference mitigation methods as well as optimal resource allocation schemes that involve multiple cells must be developed for 5G networks. In this work we investigate a flexible network structure for cellular systems where, instead of each BS serving all users within its own cell independently, several BSs act cooperatively to create a “virtual cell” with joint resource allocation. In order to design cellular networks that are composed of virtual cells, we address in this work the following two design challenges: 1) Creating the virtual cells, i.e., clustering the BSs and users into virtual cells. 2) Allocating the resources in each virtual cell. In this work we address the uplink resource allocation problem of joint channel and power allocation for the single user detection scenario. We also address the resource allocation problem for coordinated multi-point decoding scenarios in which BSs in a virtual cell jointly decode the signals that they receive. BS and user clustering as part of a resource allocation strategy is discussed in the Cooperative Multi-Point (CoMP) literature, see for example \cite{6530435,6707857,6181826,6555174,5594575,5502468,6655533,4533793,5285181,6786390,8260866}. The work \cite{7839266} presents an extensive literature survey of cell clustering for CoMP in wireless networks. The clustering of BSs and users can be divided into three groups: 1) Static clustering, which considers a cellular network whose cells are clustered statically. Hence, the clustering does not adapt to network changes. Examples of static clustering algorithms are presented in \cite{6530435,6181826,6707857,6555174}. 2) Semi-dynamic clustering, in which static clusters are formed but the cluster affiliation of users is adapted according to network changes. Examples of such algorithms are presented in \cite{5594575,5502468,6655533}. 3) Dynamic clustering, in which the clustering of both BSs and users adapts to changes in the network. Examples of dynamic clustering algorithms are presented in \cite{4533793,5285181,6786390,8260866}. Resource allocation in virtual cells is closely related to cloud radio access networks \cite{5594708,CIT-048,6924850,7487951,6601765} in which several cells act cooperatively. The coordination between the cells can be divided into the following categories: 1) Interference coordination, in which only channel states are available at the coordinated BSs. 2) Full cooperation, in which BSs share not only channel states but also the data signals they receive. 3) Rate limited coordination, in which the BSs exchange data via a limited-capacity backhaul. 4) Relay-assisted cooperation, in which cooperation is carried out by dedicated relay nodes that connect users from different cells and BSs. In addition, resource allocation in virtual cells is also closely related to the interference mitigation paradigm called Cooperative Multi-Point (CoMP) (see \cite{5706317}) that encompasses several cooperation models.
Two such models are the Uplink Interference Prediction model, in which cooperation is allowed in the resource allocation stage only, and the Uplink Joint Detection model, which allows BS cooperation in both the resource allocation and decoding stages. In this work we investigate a flexible cooperative resource allocation structure for cellular systems where, instead of each BS serving all users within its own cell independently, several BSs act cooperatively to create a “virtual cell”. We consider two BS cooperation models for the uplink communication in virtual cells. The first model allows for cooperation in the resource allocation stage only, whereas the second model allows for cooperation in both the resource allocation and the decoding stages. We refer to the first model as the interference coordination model and to the second as the coordinated multi-point model. Our work \cite{YeminiGoldsmith2} considers the coordinated multi-point decoding model in which BSs jointly decode their messages assuming infinite capacity backhaul links between BSs in the same virtual cell. Additionally, in \cite{YeminiGoldsmith1} we propose channel and power allocation schemes for the interference coordination model. This manuscript presents a unified framework that evaluates both cooperation models analyzed in \cite{YeminiGoldsmith2} and \cite{YeminiGoldsmith1}. It extends the analysis of the resource allocation schemes presented in \cite{YeminiGoldsmith1}, and also further evaluates and compares the network optimization schemes presented in both \cite{YeminiGoldsmith2} and \cite{YeminiGoldsmith1}. Clustering as part of a resource allocation strategy in wireless networks is also investigated in the ultra-dense networks literature, see for example \cite{7008373,7579583,7794900,6786390,7248710,8110665,8496818}. These works can be categorized into two groups: cell clustering (see \cite{7008373,7579583,7794900}), in which the existing cells of a cellular network are merged, and user-centric clustering (see \cite{6786390,7248710,8110665}), in which each user chooses a subset of BSs to communicate with. The work presented in this manuscript differs from these works in several key aspects. First, our channel state information model differs from that of the aforementioned works, which assume that the inter-cluster interference is either perfectly known for all the channels in the network \cite{7008373,7579583,7794900,8496818}, or strictly statistical for all the channels in the network \cite{7248710,8110665,6786390}. In our setup we assume perfect channel state information inside each virtual cell but no channel information regarding users in different virtual cells. We note that our resource allocation schemes can be adapted to statistical knowledge regarding the inter-cluster interference. Second, in addition to proposing a clustering scheme to create virtual cells, we also address both the channel and power allocation problems. In contrast, the analyses presented in the aforementioned works are limited to the channel allocation problem and do not address the power allocation problem within the clusters. Instead, it is assumed that the power allocation is fixed. A fixed power allocation can significantly degrade the performance of cooperative models, such as the coordinated multi-point decoding model, in which BSs jointly decode the signals that they receive.
Additionally, to the best of our knowledge, prior works optimizing performance based on cell clustering or CoMP did not consider how performance varied with the number of clusters or with the user affiliation rules. Our work is also related to the concept of Software Defined Networks (SDN), introduced in \cite{6994333,6739370,7000974,1237143,7473831}. The underlying idea behind SDN is the separation of the data plane, which carries the data in the network, and the control plane, which determines how packets in the network are forwarded. Theoretically, the concept of SDN can be harnessed to limit interference in the network by allocating the network's resources centrally \cite{6385040,6385039}. However, the very thing that makes SDN's centralized control plane attractive also makes its implementation challenging, owing to the flexibility it requires. These complexity issues are more severe in wireless communication networks employing SDN because of their time-varying nature, which requires fast updating rules for the control plane. Creating virtual cells that are composed of several cells can assist in managing wireless networks and close the gap between the promising concept of SDN and the difficulties that arise in its implementation. \subsection{Main Contributions:} This work extends the concept of cellular networks while preserving several of its key desirable properties, such as simple user association rules and dividing the network into independent cells that may cooperate to suppress interference. We call this network paradigm a cellular network with virtual cells. A cellular network design with virtual cells has the following benefits: \begin{enumerate} \item improves network performance while balancing the computational complexity of optimal resource allocation \item uses both local and global network information \item ensures that local changes in the network do not cause a ``butterfly effect'' in which the allocation of resources across the whole network must be recalculated due to a local change. \end{enumerate} We create the virtual cells by clustering the BSs, instead of users, in the network, and then associate users with the clustered BSs. We cluster BSs based on the hierarchical clustering method with minimax linkage criterion that creates a dendrogram. The dendrogram shows which clusters are merged when the number of clusters is decreased and which are separated when this number is increased. We propose using this clustering approach since it enjoys the unique property that decreasing or increasing the number of clusters affects only the clusters that are being merged or separated, while leaving all others unchanged. By contrast, in other clustering methods, such as K-means or spectral clustering, even a small variation in the number of clusters requires the reclustering of the whole network, which may cause a global change. This is undesirable behavior for wireless communication networks since the channel state information between all users in the new virtual cells and the new virtual BSs must be estimated. Thus, we propose using hierarchical clustering in which the number of clusters can adapt efficiently to the current state of the network without requiring an overall update in the network. Additionally, the method we propose requires only local channel state information that is used in the user association rule and in computing the resource allocation scheme inside the virtual cells.
The BS clustering, which constructs the ``backbone'' of the network, does not require knowledge of the channel state between all the users and BSs in the network. To optimize the performance of cellular networks with virtual cells we also develop resource allocation schemes for virtual cells in the single user detection scenario, and compare them to previously proposed resource allocation schemes for heterogeneous cells. Interestingly, numerical results show that the performance of these resource allocation schemes depends on the number of virtual cells in the network. Additionally, we address resource allocation for the coordinated multi-point decoding scenario. The resource allocation in both setups uses local channel state information, that is, we assume that the BSs in a virtual cell acquire the channel state information between them and all the users in the virtual cell. Finally we note that, while we do not suppress interference between virtual cells in the resource allocation stage, as we decrease the number of virtual cells, interference is dominated by interference within the virtual cell so that our resource allocation scheme mitigates this dominant interference. \subsection{Outline and Notation} The remainder of this paper is organized as follows. Section \ref{sec:problem_formualtion} presents the problem formulation that we analyze in this work. Section \ref{sec:virtual_cell_create} describes the method for forming the virtual cells. Sections \ref{sec:joint_power_allocation} and \ref{sec:alternating_optimization} present several algorithms for allocating resources in the interference coordination model. In particular, Section \ref{sec:joint_power_allocation} proposes a joint channel and power allocation scheme. Section \ref{sec:alternating_optimization} proposes channel and power allocation algorithms based on an alternating optimization in which the resource allocation is calculated by alternating between a channel and power allocation problem. Section \ref{sec:alternating_optimization} presents three channel allocation schemes that we evaluate: a user-centric one that we propose and two existing ones, a BS-centric scheme and a sum rate maximization matching scheme. Section \ref{sec:joint_decoding} presents an optimal resource allocation scheme in virtual cells for the coordinated multi-point decoding model. Section \ref{se:simulation} presents numerical results of the average system sum rate for all of our proposed clustering and resource allocation methods. Finally, Section \ref{sec:conclusion} summarizes and concludes this work. \textit{Notation:} The following notations are used throughout this paper. Vectors are denoted by boldface lowercase letters whereas matrices are denoted by boldface uppercase letters. We denote the transpose of a vector $\boldsymbol a$ by $\boldsymbol a'$, and the conjugate transpose of a matrix $\boldsymbol A$ by $\boldsymbol A^{\dagger}$. The expected value of a random variable $x$ is denoted by $E(x)$. Additionally, we denote the covariance matrix of a random vector $\boldsymbol x$ by $\text{cov}(\boldsymbol x)$. $\det(\boldsymbol A)$ denotes the determinant of a square matrix $\boldsymbol A$. Finally, $\mathbbm{1}_{\mathcal{E}}$ denotes the indicator function; it is equal to one if the event $\mathcal{E}$ is true and zero otherwise. The cardinality of a set $\mathcal{S}$ is denoted by $|\mathcal{S}|$.
\section{Problem Formulation}\label{sec:problem_formualtion} We consider a communication network that comprises a set of base stations (BSs) $\mathcal{B}$, a set of users $\mathcal{U}$ and a set of frequency bands $\mathcal{K}$. The users communicate with their BSs and these transmissions interfere with one another. Each user $u\in\mathcal{U}$ has a maximal transmission power of $\overline{P}_u$ dBm. The BSs and users are clustered into virtual cells that must fulfill the following requirements. \subsection{Virtual Cells}\label{sec:virtual_cell_requirements} \begin{definition}[Virtual BS] Let $b_1,\ldots,b_n$ be $n$ BSs in the set of BSs $\mathcal{B}$; we call the set $\{b_1,\ldots,b_n\}$ a virtual BS. \end{definition} \begin{definition}[Proper clustering] Let $\mathcal{B}$ be a set of BSs, $\mathcal{U}$ be a set of users. Denote $\mathcal{V}=\{1,\ldots,V\}$. For every $v$, define the sets $\mathcal{B}_v\subset \mathcal{B}$ and $\mathcal{U}_v\subset \mathcal{U}$. We say that the set $\mathcal{V}$ is a proper clustering of the sets $\mathcal{B}$ and $\mathcal{U}$ if $\{\mathcal{B}_v\}_{v\in\mathcal{V}}$ and $\{\mathcal{U}_v\}_{v\in\mathcal{V}}$ are partitions of $\mathcal{B}$ and $\mathcal{U}$, respectively. That is, $\bigcup_{v\in\mathcal{V}}\mathcal{B}_v = \mathcal{B}$, $\bigcup_{v\in\mathcal{V}}\mathcal{U}_v = \mathcal{U}$. Additionally, $\mathcal{B}_{v_1}\cap\mathcal{B}_{v_2}=\emptyset$ and $\mathcal{U}_{v_1}\cap\mathcal{U}_{v_2}=\emptyset$ for all $v_1,v_2\in\mathcal{V}$ such that $v_1\neq v_2$. \end{definition} \begin{definition}[Virtual cell] Let $\mathcal{B}$ be a set of BSs, $\mathcal{U}$ be a set of users, and $\mathcal{V}$ be a proper clustering of $\mathcal{B}$ and $\mathcal{U}$. For every $v\in\mathcal{V}$ the virtual cell $\mathcal{C}_v$ is composed of the virtual BS $\mathcal{B}_v$ and the set of users $\mathcal{U}_v$. \end{definition} This condition ensures that every BS and every user belongs to exactly one virtual cell. This implies that all the transmission power of a user is dedicated to communicating with BSs in the same virtual cell; thus the power allocation can be optimized within each virtual cell. Let $\mathcal{V}$ be a proper clustering of the set of BSs $\mathcal{B}$ and the set of users $\mathcal{U}$, and let $\{\mathcal{C}_v\}_{v\in\mathcal{V}}$ be the set of virtual cells that $\mathcal{V}$ creates. In each virtual cell $\mathcal{C}_v$ we assume that the BSs that compose the virtual BS $\mathcal{B}_v$ jointly allocate their resources. \subsection{The Uplink Resource Allocation Problem for the Interference Coordination Model}\label{subsection:uplink_interference_coordination_problem} In each virtual cell we consider the uplink resource allocation problem in which all the BSs in the virtual cell jointly optimize the channel allocation and the transmission power of the users within the virtual cell. Further, we consider single user detection in which every BS $b$ decodes each of its codewords separately. That is, suppose that users $u_1$ and $u_2$ are both served by BS $b$; then $b$ decodes the codeword of $u_1$ treating the codeword of $u_2$ as noise, and decodes the codeword of $u_2$ treating the codeword of $u_1$ as noise. We refer to this model as the interference coordination model. While each user can communicate with all the BSs in its virtual cell, it follows from \cite{1237143} that, given a power allocation scheme, the maximal communication rate for each user is achieved when the message is decoded by the BS with the highest SINR for this user. Recall that $\mathcal{K}$ is the set of frequency bands.
Denote by $h_{u,b,k}$ the channel coefficient of the channel from user $u\in\mathcal{U}$ to BS $b$ over frequency band $k$, and let $P_{u,k}$ be the transmit power of user $u$ over frequency band $k$. Further, let $\sigma^2_{b,k}$ denote the noise power at BS $b$ over frequency band $k$, and let $W_k$ denote the bandwidth of band $k$. The uplink resource allocation problem in each virtual cell $\mathcal{C}_v$, ignoring interference from other virtual cells, is given by: \begin{flalign}\label{eq:no_decoding_cooperation_single_discrete} \max & \sum_{b\in\mathcal{B}_v}\sum_{u\in\mathcal{U}_v}\sum_{k\in\mathcal{K}} \gamma_{u,b,k}W_k\log_2\left(1+\frac{|h_{u,b,k}|^2P_{u,k}}{\sigma^2_{b,k}+J_{u,b,k}}\right)\nonumber\\ \text{s.t.: } & 0\leq P_{u,k},\quad \sum_{k\in\mathcal{K}}P_{u,k} \leq \overline{P}_u,\quad \forall\: u\in \mathcal{U}_v,k\in\mathcal{K},\nonumber\\ &\hspace{-0.15cm} \sum_{\substack{\tilde{u}\in\mathcal{U}_v,\\ \tilde{u}\neq u}} |h_{\tilde{u},b,k}|^2P_{\tilde{u},k}= J_{u,b,k},\: \forall u\in\mathcal{U}_v,b\in \mathcal{B}_v,k\in\mathcal{K} \nonumber\\ &\gamma_{u,b,k}\in\{0,1\},\quad \sum_{b\in\mathcal{B}_v}\gamma_{u,b,k}\leq 1,\quad \forall\:u\in \mathcal{U}_v,b\in \mathcal{B}_v,k\in\mathcal{K}. \end{flalign} This is a mixed-integer programming problem that is NP-hard. Sections \ref{sec:joint_power_allocation} and \ref{sec:alternating_optimization} present two different approaches for approximating this problem in a given virtual cell. The first approach, presented in Section \ref{sec:joint_power_allocation}, translates this problem from a mixed-integer programming problem to an equivalent problem with continuous variables. The second approach, presented in Section \ref{sec:alternating_optimization}, approximates the optimal solution by alternately solving a channel allocation problem and a power allocation problem. \subsection{The Uplink Resource Allocation Problem for Coordinated Multi-Point Decoding}\label{subsection:uplink_joint_decoding_problem} In the coordinated multi-point decoding model BSs jointly decode the signals that they receive. This model can be realized, for example, by cloud decoding of the signals received by all BSs, under the assumption that the links from the BSs to the cloud have unconstrained capacity. This model is equivalent to a multiple access channel (MAC) with a single transmitting antenna at each user and multiple antennas, corresponding to all BS antennas, at the receiver. Recalling that $\mathcal{K}$ is the set of frequency bands, denote by $x_{u,k}$ the signal of user $u$ on frequency band $k$, and by $y_{b,k}$ the received signal at BS $b$ for band $k\in\mathcal{K}$. For the sake of clarity, we label the BSs in the cluster $v$ by $b_1,\ldots, b_{|\mathcal{B}_v|}$, and label the users in cluster $v$ by $u_1,\ldots,u_{|\mathcal{U}_v|}$. Denote $\boldsymbol y_{v,k}\triangleq(y_{b_1,k},\ldots,y_{b_{|\mathcal{B}_v|},k})'$ and let $\boldsymbol x_{v,k}\triangleq(x_{u_1,k},\ldots,x_{u_{|\mathcal{U}_v|},k})'$. The received signal at BS $b\in \mathcal{B}_v$ over frequency band $k$, ignoring the interference from other clusters, is \begin{flalign} y_{b,k} = \sum_{i=1}^{|\mathcal{U}_v|}h_{u_i,b,k} x_{u_i,k}+n_{b,k}, \end{flalign} where $h_{u_i,b,k}$ is the channel coefficient from user $u_i$ in $v$ to the BS $b$ in $v$ over frequency band $k$, and $n_{b,k}$ is white Gaussian noise at BS $b$ over frequency band $k$. 
Let $\boldsymbol h_{u_i,k} = (h_{u_i,b_1,k},\ldots,h_{u_i,b_{|\mathcal{B}_v|},k})'$ be the channel coefficient vector from user $u_i$ in $v$ to all the BSs in cluster $v$. Then the received signal vector at the BSs in $v$ is \begin{flalign} \boldsymbol y_{v,k} &= \sum_{i=1}^{|\mathcal{U}_v|}\boldsymbol h_{u_i,k} x_{u_i,k}+\boldsymbol n_{v,k}, \end{flalign} where $\boldsymbol n_{v,k}=(n_{b_1,k},\ldots,n_{b_{|\mathcal{B}_v|},k})'$ is a white noise vector at the BSs. Let $\boldsymbol C_{v,k}=\text{cov}\left(\boldsymbol{x}_{v,k}\right)$ and $\boldsymbol N_{v,k} = \text{cov}(\boldsymbol n_{v,k})$; the sum capacity of the uplink in the virtual cell is then: \begin{flalign}\label{eq:uplink_problem_clean} \max &\sum_{k\in\mathcal{K}}W_k\log_2\det\left(\boldsymbol I+\sum_{u\in\mathcal{U}_v}p_{u,k}\boldsymbol h_{u,k} \boldsymbol h_{u,k}^{\dagger}\boldsymbol{N}_{v,k}^{-1}\right)\nonumber\\ \text{s.t.: } & \sum_{k\in\mathcal{K}} p_{u,k}\leq \overline{P}_u,\quad p_{u,k}\geq 0. \end{flalign} We note that while interference between virtual cells is not addressed in this work, as the number of virtual cells is decreased, each virtual cell becomes larger, and the interference inside the virtual cells becomes the dominant interference. This interference is mitigated in (\ref{eq:no_decoding_cooperation_single_discrete}) and (\ref{eq:uplink_problem_clean}) to improve network performance. Additionally, we note that if an approximation of the inter-cluster interference at BS $b$ over frequency band $k$ is known to be $i_{b,k}$, then the term $\sigma_{b,k}^2$ can be replaced with $\sigma_{b,k}^2+i_{b,k}$ in the interference coordination model. Similarly, in coordinated multi-point decoding, the noise covariance matrix $\boldsymbol N_{v,k}$ can be replaced with the term $\boldsymbol N_{v,k}+\boldsymbol I_{v,k}$ where $\boldsymbol I_{v,k}$ is some approximation of the covariance matrix of the inter-cluster interference in the virtual cell $v$. \section{Forming the Virtual Cells}\label{sec:virtual_cell_create} This section presents the clustering approach that creates the virtual cells within which the resource allocation schemes we present in Sections \ref{sec:joint_power_allocation}-\ref{sec:joint_decoding} operate. \subsection{Base Station Clustering via Hierarchical Clustering with Minimax Linkage Criterion} A hierarchical clustering algorithm creates a linkage tree, using a linkage criterion, that shows which clusters are merged when the number of clusters is decreased, and which are separated when this number is increased. This linkage tree is called a dendrogram. We propose using the hierarchical clustering algorithm to cluster BSs, since it enjoys the unique property that decreasing or increasing the number of clusters only affects the clusters that are being merged or separated. Thus, the number of clusters can adapt efficiently to the current state of the network without requiring a full clustering update. By contrast, in other clustering methods, such as K-means or spectral clustering, even a small variation in the number of clusters requires a full clustering update. This is undesirable in wireless networks since each reclustering requires a large setup time and overhead for information acquisition and other message passing. Furthermore, we propose using the hierarchical clustering algorithm with the minimax linkage criterion proposed in \cite{BienTibshirani2011}, which we depict in Algorithm \ref{algo:hierarchical_clustering}. 
This algorithm takes a set of points $S$ as input and produces the clusterings $B_1,\ldots,B_n$, where $B_m$ is the clustering of size $m$. The algorithm defines the center of a cluster to be the member of the cluster with the minimal maximal distance to all other members in the cluster. This minimal maximal distance is the cluster radius. Then, in every step, the minimax linkage criterion merges the two clusters that will jointly have the smallest radius out of all merging possibilities. Since interference tends to increase on average as the distance between interferers is decreased, at each stage the minimax linkage criterion merges the two clusters of BSs for which the smallest anticipated interference at the center of the new cluster, caused by the BSs of that cluster, is maximal. In addition, the minimax linkage criterion fulfills several desirable properties in cluster analysis, as discussed in \cite{BienTibshirani2011}, that other linkage criteria, such as the centroid linkage criterion, do not fulfill. Next, we formally depict the hierarchical clustering algorithm with minimax linkage criterion. Let $d:\mathbb{R}^2\times\mathbb{R}^2\rightarrow\mathbb{R}$ be the Euclidean distance function, and let $S$ be a set of points in $\mathbb{R}^2$. We then define the following: \begin{definition}[Radius of a set around point] The radius of $S$ around $s_i \in S$ is defined as $r(s_i,S)=\max_{s_j\in S}\:d(s_i,s_j)$. \end{definition} \begin{definition}[Minimax radius] The minimax radius of $S$ is defined as $r(S) = \min_{s_i\in S}\: r(s_i,S)$. \end{definition} \begin{definition}[Minimax linkage] The minimax linkage between two sets of points $S_1$ and $S_2$ in $\mathbb{R}^2$ is defined as $d(S_1,S_2) = r(S_1\cup S_2)$. \end{definition} Let $S=\{s_1,\ldots,s_n\}$ be the set of locations of the BSs in $\mathcal{B}$. We use Algorithm \ref{algo:hierarchical_clustering} below with the input $S$ to create the virtual BSs for each number of clusters $m$. This produces the dendrogram, which shows which clusters are merged as the number of clusters is decreased. \setlength{\textfloatsep}{.7cm} \begin{algorithm} \caption{}\label{algo:hierarchical_clustering} \begin{algorithmic}[1] \State Input: A set of points $S=\{s_1,\ldots,s_n\}$; \State Set $B_n = \left\{\{s_1\},\dots,\{s_n\}\right\}$; \State Set $d(\{s_i\},\{s_j\})=d(s_i,s_j),\:\forall s_i,s_j\in S$; \For {$m = n-1,\ldots,1$} \State Find $(S_1,S_2) = \arg\min_{\stackrel{G,H\in B_{m+1}:}{G\neq H}} d(G,H)$; \State Update $B_{m} = B_{m+1} \bigcup \{S_1\cup S_2\} \setminus \{S_1,S_2\}$; \State Calculate $d(S_1\cup S_2,G)$ for all $G\in B_m$; \EndFor \end{algorithmic} \end{algorithm} \subsection{Users' Affiliation with Clusters}\label{sec_user_affil} To create the virtual cells, we consider two affiliation rules: \begin{enumerate} \item Closest BS rule, in which each user is affiliated with its closest BS. \item Best channel rule, in which each user is affiliated with the BS to which it has the best channel (absolute value of the channel coefficient). \end{enumerate} Then each user is associated with the virtual BS that its affiliated BS is part of. This way every virtual BS and its associated users compose a virtual cell. It is easy to verify that the formation of the virtual cells we propose fulfills the requirement presented in Section \ref{sec:virtual_cell_requirements}. 
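To make the clustering and affiliation steps concrete, the following Python sketch (illustrative only; the function and variable names, such as \texttt{minimax\_radius}, are ours and not part of the algorithms above) implements a naive version of Algorithm \ref{algo:hierarchical_clustering} together with the closest BS affiliation rule. For clarity it recomputes the linkage from scratch at every merge, whereas an efficient implementation would cache the pairwise linkages as in step 7 of Algorithm \ref{algo:hierarchical_clustering}.
\begin{verbatim}
import itertools
import numpy as np

def minimax_radius(points, members):
    # r(S): minimize over candidate centers s_i in S the max distance to S
    sub = points[members]
    dists = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
    return dists.max(axis=1).min()

def minimax_linkage_clustering(points, m):
    # Merge singleton clusters until m clusters remain (naive Algorithm 1).
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > m:
        # d(S1, S2) = r(S1 u S2): merge the pair with the smallest joint radius
        i, j = min(itertools.combinations(range(len(clusters)), 2),
                   key=lambda p: minimax_radius(points,
                                                clusters[p[0]] + clusters[p[1]]))
        merged = clusters[i] + clusters[j]
        clusters = [c for t, c in enumerate(clusters) if t not in (i, j)]
        clusters.append(merged)
    return clusters

def closest_bs_affiliation(user_pos, bs_pos, bs_clusters):
    # Each user joins the virtual cell that contains its closest BS.
    bs_to_cell = {b: v for v, cl in enumerate(bs_clusters) for b in cl}
    cells = [[] for _ in bs_clusters]
    for u, pos in enumerate(user_pos):
        nearest = int(np.argmin(np.linalg.norm(bs_pos - pos, axis=1)))
        cells[bs_to_cell[nearest]].append(u)
    return cells
\end{verbatim}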
The combination of creating virtual cells by using global network information for BS clustering and local network information to associate users with virtual cells creates an easy-to-manage network architecture that does not require a global update when local changes in the network occur. \section{Channel and Power Allocation for the Interference Coordination Model}\label{sec:joint_power_allocation} This section introduces the first resource allocation scheme we propose for the interference coordination model. This scheme is found by converting the problem (\ref{eq:no_decoding_cooperation_single_discrete}) to an equivalent continuous variable problem and then solving the new problem via a convex approximation. \subsection{An Equivalent Continuous Variable Resource Allocation Problem} We can represent the problem (\ref{eq:no_decoding_cooperation_single_discrete}) by an equivalent problem with continuous variables. Suppose that, instead of sending a message to at most one BS at each frequency band, a user sends messages to all BSs. The signal of user $u\in\mathcal{U}_v$ over frequency band $k$ is then given by $x_{u,k}=\sum_{b\in\mathcal{B}_v}x_{u,b,k}$ where $x_{u,b,k}$ is the part of the signal of user $u$ that is transmitted over frequency band $k$ and is intended to be decoded by BS $b$. Let $P_{u,b,k}$ be the power allocation of the part of the signal of user $u$ that is transmitted over frequency band $k$ and is intended to be decoded by BS $b$; i.e., $P_{u,b,k}=E\left( x_{u,b,k}^2\right)$. We next prove that (\ref{eq:no_decoding_cooperation_single_discrete}) can in fact be written in the following equivalent form: \begin{flalign}\label{eq:no_decoding_cooperation_single_continuous} \max & \sum_{b\in\mathcal{B}_v}\sum_{u\in\mathcal{U}_v}\sum_{k\in\mathcal{K}} W_k\log_2\left(1+\frac{|h_{u,b,k}|^2P_{u,b,k}}{\sigma^2_{b,k}+J_{u,b,k}}\right)\nonumber\\ \text{s.t.: } & 0\leq P_{u,b,k},\quad \sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} P_{u,b,k}\leq \overline{P}_u,\quad \forall \: u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K},\nonumber\\ & \hspace{-0.55cm}\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\hspace{-0.5cm} |h_{\tilde{u},b,k}|^2P_{\tilde{u},\tilde{b},k}= J_{u,b,k},\: \forall\: u\in\mathcal{U}_v,b\in \mathcal{B}_v,k\in\mathcal{K}. \end{flalign} \begin{theorem}\label{theorem:equivalence:discrete_continuous} The mixed-integer programming problem (\ref{eq:no_decoding_cooperation_single_discrete}) and the continuous variables problem (\ref{eq:no_decoding_cooperation_single_continuous}) are equivalent. \end{theorem} \begin{IEEEproof} The equivalence of (\ref{eq:no_decoding_cooperation_single_discrete}) and (\ref{eq:no_decoding_cooperation_single_continuous}) is argued as follows. First, any solution of (\ref{eq:no_decoding_cooperation_single_discrete}) can be realized as a solution of (\ref{eq:no_decoding_cooperation_single_continuous}) by setting $x_{u,b,k}=0$ whenever $\gamma_{u,b,k}=0$, and $E \left(x_{u,b,k}^2\right) = P_{u,k}$ whenever $\gamma_{u,b,k}=1$. Thus the maximal sum rate found by solving (\ref{eq:no_decoding_cooperation_single_continuous}) upper bounds the maximal sum rate found by solving (\ref{eq:no_decoding_cooperation_single_discrete}). 
On the other hand, suppose that the optimal transmission power of user $u$ over frequency band $k$, given the transmission power of all other users, is $P_{u,k}$; that is, $P_{u,k} = \sum_{b\in\mathcal{B}_v}P_{u,b,k}$. It follows from the duality between the multiple-access channel and the broadcast channel, proved in \cite{1237143}, that the optimal power allocation $(P_{u,b,k})_{b\in\mathcal{B}_v}$ for user $u$ in frequency band $k$, given the power allocation of all other users, is to allocate all its transmission power $P_{u,k}$ over frequency band $k$ to the transmission to the BS with the highest SINR. It follows that the maximal sum rate of (\ref{eq:no_decoding_cooperation_single_continuous}) cannot be larger than that of (\ref{eq:no_decoding_cooperation_single_discrete}). Thus, the two problems (\ref{eq:no_decoding_cooperation_single_discrete}) and (\ref{eq:no_decoding_cooperation_single_continuous}) are equivalent. \end{IEEEproof} \subsection{Solving an Approximation of the Continuous Variable Resource Allocation Problem Optimally}\label{sec:continuous_HSINR_gradient} In the following, we solve problem (\ref{eq:no_decoding_cooperation_single_continuous}). Denote: \begin{flalign}\label{eq:SINR_def} \text{SINR}_{u,b,k}(\boldsymbol P) =\frac{|h_{u,b,k}|^2 P_{u,b,k}}{\sigma^2_{b,k}+\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}} |h_{\tilde{u},b,k}|^2P_{\tilde{u},\tilde{b},k}}, \end{flalign} where $\boldsymbol P = (P_{u,b,k})_{(u,b,k)\in\mathcal{U}_{v}\times\mathcal{B}_{v}\times\mathcal{K}}$ is the array of transmission powers. Using the high SINR approximation \cite{5165179} \begin{flalign}\label{eq:high_SINR_approx_improved} \log(1+z)\geq \alpha(z_0)\log z+\beta(z_0), \end{flalign} where \begin{flalign}\label{eq:alpha_beta_def} \alpha(z_0) = \frac{z_0}{1+z_0},\qquad\beta(z_0) =\log(1+z_0)-\frac{z_0}{1+z_0}\log{z_0}, \end{flalign} we obtain the approximated iterative problem (\ref{eq:iterative_alpha_approx}) where $\alpha_{u,b,k}^{(m)}=\alpha(\text{SINR}_{u,b,k}(\boldsymbol P^{(m-1)}))$, $\beta_{u,b,k}^{(m)}=\beta(\text{SINR}_{u,b,k}(\boldsymbol P^{(m-1)}))$ and $\alpha_{u,b,k}^{(0)}=1$, $\beta_{u,b,k}^{(0)}=0$ for all $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$. \begin{flalign}\label{eq:iterative_alpha_approx} \boldsymbol P^{(m)} =& \arg\max_{\boldsymbol P} \sum_{b\in\mathcal{B}_v}\sum_{u\in\mathcal{U}_v}\sum_{k\in\mathcal{K}} W_k\left[\alpha_{u,b,k}^{(m)}\log_2\left(\frac{|h_{u,b,k}|^2P_{u,b,k}}{\sigma^2_{b,k}+ \sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}} |h_{\tilde{u},b,k}|^2P_{\tilde{u},\tilde{b},k}}\right)+\beta_{u,b,k}^{(m)}\right]\nonumber\\ &\text{s.t.: } \: 0\leq P_{u,b,k},\quad \sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} P_{u,b,k}\leq \overline{P}_u,\quad \forall \: u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K}\nonumber\\ & \sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}} |h_{\tilde{u},b,k}|^2P_{\tilde{u},\tilde{b},k}= J_{u,b,k},\quad \forall \: u\in\mathcal{U}_v,b\in \mathcal{B}_v,k\in\mathcal{K}. \end{flalign} It remains to solve the problem (\ref{eq:iterative_alpha_approx}). 
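The bound (\ref{eq:high_SINR_approx_improved}) can be sanity-checked numerically. The short Python sketch below (ours, not part of the derivation) verifies that the right-hand side is a global lower bound on $\log(1+z)$ and that it holds with equality at $z=z_0$; the check uses natural logarithms, but the inequality holds in any base.
\begin{verbatim}
import numpy as np

def alpha(z0):
    return z0 / (1.0 + z0)

def beta(z0):
    return np.log1p(z0) - alpha(z0) * np.log(z0)

z0 = 5.0
z = np.linspace(0.05, 100.0, 2000)
bound = alpha(z0) * np.log(z) + beta(z0)
assert np.all(np.log1p(z) >= bound - 1e-12)           # global lower bound
assert np.isclose(np.log1p(z0),
                  alpha(z0) * np.log(z0) + beta(z0))  # equality at z = z0
\end{verbatim}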
By transforming the variables of the problem using $P_{u,b,k}=\exp(g_{u,b,k})$ and noticing that the terms $\beta_{u,b,k}^{(m)}$ do not affect the optimal power allocation, we get the equivalent convex problem: \begin{flalign}\label{sol_continuous_power_approx} &\ln(\boldsymbol P^{(m)}) = \arg\max \sum_{b\in\mathcal{B}_v}\sum_{u\in\mathcal{U}_v}\sum_{k\in\mathcal{K}} W_k\alpha_{u,b,k}^{(m)}\cdot\log_2\left(\frac{|h_{u,b,k}|^2\exp(g_{u,b,k})}{\sigma^2_{b,k}+ \sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}} |h_{\tilde{u},b,k}|^2\exp(g_{\tilde{u},\tilde{b},k})}\right)\nonumber\\ &\text{s.t.: } \sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} \exp(g_{u,b,k})\leq \overline{P}_u,\quad \forall\: u\in \mathcal{U}_v. \end{flalign} The Lagrangian of (\ref{sol_continuous_power_approx}) is given by \begin{flalign}\label{eq:Lagrangian_dual_prob_continuous} &L(\boldsymbol g,\boldsymbol\lambda;m) = \sum_{b\in\mathcal{B}_v}\sum_{u\in\mathcal{U}_v}\sum_{k\in\mathcal{K}} W_k\alpha_{u,b,k}^{(m)}\cdot\log_2\left(\frac{|h_{u,b,k}|^2\exp(g_{u,b,k})}{\sigma^2_{b,k}+ \sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}} |h_{\tilde{u},b,k}|^2\exp(g_{\tilde{u},\tilde{b},k})}\right)\nonumber\\ &\hspace{5cm}- \sum_{u\in\mathcal{U}_v} \lambda_u\left(\sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} \exp(g_{u,b,k})-\overline{P}_u\right), \end{flalign} where $m$ denotes the $m$th time (\ref{sol_continuous_power_approx}) is solved. Furthermore, the dual function of the Lagrangian is given by \begin{flalign}\label{eq:maximizer_Lagrangian_continuous} q(\boldsymbol \lambda;m) = \sup_{\boldsymbol g} L(\boldsymbol g,\boldsymbol\lambda;m). \end{flalign} Thus the dual problem of (\ref{sol_continuous_power_approx}) is \begin{flalign}\label{eq:prob_dual_ptob_continuous} &\min q(\boldsymbol\lambda;m),\nonumber\\ & \text{s.t.: } \lambda_u\geq 0,\:\forall u\in\mathcal{U}_v. \end{flalign} Since the problem (\ref{sol_continuous_power_approx}) is convex with a non-empty interior, its duality gap is zero. Additionally, since (\ref{sol_continuous_power_approx}) has a compact domain in terms of $P_{u,b,k}$, it follows from \cite[Proposition 6.1.1]{Bertsekas/99} that we can solve the dual problem (\ref{eq:prob_dual_ptob_continuous}) using a projected gradient method, that is: \begin{flalign}\label{eq:grad_ascend} \lambda_u^{(m,n+1)} = \left[\lambda_u^{(m,n)}+\epsilon_{\lambda}\left(\sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} \exp(g_{u,b,k}^{(m,n)})-\overline{P}_u\right)\right]^+, \end{flalign} where $\boldsymbol g^{(m,n)} = (g_{u,b,k}^{(m,n)})_{u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K}}$ is the maximizer of $L(\boldsymbol g,\boldsymbol\lambda^{(m,n)};m)$. Recall that $P_{u,b,k}=\exp(g_{u,b,k})$. It remains to solve the subproblem (\ref{eq:maximizer_Lagrangian_continuous}). Since its objective function is a strictly concave and differentiable function of $\boldsymbol g$, the solution is attained at the point: \begin{flalign}\label{eq:fixed_point_prob_continuous} &P_{u,b,k}=\frac{W_k\alpha_{u,b,k}^{(m)}}{\lambda_u\ln 2+W_k\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\alpha_{\tilde{u},\tilde{b},k}^{(m)}\frac{\text{SINR}_{\tilde{u},\tilde{b},k}(\boldsymbol P^{(m)})}{P_{\tilde{u},\tilde{b},k}^{(m)}|h_{\tilde{u},\tilde{b},k}|^2 }|h_{u,\tilde{b},k}|^2}. 
\end{flalign} By \cite{5165179} and \cite{414651} we can solve the fixed-point problem (\ref{eq:fixed_point_prob_continuous}) iteratively: \begin{flalign}\label{update_rule_continuous_orig} &P_{u,b,k}^{(m,n,s+1)}=\frac{W_k\alpha_{u,b,k}^{(m)}}{\lambda_u^{(n)}\ln 2+W_k\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\alpha_{\tilde{u},\tilde{b},k}^{(m)}\frac{\text{SINR}_{\tilde{u},\tilde{b},k}(\boldsymbol P^{(m,n,s)})}{P_{\tilde{u},\tilde{b},k}^{(m,n,s)}|h_{\tilde{u},\tilde{b},k}|^2 }|h_{u,\tilde{b},k}|^2} \end{flalign} to achieve the optimal power allocation of the subproblem (\ref{eq:maximizer_Lagrangian_continuous}), where $m$ denotes the iteration number of the high SINR approximation, $n$ denotes the iteration number of the projected gradient algorithm used to solve the dual problem, and $s$ denotes the iteration of the iterative fixed-point solution. The existence of the solution is guaranteed by the strong concavity of (\ref{eq:maximizer_Lagrangian_continuous}). \subsection{Solving an Approximation of the Continuous Variable Resource Allocation Problem Efficiently}\label{sec:continuous_HSINR_fixed_point} Since the problem (\ref{sol_continuous_power_approx}) is convex with a non-empty interior, its duality gap is zero, and the Karush–Kuhn–Tucker (KKT) conditions are sufficient for the points to be primal and dual optimal. The KKT conditions for (\ref{sol_continuous_power_approx}), after substituting $P_{u,b,k}=\exp(g_{u,b,k})$, are \begin{flalign} &P_{u,b,k}=\frac{W_k\alpha_{u,b,k}^{(m)}}{\lambda_u\ln 2+W_k\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\alpha_{\tilde{u},\tilde{b},k}^{(m)}\frac{\text{SINR}_{\tilde{u},\tilde{b},k}(\boldsymbol P^{(m)})}{P_{\tilde{u},\tilde{b},k}^{(m)}|h_{\tilde{u},\tilde{b},k}|^2 }|h_{u,\tilde{b},k}|^2},\quad \forall u\in\mathcal{U}_v,\\ &0=\lambda_u\left(\sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} P_{u,b,k}- \overline{P}_u\right),\quad \forall u\in\mathcal{U}_v,\\ & \sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} P_{u,b,k}\leq \overline{P}_u,\qquad\lambda_u\geq 0, \quad \forall u\in\mathcal{U}_v. \end{flalign} Define the following iterative update rule \begin{flalign}\label{update_rule_continuous} &P_{u,b,k}^{(m,s+1)}=\frac{W_k\alpha_{u,b,k}^{(m)}}{\lambda_u^{(s+1)}\ln 2+W_k\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\alpha_{\tilde{u},\tilde{b},k}^{(m)}\frac{\text{SINR}_{\tilde{u},\tilde{b},k}(\boldsymbol P^{(m,s)})}{P_{\tilde{u},\tilde{b},k}^{(m,s)}|h_{\tilde{u},\tilde{b},k}|^2 }|h_{u,\tilde{b},k}|^2}, \end{flalign} where $\lambda_u^{(s+1)}=0$ if \begin{flalign} \sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} \frac{\alpha_{u,b,k}^{(m)}}{\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\alpha_{\tilde{u},\tilde{b},k}^{(m)}\frac{\text{SINR}_{\tilde{u},\tilde{b},k}(\boldsymbol P^{(m,s)})}{P_{\tilde{u},\tilde{b},k}^{(m,s)}|h_{\tilde{u},\tilde{b},k}|^2 }|h_{u,\tilde{b},k}|^2}\leq \overline{P}_u. \end{flalign} Otherwise $\lambda_u^{(s+1)}$ is chosen such that $\sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}}P_{u,b,k}^{(m,s+1)}=\overline{P}_u$. We have that if this update rule converges, it must converge to a KKT point, which in turn is globally optimal. While there is no known proof that guarantees convergence, in practice convergence is observed in simulations. 
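For illustration, the following NumPy sketch (our own simplified rendering; the array names are ours) performs one pass of the update rule (\ref{update_rule_continuous}). It assumes all entries of \texttt{alpha\_m} are strictly positive, and it determines each $\lambda_u$ by bisection so that the power budget is met with equality whenever the unconstrained update would violate it.
\begin{verbatim}
import numpy as np

LN2 = np.log(2.0)

def fixed_point_step(P, H2, sigma2, alpha_m, W, Pmax):
    # P[u,b,k]: powers, H2[u,b,k] = |h_{u,b,k}|^2, sigma2[b,k]: noise power,
    # alpha_m[u,b,k]: current high-SINR weights, W[k]: bandwidths,
    # Pmax[u]: per-user power budgets (all on a linear scale).
    Ptot = P.sum(axis=1)                                  # (U, K)
    rx = np.einsum('ubk,uk->bk', H2, Ptot)                # total rx power
    denom = sigma2[None, :, :] + rx[None, :, :] - H2 * P  # sigma^2 + J_{u,b,k}
    c = alpha_m / denom                # alpha * SINR / (P |h|^2), elementwise
    full = np.einsum('ubk,bk->uk', H2, c.sum(axis=0))     # sum over all pairs
    cross = full[:, None, :] - c * H2                     # exclude own (u,b) term

    def powers(lam, u):                # candidate update for user u
        return W[None, :] * alpha_m[u] / (lam * LN2 + W[None, :] * cross[u])

    P_new = np.empty_like(P)
    for u in range(P.shape[0]):
        if powers(0.0, u).sum() <= Pmax[u]:
            P_new[u] = powers(0.0, u)          # power constraint inactive
        else:                                  # bisect on lambda_u
            lo, hi = 0.0, 1.0
            while powers(hi, u).sum() > Pmax[u]:
                hi *= 2.0
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if powers(mid, u).sum() > Pmax[u] else (lo, mid)
            P_new[u] = powers(0.5 * (lo + hi), u)
    return P_new
\end{verbatim}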
\section{Solving the Resource Allocation Problem via Alternating Optimization}\label{sec:alternating_optimization} A more traditional approach to solving the resource allocation problem (\ref{eq:no_decoding_cooperation_single_discrete}) separates it into two subproblems: a channel allocation problem that sets the value of $\gamma_{u,b,k}$ to be either zero or one, and a power allocation problem that optimizes the transmission power. Then we iteratively solve these two problems until a stopping criterion is fulfilled. A resource allocation scheme of this type is depicted by Algorithm \ref{algo:Altenating_general}. \setlength{\textfloatsep}{.7cm} \begin{algorithm} \caption{}\label{algo:Altenating_general} \begin{algorithmic}[1] \State Notations: $\boldsymbol P^{(n)} = (P^{(n)}_{u,b,k})_{(u,b,k)\in\mathcal{U}_v\times\mathcal{B}_v\times\mathcal{K}}$, $\boldsymbol \gamma^{(n)} = (\gamma^{(n)}_{u,b,k})_{(u,b,k)\in\mathcal{U}_v\times\mathcal{B}_v\times\mathcal{K}}$; \State Input: $\delta>0, N_{\max}\in\mathbb{N}$; \State Set $n=0$, $\delta_0 = 2\delta$; \State Set $P^{(0)}_{u,b,k}=\overline{P}_u/(|\mathcal{B}_v||\mathcal{K}|)$ and $\gamma^{(0)}_{u,b,k}=0$ for all $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$; \While{ $\delta_n>\delta$ and $n<N_{\max}$} \State $n=n+1$; \State \textbf{Channel allocation:} Given the power allocation $\boldsymbol P^{(n-1)}$, set $\gamma^{(n)}_{u,b,k}$ to be either zero or one for every $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$. \State \textbf{Power allocation:} Given $\boldsymbol\gamma^{(n)}$, calculate $\boldsymbol P^{(n)}$ by solving the iterative problem (\ref{sol_continuous_power_approx}) starting with some initial values $\alpha_{u,b,k}^{(0)}$, $(u,b,k)\in\mathcal{U}_v\times\mathcal{B}_v\times\mathcal{K}$. \State Calculate the sum rate \[R(\boldsymbol P^{(n)},\boldsymbol \gamma^{(n)})\hspace{-0.1cm} =\hspace{-0.1cm}\sum_{b\in\mathcal{B}_v}\hspace{-0.05cm}\sum_{u\in\mathcal{U}_v}\hspace{-0.05cm}\sum_{k\in\mathcal{K}} \gamma^{(n)}_{u,b,k}W_k\log_2\left(1+\frac{|h_{u,b,k}|^2P^{(n)}_{u,b,k}}{\sigma^2_{b,k}+J^{(n)}_{u,b,k}}\right); \] \State Calculate $\delta_n = R(\boldsymbol P^{(n)},\boldsymbol \gamma^{(n)})-R(\boldsymbol P^{(n-1)},\boldsymbol \gamma^{(n-1)})$; \EndWhile \end{algorithmic} \end{algorithm} For the sake of depicting the channel allocation schemes and the initial values of $\alpha^{(0)}_{u,b,k}$ we use the notation \begin{flalign}\label{SINR_single_receiver} \overline{\text{SINR}}_{u,b,k}(\boldsymbol P) = \frac{|h_{u,b,k}|^2\sum_{\tilde{b}\in\mathcal{B}_v}P_{u,\tilde{b},k}}{\sigma^2_{b,k}+\sum_{\substack{\tilde{u}\in\mathcal{U}_v,\tilde{u}\neq u,\\\tilde{b}\in\mathcal{B}_v}} |h_{\tilde{u},b,k}|^2P_{\tilde{u},\tilde{b},k}}. \end{flalign} The interference term in the denominator of (\ref{SINR_single_receiver}) incorporates the constraint that each user communicates with at most one BS at each frequency band. This constraint does not appear in the interference term of the SINR expression (\ref{eq:SINR_def}). This follows since the channel allocation is a by-product of the power allocation scheme presented in Section \ref{sec:joint_power_allocation}. That is, a user is allocated a channel only when the power allocation scheme allocates strictly positive power to the transmission of the user over that channel. Next we present three channel allocation schemes. 
The first of these channel allocation schemes is a user-centric (UC) one in which, at each frequency band, every user chooses its receiving BS to be the one with the maximal SINR for this user given an initial power allocation. The second and third channel allocation schemes are existing approaches that we also consider for comparison. In particular, the second scheme is a BS-centric (BSC) one, used, for example, in \cite{6678362,6815733}. In this scheme, in each frequency band every BS chooses its transmitting user to be the one with the maximal SINR. The third and final channel allocation scheme we consider is presented in \cite{7873307}. In this scheme, given a power allocation, channels are allocated to maximize the sum rate for that given power allocation using the Hungarian method. We refer to this approach as the maximum sum rate matching (MSRM) approach. Interestingly, numerical results show that, as the number of virtual cells decreases and their size increases, both the UC channel allocation and the equivalent continuous problem approach outperform both the BSC approach and the MSRM approach. We remark that this work only considers a single power allocation scheme in Algorithm \ref{algo:Altenating_general}. That is due to the results presented in \cite{6678362}, where different power allocation schemes coupled with channel allocation yielded virtually the same average throughput. Hence we believe that different power allocation schemes will yield little difference in the system sum rate from that obtained with the power allocation algorithm used in Algorithm \ref{algo:Altenating_general}. \subsection{User-Centric (UC) Channel Allocation}\label{sec:alternating_power_allocation_user} This section presents the first channel allocation scheme, depicted in Algorithm \ref{algo:Altenating_single_set_power_UCB}, for the interference coordination model. This scheme is a UC one in that every user chooses the receiving BS to be the one with the maximal SINR for this user. \setlength{\textfloatsep}{.7cm} \begin{algorithm} \caption{}\label{algo:Altenating_single_set_power_UCB} \begin{algorithmic}[1] \State Input: Power allocation $\boldsymbol P = (P_{u,b,k})_{u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K}}$; \State For every $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$ calculate $\overline{\text{SINR}}_{u,b,k}(\boldsymbol P)$; \State For every $u\in\mathcal{U}_v$ and $k\in\mathcal{K}$, calculate: $b_{u,k} = \arg\max_{b\in\mathcal{B}_v} \overline{\text{SINR}}_{u,b,k}(\boldsymbol P)$; \State For every $(u,b,k)\in\mathcal{U}_v\times\mathcal{B}_v\times\mathcal{K}$ set $\gamma_{u,b,k}=\mathbbm{1}_{\{b = b_{u,k}\}}$; \end{algorithmic} \end{algorithm} The motivation behind this approach is allowing the power allocation stage more flexibility to choose the users who transmit to a given BS. More specifically, in the previously proposed channel allocation schemes discussed in Sections \ref{sec:alternating_power_allocation_BS} and \ref{sec:MSRM_channel_allocation}, at most one user is allocated to a BS in each frequency band. However, in the UC approach, in each frequency band each BS has a list of users that chose it as their receiving BS, and the power allocation stage then chooses which user in that list actually transmits to the BS by allocating that user a positive transmission power. 
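As an illustration of Algorithm \ref{algo:Altenating_single_set_power_UCB}, the following NumPy sketch (ours; the array names are our own) computes $\overline{\text{SINR}}_{u,b,k}$ from (\ref{SINR_single_receiver}) and sets $\gamma_{u,b,k}$ accordingly.
\begin{verbatim}
import numpy as np

def uc_channel_allocation(P, H2, sigma2):
    # P[u,b,k]: current powers, H2[u,b,k] = |h_{u,b,k}|^2,
    # sigma2[b,k]: noise power. Returns gamma[u,b,k] in {0,1}.
    U, B, K = H2.shape
    Ptot = P.sum(axis=1)                              # user power per band
    rx = np.einsum('ubk,uk->bk', H2, Ptot)            # total rx power at (b,k)
    own = H2 * Ptot[:, None, :]                       # numerator of SINR-bar
    sinr_bar = own / (sigma2[None, :, :] + rx[None, :, :] - own)
    gamma = np.zeros((U, B, K), dtype=int)
    best_bs = sinr_bar.argmax(axis=1)                 # (U, K)
    for u in range(U):
        for k in range(K):
            gamma[u, best_bs[u, k], k] = 1            # user picks its best BS
    return gamma
\end{verbatim}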
\subsection{Base Station (BS) Centric Resource Allocation}\label{sec:alternating_power_allocation_BS} This section presents the second channel allocation scheme for the interference coordination model. This scheme is a BS-centric one in that every BS chooses its transmitting user to be the one with the maximal SINR for this BS. This scheme is inspired by the works \cite{6678362} and \cite{6815733}; however, we do not restrict users to transmit to the same BS over all frequency bands, but rather allow them to communicate with different BSs in the virtual cell across different frequency bands. We depict the BS-centric channel allocation scheme in Algorithm \ref{algo:Altenating_single_set_power_BCU}. \setlength{\textfloatsep}{.7cm} \begin{algorithm} \caption{}\label{algo:Altenating_single_set_power_BCU} \begin{algorithmic}[1] \State Input: Power allocation $\boldsymbol P = (P_{u,b,k})_{u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K}}$; \State For every $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$ calculate $\overline{\text{SINR}}_{u,b,k}(\boldsymbol P)$; \State For every $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$, calculate: $u_{b,k} = \arg\max_{u\in\mathcal{U}_v} \overline{\text{SINR}}_{u,b,k}(\boldsymbol P)$; \State For every $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$ set $\gamma_{u,b,k}=\mathbbm{1}_{\{u = u_{b,k}\}}$; \end{algorithmic} \end{algorithm} The motivation behind this approach is interference reduction. That is, if the SINR at two or more BSs is maximized by the same user, then a transmission of this user intended for one of these BSs strongly interferes with the communication of the others. To reduce interference, the same user is chosen as the transmitting user by all of these BSs; the power allocation scheme then chooses the identity of the receiving BS among them in accordance with the global objective function of the power allocation stage. We remark that even though in this approach several BSs can choose the same user, it can be proved, following the argument presented in the proof of Theorem \ref{theorem:equivalence:discrete_continuous}, that an optimal power allocation scheme will allocate positive power to the transmission to at most one of these BSs. In practice, this behavior is observed when using the high SINR approximation. If a power allocation scheme that does not exhibit this behavior is used, i.e., if after the power allocation stage some user has a positive transmission power over the same frequency band to two or more BSs, one can improve the sum rate by devoting all the allocated transmit power of that user over that frequency band to the communication with the BS that has the highest SINR over that frequency band. \subsection{Maximum Sum Rate Matching (MSRM) Channel Allocation}\label{sec:MSRM_channel_allocation} This section presents the third and final channel allocation scheme for the interference coordination model. This scheme allocates the channels in a virtual cell optimally for a given power allocation by solving the maximum sum rate matching problem; this approach is presented in \cite{7873307}. Next we depict the channel allocation problem as a matching problem. 
Let $B_k=(\mathcal{U}_v,\mathcal{B}_v,E,\boldsymbol{P},k)$ denote the bipartite graph that connects the set of users $\mathcal{U}_v$ to the set of BSs $\mathcal{B}_v$ where the set $E$ is the set of all pairs $\{u,b\}$ such that $u\in\mathcal{U}_v$ and $b\in\mathcal{B}_v$. Each edge $\{u,b\}$ is assigned a weight that is equal to the transmission rate from $u$ to $b$ using frequency band $k$, given the power allocation $\boldsymbol{P}$. We allocate the channels at each frequency band $k$ by solving the sum rate maximization matching problem of $B_k$ optimally. This optimal matching can be found, for example, by using the Hungarian method \cite{doi:10.1002/nav.3800020109} for every $B_k$. This channel allocation scheme is depicted in Algorithm \ref{algo:Altenating_single_set_power_assignment}. \setlength{\textfloatsep}{.7cm} \begin{algorithm} \caption{}\label{algo:Altenating_single_set_power_assignment} \begin{algorithmic}[1] \State Input: Power allocation $\boldsymbol P = (P_{u,b,k})_{u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K}}$; \State For every $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$ calculate $\overline{\text{SINR}}_{u,b,k}(\boldsymbol P)$ and \[R_{u,b,k}=W_k\log_2\left(1+\overline{\text{SINR}}_{u,b,k}(\boldsymbol P)\right);\] \State For every $k\in\mathcal{K}$ find the optimal matching of $B_k=(\mathcal{U}_v,\mathcal{B}_v,E,\boldsymbol{P},k)$, then set $\gamma_{u,b,k}=1$ if user $u$ was matched with BS $b$ in frequency band $k$ and $\gamma_{u,b,k}=0$ otherwise; \end{algorithmic} \end{algorithm} We note that, as stated in \cite{7873307}, given a power allocation $\boldsymbol P$, Algorithm \ref{algo:Altenating_single_set_power_assignment} finds the optimal channel allocation that maximizes the sum rate for that power allocation. However, since the power allocation may not be optimal, the overall solution is not necessarily optimal. Interestingly, as previously noted, numerical results show that as the number of virtual cells decreases and their size increases, both the user-centric channel allocation and the equivalent continuous problem approach outperform this scheme. \subsection{Convergence of Algorithm \ref{algo:Altenating_general}} The convergence of Algorithm \ref{algo:Altenating_general} depends on the channel allocation scheme used and the initial values $\alpha_{u,b,k}^{(0)}$. Since the system sum rate is bounded, convergence must occur whenever there is an $N_0\in\mathbb{N}$ such that $R(\boldsymbol P^{(n)},\boldsymbol\gamma^{(n)})\geq R(\boldsymbol P^{(n-1)},\boldsymbol\gamma^{(n-1)})$ for all $n\geq N_0$. This, in turn, must occur if $R(\boldsymbol P^{(n-1)},\boldsymbol\gamma^{(n)})\geq R(\boldsymbol P^{(n-1)},\boldsymbol\gamma^{(n-1)})$ and $R(\boldsymbol P^{(n)},\boldsymbol\gamma^{(n)})\geq R(\boldsymbol P^{(n-1)},\boldsymbol\gamma^{(n)})$ for every $n\geq N_0$. This condition holds when allocating channels using Algorithm \ref{algo:Altenating_single_set_power_UCB} or Algorithm \ref{algo:Altenating_single_set_power_assignment} and choosing the initial values $\alpha_{u,b,k}^{(0)}$ at time $n$ to be $\gamma^{(n)}_{u,b,k}\overline{\text{SINR}}_{u,b,k}(\boldsymbol P^{(n-1)})$, since Algorithm \ref{algo:Altenating_single_set_power_UCB} and Algorithm \ref{algo:Altenating_single_set_power_assignment} cannot decrease the sum rate of a virtual cell, and since the high SINR approximation (\ref{eq:high_SINR_approx_improved}) is achieved with equality for $z=z_0$. 
In practice, convergence was observed in simulations for all channel allocation algorithms presented in this work for the choices $\alpha_{u,b,k}^{(0)}=\gamma^{(n)}_{u,b,k}\overline{\text{SINR}}_{u,b,k}(\boldsymbol P^{(n-1)})$ and $\alpha_{u,b,k}^{(0)}=\gamma^{(n)}_{u,b,k}$. The latter choice provided a small improvement over the first and was used in our simulations. \section{Resource Allocation for Coordinated Multi-Point Decoding in Virtual Cells}\label{sec:joint_decoding} This section is dedicated to solving the problem (\ref{eq:uplink_problem_clean}) that is presented in Section \ref{subsection:uplink_joint_decoding_problem}, in which BSs use cloud decoding with backhaul links of infinite capacity. Note that this setup is equivalent to a multiple access channel (MAC) with a single transmitting antenna at each user and multiple antennas at the receiver. Using the identity $\det(\boldsymbol{AB})=\det(\boldsymbol{A})\det(\boldsymbol{B})$ we have that problem (\ref{eq:uplink_problem_clean}), which depicts the capacity of the virtual cell, can be written as follows: \begin{flalign}\label{problem_joint_decode_ininite} \max &\sum_{k\in\mathcal{K}}W_k\left[\log_2\det\left(\boldsymbol{N}_{v,k}+\sum_{u\in\mathcal{U}_v}p_{u,k}\boldsymbol h_{u,k} \boldsymbol h_{u,k}^{\dagger}\right)-\log_2\det\left(\boldsymbol{N}_{v,k}\right)\right],\nonumber\\ \text{s.t.: } & \sum_{k\in\mathcal{K}} p_{u,k}\leq \overline{P}_{u},\quad p_{u,k}\geq 0. \end{flalign} Since the terms $\log_2\det\left(\boldsymbol{N}_{v,k}\right)$ are constants, hereafter we omit them from the objective function. Denote $\boldsymbol{p}_u = (p_{u,1},\ldots,p_{u,K})$ and let: \begin{flalign} &f\left(\boldsymbol{p}_{u_1},\ldots,\boldsymbol{p}_{u_{|\mathcal{U}_v|}}\right) =\sum_{k\in\mathcal{K}}W_k\log_2\det\left(\boldsymbol{N}_{v,k}+\sum_{u\in\mathcal{U}_v}p_{u,k}\boldsymbol h_{u,k} \boldsymbol h_{u,k}^{\dagger}\right). \end{flalign} In order to optimally solve the problem (\ref{problem_joint_decode_ininite}) iteratively using the cyclic coordinate ascent algorithm \cite[Chapter 2.7]{Bertsekas/99}, the following three conditions must hold: \begin{enumerate} \item The function $f\left(\boldsymbol{p}_{u_1},\ldots,\boldsymbol{p}_{u_{|\mathcal{U}_v|}}\right)$ is concave. \item Define \begin{flalign} \mathcal{P}&\triangleq \left\{\left(\boldsymbol{p}_{u_1},\ldots,\boldsymbol{p}_{u_{|\mathcal{U}_v|}}\right):\sum_{k\in\mathcal{K}} p_{u,k}\leq \overline{P}_u,\:\: p_{u,k}\geq 0 \:\: \forall\: u\in\mathcal{U}_v,k\in\mathcal{K}\right\},\nonumber\\ \mathcal{P}_u&\triangleq\left\{\boldsymbol{p}_u:\sum_{k\in\mathcal{K}}p_{u,k}\leq \overline{P}_u,\:p_{u,k}\geq0\right\}, \end{flalign} then $\mathcal{P} = \mathcal{P}_{u_1}\times\ldots\times\mathcal{P}_{u_{|\mathcal{U}_v|}}$. \item The problem \begin{flalign}\label{problem_joint_decode_ininite_single} \max_{\tilde{\boldsymbol{p}}_{u_i}}\: &f\left(\boldsymbol{p}_{u_1},\ldots,\boldsymbol{p}_{u_{i-1}},\tilde{\boldsymbol{p}}_{u_i},\boldsymbol{p}_{u_{i+1}},\ldots,\boldsymbol{p}_{u_{|\mathcal{U}_v|}}\right)\nonumber\\ \text{s.t.: } & \tilde{\boldsymbol{p}}_{u_i}\in\mathcal{P}_{u_i}, \end{flalign} has a unique maximizing solution. \end{enumerate} Next we solve problem (\ref{problem_joint_decode_ininite_single}) and show that the optimal solution is uniquely attained. Denote $\boldsymbol\Sigma_{i,k} = \boldsymbol{N}_{v,k}+\sum_{j\neq i}p_{u_j,k}\boldsymbol h_{u_j,k}\boldsymbol h_{u_j,k}^{\dagger}$. 
Problem (\ref{problem_joint_decode_ininite_single}) is then \begin{flalign}\label{problem_joint_decode_ininite_single_eq} \max &\sum_{k\in\mathcal{K}}W_k\log_2\det\left(\boldsymbol\Sigma_{i,k}+p_{u_i,k}\boldsymbol h_{u_i,k} \boldsymbol h_{u_i,k}^{\dagger}\right)\nonumber\\ \text{s.t.: } & \sum_{k\in\mathcal{K}} p_{u_i,k}\leq \overline{P}_{u_i},\quad p_{u_i,k}\geq 0. \end{flalign} The Lagrangian of (\ref{problem_joint_decode_ininite_single_eq}) is: \begin{flalign*} &L(\boldsymbol p_{u_i},\lambda,\boldsymbol\mu) = \sum_{k\in\mathcal{K}}W_k\log_2\det\left(\boldsymbol\Sigma_{i,k}+p_{u_i,k}\boldsymbol h_{u_i,k} \boldsymbol h_{u_i,k}^{\dagger}\right)-\lambda_{u_i}\left(\sum_{k\in\mathcal{K}} p_{u_i,k}-\overline{P}_{u_i}\right) +\sum_{k\in\mathcal{K}}\mu_{u_i,k}p_{u_i,k}. \end{flalign*} Next, we calculate the derivative of the Lagrangian with respect to $p_{u_i,k}$: \begin{flalign} \frac{\partial L(\boldsymbol p_{u_i},\lambda,\boldsymbol\mu)}{\partial p_{u_i,k}}&= W_k\boldsymbol h_{u_i,k}^{\dagger}\left(\boldsymbol\Sigma_{i,k}+p_{u_i,k}\boldsymbol h_{u_i,k} \boldsymbol h_{u_i,k}^{\dagger}\right)^{-1}\boldsymbol h_{u_i,k} - \lambda_{u_i}+\mu_{u_i,k} \nonumber\\ & = W_k\frac{\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}}{1+\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}p_{u_i,k}}-\lambda_{u_i}+\mu_{u_i,k}. \end{flalign} The KKT conditions for (\ref{problem_joint_decode_ininite_single_eq}) are \begin{flalign} & W_k\frac{\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}}{1+\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}p_{u_i,k}}-\lambda_{u_i}+\mu_{u_i,k} = 0,\nonumber\\ & \lambda_{u_i}\left(\sum_{k\in\mathcal{K}} p_{u_i,k}-\overline{P}_{u_i}\right) = 0,\quad \mu_{u_i,k}p_{u_i,k}=0,\nonumber\\ & \mu_{u_i,k}\geq 0,\quad \lambda_{u_i}\geq0. \end{flalign} Since $\mu_{u_i,k}$ is nonnegative for all $k$, and the matrix $\boldsymbol\Sigma^{-1}_{i,k}$ is positive definite for all $k$, in order to fulfill the first KKT condition, $\lambda_{u_i}$ must be strictly positive. Now, if $p_{u_i,k}>0$, then $\mu_{u_i,k} =0 $ and by the first KKT condition we have \begin{flalign} p_{u_i,k}=\frac{W_k}{\lambda_{u_i}}-\frac{1}{\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}}. \end{flalign} Also, if $p_{u_i,k}=0$, then by the first KKT condition we have \begin{flalign} W_k\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}+\mu_{u_i,k} = \lambda_{u_i}. \end{flalign} It follows that \begin{flalign} p_{u_i,k} = \left(\frac{W_k}{\lambda_{u_i}}-\frac{1}{\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}}\right)^+ \end{flalign} where $\lambda_{u_i}$ is chosen such that $\sum_{k\in\mathcal{K}}p_{u_i,k} = \overline{P}_{u_i}$. \section{Numerical Results}\label{se:simulation} This section presents Monte Carlo simulation results for the resource allocation and user affiliation schemes presented in this paper. In these simulations there are $8$ frequency bands, each of bandwidth 20 kHz, and the carrier frequency is set to $1800$ MHz. The noise power spectral density at each BS is $-174$ dBm/Hz, and the maximal power constraint for each user is $23$ dBm. Finally, in each frequency band the channel exhibits Rayleigh fading, log-normal shadowing with standard deviation $8$ dB, and a path loss (in dB) of $PL(d)= 35\log_{10}(d)+34$, where $d$ denotes the distance between the transmitter and the receiver in meters (see \cite{4138008}). 
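For reproducibility, the channel model just described can be generated in a few lines of Python. The sketch below is our own rendering and assumes, in particular, that shadowing is drawn once per user-BS link and held fixed across the frequency bands, and that all user-BS distances are strictly positive.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def channel_gains(user_pos, bs_pos, n_bands):
    # Returns |h_{u,b,k}|^2 with path loss PL(d) = 35 log10(d) + 34 dB,
    # 8 dB log-normal shadowing, and i.i.d. Rayleigh fading per band.
    d = np.linalg.norm(user_pos[:, None, :] - bs_pos[None, :, :], axis=-1)
    pl_db = 35.0 * np.log10(d) + 34.0                 # d in meters, d > 0
    shadow_db = rng.normal(0.0, 8.0, size=d.shape)    # per-link shadowing
    gain = 10.0 ** (-(pl_db + shadow_db) / 10.0)      # linear power gain (U, B)
    # Rayleigh fading: |h|^2 is exponentially distributed with unit mean
    fading = rng.exponential(1.0, size=d.shape + (n_bands,))
    return gain[..., None] * fading
\end{verbatim}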
The network comprises $15$ BSs and $100$ users that are uniformly located in a square of side $2000$ meters. The results are averaged over $1000$ system realizations. The numerical results depict the average system sum rate achieved by the BS clustering, resource allocation methods, and user affiliation schemes we propose in this paper. To evaluate the performance of our BS clustering we compare the average system sum rate achieved by using the hierarchical clustering with minimax linkage criterion to that of other popular clustering algorithms, namely, the K-means clustering algorithm and the spectral clustering algorithm \cite{Ng:2001:SCA:2980539.2980649} for the choices $\sigma=\sqrt{2000}$ and $\sigma=2000$. The simulation results for the system setup stated above are shown in Figures \ref{Best_Channel_Average_Sum_Rate_fig_single_all}-\ref{Several_Joint_decoding_exhaustive_hierarchial_fig}. An additional figure, Fig.~\ref{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single2}, presents numerical results that evaluate the clustering choice for a system setup with $10$ BSs and $80$ users that are uniformly located in a square of side $1000$ meters; all the other system parameters remain the same. The line descriptions of the figures are of the structure $F1 - F2 - F3$ where \begin{itemize} \item The $F1$ field describes the BS clustering method used. This field can take one of the following options: \textit{Hierarchical}, which stands for the hierarchical clustering with minimax linkage criterion; \textit{K-means}, which stands for the K-means clustering algorithm, and \textit{Spectral clustering $\sigma=x$}, which stands for spectral clustering where $\sigma$ takes the value $x$. \item The $F2$ field describes the resource allocation scheme. This field can take one of the following options: \begin{itemize} \item \textit{JD}, which stands for Joint Decoding, refers to the resource allocation scheme for the coordinated multi-point model which is presented in Section \ref{sec:joint_decoding}. \item \textit{Continuous}, which refers to the resource allocation presented in Section \ref{sec:joint_power_allocation}. \item \textit{UC}, which refers to the resource allocation presented in Section \ref{sec:alternating_power_allocation_user}. \item \textit{BSC}, which refers to the resource allocation presented in Section \ref{sec:alternating_power_allocation_BS}. \item \textit{MSRM}, which refers to the resource allocation presented in Section \ref{sec:MSRM_channel_allocation}. \item \textit{Max SUD}, which refers to the maximal average sum rate produced by each of the above resource allocation schemes for the interference coordination model. \end{itemize} \item The $F3$ field describes the user affiliation criterion. This field can either be \textit{``best channel"} or \textit{``closest BS"}. \end{itemize} \subsection{Average System Sum Rate} Figures \ref{Best_Channel_Average_Sum_Rate_fig_single_all}-\ref{Best_Channel_Average_Sum_Rate_fig_all} depict the average system sum rate as a function of the number of virtual cells. We clustered the BSs in the network according to the hierarchical clustering algorithm with the minimax linkage criterion that is depicted in Algorithm \ref{algo:hierarchical_clustering}. We considered both of the user affiliation rules we propose in Section \ref{sec_user_affil}, i.e., the ``closest BS" criterion and the ``best channel" criterion. 
We examined the average system sum rate of both cooperation models discussed in this paper: the interference coordination model whose resource allocation schemes are discussed in Sections \ref{sec:joint_power_allocation}-\ref{sec:alternating_optimization}, and the coordinated multi-point decoding model whose resource allocation scheme is discussed in Section \ref{sec:joint_decoding}. Fig.~\ref{Best_Channel_Average_Sum_Rate_fig_single_all} depicts the average system sum rate of the interference coordination model for each of the resource allocation schemes and each of the user affiliation schemes we propose in this paper. Fig.~\ref{Best_Channel_Average_Sum_Rate_joint_decoding} depicts the average system sum rate of the coordinated multi-point decoding for each of the user affiliation schemes we propose. Finally, Fig.~\ref{Best_Channel_Average_Sum_Rate_fig_all} depicts the average system sum rate achieved by each of the cooperation models we consider. \begin{figure} \centering \includegraphics[scale=0.67]{Figure1.png} \vspace{-0.4cm} \caption{Comparison of the average system sum rate of the interference coordination model as a function of the number of virtual cells using hierarchical BS clustering with minimax linkage criterion.} \label{Best_Channel_Average_Sum_Rate_fig_single_all} \vspace{-0.3cm} \end{figure} \begin{figure} \centering \includegraphics[scale=0.67]{Figure2.png} \vspace{-0.4cm} \caption{Comparison of the average system sum rate of the coordinated multi-point decoding as a function of the number of virtual cells using hierarchical BS clustering with minimax linkage criterion.} \label{Best_Channel_Average_Sum_Rate_joint_decoding} \vspace{-0.3cm} \end{figure} \begin{figure} \centering \includegraphics[scale=0.67]{Figure3.png} \vspace{-0.4cm} \caption{Comparison between the average sum rate of the interference coordination model and the coordinated multi-point decoding as a function of the number of virtual cells using hierarchical BS clustering with minimax linkage criterion.} \label{Best_Channel_Average_Sum_Rate_fig_all} \vspace{-0.4cm} \end{figure} Figures \ref{Best_Channel_Average_Sum_Rate_fig_single_all}-\ref{Best_Channel_Average_Sum_Rate_fig_all} lead to several interesting insights and conclusions. First, they confirm the expectation that, as the number of virtual cells decreases, the average sum rate increases. Second, they show that the best channel affiliation rule outperforms the closest BS one when the number of virtual cells is large. However, as Fig.~\ref{Best_Channel_Average_Sum_Rate_fig_single_all} shows, this changes in the interference coordination model when the number of virtual cells decreases. In this case the closest BS affiliation rule either outperforms or is on par with the best channel one, depending on the resource allocation scheme. Additionally, Fig.~\ref{Best_Channel_Average_Sum_Rate_fig_single_all} shows that it is best to use the BSC or MSRM channel allocation methods, which yielded similar performance, except when there is a single virtual cell (fully centralized optimization). In this case the two new resource allocation techniques that we propose outperform these other methods. This can be explained by the fact that our new schemes provide more freedom than existing methods in the power allocation stage to choose which users have a positive transmission power. However, since the power allocation problem is solved approximately, its solution may not be optimal. 
When the size of the virtual cells is small (i.e., there are many virtual cells), the channel allocation choice of the existing methods is good, whereas the new methods suffer a loss in performance due to the suboptimality of the power allocation stage. However, as the size of the virtual cells grows (as their number is decreased), the ability of the new methods to consider more channel allocation combinations in the power allocation stage improves the resource allocation performance, even though the solution of the power allocation problem is only approximately optimal. Overall, when considering the best achieved average sum rate at each point, the increase in average sum rate of the fully centralized scenario (i.e., a single virtual cell) compared to the fully distributed scenario is approximately $20\%$. Fig.~\ref{Best_Channel_Average_Sum_Rate_joint_decoding} depicts the average system sum rate of the coordinated multi-point decoding as a function of the number of virtual cells comprising the network. It shows the monotonic and significant improvement in average system sum rate as the number of virtual cells decreases; the overall improvement in average system sum rate is 330\%. Fig.~\ref{Best_Channel_Average_Sum_Rate_fig_all} compares the average system sum rate achieved by the coordinated multi-point decoding with the one achieved by the interference coordination model. Fig.~\ref{Best_Channel_Average_Sum_Rate_fig_all} shows that coordinated multi-point decoding can achieve a significantly higher average system sum rate compared with single user decoding. However, single user decoding may yield a higher sum rate when the number of virtual cells is large. For a large number of virtual cells, where the limited coordination between BSs is similar to having no coordination at all, ignoring the interference from outside the virtual cell affects the coordinated multi-point scheme more severely than the interference coordination model, since joint decoding depends on the exact second-order statistics of the interference. In this case the loss in performance caused by using an inexact interference covariance matrix is not compensated by the gain in performance of using joint decoding in the virtual cell. \subsection{Comparison with Other Clustering Algorithms} We also compared the average system sum rate of the hierarchical clustering algorithm with minimax linkage criterion with that of two other popular clustering algorithms, namely, the K-means clustering algorithm and the spectral clustering algorithm \cite{Ng:2001:SCA:2980539.2980649} for the choices $\sigma=\sqrt{2000}$ and $\sigma=2000$. Fig.~\ref{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single} depicts the maximal average system sum rate achieved by each of the clustering algorithms, where the maximization is taken over the resource allocation schemes for the interference coordination model presented in this work. Additionally, Fig.~\ref{Several_Joint_decoding_exhaustive_hierarchial_fig} depicts the average system sum rate achieved by coordinated multi-point decoding. Figs.~\ref{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single}-\ref{Several_Joint_decoding_exhaustive_hierarchial_fig} show that the hierarchical algorithm consistently outperforms both the K-means and the spectral clustering algorithms for both user affiliation rules and both cooperation models. 
\begin{figure} \centering \includegraphics[scale=0.67]{Figure4.png} \vspace{-0.4cm} \caption{Comparison of the maximal average sum rate of several BS clustering algorithms as a function of the number of virtual cells for the interference coordination model.} \label{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single} \vspace{-0.4cm} \end{figure} \begin{figure} \centering \includegraphics[scale=0.67]{Figure5.png} \vspace{-0.4cm} \caption{Comparison of the maximal average sum rate of several BS clustering algorithms as a function of the number of virtual cells for coordinated multi-point decoding.} \label{Several_Joint_decoding_exhaustive_hierarchial_fig} \vspace{-0.4cm} \end{figure} We considered an additional network setup that comprises $10$ BSs and $80$ users that are uniformly located in a square of side $1000$ meters. Fig.~\ref{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single2} presents the average system sum rate as a function of the number of virtual cells for the interference coordination model. The results were averaged over $1000$ system realizations. Fig.~\ref{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single2} shows that a proper choice of the clustering algorithm is crucial for improving network performance. This is evident in the plot of the spectral clustering algorithm, in which the network performance monotonically decreases as the number of virtual cells is decreased from 10 to 5. \begin{figure} \centering \includegraphics[scale=0.6]{Figure6.png} \vspace{-0.4cm} \caption{Comparison of the maximal average sum rate of several BS clustering algorithms as a function of the number of virtual cells for the interference coordination model.} \label{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single2} \vspace{-0.4cm} \end{figure} \section{Conclusion}\label{sec:conclusion} This work addressed the role of virtual cells in resource allocation and network management for future wireless networks. It proposed methods for two design aspects of this network optimization, namely, forming the virtual cells and allocating the communication resources in each virtual cell to maximize the total system sum rate. We considered two cooperation models in virtual cells. The first model used interference coordination, where the resource allocation in each virtual cell is performed jointly for all users and BSs in the virtual cell but there is no decoding cooperation. The second cooperation model we considered was the coordinated multi-point decoding model, whereby BSs in a virtual cell allocate the communication resources jointly and also decode their signals cooperatively. We presented two types of resource allocation schemes for the interference coordination model. The first scheme converted the NP-hard mixed-integer resource allocation problem into a continuous resource allocation problem and then found an approximate solution. The second scheme alternated between the power allocation and channel allocation problems. We proposed a new channel allocation scheme that operates in a user-centric manner, and also considered a BS-centric approach. We additionally considered a maximum sum rate matching approach in which an optimal channel assignment is found for a given power allocation. Since this power allocation may not be optimal, the overall solution may be sub-optimal as well. We also solved the joint decoding resource allocation problem for the coordinated multi-point decoding model in each virtual cell optimally. 
All of these schemes assume the BSs have been assigned to virtual cells via clustering. For this we proposed hierarchical clustering of the BSs to form the virtual cells, since changing the number of virtual cells then causes only local changes and does not force a reclustering of all the BSs in the network. We presented numerical results for all of the aforementioned models. Our numerical results demonstrate the increase in system sum rate that our neighborhood-based optimization yields; this increase is monotonic as the optimization moves from fully distributed to fully centralized. Additionally, our numerical results indicate that coordinated multi-point communication systems show a greater increase in system sum rate as the number of virtual cells decreases than interference coordination communication systems. Finally, they show that hierarchical clustering with the minimax linkage criterion yields a higher system sum rate than both K-means and spectral clustering. \bibliographystyle{IEEEtran}
\section{Introduction} Schur-Weyl duality connects polynomial representations of $GL_N$ and representations of the symmetric group $S_n$. Let $V = {\mathbb{C}}^N$ denote the vector representation of $GL_N$ and let $S_n$ be the symmetric group on $n$ indices. Then $V^{\otimes n}$ carries a $GL_N$-action and a natural right $S_n$-action permuting the tensor factors. By Schur-Weyl duality, we have the decomposition $$V^{\otimes n} = \bigoplus_{|\lambda| = n} V^{\lambda} \boxtimes S_{\lambda},$$ where $\lambda$ runs through the partitions of $n$ with at most $N$ rows, $S_{\lambda}$ is the corresponding irreducible representation of $S_n$ (all irreducible representations of $S_n$ occur when $N \geq n$) and $V^{\lambda}$ is the irreducible $GL_N$-module with highest weight $\lambda$. Moreover, the actions of the Jucys-Murphy elements are diagonalizable. In \cite{AS}, Arakawa and Suzuki constructed a functor from the category of $U(\mathfrak{gl}_N)$-modules to the category of representations of the degenerate affine Hecke algebra of type $A_n$. In \cite{CEE}, Calaque, Enriquez and Etingof generalized this functor to the category of representations of the degenerate double affine Hecke algebra of type $A_n$. Etingof, Freund and Ma \cite{EFM} extended the construction to the category of representations of the degenerate affine and double affine Hecke algebras of type $BC_n$ by considering the classical symmetric pair $(\mathfrak{gl}_N, \mathfrak{gl}_p \times \mathfrak{gl}_{N - p})$. As a quantization of the functors of Etingof-Freund-Ma, Jordan and Ma \cite{JM} constructed functors from the category of $U_q(\mathfrak{gl}_N)$-modules to the category of representations of the affine Hecke algebra of type $C_n$, and from the category of quantum $\mathcal{D}$-modules to the category of representations of the double affine Hecke algebra of type $C^{\vee}C_n$. The construction in \cite{JM} used the theory of the quantum symmetric pair $(U_q(\mathfrak{gl}_N), B_{\sigma})$, where $B_{\sigma}$ is a coideal subalgebra; this is a quantum analogue of the classical symmetric pair.\\ On the other hand, in \cite{Ree}, Reeder classified the irreducible representations of the affine Hecke algebra of type $C_2$ with equal parameters. In \cite{K}, Kato indexed and analyzed the weights of representations of the affine Hecke algebra of type $C_n$. In \cite{M}, Ma analyzed the image of principal series modules under the Etingof-Freund-Ma functor. Moreover, Young diagrams give a combinatorial description of the irreducible representations of the symmetric group and of the Hecke algebra of type $A$, with standard tableaux on the Young diagram indexing the bases. Similarly, skew shapes and the standard tableaux on them describe the irreducible representations of the affine Hecke algebra of type $A$. Moreover, in \cite{SV}, Suzuki and Vazirani introduced a description of some irreducible representations of the double affine Hecke algebra of type $A$ by periodic skew Young diagrams and periodic standard tableaux on them. In \cite{R}, Ram introduced chambers and local regions and used them to describe representations of the affine Hecke algebra. In \cite{D}, Daugherty introduced a combinatorial description of representations of the degenerate extended two-boundary Hecke algebra.
In \cite{DR}, Daugherty and Ram gave a Schur-Weyl duality approach to the affine Hecke algebra of type $C_n$.\\ This paper focuses on the representations of the degenerate affine Hecke algebra of type $C_n$ and gives a combinatorial description which is similar to the combinatorial descriptions in \cite{D} and \cite{DR} but is obtained via a different structure, the Etingof-Freund-Ma functor. This paper is arranged as follows: Sections 2-4 concern the Etingof-Freund-Ma functor, the degenerate affine Hecke algebra of type $C_n$ and $GL_N$-modules. In sections 5 and 6, we compute the underlying vector space of the image of the Etingof-Freund-Ma functor and the $\mathcal{Y}$-actions. In sections 7 and 8, we discuss intertwining operators and define combinatorial moves. Section 9 concerns the irreducibility of the image. In section 10, we discuss how to recover a $GL_N$-module from a representation of the degenerate affine Hecke algebra of type $C_n$.\\ \textbf{Acknowledgments.} I would like to thank Monica Vazirani for her guidance and helpful discussions, Arun Ram for helpful comments on my first draft and for the suggestion on presenting the $\mathcal{Y}$-action, José Simental Rodríguez for detailed feedback and helpful discussions. \section{Definitions and notations} \subsection{Root system of type $C_n$} Let ${\mathfrak{h}}^*$ be a finite-dimensional real vector space with basis $\{\epsilon_i | i = 1, \cdots, n \}$ and a positive definite symmetric bilinear form $(\cdot, \cdot)$ such that $(\epsilon_i, \epsilon_j) = \delta_{ij}$. Let $R_n$ be the irreducible root system of type $C_n$, $$R_n = \{\pm(\epsilon_i + \epsilon_j) | 1 \leq i \leq j \leq n\}\cup \{\epsilon_i - \epsilon_j | i, j =1, \cdots, n \text{ and } i \neq j\},$$ with positive roots $$R_{n+} = \{\epsilon_i + \epsilon_j | 1 \leq i \leq j \leq n\} \cup \{\epsilon_i - \epsilon_j | 1 \leq i < j \leq n\}.$$ For any root $\alpha$, the coroot is $\alpha^{\vee} = \dfrac{2\alpha}{(\alpha, \alpha)}$. Let $Q$ be the root lattice and $Q^{\vee}$ be the coroot lattice. Let $\alpha_i = \epsilon_i - \epsilon_{i + 1}$, for $i = 1, \cdots, n - 1$ and $\alpha_n = 2\epsilon_n$. Then the collection of simple roots is $$\Pi_n = \{\alpha_i | i = 1 ,\cdots, n\}.$$ For each simple root $\alpha_i$, define the reflection $s_i := s_{\alpha_i}$, $$s_{\alpha_i}(\lambda) = \lambda - (\lambda, \alpha_i^{\vee}) \alpha_i,$$ where $\lambda \in {\mathfrak{h}}^*$. Then the finite Weyl group $W_0$ of type $C_n$ is generated by the generators $s_1, \cdots, s_{n-1},s_n$ with the relations \begin{align} &s_i^2 = 1, \text{ for } i = 1, \cdots, n,\\ &s_i s_{i + 1}s_i = s_{i + 1}s_is_{i + 1}, \text{ for } i = 1, \cdots, n - 1,\\ &s_{n - 1}s_ns_{n - 1}s_n = s_ns_{n - 1}s_ns_{n - 1},\\ &s_i s_j = s_j s_i, \text{ for } |i - j| > 1. \end{align} \subsection{Affine Weyl group of type $C_n$} Let $W = W_0 \ltimes Q^{\vee}$. For any $\iota \in {\mathfrak{h}}^*$ with $\iota = \iota_1 \epsilon_1 + \cdots + \iota_n \epsilon_n$ and $\iota_k \in \mathbb{Z}$, let $Y^{\iota} = Y_1^{\iota_1}\cdots Y_n^{\iota_n}$, and let $w \in W_0$ act by $w . Y^{\iota} = Y^{w(\iota)}$. The affine Weyl group $W$ of type $C_n$ is generated by $s_1,\cdots, s_{n-1},s_n$ and $Y_i$, for $i = 1, \cdots, n$, with the following relations in addition to (1)-(4), \begin{align} & s_i Y_j = Y_j s_i, \text{ for } j \neq i, i + 1,\\ & Y_i Y_j = Y_j Y_i, \\ & s_i Y_i s_i = Y_{i + 1}, \text{ for } i = 1, \cdots, n - 1,\\ & s_n Y_n s_n = {Y_n}^{-1}.
\end{align} \subsection{Definition of degenerate affine Hecke algebra of type $C_n$} Let $\kappa_1$ and $\kappa_2$ be two parameters. The trigonometric degenerate affine Hecke algebra $H_n(\kappa_1, \kappa_2)$ is the algebra generated over $\mathbb{C}$ by $s_1, \cdots, s_{n-1},\gamma_n$, where we take $\gamma_n=s_n$, and $y_1, \cdots, y_n$, with relations (1)-(6) (with $Y_i$ replaced by $y_i$) and the following relations \begin{align} & s_i y_i - y_{i + 1} s_i = \kappa_1, \text{ for } i = 1, \cdots, n - 1,\\ & \gamma_n y_n + y_n \gamma_n = \kappa_2. \end{align} \subsection{$\mathcal{Y}$-semisimple degenerate affine Hecke algebra representations} Let us now define what we mean by $\mathcal{Y}$-semisimplicity. Let $\mathcal{Y} = \mathbb{C}[y_1, \cdots, y_n]$ be the commutative subalgebra of the degenerate affine Hecke algebra $H_n(\kappa_1, \kappa_2)$. Let $L$ be a representation of $H_n(\kappa_1, \kappa_2)$. For a function $\zeta: \{1,\cdots,n\} \to \mathbb{C}$, let $\zeta_i$ denote $\zeta(i)$ and write $\zeta=[\zeta_1,\cdots,\zeta_n]$. Define the simultaneous generalized eigenspace $$L_{\zeta}^{gen} = \{v \in L | (y_i - \zeta_i)^k v = 0 \text{ for some } k \gg 0 \text{ and for all } i = 1, \cdots, n\}.$$ Since the polynomial algebra $\mathcal{Y}$ is commutative, the restriction of $L$ to $\mathcal{Y}$ decomposes into a direct sum of simultaneous generalized eigenspaces, i.e. $L = \oplus_{\zeta} L_{\zeta}^{gen}$. Similarly, define the simultaneous eigenspace $$L_{\zeta} = \{v \in L | y_i v = \zeta_i v \text{ for all } i = 1, \cdots, n\}.$$ \begin{definition} If the restriction of $L$ to $\mathcal{Y}$ decomposes into a direct sum of simultaneous eigenspaces, i.e. $L = \oplus_{\zeta} L_{\zeta}$, then $L$ is called $\mathcal{Y}$-semisimple. The function $\zeta$ is called a weight and $L_{\zeta}$ is the weight space of weight $\zeta$. \end{definition} \section{Etingof-Freund-Ma Functor} We recall the definition of the Etingof-Freund-Ma functor $F_{n, p, \mu}$ in \cite{EFM}. Let $N$ be a positive integer and $V$ be the vector representation of $\mathfrak{gl}_N$. Let $p, q$ be positive integers such that $N = p + q$. Let $\mathfrak{t} = \mathfrak{gl}_p \times \mathfrak{gl}_q$ and $\mathfrak{t}_0$ be the subalgebra of $\mathfrak{t}$ consisting of all traceless elements of $\mathfrak{t}$. Let $\chi$ be the character of $\mathfrak{t}$ defined by \begin{equation} \chi( \begin{bmatrix} S &0\\ 0 &T \end{bmatrix}) = q\cdot tr(S) - p \cdot tr(T), \end{equation} where $S \in \mathfrak{gl}_p$ and $T \in \mathfrak{gl}_q$. For a given $\mu \in \mathbb{C}$, define a functor $F_{n, p, \mu}$ from the category of $\mathfrak{gl}_N$-modules to the category of representations of the degenerate affine Hecke algebra $H_n(1, p -q-\mu N)$ by $$F_{n, p, \mu}(M) = (M \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu},$$ where the $(\mathfrak{t}_0, \mu)$-invariance condition is $A . v = \mu \chi(A) v$ for all $A \in \mathfrak{t}_0$. \\ Let $M$ be the $0$-th tensor factor. Let $V_i$ be the $i$-th tensor factor with $V_i=V$ being the vector representation for $i=1, \cdots,n$. In \cite{JM}, the action of the degenerate affine Hecke algebra $H_n(1, p-q-\mu N)$ is the quasi classical limit of the action of the affine Hecke algebra $\mathcal{H}_n(q,q^{\sigma},q^{(p-q-\tau)})$ generated by $T_1,\cdots, T_{n-1},T_n$ and $Y_1^{\pm}, \cdots, Y_n^{\pm}$. In the following figures, $V_i$ is the vector representation for $i=1, \cdots, n$.
In \cite{JM}, the action of $T_i$ for $i=1, \cdots, n-1$ was defined by $\tau_{V_i, V_{i+1}} \circ R_{i,i+1}$, where the flip operator $\tau_{V_i,V_{i+1}}: V_i \otimes V_{i+1} \to V_{i+1} \otimes V_i$ is defined by $v_i \otimes v_{i+1} \mapsto v_{i+1} \otimes v_i$ and $R_{i,i+1}$ is the $R$ matrix acting on $V_{i} \otimes V_{i+1}$, \begin{center} \begin{tikzpicture}[scale=0.6] \draw (-2,1) node {$T_i$}; \draw (-1,1) node {$=$}; \draw [thick] plot (5,0) to[bend left=10] (4.65,0.65); \draw [thick] (4.4,1.05) to [bend left=10] (4,2); \draw [thick] (4,0) to[bend right=10] (5,2); \draw [thick] (0,2)--(0,0); \draw [thick] (1,2)--(1,0); \draw [thick] (8,2)--(8,0); \draw (2.5,1)node {$\cdots$}; \draw (6.5,1)node {$\cdots$}; \draw (0, -0.5) node {\tiny{$M$}}; \draw (0, 2.5) node {\tiny{$M$}}; \draw (1, -0.5) node {\tiny{$V_1$}}; \draw (1, 2.5) node {\tiny{$V_1$}}; \draw (4, -0.5) node {\tiny{$V_i$}}; \draw (4, 2.5) node {\tiny{$V_i$}}; \draw (5, -0.5) node {\tiny{$V_{i+1}$}}; \draw (5, 2.5) node {\tiny{$V_{i+1}$}}; \draw (8, -0.5) node {\tiny{$V_n$}}; \draw (8, 2.5) node {\tiny{$V_n$}}; \end{tikzpicture} \end{center} Let $T_i=s_ie^{ \hbar s_i/2}$. Proposition 39 in \cite{J} and section 10.7 of \cite{JM} computed the action of $s_i$: $s_i$ acts on $F_{n,p,\mu}(M)$ by exchanging the $i$-th and ${(i + 1)}$-th tensor factors.\\ The action of $T_n$ was defined as follows \begin{center} \begin{tikzpicture}[scale=0.6] \draw (-2,1) node {$T_n$}; \draw (-1,1) node {$=$}; \draw [thick] (0,2)--(0,0); \draw [thick] (1,2)--(1,0); \draw [thick] (2,2)--(2,0); \draw [thick] (5,2)--(5,1.4); \draw [thick] (5,0.6)--(5,0); \draw (3.5,1) node {$\cdots$}; \draw (0, -0.5) node {\tiny{$M$}}; \draw (0, 2.5) node {\tiny{$M$}}; \draw (1, -0.5) node {\tiny{$V_1$}}; \draw (1, 2.5) node {\tiny{$V_1$}}; \draw (2, -0.5) node {\tiny{$V_2$}}; \draw (2, 2.5) node {\tiny{$V_2$}}; \draw (5, -0.5) node {\tiny{$V_n$}}; \draw (5, 2.5) node {\tiny{$V_n$}}; \draw [thick] (4.6,1.4) rectangle (5.4,0.6); \draw (5,1) node {\tiny{$J_V$}}; \end{tikzpicture} \end{center} where the matrix $J_V$ is a right-handed numerical solution of the reflection equation $R_{21}(J_V)_1R_{12}(J_V)_2=(J_V)_2R_{21}(J_V)_1R_{12}$ in section 7 of \cite{JM}. Section 10.7 of \cite{JM} computes the quasi classical limit of $T_n$. Then $\gamma_n$ acts on $F_{n,p,\mu}(M)$ by multiplying the $n$-th tensor factor by $J = diag(I_p,-I_q)$.\\ The action of $Y_1$ was defined by $q^{\frac{2n}{N}+\mu(q-p)-N}R^{-1}_{01} \circ \tau_{V,M} \circ R^{-1}_{10} \circ \tau_{M,V}$. \begin{center} \begin{tikzpicture}[scale=0.6] \draw (-6.5,1) node {$Y_1$}; \draw (-3.5,1) node {\tiny{$=q^{\frac{2n}{N}+\mu(q-p)-N}$}}; \draw [thick] plot [smooth] coordinates {(1,0) (0.85,0.3) (-0.8,1) (-0.2,1.45)}; \draw [thick] plot [smooth] coordinates {(0.2,1.5) (0.9,1.7) (1,2)}; \draw [thick] (0,2)--(0,0.8); \draw [thick] (0,0.45)--(0,0); \draw [thick] (2,2)--(2,0); \draw [thick] (3,2)--(3,0); \draw [thick] (6,2)--(6,0); \draw (4.5,1)node {$\cdots$}; \draw (0, -0.5) node {\tiny{$M$}}; \draw (0, 2.5) node {\tiny{$M$}}; \draw (1, -0.5) node {\tiny{$V_1$}}; \draw (1, 2.5) node {\tiny{$V_1$}}; \draw (2, -0.5) node {\tiny{$V_2$}}; \draw (2, 2.5) node {\tiny{$V_2$}}; \draw (6, -0.5) node {\tiny{$V_n$}}; \draw (6, 2.5) node {\tiny{$V_n$}}; \end{tikzpicture} \end{center} Let $Y_1=e^{y_1 \hbar}$.
By Proposition 10.13 in \cite{JM}, \begin{equation} y_1=-\sum_{s,t}(E_s^t)_0 \otimes (E_t^s)_1+\dfrac{n}{N}+\dfrac{\mu (q-p)}{2} - \dfrac{N}{2}, \end{equation} where $E_s^t$ is the $N \times N$ matrix with the $(s, t)$ entry being $1$ and all other entries being $0$, and $(E_s^t)_i$ means $E_s^t$ acting on the $i$-th tensor factor. Let $s_{k,l}$ denote the transposition $(k, l) \in S_n$ and $\gamma_k \in W_0$ denote the action multiplying the $k$-th factor by $J$. In \cite{EFM}, the action of $y_1$ is given by \begin{equation} -\sum_{s|t} (E_s^t)_0 \otimes (E_t^s)_1 + \dfrac{p - q - \mu N}{2} \gamma_1 + \dfrac{1}{2}\sum_{l > 1} s_{1,l} + \dfrac{1}{2}\sum_{l \neq 1} s_{1,l} \gamma_1 \gamma_l, \end{equation} where $\sum_{s | t} = \sum_{s = 1}^{p} \sum_{t = p + 1}^{N} + \sum_{t = 1}^p \sum_{s = p + 1}^N$. In section 6.1, we show that the computation via equation (13) agrees with equation (12). By the relation $y_k = s_{k-1}y_{k-1}s_{k-1} - s_{k-1}$, we can compute the action of $y_k$ for $k=2, \cdots, n$. \section{$GL$-module} We consider images of polynomial $GL_N$-modules under the Etingof-Freund-Ma functor. Recall some facts about polynomial $GL_N$-modules. Let $M$ be a polynomial $GL_N$-module and $H \subset GL_N$ be the collection of invertible diagonal matrices. Let $v \in M$ satisfy $$x.v=x_1^{\lambda_1}\cdots x_N^{\lambda_N}v,$$ for any $x=diag(x_1,\cdots,x_N) \in H$. Then $v$ is a weight vector of $H$-weight $\lambda=(\lambda_1,\cdots,\lambda_N)$. The subspace $$M(\lambda)= \{v \in M | x.v=x_1^{\lambda_1}\cdots x_N^{\lambda_N}v, x \in H\}$$ is called the weight space of weight $\lambda$. Then the polynomial $GL_N$-module $M$ is a direct sum of weight spaces $$M=\bigoplus M(\lambda).$$ Let $B \subset GL_N$ be the collection of all invertible upper triangular matrices. Let $v \in M$ be a generator of $M$. If $v$ satisfies $x.v=c(x)v$ for some function $c(x)$ and all $x \in B$, then $v$ is called a highest weight vector. If $M$ has a unique highest weight vector up to scalar, of weight $\xi$, then $M$ is a highest weight module with highest weight $\xi$, and we denote $M$ by $V^{\xi}$. A $GL_N$-module $M$ is irreducible if and only if $M$ is a highest weight $GL_N$-module. Furthermore, two highest weight $GL_N$-modules are isomorphic if and only if they have the same highest weight. Let $\xi=\sum_{i=1}^N \xi_i \epsilon_i$ satisfy $\xi_1\geq \xi_2 \geq \cdots \geq \xi_N$ and $\xi_i \in \mathbb{Z}$ for $i=1,\cdots,N$. Then $\xi$ is an integral dominant weight of $GL_N$. Let $P^+$ denote the collection of all integral dominant weights and $P^+_{\geq 0}$ denote the collection of all integral dominant weights $\xi=\sum_{i=1}^N \xi_i \epsilon_i$ with $\xi_i \in \mathbb{N}$, for $i=1,\cdots,N$. Then the highest weight modules with highest weights $\xi \in P^+_{\geq 0}$ are all the irreducible polynomial $GL_N$-modules. Let $M$ be a rational $GL_N$-module. Then $M= det^m \otimes M'$ for some $m \in \mathbb{Z}$ and a polynomial $GL_N$-module $M'$. Thus the highest weight modules with integral dominant highest weights are all the irreducible rational $GL_N$-modules.\\ The collection $P^+_{\geq 0}$ is in one-to-one correspondence with the collection of partitions with at most $N$ parts, and thus with Young diagrams with at most $N$ rows.
For ease of writing, for each irreducible polynomial $GL_N$-module $V^{\xi}$ with highest weight $\xi \in P^+_{\geq 0}$, let us denote the corresponding partition $(\xi_1, \cdots,\xi_N)$ and Young diagram also by $\xi$. Moreover, define $|\xi|=\sum_{i=1}^N \xi_i$ for $\xi \in P^+$.\\ For a highest weight $GL_N$-module $V^{\xi}$, $\xi \in P^+_{\geq 0}$, with weight space decomposition $V^{\xi}=\bigoplus V^{\xi}(\lambda)$, the character of $V^{\xi}$ $$\chi_{V^{\xi}}=\sum_{\lambda} dim(V^{\xi}(\lambda)) x_1^{\lambda_1}\cdots x_N^{\lambda_N}$$ is the Schur polynomial $s_{\xi}(x_1, \cdots, x_N)$ of shape $\xi$.\\ By Pieri's rule, $$s_{\xi}e_1=\sum_{\nu} s_{\nu},$$ where $\nu \in P^+_{\geq 0}$ runs through all the shapes obtained by adding a cell to some row of $\xi$. Observe that $e_1=s_{(1)}$ is the character of the vector representation $V$ of $GL_N$. This fact indicates how the tensor product of an irreducible polynomial $GL_N$-module and the vector representation decomposes into a sum of irreducible polynomial $GL_N$-modules. \section{Invariant space} In this section, we compute the underlying vector space $F_{n,p,\mu}(V^{\xi})=(M \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}$ by finding a special basis of it and then indexing the basis elements by a collection of standard tableaux.\\ \subsection{Definition of the invariant space}~\\ Let $M$ be a $GL_N$-module; then $M$ has a $\mathfrak{gl}_N$-module structure. For any $X \in \mathfrak{gl}_N$ and $v \in M$, $$X.v = \frac{d}{dt}(e^{tX}.v)_{t=0}.$$ Set $K=GL_p \times GL_q$, so that $Lie(K)=\mathfrak{t}$, and recall that $\mathfrak{t}_0 \subset \mathfrak{t}$ is the subalgebra of traceless matrices in $\mathfrak{t}$. \begin{prop} The underlying vector space is invariant under tensoring by powers of the determinant representation, i.e. $(det^{m} \otimes M\otimes V^{\otimes n})^{\mathfrak{t}_0, \mu} \cong (M \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}$, for any $m \in \mathbb{C}$. \end{prop} \begin{proof} Any element of $(det^{m} \otimes M\otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}$ can be written as $\mathbb{1} \otimes w$, where $w \in M \otimes V^{\otimes n}$. According to the definition of the invariant space \begin{align*} &(det^{m} \otimes M\otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}\\ = &\{ \mathbb{1}\otimes w | A . (\mathbb{1} \otimes w) = \mu \chi(A) (\mathbb{1} \otimes w) \text{, for any }A \in \mathfrak{t}_0 \}. \end{align*} Computing the action of $A \in \mathfrak{t}_0$, \begin{align*} A . \mathbb{1} &= \dfrac{d}{dt}(e^{tA} . \mathbb{1})_{t=0} \\ &= \dfrac{d}{dt}(det^{m}(e^{tA}))_{t=0} . \mathbb{1} \\ &= \dfrac{d}{dt}(e^{m \cdot tr(tA)})_{t=0} . \mathbb{1} = 0, \end{align*} since $tr(A)=0$. Then it follows \begin{align*} A . (\mathbb{1} \otimes w) &= (A . \mathbb{1}) \otimes w + \mathbb{1} \otimes (A . w)\\ &= \mathbb{1} \otimes (A . w). \end{align*} Hence \begin{align*} &(det^{m} \otimes M\otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}\\ =& \{ \mathbb{1}\otimes w | \mathbb{1} \otimes (A . w) = \mu \chi(A) (\mathbb{1} \otimes w) \text{, for any }A \in \mathfrak{t}_0 \} \\ \cong & \{w | A . w = \mu \chi(A) w \text{, for any }A \in \mathfrak{t}_0 \} \\ =& (M \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu} . \end{align*} \end{proof} \begin{rmk} For an irreducible rational $GL_N$-module $M$, we can write $M = det^m \otimes V^{\xi}$ for some integer $m$ and some highest weight module $V^{\xi}$ with highest weight $\xi \in P^+_{\geq 0}$ such that $\xi_N=0$.
Then $(M \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu} = (V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}$. So it is enough to consider highest weight modules $V^{\xi}$ with highest weight $\xi \in P^+_{\geq 0}$ such that $\xi_N=0$, which correspond to partitions $\xi$ of length at most $N-1$. \end{rmk} \subsection{Computation of the $(\mathfrak{t}_0,\mu)$ invariant space}~\\ \begin{prop} For $\mu \in \mathbb{C}$ and $\xi \in P^+_{\geq 0}$, the $(\mathfrak{t}_0, \mu)$ invariant space $F_{n,p,\mu}(V^{\xi})=(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0,\mu}$ satisfies \begin{align*} (V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu} \cong & Hom_{\mathfrak{t}_0}({\mathbb{1}}_{\mu \chi }, Res_{\mathfrak{t}_0}^{\mathfrak{gl}_N}V^{\xi} \otimes V^{\otimes n})\\ \cong & Hom_{\mathfrak{t}}({\mathbb{1}}_{\theta}, Res_{\mathfrak{t}}^{\mathfrak{gl}_N}V^{\xi} \otimes V^{\otimes n}), \end{align*} where ${\mathbb{1}}_{\theta}$ is the one-dimensional $\mathfrak{t}$-module with character $${\mathbb{1}}_{\theta}= (\mu q + \frac{|\xi|+n}{N})tr_{\mathfrak{gl}_p} + (-\mu p + \frac{|\xi| + n}{N}) tr_{\mathfrak{gl}_q}. $$ \end{prop} \begin{proof} The $(\mathfrak{t}_0, \mu)$ invariant space $F_{n,p,\mu}(V^{\xi})=(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}$ is defined to be the subspace $$ \{v \in V^{\xi} \otimes V^{\otimes n} | A v = \mu \chi(A) v\text{ for any A }\in \mathfrak{t}_0\}. $$ To compute this subspace, we lift it to a $\mathfrak{t}$ invariant space. Let $\mathbb{1}_{\psi}$ be the one-dimensional $\mathfrak{t}$-module such that \begin{align*} &(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}\\ = &(Res_{\mathfrak{t}}^{\mathfrak{gl}_N}(V^{\xi} \otimes V^{\otimes n}) \otimes \mathbb{1}_{\psi})^{\mathfrak{t}}. \end{align*} Write $\mathfrak{t} = \mathfrak{t}_0 \oplus \mathbb{C} I_N$. For any $P \in \mathfrak{t}$, there is a unique decomposition $P = A + B$ such that $A \in \mathfrak{t}_0$ and $B = bI_N$ for some $b \in \mathbb{C}$. So the $\mathfrak{t}$-invariance condition describes the subspace $$\{v \in V^{\xi} \otimes V^{\otimes n} | Pv + \mathbb{1}_{\psi}(P)v = 0 \}.$$ Then $Pv + \mathbb{1}_{\psi}(P)v = Av + Bv + \mathbb{1}_{\psi}(P)v = 0$. The element $B = bI_N$ acts by the scalar $$b(|\xi| + n) = (|\xi| + n) \dfrac{tr(B)}{N}.$$ Also, we have $\chi(P) = \chi(A) + \chi(B) = \chi(A)$, since $\chi(B) = qbp-pbq = 0$. So \begin{align*} &\{v \in V^{\xi} \otimes V^{\otimes n} | Pv + \mathbb{1}_{\psi}(P)v = 0 \} \\ = &\{v \in V^{\xi} \otimes V^{\otimes n} | Av = \mu \chi(A)v\}. \end{align*} For any $P \in \mathfrak{t}$ with $$ P = \begin{bmatrix} S &0\\ 0 &T \end{bmatrix} $$ where $S \in \mathfrak{gl}_p$ and $T \in \mathfrak{gl}_q$, we have \begin{align*} \mathbb{1}_{\psi}(P) &=-\mu \chi (A) - \dfrac{|\xi|+n}{N} tr(B)\\ &=-\mu \chi (P) - \dfrac{|\xi|+n}{N} tr(P)\\ &= (-\mu q - \frac{|\xi| + n}{N})tr_{\mathfrak{gl}_p}(S)+ (\mu p - \frac{|\xi| + n}{N})tr_{\mathfrak{gl}_q}(T).\\ \end{align*} Hence, with $\theta = -\psi$, the one-dimensional $\mathfrak{t}$-module is $$\mathbb{1}_{\theta} = (\mu q + \frac{|\xi|+n}{N})tr_{\mathfrak{gl}_p} + (-\mu p + \frac{|\xi| + n}{N}) tr_{\mathfrak{gl}_q}.$$ \end{proof} \begin{rmk} The $(\mathfrak{t}, \mathbb{1}_{\theta})$ invariant space above is equivalent to the following $K$ invariant space.
\begin{align*} (V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu} \cong & Hom_{\mathfrak{t}_0}({\mathbb{1}}_{\mu \chi }, Res_{\mathfrak{t}_0}^{\mathfrak{gl}_N}V^{\xi} \otimes V^{\otimes n})\\ \cong & Hom_{\mathfrak{t}}({\mathbb{1}}_{\theta}, Res_{\mathfrak{t}}^{\mathfrak{gl}_N}V^{\xi} \otimes V^{\otimes n})\\ \cong & Hom_K(det^a \boxtimes det^b, Res_K^{GL_N}(V^{\xi} \otimes V^{\otimes n})), \end{align*} where $a=\mu q +\frac{|\xi|+n}{N}$ and $b=-\mu p + \frac{|\xi|+n}{N}$. \end{rmk} \subsection{A basis of invariant space and standard tableaux}~\\ The characters of irreducible polynomial $GL_N$-modules are Schur functions. So we can study the restriction of $V^{\xi} \otimes V^{\otimes n}$ via Schur functions. Recall the following fact about Schur functions. \begin{prop} Let $s_{\nu}(x_1, \cdots, x_p, z_{p+1}, \cdots, z_N)$ be the character of $V^{\nu}$; then $$ s_{\nu} (x_1, \cdots, x_p, z_{p+1}, \cdots, z_N) = \sum_{\omega_1, \omega_2} c^{\nu}_{\omega_1, \omega_2} s_{\omega_1}(x_1, \cdots, x_p) s_{\omega_2}(z_{p+1}, \cdots, z_N), $$ where $\omega_1$ is a highest weight of $GL_p$, $\omega_2$ is a highest weight of $GL_q$, and $c^{\nu}_{\omega_1,\omega_2}$ is the Littlewood-Richardson coefficient. \end{prop} The Littlewood-Richardson coefficient $c^{\nu}_{\omega_1,\omega_2}$ is the multiplicity of the $K$-module $V^{\omega_1} \boxtimes V^{\omega_2}$ in the restriction of the $GL_N$-module $V^{\nu}$. Let $V^{\xi} \otimes V^{\otimes n}=\bigoplus_{\nu} m_{\nu} V^{\nu}$ as $GL_N$-modules, where $\nu \in P^+_{\geq 0}$ and $m_{\nu} \in \mathbb{N}$ is the multiplicity of $V^{\nu}$ in $V^{\xi} \otimes V^{\otimes n}$. Then the $(\mathfrak{t}_0,\mu)$ invariant space \begin{align} F_{n,p,\mu}(V^{\xi}) &= Hom_K(det^a \boxtimes det^b, Res_K^{GL_N} V^{\xi} \otimes V^{\otimes n})\\ &= \bigoplus_{\nu} m_{\nu}Hom_K(det^a \boxtimes det^b, Res_K^{GL_N} V^{\nu}). \end{align} Since $\nu \in P^+_{\geq 0}$, in order for some summand $Hom_K(det^a \boxtimes det^b, Res_K^{GL_N} V^{\nu})$ in (15) to be nonzero we must have $a,b \in \mathbb{N}$; otherwise $F_{n,p,\mu}(V^{\xi}) =(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu} =0$. Our goal is to compute the $\nu$ such that the multiplicity of $det^a \boxtimes det^b$ in the $K$ restriction of the $GL_N$-module $V^{\nu}$ is nonzero. To do this, we need Okada's theorem \cite{O}. \begin{thm} Let $(a^p)$ and $(b^q)$ be two rectangular shapes, where $a$ and $b$ are nonnegative integers and $p \leq q$. Then $$ s_{a^p} \cdot s_{b^q} = \sum c^{\nu}_{(a^p) (b^q)}s_{\nu}, $$ where $c^{\nu}_{(a^p) (b^q)} = 1$ when $\nu$ satisfies the conditions \begin{align} &\nu_i + \nu_{p+q-i +1} = a+b, \quad i= 1, \cdots, p\\ &\nu_{p} \geq max(a,b)\\ &\nu_i = b, \quad i= p+1, \cdots, q \end{align} and $c^{\nu}_{(a^p) (b^q)} = 0$ otherwise. \end{thm} \begin{cor} The $(\mathfrak{t}_0,\mu)$ invariant space satisfies \begin{align} F_{n,p,\mu}(V^{\xi})&=(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}\\ &=\bigoplus_{\nu} Hom_{GL_N}(V^{\nu}, V^{\xi} \otimes V^{\otimes n}), \end{align} where $\nu \in P^+_{\geq 0}$ runs through all partitions satisfying (16)-(18). \end{cor} Moreover, by Pieri's rule, the vector space $Hom_{GL_N}(V^{\nu}, V^{\xi} \otimes V^{\otimes n})$ has a basis indexed by standard tableaux $T$ of shape $\nu / \xi$, and its dimension $$m_{\nu}=dim\, Hom_{GL_N}(V^{\nu}, V^{\xi} \otimes V^{\otimes n})$$ equals the number of standard tableaux of shape $\nu / \xi$. If $m_{\nu} \neq 0$, then $\xi \subset \nu$ and $|\nu|=|\xi|+n$.
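To make conditions (16)-(18) concrete, take $p=1$, $q=2$ and $a=b=2$; this is the case $\xi=(2,1)$, $n=3$, $\mu=0$, $N=3$ of the example below, since then $a=\mu q + \frac{|\xi|+n}{N}=2$ and $b=-\mu p+\frac{|\xi|+n}{N}=2$. The conditions read $$\nu_1+\nu_3 = 4, \qquad \nu_1 \geq 2, \qquad \nu_2 = 2,$$ whose solutions among partitions are exactly $\nu=(2,2,2)$, $(3,2,1)$ and $(4,2)$; these are precisely the three shapes appearing in the example below.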
\begin{thm} The $(\mathfrak{t}_0,\mu)$ invariant space $F_{n,p,\mu}(V^{\xi})=(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0,\mu}$ has a basis in one-to-one correspondence with the set of standard tableaux $T$ whose shape is $\nu / \xi$, where $\nu \in P^+_{\geq 0}$ runs through all the partitions satisfying (16)-(18) with $|\nu|= |\xi|+n$ and $\xi \subset \nu$. \end{thm} Let us consider the following example of a $(\mathfrak{t}_0,\mu)$ invariant space. \begin{eg} Let $M = V^{\xi}$ be a $GL_3$-module, $\xi=2\epsilon_1+\epsilon_2$, $n= 3$, $p=1$ and $\mu=0$.\\ Then $(a^p) = (2^1)$ and $(b^q) = (2^2)$.\\ By Okada's theorem, we can compute the shapes $\nu$ for which the invariant space is nonzero.\\ \begin{center} \begin{tikzpicture}[scale=0.5][shift={(1,0)}] \begin{scope}[shift={(6,15)}] \draw (1,2.8) node[red!50] {$(2^1)$}; \draw[step=1] (0,1) grid (2,2); \end{scope} \draw (9,16.5) node{$\times$}; \begin{scope}[shift={(10,15)}] \draw (1,2.8) node[blue!65] {$(2^2)$}; \draw[step=1] (0,0) grid (2,2); \end{scope} \draw (13,16.5) node{$=$}; \begin{scope}[shift={(14,15)}] \draw[thin,fill=blue!10] (0,1) rectangle (2,2); \draw[thin,fill=blue!10] (0,2) rectangle (2,3); \draw[thin,fill=red!10] (0,1) rectangle (2,0); \draw[step=1] (0,0) grid (2,3); \end{scope} \draw (17,16.5) node{$+$}; \begin{scope}[shift={(18,15)}] \draw [thin,fill=blue!10] (0,2) rectangle (2,3); \draw [thin,fill=blue!10] (0,1) rectangle (2,2); \draw[thin,fill=red!10] (0,1) rectangle (1,0); \draw[thin,fill=red!10] (2,3) rectangle (3,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \end{scope} \draw (22,16.5) node{$+$}; \begin{scope}[shift={(23,15)}] \draw[thin,fill=blue!10] (0,2) rectangle (2,3); \draw[thin,fill=blue!10] (0,1) rectangle (2,2); \draw[thin,fill=red!10] (2,3) rectangle (4,2); \draw[step=1] (0,1) grid (2,3); \draw[step=1] (2,2) grid (4,3); \end{scope} \end{tikzpicture} \end{center} Then a basis of the invariant space can be indexed by standard tableaux on the skew shapes obtained from the shapes above by removing $\xi$.\\ \begin{center} \begin{tikzpicture}[scale=0.4] \begin{scope}[shift={(5,10)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,0) grid (2,3); \draw (0.5, 0.5) node[red] {$2$}; \draw (1.5, 1.5) node[blue] {$1$}; \draw (1.5, 0.5) node[black!75] {$3$}; \end{scope} \begin{scope}[shift={(9,10)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,0) grid (2,3); \draw (0.5, 0.5) node[blue] {$1$}; \draw (1.5, 1.5) node[red] {$2$}; \draw (1.5, 0.5) node[black!75] {$3$}; \end{scope} \begin{scope}[shift={(5,5)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[blue] {$1$}; \draw (1.5, 1.5) node[red] {$2$}; \draw (2.5, 2.5) node[black!75] {$3$}; \end{scope} \begin{scope}[shift={(9,5)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[blue] {$1$}; \draw (1.5, 1.5) node[black!75] {$3$}; \draw (2.5, 2.5) node[red] {$2$}; \end{scope} \begin{scope}[shift={(13,5)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[red] {$2$}; \draw (1.5, 1.5) node[blue] {$1$}; \draw (2.5, 2.5) node[black!75] {$3$}; \end{scope} \begin{scope}[shift={(17,5)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[red] {$2$}; \draw (1.5, 1.5) node[black!75] {$3$}; \draw (2.5, 2.5) node[blue] {$1$}; \end{scope} \begin{scope}[shift={(21,5)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[black!75] {$3$}; \draw (1.5, 1.5) node[blue] {$1$}; \draw (2.5, 2.5) node[red] {$2$}; \end{scope} \begin{scope}[shift={(25,5)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[black!75] {$3$}; \draw (1.5, 1.5) node[red] {$2$}; \draw (2.5, 2.5) node[blue] {$1$}; \end{scope} \begin{scope}[shift={(5,0)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw[step=1] (2,2) grid (4,3); \draw (1.5, 1.5) node[blue] {$1$}; \draw (2.5, 2.5) node[red] {$2$}; \draw (3.5, 2.5) node[black!75] {$3$}; \end{scope} \begin{scope}[shift={(10,0)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw[step=1] (2,2) grid (4,3); \draw (1.5, 1.5) node[red] {$2$}; \draw (2.5, 2.5) node[blue] {$1$}; \draw (3.5, 2.5) node[black!75] {$3$}; \end{scope} \begin{scope}[shift={(15,0)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw[step=1] (2,2) grid (4,3); \draw (1.5, 1.5) node[black!75] {$3$}; \draw (2.5, 2.5) node[blue] {$1$}; \draw (3.5, 2.5) node[red] {$2$}; \end{scope} \end{tikzpicture} \end{center} In this example, we obtain an invariant space of dimension $11$. \end{eg} \subsection{One skew shape} In this subsection, we associate a skew shape $\varphi_{n,p,\mu}^{\xi}$ to the image $F_{n,p,\mu}(V^{\xi})$ under the Etingof-Freund-Ma functor. Let $\xi=\sum_{i=1}^N \xi_i \epsilon_i \in P^+_{\geq 0}$, with corresponding Young diagram $\xi = (\xi_1, \cdots, \xi_N)$. The first $q$ rows of $\xi$ form a Young diagram denoted by $\xi^{(1)}$ and the last $p$ rows of $\xi$ form a Young diagram denoted by $\xi^{(2)}$. The parameter $\mu$ gives a pair of rectangles $(a^p)$ and $(b^q)$ denoting the $K$-module $det^a \boxtimes det^b$, where $a=\mu q+\frac{|\xi|+n}{N}$ and $b=-\mu p + \frac{|\xi|+n}{N}$.\\ Suppose $p \leq q$. Placing the northwestern corner of the rectangle $(a^p)$ next to the northeastern corner of the rectangle $(b^q)$ forms a Young diagram $\beta$. Delete the Young diagram $\xi^{(1)}$ from the northwestern corner of $\beta$. Let \rotatebox[origin=c]{180}{$\xi^{(2)}$} denote the skew shape obtained by rotating $\xi^{(2)}$ by $\pi$. Delete the rotated $\xi^{(2)}$ from the southeastern corner of $\beta$, i.e.
the skew shape $\varphi^{\xi}_{n,p,\mu}$ is defined by $\varphi^{\xi}_{n,p,\mu} = \nu / \xi^{(1)}$, where $\nu_i= a+b -\xi_{N-i+1}$ for $i = 1,\cdots, p$ and $\nu_i = b$ for $i=p+1, \cdots, q$.\\ \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.4, shift={(-10,0)}] \draw[step=1, dotted] (-2,10) grid (2,7); \draw[step=1, dotted] (5,10) grid (8,6); \draw [red,thick] (-2,10) rectangle (2,7); \draw [red,thick] (5,10) rectangle (8,6); \draw [scale=0.5](0, 10) node {\tiny{$(a^p)$}}; \draw [scale=0.5](13,10) node {\tiny{$(b^q)$}}; \draw [blue] (0,3)--(0,-3)--(1,-3)--(1,-2)--(2,-2)--(2,1)--(5,1)--(5,3)--(0,3); \draw [draw=none,fill=blue!10] (0,3) rectangle (2,-1); \draw [draw=none,fill=gray!20] (0,-1) rectangle (1,-3); \draw [draw=none, fill=gray!20] (1,-1) rectangle (2,-2); \draw [draw=none,fill=blue!10] (2,3) rectangle (5,1); \draw [dotted, red](-1,-1)--(4,-1); \draw (2.5,2) node {\tiny{$\xi^{(1)}$}}; \draw (0.8,-1.8) node {\tiny{$\xi^{(2)}$}}; \draw [scale=0.5] (5,-8) node {\tiny{$\xi = \sum_{i=1}^N \xi_i \epsilon_i \in P^+_{\geq 0}$}}; \draw [->] (-0.5,0.5)--(-0.5,-1); \draw [->] (-0.5,1.5)--(-0.5,3); \draw (-0.5, 1) node {\tiny{$q$}}; \end{scope} \begin{scope}[shift={(2,0)}] \draw (-1,2) node {$\longmapsto$}; \end{scope} \begin{scope}[scale=0.4, shift={(9,4)}] \draw [blue] (0,3)--(0,-3)--(1,-3)--(1,-2)--(2,-2)--(2,1)--(5,1)--(5,3)--(0,3); \draw [draw=none,fill=blue!10] (0,3) rectangle (2,-1); \draw [draw=none,fill=gray!20] (0,-1) rectangle (1,-3); \draw [draw=none, fill=gray!20] (1,-1) rectangle (2,-2); \draw [draw=none,fill=blue!10] (2,3) rectangle (5,1); \draw [draw=none,fill=gray!20] (5,1) rectangle (7,0); \draw [draw=none,fill=gray!20] (6,2) rectangle (7,1); \draw[dotted] (0,3) grid (3, -1); \draw[dotted] (3,3) grid (7,0); \draw [red,thick] (0,3) rectangle (3,-1); \draw [red,thick] (3,3) rectangle (7,0); \draw (2.5,2) node {\tiny{$\xi^{(1)}$}}; \draw (6.3,0.8) node {\tiny{\rotatebox{180}{$\xi^{(2)}$}}}; \draw [->] (1,-1.5)--(0,-1.5); \draw [->] (2,-1.5)--(3,-1.5); \draw [->] (4,-0.5)--(3,-0.5); \draw [->] (6,-0.5)--(7,-0.5); \draw [->] (0.5,3.5)--(0,3.5); \draw [->] (3.5,3.5)--(4,3.5); \draw (2,3.5) node {\tiny{$max(a,b)$}}; \draw (1.5,-1.5) node {\tiny{$b$}}; \draw (5,-0.5) node {\tiny{$a$}}; \draw [->] (-0.5,2)--(-0.5,3); \draw [->] (-0.5,0)--(-0.5,-1); \draw (-0.5,1) node {\tiny{$q$}}; \draw [->] (7.5,2)--(7.5,3); \draw (7.5, 1.5) node {\tiny{$p$}}; \draw [->] (7.5,1)--(7.5,0); \draw [dotted, purple] (4,5)--(4,-2); \draw (3.5, -3.5) node {\tiny{$\varphi_{n,p,\mu}^{\xi}$}}; \end{scope} \end{tikzpicture} \end{center} Let $\varphi=\varphi_{n,p,\mu}^{\xi}$. If a cell $(i,j)$ of the skew shape $\varphi$ satisfies $(i+1,j) \notin \varphi$ and $(i,j+1) \notin \varphi$, then $(i,j)$ is called a corner of $\varphi$. Define a $\gamma$-move on a skew shape $\varphi$: delete a corner $(i,j) \in \varphi$ such that $j> max(a,b)$ and $1 \leq i \leq p$, and add the cell $(p+q-i+1,a+b-j+1)$. Denote the $\gamma$-move by $\varphi \to \varphi'$ where $\varphi'= \varphi \setminus (i,j) \cup (p+q-i+1,a+b-j+1)$. Note that, for a given $\varphi$, the $\gamma$-moves terminate once there is no cell $(i,j) \in \varphi$ with $j > max(a,b)$. Given the skew shape $\varphi^{\xi}_{n,p,\mu}$, the collection $D(\varphi^{\xi}_{n,p,\mu})$ of skew shapes consists of $\varphi^{\xi}_{n,p,\mu}$ together with all the skew shapes obtained from $\varphi^{\xi}_{n,p,\mu}$ by applying finitely many $\gamma$-moves. The shape $\varphi^{\xi}_{n,p,\mu}$ is called the minimal shape of the representation $F_{n,p,\mu}(V^{\xi})$.
\begin{center} \begin{tikzpicture} \begin{scope}[blue, scale=0.2,shift={(0,17.5)}] \draw [dotted, purple] (2,3)--(2,-3); \draw (0,0) grid (1,-2); \draw (1,0) grid (2,-1); \draw (2,0) rectangle (3,-1); \draw (3,2) rectangle (4,1); \draw (4,2) rectangle (5,1); \draw (3,1) rectangle (4,0); \draw (2.5,-3.2) node {\tiny{$\varphi^{\xi}_{n,p,\mu}$}}; \end{scope} \begin{scope}[scale=0.2,shift={(-25,5)}] \draw [dotted, purple] (2,3)--(2,-3); \draw (0,0) grid (1,-2); \draw (1,0) grid (2,-1); \draw [draw=none, fill=gray!20] (2,0) rectangle (3,-1); \draw (0,-2) rectangle (1,-3); \draw (3,2) rectangle (4,1); \draw (4,2) rectangle (5,1); \draw (3,1) rectangle (4,0); \end{scope} \begin{scope}[scale=0.2,shift={(0,5)}] \draw [dotted, purple] (2,3)--(2,-3); \draw (0,0) grid (1,-2); \draw (1,0) grid (2,-1); \draw (2,0) rectangle (3,-1); \draw (3,2) rectangle (4,1); \draw (4,2) rectangle (5,1); \draw [draw=none, fill=gray!20] (3,1) rectangle (4,0); \draw (-1,-3) rectangle (0,-4); \end{scope} \begin{scope}[scale=0.2,shift={(25,5)}] \draw [dotted, purple] (2,3)--(2,-3); \draw (0,0) grid (1,-2); \draw (1,0) grid (2,-1); \draw (2,0) rectangle (3,-1); \draw (3,2) rectangle (4,1); \draw [draw=none, fill=gray!20] (4,2) rectangle (5,1); \draw (-2,-4) rectangle (-1,-5); \draw (3,1) rectangle (4,0); \end{scope} \begin{scope}[scale=0.2,shift={(-25,-8)}] \draw [dotted, purple] (2,3)--(2,-3); \draw (0,0) grid (1,-2); \draw (1,0) grid (2,-1); \draw [draw=none, fill=gray!20](2,0) rectangle (3,-1); \draw (0,-2) rectangle (1,-3); \draw (3,2) rectangle (4,1); \draw (4,2) rectangle (5,1); \draw [draw=none, fill=gray!20] (3,1) rectangle (4,0); \draw (-1,-3) rectangle (0,-4); \end{scope} \begin{scope}[scale=0.2,shift={(25,-8)}] \draw [dotted, purple] (2,3)--(2,-3); \draw (0,0) grid (1,-2); \draw (1,0) grid (2,-1); \draw (2,0) rectangle (3,-1); \draw (3,2) rectangle (4,1); \draw [draw=none, fill=gray!20](4,2) rectangle (5,1); \draw (-2,-4) rectangle (-1,-5); \draw [draw=none, fill=gray!20] (3,1) rectangle (4,0); \draw (-1,-3) rectangle (0,-4); \end{scope} \begin{scope}[scale=0.2,shift={(0,-8)}] \draw [dotted, purple] (2,3)--(2,-3); \draw (0,0) grid (1,-2); \draw (1,0) grid (2,-1); \draw [draw=none, fill=gray!20] (2,0) rectangle (3,-1); \draw (0,-2) rectangle (1,-3); \draw (3,2) rectangle (4,1); \draw [draw=none, fill=gray!20] (4,2) rectangle (5,1); \draw (-2,-4) rectangle (-1,-5); \draw (3,1) rectangle (4,0); \end{scope} \begin{scope}[scale=0.2,shift={(-13,-20)}] \draw [dotted, purple] (2,3)--(2,-3); \draw (0,0) grid (1,-2); \draw (1,0) grid (2,-1); \draw [draw=none, fill=gray!20] (2,0) rectangle (3,-1); \draw (0,-2) rectangle (1,-3); \draw (3,2) rectangle (4,1); \draw [draw=none, fill=gray!20] (4,2) rectangle (5,1); \draw (-2,-4) rectangle (-1,-5); \draw [draw=none, fill=gray!20] (3,1) rectangle (4,0); \draw (-1,-3) rectangle (0,-4); \end{scope} \begin{scope}[scale=0.2,shift={(13,-20)}] \draw [dotted, purple] (2,3)--(2,-3); \draw (0,0) grid (1,-2); \draw (1,0) grid (2,-1); \draw (2,0) rectangle (3,-1); \draw [draw=none, fill=gray!20] (3,2) rectangle (4,1); \draw (-2,-3) rectangle (-1,-4); \draw [draw=none, fill=gray!20] (4,2) rectangle (5,1); \draw (-2,-4) rectangle (-1,-5); \draw [draw=none, fill=gray!20] (3,1) rectangle (4,0); \draw (-1,-3) rectangle (0,-4); \end{scope} \begin{scope}[scale=0.2,shift={(0,-32)}] \draw [dotted, purple] (2,3)--(2,-3); \draw (0,0) grid (1,-2); \draw (1,0) grid (2,-1); \draw [draw=none, fill=gray!20] (2,0) rectangle (3,-1); \draw (0,-2) rectangle (1,-3); \draw [draw=none, fill=gray!20] (3,2) 
rectangle (4,1); \draw (-2,-3) rectangle (-1,-4); \draw [draw=none, fill=gray!20] (4,2) rectangle (5,1); \draw (-2,-4) rectangle (-1,-5); \draw [draw=none, fill=gray!20] (3,1) rectangle (4,0); \draw (-1,-3) rectangle (0,-4); \end{scope} \draw [scale=0.2] [->] (-3,14)--(-20,8); \draw [scale=0.2] [->] (2,12)--(2,9); \draw [scale=0.2] [->] (6,14)--(23,8); \draw [scale=0.2] [->] (-24,1)--(-24,-5); \draw [scale=0.2] [->] (-20,1)--(-3,-5); \draw [scale=0.2] [->] (6,1)--(23,-5); \draw [scale=0.2] [->] (-3,1)--(-20,-5); \draw [scale=0.2] [->] (27,1)--(27,-5); \draw [scale=0.2] [->] (22,1)--(6,-5); \draw [scale=0.2] [->] (-23,-12)--(-15,-18); \draw [scale=0.2] [->] (-3,-12)--(-10,-17); \draw [scale=0.2] [->] (22,-11)--(-7,-20); \draw [scale=0.2] [->] (22,-13)--(16,-17); \draw [scale=0.2] [->] (-10,-23)--(-3,-30); \draw [scale=0.2] [->] (11,-26)--(6,-30); \end{tikzpicture} \end{center} Continuing Example 5.9, the representation $F_{3,1,0}(V^{\Yboxdim{4pt}\young(\quad \quad,\quad)})$ is indexed by the following skew shape $\varphi$. \begin{center} \begin{tikzpicture}[scale=0.5] \draw [dotted] (0,2) grid (2,0); \draw [dotted] (2,2) grid (4,1); \draw [red] (0,2) rectangle (2,0); \draw [red] (2,2) rectangle (4,1); \draw [draw=none, fill=blue!10] (0,2) rectangle (2,1); \draw [draw=none, fill=blue!10] (0,1) rectangle (1,0); \draw [dotted, purple] (2,3)--(2,-1); \end{tikzpicture} \end{center} The collection $D(\varphi)$ of skew shapes is obtained as follows:\\ \begin{center} \begin{tikzpicture}[scale=0.4] \begin{scope}[shift={(-12,0)}] \draw (1,1) rectangle (2,0); \draw (2,2) rectangle (3,1); \draw (3,2) rectangle (4,1); \end{scope} \begin{scope}[shift={(0,0)}] \draw (1,1) rectangle (2,0); \draw (2,2) rectangle (3,1); \draw [draw=none, fill=gray!10](3,2) rectangle (4,1); \draw (0,0) rectangle (1,-1); \end{scope} \begin{scope}[shift={(12,0)}] \draw (1,1) rectangle (2,0); \draw [draw=none, fill=gray!10](2,2) rectangle (3,1); \draw (2,0) rectangle (1,-1); \draw [draw=none, fill=gray!10](3,2) rectangle (4,1); \draw (1,0) rectangle (0,-1); \end{scope} \draw [->] (-7,1)--(-2,1); \draw [->] (5,1)--(10,1); \end{tikzpicture} \end{center} \subsection{Skew shapes and standard tableaux} For ease of description, let us use the following definitions of skew shapes and standard tableaux. Given a partition $\xi=(\xi_1, \cdots, \xi_l)$, the corresponding Young diagram $\xi$ is a subset of $\mathbb{Z}^2$, consisting of the $(i,j)$ such that $1 \leq i \leq l$ and $1 \leq j \leq \xi_i$. Let $\nu=(\nu_1, \cdots, \nu_l)$ and $\xi=(\xi_1, \cdots, \xi_l)$ be such that $\nu_i \geq \xi_i$ for $1 \leq i \leq l$; then the corresponding Young diagrams satisfy $\xi \subset \nu$. A skew shape $\nu / \xi$ is the subset $\nu \setminus \xi$ of $\mathbb{Z}^2$. For example, let $\nu=(7,6,5,3,2,1)$ and $\xi =(5,5,2,2,2,1)$; then the Young diagrams $\nu$ and $\xi$ and the skew shape $\nu / \xi$ are the following subsets of $\mathbb{Z}^2$. $$\nu=\{(i,j)|1 \leq i \leq 6, 1 \leq j \leq \nu_i\},$$ $$\xi=\{(i,j)|1 \leq i \leq 6, 1 \leq j \leq \xi_i\}$$ and $$\nu /\xi = \{(1,6),(1,7),(2,6),(3,3),(3,4),(3,5),(4,3)\}.$$ Define a tableau $T$ on the $n$ indices $\{1, \cdots, n\}$ to be an injective map \begin{align*} T: \{1, \cdots, n\} &\to \mathbb{Z}^2\\ k &\mapsto (\mathfrak{i}(k), \mathfrak{j}(k)) \end{align*} where $\mathfrak{i}$ and $\mathfrak{j}$ are two maps from $\{1, \cdots,n\}$ to $\mathbb{Z}$ and the image $Im(T)$ of $T$ is a skew shape. The image $Im(T)$ is also called the shape of the tableau $T$.
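To fix ideas, consider the skew shape $\nu / \xi$ above. The assignment $$T(1)=(1,6),\ T(2)=(3,3),\ T(3)=(1,7),\ T(4)=(3,4),\ T(5)=(2,6),\ T(6)=(4,3),\ T(7)=(3,5)$$ is an injective map on $\{1,\cdots,7\}$ with image $\nu/\xi$, so $T$ is a tableau of shape $\nu / \xi$; it is in fact standard in the sense defined next, with contents (also defined next) $cont_T(1)=5$, $cont_T(2)=0$, $cont_T(3)=6$, $cont_T(4)=1$, $cont_T(5)=4$, $cont_T(6)=-1$ and $cont_T(7)=2$.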
Let $cont_T$ be the map \begin{align*} cont_T: \{1, \cdots, n\} &\to \mathbb{Z}\\ k &\mapsto \mathfrak{j}(k)-\mathfrak{i}(k), \end{align*} and call $cont_T(k)$ the content of $k$ in the tableau $T$. If $T^{-1}(i + 1, j) > T^{-1} (i, j)$ and $T^{-1}(i,j + 1) > T^{-1}(i, j)$ hold for each cell $(i, j) \in Im(T)$ (whenever the cells $(i+1,j)$, $(i,j+1)$ also lie in $Im(T)$), then $T$ is called a standard tableau.\\ Let $$Tab(\varphi^{\xi}_{n,p,\mu}) = \{T| T \text{ is a standard tableau and } Im(T) \in D(\varphi^{\xi}_{n,p,\mu})\}.$$ The invariant space $F_{n,p,\mu}(V^{\xi})=(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}$ has a basis indexed by the standard tableaux on the skew shapes in $D(\varphi^{\xi}_{n,p,\mu})$, i.e. by the tableaux in $Tab(\varphi^{\xi}_{n,p,\mu})$. Let $v_T$ denote the basis vector indexed by $T \in Tab(\varphi^{\xi}_{n,p,\mu})$. Then as a vector space \begin{align*} F_{n,p,\mu}(V^{\xi})&=(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}\\ &= span_{\mathbb{C}}\{v_T|T \in Tab(\varphi^{\xi}_{n,p,\mu})\}. \end{align*} \section{$\mathcal{Y}$-semisimplicity} \subsection{Action of $\mathcal{Y}$} In this subsection let us compute the $\mathcal{Y}$-actions on the invariant space $F_{n,p,\mu}(V^{\xi})=(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}$. In \cite{J}, Jordan computed the action of $y_1$ using the fact that the Etingof-Freund-Ma functor is a trigonometric degeneration of the quantum case. Let us review this computation and carry it out in the degenerate case. We use the following notations of \cite{EFM} for sums \begin{align} &\sum_{s,t}= \sum_{s=1}^N \sum_{t = 1}^N\\ &\sum_{s|t} = \sum_{s = 1}^p \sum_{t = p + 1}^N + \sum_{t= 1}^p \sum_{s = p + 1}^N\\ &\sum_{st} = \sum_{s= 1}^p \sum_{t=1}^p + \sum_{s = p +1}^N \sum_{t = p+1}^N \end{align} It is easy to observe that the sum of $(22)$ and $(23)$ equals $(21)$.\\ Recall the definition of $y_1$ on the $(\mathfrak{t}_0,\mu)$-invariant space $F_{n,p,\mu}(V^{\xi})=(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0,\mu}$ in \cite{EFM}: \begin{align*} y_1 = -\sum_{s|t} (E_s^t)_0 \otimes (E_t^s)_1 + \dfrac{p-q-\mu N}{2}\gamma_1 + \dfrac{1}{2}\sum_{l>1}s_{1,l} + \dfrac{1}{2}\sum_{l \neq 1}s_{1,l} \gamma_1 \gamma_l. \end{align*} Computing the last two terms of $y_1$, we have \begin{align*} &\dfrac{1}{2}\sum_{l> 1}s_{1,l} + \dfrac{1}{2}\sum_{l \neq 1}s_{1,l} \gamma_1 \gamma_l\\ =& \dfrac{1}{2}\sum_{l>1} \sum_{s,t}(E_s^t)_1 \otimes (E_t^s)_l + \dfrac{1}{2}\sum_{l > 1}\sum_{s,t}(E_s^t J)_1 \otimes (E_t^s J)_l\\ =& \sum_{l>1}\sum_{st}(E_s^t)_1 \otimes (E_t^s)_l\\ =& \sum_{st}(E_s^t)_1 (\sum_{l>1} 1 \otimes (E_t^s)_l)\\ =& \sum_{st} (E_s^t)_1 (\Delta^{(n)}(E_t^s) - (E_t^s)_0 - (E_t^s)_1)\\ \end{align*} The last step follows from the fact that $\sum_{l>1} 1 \otimes (E_t^s)_l= \Delta^{(n)}(E_t^s)-(E_t^s)_0-(E_t^s)_1$, where $\Delta$ denotes the comultiplication of the Lie algebra $\mathfrak{gl}_N$ and $\Delta^{(n)}(E_t^s)=\sum_{l=0}^n (E_t^s)_l$.\\\\ Applying the fact that $y_1$ preserves the $(\mathfrak{t}_0, \mu)$-invariant space $F_{n,p,\mu}(V^{\xi})=(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0,\mu}$, the computation of the last two terms of $y_1$ above continues as follows.
\begin{align*} & \sum_{st} (E_s^t)_1 (\Delta^{(n)}(E_t^s) - (E_t^s)_0 - (E_t^s)_1)\\ =& \sum_{s= 1}^p (\mu q + \dfrac{|\xi| +n}{N})(E_s^s)_1 + \sum_{s = p +1}^N (-\mu p + \dfrac{|\xi| + n}{N})(E_s^s)_1 \\ & - \sum_{s =1}^p p (E_s^s)_1 - \sum_{s = p+1}^N q(E_s^s)_1 - \sum_{st}(E_s^t)_1 \otimes (E_t^s)_0\\ =& (\mu q -p +\dfrac{|\xi| + n}{N}) \sum_{s =1}^p(E_s^s)_1+ (-\mu p -q + \dfrac{|\xi|+n}{N}) \sum_{s=p+1}^N (E_s^s)_1 \\ &- \sum_{st}(E_t^s)_0 \otimes (E_s^t)_1 \end{align*} Combining the other terms in the definition of $y_1$, \begin{align*} y_1 = &-\sum_{s,t}(E_s^t)_0 \otimes (E_t^s)_1 + \dfrac{p-q-\mu N}{2} \gamma_1\\ &+ (\mu q -p +\dfrac{|\xi| + n}{N}) \sum_{s =1}^p(E_s^s)_1+ (-\mu p -q + \dfrac{|\xi|+n}{N}) \sum_{s=p+1}^N (E_s^s)_1 \\ =& -\sum_{s,t}(E_s^t)_0 \otimes (E_t^s)_1 + (\mu q -p + \dfrac{|\xi|+n}{N} + \dfrac{p-q-\mu N}{2})\sum_{s=1}^p (E_s^s)_1\\ &+(-\mu p - q + \dfrac{|\xi| +n}{N} - \dfrac{p-q-\mu N}{2}) \sum_{s=p+1}^N (E_s^s)_1\\ =& -\sum_{s,t}(E_s^t)_0 \otimes (E_t^s)_1 + (\dfrac{|\xi|+n}{N} + \dfrac{\mu q - \mu p}{2} - \dfrac{N}{2})\sum_{s =1}^N (E_s^s)_1\\ =& -\sum_{s,t}(E_s^t)_0 \otimes (E_t^s)_1 + \dfrac{|\xi|+n}{N} + \dfrac{\mu q - \mu p}{2} - \dfrac{N}{2}. \end{align*} \begin{rmk} Since the action in \cite{JM} was defined on $F_{n,p,\mu}(M)$ for $M$ a $\mathcal{D}$-module, there is a difference between equation (12) and the result above. If we input a $\mathcal{D}$-module instead of $V^{\xi}$, the result above agrees with equation (12). \end{rmk} Moreover, the action of $y_k$ for $k>1$ is computed by induction. \begin{prop} The action of $y_k$, for $k =1, \cdots, n$, on the invariant space $(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0, \mu}$ is given by $$y_k = -\sum_{s,t} (\Delta^{(k-1)}E_s^t)_{(0,k)} \otimes (E_t^s)_k + \dfrac{|\xi|+n}{N} + \dfrac{\mu q - \mu p}{2} - \dfrac{N}{2},$$ where the subscript $(0,k)$ indicates that $\Delta^{(k-1)}E_s^t$ acts on the tensor factors $V^{\xi} \otimes V^{\otimes (k-1)}$, i.e. on the factors $0, 1, \cdots, k-1$. \end{prop} \begin{proof} We verified the action of $y_1$ above. Suppose the statement is true for $y_i$, $i < k$. Let us compute the action of $y_k$. By the relation $s_{k-1}y_{k-1} - y_ks_{k-1} = \kappa_1=1$ and the inductive hypothesis, it follows that \begin{align*} y_k &= s_{k-1}y_{k-1}s_{k-1} - s_{k-1}\\ &= -\sum_{s,t,j,l} (\Delta^{(k-2)}E_s^t)_{(0,k-1)} \otimes (E_l^t E_t^s E_s^j)_{k-1} \otimes (E_t^l E_j^s)_k \\ &- \sum_{s,t}(E_s^t)_{k-1} \otimes (E_t^s)_k + \dfrac{|\xi|+n}{N} + \dfrac{\mu q - \mu p}{2} - \dfrac{N}{2}\\ &= -\sum_{s,t,j} (\Delta^{(k-2)}E_s^t)_{(0,k-1)} \otimes (E_j^j)_{k-1} \otimes (E_t^s)_k \\ &- \sum_{s,t}(E_s^t)_{k-1} \otimes (E_t^s)_k +\dfrac{|\xi|+n}{N} + \dfrac{\mu q - \mu p}{2} - \dfrac{N}{2} \end{align*} Using the fact that $\sum_{j} (E_j^j)_{k-1} = (I_N)_{k-1}$, the computation continues \begin{align*} &= -\sum_{s,t} (\Delta^{(k-2)}E_s^t)_{(0,k-1)} \otimes (I_N)_{k-1} \otimes (E_t^s)_k \\ &- \sum_{s,t}(E_s^t)_{k-1} \otimes (E_t^s)_k + \dfrac{|\xi|+n}{N} + \dfrac{\mu q - \mu p}{2} - \dfrac{N}{2}\\ &= -\sum_{s,t}(\Delta^{(k-1)}E_s^t)_{(0,k)} \otimes (E_t^s)_k + \dfrac{|\xi|+n}{N} + \dfrac{\mu q - \mu p}{2} - \dfrac{N}{2}. \end{align*} \end{proof} The Lie algebra $\mathfrak{gl}_N$ has a basis $\{E_s^t| 1 \leq s,t \leq N\}$ with dual basis $\{E_t^s\}$ with respect to the trace form $\langle X, Y \rangle = tr(XY)$. Let $C$ denote the Casimir element of $U(\mathfrak{gl}_N)$; then $C = \sum_{s,t}E_s^t E_t^s$.
A direct computation gives \begin{align*} \Delta(C) =& \sum_{s,t} \Delta(E_s^t) \Delta(E_t^s)\\ =& \sum_{s,t} (E_s^t \otimes 1 + 1 \otimes E_s^t)(E_t^s \otimes 1 + 1 \otimes E_t^s)\\ =& (\sum_{s,t} E_{s}^t E_t^s)\otimes 1 + 1 \otimes (\sum_{s,t}E_s^t E_t^s) + 2\sum_{s,t} E_s^t \otimes E_t^s. \end{align*} Thus $$\sum_{s,t} E_s^t \otimes E_t^s = \dfrac{\Delta(C) - C \otimes 1 - 1 \otimes C}{2}.$$ \subsection{Weights and contents} In \cite{R2}, Ram studied standard tableaux and representations of the affine Hecke algebra of type $C$ and analyzed the weights in terms of boxes. Let us now analyze the weights of $F_{n,p,\mu}(V^{\xi})$ in terms of contents. In section 5, we obtained a basis of the $(\mathfrak{t}_0,\mu)$-invariant space $F_{n,p,\mu}(V^{\xi})=(V^{\xi} \otimes V^{\otimes n})^{\mathfrak{t}_0,\mu}$ indexed by $Tab(\varphi_{n,p,\mu}^{\xi})$, i.e. by standard tableaux on a family of skew shapes $\nu / \xi$, where the $\nu$ are obtained from Okada's theorem. The action of $y_k$ on the basis element indexed by a standard tableau $T$ is by a scalar. Moreover, this scalar is computed in terms of the content of the cell containing $k$. \begin{thm} Let $v_T$ denote the basis element of the invariant space indexed by the standard tableau $T$. Then $v_T$ is an eigenvector of $y_k$ and the eigenvalue is $$ -cont_T(k) +\mathfrak{s}, $$ where $\mathfrak{s}= \dfrac{|\xi|+n}{N} +\dfrac{\mu q - \mu p}{2}-\dfrac{N}{2}$. \end{thm} \begin{proof} Let $T \in Tab(\varphi_{n,p,\mu}^{\xi})$. Since $T$ is a standard tableau, it corresponds to a sequence $(\nu^{(k)})_{k =0}^{n}$ of Young diagrams, where \begin{align*} &\nu^{(0)} =\xi,\\ &\nu^{(1)} = \xi \cup T(\{1\}),\\ &\nu^{(2)} = \xi \cup T(\{1, 2\}),\\ & \cdots \\ &\nu^{(n)} = \xi \cup T(\{1, 2, \cdots, n\}), \end{align*} and $T(\{1, \cdots, k\})$ is the collection of cells filled by the numbers $1, \cdots, k$, i.e. the Young diagram $\nu^{(k)}$ is formed by adding the cells filled by the numbers $1, \cdots, k$ to the Young diagram $\xi$. So it follows, for $k = 1, \cdots, n$, that $$v_T \in (V^{\xi} \otimes V^{\otimes k})[\nu^{(k)}] \otimes V^{\otimes (n - k)},$$ where $(V^{\xi} \otimes V^{\otimes k})[\nu^{(k)}]$ denotes the $V^{\nu^{(k)}}$-isotypic component of the tensor product $V^{\xi} \otimes V^{\otimes k}$. By subsection 6.1, it follows that the term $\sum_{s,t}(\Delta^{(k-1)}(E_s^t))_{(0,k)} \otimes (E_t^s)_k$ acts on $v_T$ by $$\dfrac{C_{(0,k+1)} -C_{(0,k)} \otimes 1_k -1_{(0,k)} \otimes C_k }{2}.$$ Moreover, the Casimir element acts on the highest weight module $V^{\nu}$ by the scalar $\langle \nu, \nu + 2 \rho \rangle$, where $2 \rho = \sum_{i = 1}^{N} (N - 2i + 1) \epsilon_i$. So for each $k$ such that $1 \leq k \leq n$, $C_{(0,k+1)}$ acts on $V^{\nu^{(k)}}$ by the scalar $\langle \nu^{(k)}, \nu^{(k)} + 2\rho \rangle$, $C_{(0,k)}$ acts on $V^{\nu^{(k-1)}}$ by the scalar $\langle \nu^{(k-1)}, \nu^{(k-1)} + 2\rho \rangle$ and $C_k$ acts on $V$ by the scalar $\langle \epsilon_1, \epsilon_1 + 2\rho \rangle = N$; namely $$\dfrac{C_{(0,k+1)} -C_{(0,k)} \otimes 1_k -1_{(0,k)} \otimes C_k }{2}$$ acts by $$\dfrac{1}{2}(\langle \nu^{(k)}, \nu^{(k)} + 2\rho \rangle - \langle \nu^{(k - 1)}, \nu^{(k - 1)} + 2 \rho \rangle- \langle \epsilon_1, \epsilon_1 + 2 \rho \rangle) .$$ Let $T(k)$ be the cell $(\mathfrak{i}(k), \mathfrak{j}(k))$; then $\nu^{(k)}_{\mathfrak{i}(k)} = \mathfrak{j}(k) = \nu^{(k-1)}_{\mathfrak{i}(k)} + 1$ and $\nu_i^{(k)} = \nu_i^{(k-1)}$, for $i \neq \mathfrak{i}(k)$.
\begin{align*} &\dfrac{1}{2}(\langle \nu^{(k)}, \nu^{(k)} + 2\rho \rangle - \langle \nu^{(k - 1)}, \nu^{(k - 1)} + 2 \rho \rangle - \langle \epsilon_1, \epsilon_1 + 2 \rho \rangle)\\ = &\dfrac{1}{2}((\mathfrak{j}(k) + N -2\mathfrak{i}(k) + 1)(\mathfrak{j}(k)) - (\mathfrak{j}(k)+N-2\mathfrak{i}(k))(\mathfrak{j}(k)-1) -N)\\ = & \mathfrak{j}(k) - \mathfrak{i}(k). \end{align*} Then the statement follows. \end{proof} \begin{thm} Let $F_{n,p,\mu}(V^{\xi})$ denote the image of the irreducible $GL_N$-module $V^{\xi}$, for some $\xi \in P^+$, under the Etingof-Freund-Ma functor. Then $F_{n,p,\mu}(V^{\xi})$ has a basis indexed by the tableaux in $Tab(\varphi^{\xi}_{n,p,\mu})$, i.e. $\{v_T|T \in Tab(\varphi^{\xi}_{n,p,\mu})\}$. This basis is a weight basis: each basis vector $v_T$ is a weight vector of weight $\zeta_T=-cont_T+\mathfrak{s}$. So $F_{n,p,\mu}(V^{\xi})$ is a $\mathcal{Y}$-semisimple representation of $H_n(1,p-q-\mu N)$. Moreover, different standard tableaux clearly give different weights. Hence each weight space is one dimensional. \end{thm} \section{Intertwining operators} \subsection{Definition of intertwining operators}~\\ \begin{definition} For $i = 1, \cdots, n-1$, define the intertwining operators $$\phi_i = [s_i , y_i],$$ and for $\gamma_n$, define $$\phi_n = [\gamma_n, y_n].$$ \end{definition} \begin{prop} The intertwining operators $\phi_i$ satisfy the braid relations \begin{align*} &\phi_i \phi_{i+1} \phi_i = \phi_{i+1} \phi_i \phi_{i+1}, i = 1, \cdots, n - 1,\\ &\phi_i \phi_j = \phi_j \phi_i, |i - j| > 1,\\ &\phi_{n-1} \phi_n \phi_{n-1} \phi_n = \phi_n \phi_{n-1} \phi_n \phi_{n-1}. \end{align*} \end{prop} Since the operators $\phi_i$ satisfy the same braid relations as the $s_i$ and $\gamma_n$, the following is well defined. \begin{definition} Let $W_0$ denote the finite Weyl group of type $C_n$. Each $w \in W_0$ has a reduced expression $w = s_{i_1} s_{i_2} \dots s_{i_m}$ with $l(w)=m$, where we use the convention $s_n = \gamma_n$. Define $$\phi_w = \phi_{i_1} \phi_{i_2} \dots \phi_{i_m}.$$ \end{definition} \subsection{Properties of intertwining operators}~\\ Some computations on intertwining operators: \begin{enumerate} \item $\phi_i = s_i (y_i - y_{i+1}) - 1$,\\ $\phi_n = 2 \gamma_n y_n - \kappa_2$.\\ \item $\phi_i ^2 = (1 - y_i + y_{i+1})(1 + y_i - y_{i+1})$,\\\\ $\phi_n ^2 = (\kappa_2 - 2y_n)(\kappa_2 + 2y_n)$.\\ \end{enumerate} \begin{definition} Define the action of $W_0$ on weights $\zeta=[\zeta_1, \cdots, \zeta_n]$: for $w \in W_0$, $$w . \zeta = \zeta \circ w^{-1},$$ where we set $\zeta_{-k}=-\zeta_k$. \end{definition} \begin{thm} Let $L$ be a $\mathcal{Y}$-semisimple module and $L_{\zeta}$ denote the weight space of weight $\zeta$; then $$\phi_{w} L_{\zeta} \subset L_{w. \zeta}.$$ \end{thm} \begin{proof} It suffices to show the statement for each operator $\phi_i$.\\ Case 1. When $1 \leq i \leq n-1$, we have the following facts: $$y_i \phi_i = \phi_i y_{i + 1},$$ $$y_{i + 1} \phi_i = \phi_i y_i,$$ and $$y_j \phi_i = \phi_i y_j, j \neq i \text{ or } i+1.$$ Case 2. Consider $\phi_n$. We have: $$y_n \phi_n = -\phi_n y_n,$$ $$y_j \phi_n = \phi_n y_j, j \neq n. $$ \end{proof} \begin{rmk} Since each weight space of $F_{n,p,\mu}(V^{\xi})$ is one dimensional, the action of $\phi_i$ on a weight space is either $0$ or an isomorphism.
\begin{lemma} If $\zeta_i - \zeta_{i+1} \neq \pm 1$ for some $i \in \{1, 2, \cdots, n-1 \}$, then $\phi_i v_{\zeta} \neq 0$, where $v_{\zeta}$ is the weight vector of the weight $\zeta$. \end{lemma} \begin{proof} Suppose that $\phi_i v_{\zeta} = 0$. Then $\phi_i^2 v_{\zeta} = 0$. By the computation above, $\phi_i ^2 = (1 - y_i + y_{i+1})(1 + y_i - y_{i+1})$. Then $\phi_i^2 v_{\zeta} = (1 - \zeta_i + \zeta_{i+1})(1 + \zeta_i - \zeta_{i+1})v_{\zeta} = 0$, so $\zeta_i - \zeta_{i+1} = \pm 1$, contradicting the hypothesis. \end{proof} Similarly, we have the following fact. \begin{lemma} If $\zeta_n \neq \pm \frac{\kappa_2}{2}$, then $\phi_n v_{\zeta} \neq 0$, where $v_{\zeta}$ is the weight vector of the weight $\zeta$. \end{lemma} \begin{proof} Suppose that $\phi_n v_{\zeta} = 0$. Then $\phi_n^2 v_{\zeta} = 0$. By the computation above, $\phi_n ^2 = (\kappa_2 - 2y_n)(\kappa_2 + 2y_n)$. Then $\phi_n^2 v_{\zeta} = (\kappa_2 - 2 \zeta_n)(\kappa_2 + 2 \zeta_n)v_{\zeta} = 0$, so $\zeta_n = \pm \frac{\kappa_2}{2}$, contradicting the hypothesis. \end{proof} \subsection{Properties of irreducible $\mathcal{Y}$-semisimple representations} Let $L$ be an irreducible $\mathcal{Y}$-semisimple representation of $H_n(\kappa_1,\kappa_2)$, and let $\zeta=[\zeta_1,\cdots,\zeta_n]$ be a weight of $L$. \begin{thm} If $\zeta_i = \zeta_{i+1}$ for some $1 \leq i \leq n-1$, then $L_{\zeta} = 0$. \end{thm} \begin{proof} Let $\zeta$ be a weight such that $\zeta_i = \zeta_{i+1}$. Suppose there exists a nonzero element $v \in L_{\zeta}$. Consider the vector $s_i v$. Since $\phi_i = s_i (y_i - y_{i+1}) - 1 = (y_{i+1} - y_i)s_i + 1$, we have $\phi_i v = -v$. Then \begin{align*} (y_i - y_{i+1})s_i v = &(1 - \phi_i) v \\ = &2v \neq 0. \end{align*} Acting again by $y_i - y_{i+1}$, \begin{align*} &(y_i - y_{i+1})^2 s_i v\\ = &2(y_i - y_{i+1})v =0. \end{align*} This means $s_i v$ belongs to the generalized eigenspace of $y_i - y_{i+1}$ but not to the eigenspace of $y_i - y_{i+1}$, which contradicts $\mathcal{Y}$-semisimplicity. \end{proof} \begin{thm} Let $\kappa_2 \neq 0$. If $\zeta_n = 0$, then $L_{\zeta} = 0$. \end{thm} \begin{proof} Let $\zeta$ be a weight such that $\zeta_n = 0$. Suppose there exists a nonzero element $v \in L_{\zeta}$. Consider the vector $\gamma_n v$. Since $\phi_n = 2\gamma_n y_n - \kappa_2 = -2y_n \gamma_n + \kappa_2$, we have $\phi_n v = -\kappa_2 v$. Then \begin{align*} 2y_n\gamma_n v = &(\kappa_2 - \phi_n) v\\ = &2\kappa_2 v \neq 0. \end{align*} Acting again by $y_n$, we have \begin{align*} &2{y_n}^2 \gamma_n v\\ = &2 \kappa_2 y_n v = 0. \end{align*} This means $\gamma_n v$ belongs to the generalized eigenspace of $y_n$ but not to the eigenspace of $y_n$, which contradicts $\mathcal{Y}$-semisimplicity. \end{proof} \begin{rmk} When $\kappa_2 = 0$, it is possible for an irreducible $\mathcal{Y}$-semisimple module $L$ to contain a nonzero weight space $L_{\zeta}$ with $\zeta_n = 0$. In this case, $\gamma_n v \in \mathbb{C} v$. Otherwise, the vector $v + \gamma_n v$ generates a nonzero proper submodule of $L$, which contradicts the irreducibility. \end{rmk} \begin{lemma} For an arbitrary $w \in W_0$, the intertwining operator $$\phi_w = w \Pi_{\alpha_{ij} \in R(w)} (y_i - y_j) + \sum_{x <w} x P(y),$$ where each $P(y)$ is a polynomial in $y_1, \cdots, y_n$ (depending on $x$). \end{lemma}
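\begin{rmk} As an illustration of the lemma: for $w=s_i$ we have $R(s_i)=\{\alpha_{i,i+1}\}$, and the formula reads $\phi_i = s_i(y_i-y_{i+1}) - 1$, the single lower-order term being given by $x=1$ and $P(y)=-1$; similarly, for $w=\gamma_n$ it reads $\phi_n = \gamma_n \cdot 2y_n - \kappa_2$, reading $y_{-n}=-y_n$ in accordance with the convention $\zeta_{-k}=-\zeta_k$ above. These are exactly the computations in the previous subsection. \end{rmk}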
\begin{thm} Let $\zeta$ be a weight of $L$ such that $L_{\zeta} \neq 0$, and let $v$ be a nonzero weight vector in $L_{\zeta}$. Then the set $\{\phi_w v | w \in W_0\}$ spans the irreducible representation $L$. \end{thm} \begin{proof} We need to show that $w. v$ lies in the span of $\{\phi_w v | w \in W_0\}$ for an arbitrary $w \in W_0 \cong S_n \ltimes (\mathbb{Z}/2\mathbb{Z})^n$. We prove this by induction on the length of $w$. When the length of $w$ is zero, the statement is trivial. Now assume that the statement holds for all $w$ with $l(w) <k $, i.e. $w. v$ can be expressed as a linear combination of elements of $\{\phi_w v | w \in W_0\}$. Suppose $w$ is of length $k$ and $w = s_{i_1} \cdots s_{i_k}$. Then by Lemma 7.12, we have $\phi_w v= \Pi_{\alpha_{ij} \in R(w)} (\zeta_i - \zeta_j) \, w . v+ \sum_{x <w} c_x \, x . v$. Since $l(x) < k$, the terms $x . v$ can be expressed in terms of $\{\phi_w v | w \in W_0\}$. As long as the coefficient $ \Pi_{\alpha_{ij} \in R(w)}(\zeta_i - \zeta_j) \neq 0$, $w . v$ can likewise be so expressed. So we are reduced to the case $ \Pi_{\alpha_{ij} \in R(w)}(\zeta_i - \zeta_j) = 0$.\\ In this case, there exists $p \in [1,k]$ such that $\Pi_{\alpha_{ij} \in R(s_{i_{p+1}} \cdots s_{i_k})}(\zeta_i - \zeta_j) \neq 0$ and $\Pi_{\alpha_{ij} \in R(s_{i_p} \cdots s_{i_k})}(\zeta_i - \zeta_j) = 0$. Set $u = s_{i_{p+1}} \cdots s_{i_k}$. When $i_p \in [1, n-1]$, this implies $(y_{i_p} - y_{i_{p + 1 }})\phi_u v = 0$ and hence $\phi_u v = 0$ by Theorem 7.9. When $i_p = n$, this implies $2y_n \phi_u v = 0$ and hence $\phi_u v = 0$ by Theorem 7.10. It follows that $\Pi_{\alpha_{ij} \in R(u)}(\zeta_i-\zeta_j)\, u. v= -\sum_{x<u} c_x \, x . v$ and hence $\Pi_{\alpha_{ij} \in R(u)}(\zeta_i-\zeta_j)\, w. v= -\sum_{x<u} c_x \, (s_{i_1}\cdots s_{i_p}x) . v$. The coefficient $\Pi_{\alpha_{ij} \in R(u)}(\zeta_i-\zeta_j)$ is nonzero by the choice of $p$, and $l(s_{i_1}\cdots s_{i_p}x)<k$, so each $(s_{i_1}\cdots s_{i_p}x) . v$, and hence $w . v$, can be expressed as a linear combination of elements of $\{\phi_w v | w \in W_0\}$. \end{proof} \begin{thm} Let $\zeta$ be a weight such that $L_{\zeta} \neq 0$, and let $ 1 \neq w \in W_0$ be such that $w. \zeta = \zeta$. Then $\phi_w v = 0$. \end{thm} \begin{proof} Let $w=s_{i_1}\cdots s_{i_k}$ be a reduced expression. Since $w . \zeta=\zeta$, there is $1 \leq p \leq k$ such that $s_{i_1} \cdots s_{i_p} = (h\, m)$ is a transposition with $\zeta_h=\zeta_m$. Set $u = s_{i_p} \cdots s_{i_k}$ and consider $\phi_{i_{p-1}}\cdots \phi_{i_1} \phi_w v = \Pi_{1 \leq j \leq p-1} (1-\zeta_{i_j}+\zeta_{i_j+1})(1+\zeta_{i_j}-\zeta_{i_j+1})\, \phi_u v$. It follows that $\phi_u v =0$ and hence $\phi_w v=0$. \end{proof} \begin{cor} Let $\zeta$ be a weight such that $L_{\zeta} \neq 0$. Then $dim(L_{\zeta}) = 1$. \end{cor} \begin{prop} \begin{enumerate} \item Let $v$ be a nonzero weight vector of weight $\zeta$ such that $|\zeta_i -\zeta_{i+1}| = 1$. Then $\phi_i v =0$.\\ \item Let $v$ be a nonzero weight vector of weight $\zeta$ such that $\zeta_n = \pm \frac{\kappa_2}{2}$. Then $\phi_n v =0$. \end{enumerate} \end{prop} \begin{rmk} Some similar results also hold for the degenerate affine Hecke algebra of type $A_{n-1}$. Let $H_n(1)$ be the degenerate affine Hecke algebra generated by $s_i$ ($i = 1, \cdots, n-1$) and $y_i$ ($i = 1, \cdots, n$) with the following relations: \begin{align*} &s_i^2 = 1, i = 1, \cdots, n - 1,\\ &s_i s_j = s_j s_i, |i - j| > 1,\\ &s_i s_{i + 1} s_i = s_{i + 1} s_i s_{i + 1}, i = 1, \cdots, n - 1,\\ &y_i y_j = y_j y_i,\\ &s_i y_i - y_{i + 1} s_i = 1,\\ &s_i y_j = y_j s_i, j \neq i, i+1. \end{align*} There is the same definition of $\mathcal{Y}$-semisimple representations.
And for any $\mathcal{Y}$-semisimple representation $M$, if $\zeta$ is a weight with $\zeta_i = \zeta_{i+1}$, then $M_{\zeta} = 0$.\\ Furthermore, we can still define the intertwining operators $\phi_i = s_i y_i - y_i s_i$; then we also have $\phi_i^2 = (1- y_i + y_{i + 1})(1 + y_i - y_{i + 1})$. This implies that if $\phi_i v_{\zeta} = 0$ then $\zeta_i - \zeta_{i + 1} = \pm 1$. For the double affine Hecke algebra of type $A$, \cite{SV} explored similar properties in detail. \end{rmk} \section{Combinatorial moves} \subsection{Moves among standard tableaux}~\\ Let $Tab(\varphi^{\xi}_{n,p,\mu})$ denote the collection of standard tableaux indexing the basis of $F(V^{\xi})$ in Section 5. We define a set of moves $\mathfrak{m}_1, \cdots,\mathfrak{m}_n$ on $Tab(\varphi^{\xi}_{n,p,\mu}) \sqcup \{\mathfrak{0}\}$ as follows. The move $\mathfrak{m}_i$, for $i=1, \cdots, n-1$, is defined as $$ \mathfrak{m}_i(T) = \begin{cases} T', &T' \text{ is a standard tableau}\\ \mathfrak{0}, &\text{ otherwise, } \end{cases} $$ where $T'(k)=T(s_i(k))$. The move $\mathfrak{m}_n$ is defined by $$ \mathfrak{m}_n(T) = \begin{cases} \mathfrak{0}, & \mathfrak{i}(n) \leq max(p,q) \text{ and } \mathfrak{j}(n) \leq max(a,b)\\ T'', &\text{ otherwise,} \end{cases} $$ where $T''(j) =T(j)$ for each $j \neq n$ and $T''(n) = (N - \mathfrak{i}(n)+ 1, a+b-\mathfrak{j}(n)+1)$. \begin{rmk} There is an easy observation. For any shape $\varphi' \in D(\varphi^{\xi}_{n,p,\mu})$ and any $i \leq min(p,q)$, the sum of the column number of the last cell of the $i$-th row and the column number of the last cell of the $(N-i+1)$-th row equals $a+b$. So $ T''(n)=(N-\mathfrak{i}(n)+1, a+b -\mathfrak{j}(n)+1)$ means that the $\mathfrak{m}_n$-move takes the cell filled by $n$ to the end of the $(N-\mathfrak{i}(n)+1)$-th row. \end{rmk}
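\begin{rmk} As a quick numerical illustration of the $\mathfrak{m}_n$-move, with hypothetical values of the parameters: if $N=7$, $a+b=6$ and the cell filled by $n$ is $(2,4)$, then $\mathfrak{m}_n$ takes it to $(N-2+1,\, a+b-4+1)=(6,3)$, i.e. to the end of the $6$-th row, in accordance with the observation above. \end{rmk}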
\subsection{Correspondence between algebraic actions and combinatorial moves}~\\ Let $v_T$ denote the basis vector indexed by $T \in Tab(\varphi^{\xi}_{n,p,\mu})$ and $\zeta_T$ denote the weight of $v_T$, i.e. $\zeta_T=-cont_T+\mathfrak{s}$. \begin{prop} \begin{enumerate} \item For $i=1, \cdots, n-1$, if $\mathfrak{m}_i(T) \neq \mathfrak{0}$, then $\mathfrak{m}_i(T) \in Tab(\varphi^{\xi}_{n,p,\mu})$ and the common eigenbasis vector $v_{\mathfrak{m}_i(T)}$ is of weight $\zeta_{\mathfrak{m}_i(T)}=s_i. \zeta_T$. \item If $\mathfrak{m}_n(T) \neq \mathfrak{0}$, then $\mathfrak{m}_n(T) \in Tab(\varphi^{\xi}_{n,p,\mu})$ and the common eigenbasis vector $v_{\mathfrak{m}_n(T)}$ is of weight $\zeta_{\mathfrak{m}_n(T)}=\gamma_n.\zeta_T$. \end{enumerate} \end{prop} \begin{proof} First, for $i=1, \cdots, n-1$, if $\mathfrak{m}_i(T) \neq \mathfrak{0}$, then by the definition of the move $\mathfrak{m}_i$, $\mathfrak{m}_i(T) \in Tab(\varphi^{\xi}_{n,p,\mu})$, and we want to show $\zeta_{\mathfrak{m}_i(T)}=s_i.\zeta_T$. The tableaux $T$ and $\mathfrak{m}_i(T)$ differ only in that the boxes filled by $i$ and $i+1$ are exchanged, so the contents $cont_T(i)$ and $cont_T(i+1)$ are exchanged while all other contents are unchanged; hence $\zeta_{\mathfrak{m}_i(T)}=s_i.\zeta_T$.\\ Next let us consider the move $\mathfrak{m}_n$. In this case $\mathfrak{m}_n$ moves the box filled by $n$ in the $\mathfrak{i}$-th row of the tableau $T$ to the end of the $(N-\mathfrak{i}+1)$-th row. So the only box in the new tableau $\mathfrak{m}_n(T)$ with a different position compared with the tableau $T$ is the box filled by $n$. Thus the only difference between the weight associated to $\mathfrak{m}_n(T)$ and $\zeta_T$ is the eigenvalue of $y_n$. Let $(\mathfrak{i} , \mathfrak{j})$ denote the coordinates of the box filled by $n$ in the tableau $T$. Then the coordinates of the box filled by $n$ in the new tableau $\mathfrak{m}_n(T)$ are $(N - \mathfrak{i} + 1, \mu(q-p) + 2 \dfrac{|\xi| + n}{N} - \mathfrak{j} + 1)$, so the eigenvalue of $y_n$ in the new weight $\zeta_{\mathfrak{m}_n(T)}$ is $\mathfrak{j} - \mathfrak{i} -\dfrac{|\xi|+n}{N}+\dfrac{N}{2}+\dfrac{\mu(p-q)}{2}$. So the new weight equals $\gamma_n . \zeta_T$.\\ \end{proof} \begin{prop} If $w . T \neq \mathfrak{0}$ for some $w \in W_0$ (where $w$ acts on standard tableaux through the moves $\mathfrak{m}_i$), then $\phi_w v_T \neq 0$. \end{prop} \begin{proof} It is enough to verify the statement when $w$ is a transposition $s_i$ or $\gamma_n$.\\ First, consider the case $w = s_i$, $i=1, \cdots,n-1$. Suppose $\phi_i v_T = 0$ for some $1 \leq i \leq n-1$. Then $\phi_i^2 v_T = 0$, and since $\phi_i ^2 = (1 - y_i + y_{i+1})(1 + y_i - y_{i+1})$, it follows that $\zeta_T(i) - \zeta_T(i + 1) = \pm1$. In this case the contents of the boxes filled by $i$ and $i + 1$ differ by $1$, hence the two boxes are adjacent and in the same row or in the same column. We have $s_i . T = \mathfrak{0}$ in this case, which contradicts the hypothesis. So we have $\phi_i v_T \neq 0$.\\ Second, consider the case $w = \gamma_n$. Suppose $\phi_n v_T = 0$. Then $\phi_n^2 v_T = 0$, and since $\phi_n ^2 = (\kappa_2 - 2y_n)(\kappa_2 + 2y_n)$, the eigenvalue of $y_n$ is $\pm \frac{\kappa_2}{2}$. Then the box filled by $n$ is either $(p, \mu q + \frac{|\xi|+n}{N})$ or $(q, -\mu p + \frac{|\xi|+n}{N})$. But by the definition of the action of $\gamma_n$ on the tableau $T$, in both cases $\gamma_n . T = \mathfrak{0}$. This contradicts the hypothesis. Hence we have $\phi_n v_T \neq 0$. \end{proof} \begin{rmk} \begin{enumerate} \item If $\mathfrak{m}_i(T) \neq \mathfrak{0}$, then $\phi_i v_T=c\, v_{\mathfrak{m}_i(T)}$ for some nonzero scalar $c \in \mathbb{C}$, for $i=1, \cdots, n$. \item If $\mathfrak{m}_i(T) = \mathfrak{0}$, then $\phi_i v_T=0$ for $i=1, \cdots, n$. \end{enumerate} \end{rmk} \begin{eg} In Example 5.9, the actions of the intertwining operators are as follows. The diagonals give the eigenvalues of the $y_i$'s.
\begin{center} \begin{tikzpicture}[scale=0.4] \begin{scope}[shift={(-10,16)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,0) grid (2,3); \draw (0.5, 0.5) node[red] {$2$}; \draw (1.5, 1.5) node[blue] {$1$}; \draw (1.5, 0.5) node[black!75] {$3$}; \draw [dotted,blue] (-1,4) -- (3, 0); \draw [dotted,red] (-1,2) -- (2, -1); \draw [dotted,black!75] (-1,3) -- (3, -1); \draw (3.3, 0) node[blue]{\scriptsize $\frac{1}{2}$}; \draw (2.3, -1.3) node[red]{\scriptsize $\frac{5}{2}$}; \draw (3.3, -1.3) node[black!75]{\scriptsize $\frac{3}{2}$}; \end{scope} \draw [->](-9,14)--(-9,12); \draw (-9.5,13) node {$\mathfrak{m}_1$}; \draw [->](-7, 17.5) --(-5, 17.5); \draw (-6,18) node {$\mathfrak{m}_3$}; \draw [->](0, 17.5) --(2, 17.5); \draw (1,18) node {$\mathfrak{m}_2$}; \draw [->](8, 17.5) --(10, 17.5); \draw (9,18) node {$\mathfrak{m}_3$}; \draw [->](8, 9.5) --(10, 9.5); \draw (9,10) node {$\mathfrak{m}_3$}; \draw [->](0, 1.5) --(2, 1.5); \draw (1,2) node {$\mathfrak{m}_1$}; \begin{scope}[shift={(-10,8)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,0) grid (2,3); \draw (0.5, 0.5) node[blue] {$1$}; \draw (1.5, 1.5) node[red] {$2$}; \draw (1.5, 0.5) node[black!75] {$3$}; \draw [dotted,red] (-1,4) -- (3, 0); \draw [dotted,blue] (-1,2) -- (2, -1); \draw [dotted,black!75] (-1,3) -- (3, -1); \draw (3.3, 0) node[red]{\scriptsize $\frac{1}{2}$}; \draw (2.3, -1.3) node[blue]{\scriptsize $\frac{5}{2}$}; \draw (3.3, -1.3) node[black!75]{\scriptsize $\frac{3}{2}$}; \end{scope} \draw [->](-7, 9.5) --(-5, 9.5); \draw (-6,10) node {$\mathfrak{m}_3$}; \begin{scope}[shift={(-4,8)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[blue] {$1$}; \draw (1.5, 1.5) node[red] {$2$}; \draw (2.5, 2.5) node[black!75] {$3$}; \draw [dotted,red] (-1,4) -- (3, 0); \draw [dotted,blue] (-1,2) -- (2, -1); \draw [dotted,black!75] (1,4) -- (5, 0); \draw (2.3, -1) node[blue]{\scriptsize $\frac{5}{2}$}; \draw (3.3, 0) node[red]{\scriptsize $\frac{1}{2}$}; \draw (5.3, 0) node[black!75]{\scriptsize -$\frac{3}{2}$}; \end{scope} \begin{scope}[shift={(-4,0)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[blue] {$1$}; \draw (1.5, 1.5) node[black!75] {$3$}; \draw (2.5, 2.5) node[red] {$2$}; \draw [dotted,black!75] (-1,4) -- (3, 0); \draw [dotted,blue] (-1,2) -- (2, -1); \draw [dotted,red] (1,4) -- (5, 0); \draw (2.3, -1) node[blue]{\scriptsize $\frac{5}{2}$}; \draw (3.3, 0) node[black!75]{\scriptsize $\frac{1}{2}$}; \draw (5.3, 0) node[red]{\scriptsize -$\frac{3}{2}$}; \end{scope} \begin{scope}[shift={(-4,16)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[red] {$2$}; \draw (1.5, 1.5) node[blue] {$1$}; \draw (2.5, 2.5) node[black!75] {$3$}; \draw [dotted,blue] (-1,4) -- (3, 0); \draw [dotted,red] (-1,2) -- (2, -1); \draw [dotted,black!75] (1,4) -- (5, 0); \draw (2.3, -1) node[red]{\scriptsize $\frac{5}{2}$}; \draw (3.3, 0) node[blue]{\scriptsize $\frac{1}{2}$}; \draw (5.3, 0) node[black!75]{\scriptsize -$\frac{3}{2}$}; \end{scope} 
\begin{scope}[shift={(4,0)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[red] {$2$}; \draw (1.5, 1.5) node[black!75] {$3$}; \draw (2.5, 2.5) node[blue] {$1$}; \draw [dotted,black!75] (-1,4) -- (3, 0); \draw [dotted,red] (-1,2) -- (2, -1); \draw [dotted,blue] (1,4) -- (5, 0); \draw (2.3, -1) node[red]{\scriptsize $\frac{5}{2}$}; \draw (3.3, 0) node[black!75]{\scriptsize $\frac{1}{2}$}; \draw (5.3, 0) node[blue]{\scriptsize -$\frac{3}{2}$}; \end{scope} \begin{scope}[shift={(4,16)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[black!75] {$3$}; \draw (1.5, 1.5) node[blue] {$1$}; \draw (2.5, 2.5) node[red] {$2$}; \draw [dotted,blue] (-1,4) -- (3, 0); \draw [dotted,black!75] (-1,2) -- (2, -1); \draw [dotted,red] (1,4) -- (5, 0); \draw (2.3, -1) node[black!75]{\scriptsize $\frac{5}{2}$}; \draw (3.3, 0) node[blue]{\scriptsize $\frac{1}{2}$}; \draw (5.3, 0) node[red]{\scriptsize -$\frac{3}{2}$}; \end{scope} \begin{scope}[shift={(4,8)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw (2,2) rectangle (3,3); \draw (0,0) rectangle (1,1); \draw (0.5, 0.5) node[black!75] {$3$}; \draw (1.5, 1.5) node[red] {$2$}; \draw (2.5, 2.5) node[blue] {$1$}; \draw [dotted,red] (-1,4) -- (3, 0); \draw [dotted,black!75] (-1,2) -- (2, -1); \draw [dotted,blue] (1,4) -- (5, 0); \draw (2.3, -1) node[black!75]{\scriptsize $\frac{5}{2}$}; \draw (3.3, 0) node[red]{\scriptsize $\frac{1}{2}$}; \draw (5.3, 0) node[blue]{\scriptsize -$\frac{3}{2}$}; \end{scope} \begin{scope}[shift={(11,16)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw[step=1] (2,2) grid (4,3); \draw (1.5, 1.5) node[blue] {$1$}; \draw (2.5, 2.5) node[red] {$2$}; \draw (3.5, 2.5) node[black!75] {$3$}; \draw [dotted,blue] (-1,4) -- (3, 0); \draw [dotted,black!75] (2,4) -- (6, 0); \draw [dotted,red] (1,4) -- (5, 0); \draw (6.3, 0) node[black!75]{\scriptsize -$\frac{5}{2}$}; \draw (3.3, 0) node[blue]{\scriptsize $\frac{1}{2}$}; \draw (5.3, 0) node[red]{\scriptsize -$\frac{3}{2}$}; \end{scope} \begin{scope}[shift={(11,8)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw[step=1] (2,2) grid (4,3); \draw (1.5, 1.5) node[red] {$2$}; \draw (2.5, 2.5) node[blue] {$1$}; \draw (3.5, 2.5) node[black!75] {$3$}; \draw [dotted,red] (-1,4) -- (3, 0); \draw [dotted,black!75] (2,4) -- (6, 0); \draw [dotted,blue] (1,4) -- (5, 0); \draw (6.3, 0) node[black!75]{\scriptsize -$\frac{5}{2}$}; \draw (3.3, 0) node[red]{\scriptsize $\frac{1}{2}$}; \draw (5.3, 0) node[blue]{\scriptsize -$\frac{3}{2}$}; \end{scope} \begin{scope}[shift={(11,0)}] \draw [thin,fill=gray!20] (0,2) rectangle (2,3); \draw [thin,fill=gray!20] (0,1) rectangle (1,2); \draw[step=1] (0,1) grid (2,3); \draw[step=1] (2,2) grid (4,3); \draw (1.5, 1.5) node[black!75] {$3$}; \draw (2.5, 2.5) node[blue] {$1$}; \draw (3.5, 2.5) node[red] {$2$}; \draw [dotted,black!75] (-1,4) -- (3, 0); \draw [dotted,red] (2,4) -- (6, 0); \draw [dotted,blue] (1,4) -- (5, 0); \draw (6.3, 0) node[red]{\scriptsize -$\frac{5}{2}$}; \draw (3.3, 0) 
node[black!75]{\scriptsize $\frac{1}{2}$}; \draw (5.3, 0) node[blue]{\scriptsize -$\frac{3}{2}$}; \end{scope} \draw [->] (-2,14)--(-2,12); \draw (-2.5,13) node {$\mathfrak{m}_1$}; \draw [->] (6,14)--(6,12); \draw (5.5,13) node {$\mathfrak{m}_1$}; \draw [->](13,15)--(13,12); \draw (12.5,13) node {$\mathfrak{m}_1$}; \draw [->](-2,6)--(-2,4); \draw (-2.5,5) node {$\mathfrak{m}_2$}; \draw [->](6,6)--(6,4); \draw (5.5,5) node {$\mathfrak{m}_2$}; \draw [->](13, 7)--(13,4); \draw (12.5,5.5) node {$\mathfrak{m}_2$}; \end{tikzpicture} \end{center} \end{eg} Let $k$ be the filling of the cell $(q,b)$; we can compute that the eigenvalue of $y_k$ is $-\frac{\kappa_2}{2}$. Similarly, if $k$ is the filling of the cell $(p,a)$, then the eigenvalue of $y_k$ is $\frac{\kappa_2}{2}$. Indeed, the content of the cell $(p,a)$ is $a-p$, so the corresponding eigenvalue is $-(a-p)+\mathfrak{s}=\frac{p-q-\mu N}{2}=\frac{\kappa_2}{2}$, using $N=p+q$ and $a-b=\mu N$. Furthermore, $\kappa_2 = p-q -a+b$. \section{Irreducible representations} \subsection{The image $ F_{n,p,\mu}(V^{\xi})$ is irreducible} \begin{lemma} Let $\varphi_1$ and $\varphi_2$ be two skew shapes in $D(\varphi)$ with $\varphi_1 \to \varphi_2$. Then there exist standard tableaux $T_1$ and $T_2$ with $Im(T_1) = \varphi_1$ and $Im(T_2)=\varphi_2$ such that $\gamma_n(T_1) = T_2$. \end{lemma} \begin{proof} The relation $\varphi_1 \to \varphi_2$ implies that $\varphi_2$ is obtained by moving a corner $(i, (\varphi_1)_{i})$ of $\varphi_1$ to the end of the $(N-i+1)$-th row of $\varphi_1$. Since $(i, (\varphi_1)_i)$ is a corner of $\varphi_1$, there exists a standard tableau $T_1$ such that the cell $(i, (\varphi_1)_i)$ is filled by $n$. Let $T_2 = \gamma_n(T_1)$. Then $T_2$ is a standard tableau with $Im(T_2)=\varphi_2$. \end{proof} We now show that the representation of the degenerate affine Hecke algebra obtained through the Etingof-Freund-Ma functor is irreducible. \begin{thm} The image $F_{n,p,\mu}(V^{\xi})$ of a finite dimensional irreducible $\mathfrak{gl}_N$-module $V^{\xi}$ under the Etingof-Freund-Ma functor is irreducible. \end{thm} \begin{proof} A basis of $F_{n,p,\mu}(V^{\xi})$ is indexed by $$\mathcal{T} = \{T| T \text{ is a standard tableau and } Im(T) \in D(\varphi_{n,p,\mu}^{\xi})\}.$$ It is clear that the underlying vector space of $F_{n,p,\mu}(V^{\xi})$ is isomorphic to $span_{\mathbb{C}}\{v_T | T \in \mathcal{T}\}$. Let $M$ be a nonzero submodule of $F_{n,p,\mu}(V^{\xi})$. Then $M$ contains at least one weight vector of $F_{n,p,\mu}(V^{\xi})$; let $v_T$ be a weight vector associated to a tableau $T \in Tab(\varphi^{\xi}_{n,p,\mu})$ such that $M$ contains $v_T$.\\ We show in the following that every other weight vector can be obtained from the arbitrary weight vector $v_T$. We may work with the actions of signed permutations on standard tableaux, since these are compatible with the actions of the intertwining operators on weight vectors.\\ Case 1. For any standard tableau $T'$ with the same shape as the tableau $T$, there exists $w \in S_n$ such that $T' = w . T$; equivalently $v_{T'} =c \phi_{w}v_T$ where $c \in \mathbb{C}$ is nonzero.\\ Case 2. For standard tableaux $T_1$ and $T_2$ with $Im(T_1) \to Im(T_2)$, combining Lemma 8.4 and Case 1, it follows that $T_2 = w(T_1)$ for some $w \in W_0$ and hence $v_{T_2} =c \phi_{w} v_{T_1}$ where $c \in \mathbb{C}$ is nonzero.\\ Finally, consider two arbitrary standard tableaux $T_1$ and $T_2$ in $\mathcal{T}$. There are paths $\varphi^{\xi}_{n,p,\mu} \to \varphi_1 \to \cdots \to Im(T_1)$ and $\varphi^{\xi}_{n,p,\mu} \to \varphi'_1 \to \cdots \to Im(T_2)$, so by Cases 1 and 2 the vectors $v_{T_1}$ and $v_{T_2}$ are connected by intertwining operators up to nonzero scalars. Therefore $M$ contains every weight vector, and $M = F_{n,p,\mu}(V^{\xi})$.
\end{proof} \subsection{Irreducible representation associated to a skew shape $\varphi_{n,p,\mu}^{\xi}$} Define a representation $L^{\varphi_{n,p,\mu}^{\xi}}$ of $H_n(1,p-q-\mu N)$ as follows. Let the underlying vector space be $span_{\mathbb{C}}\{w_T | T \in \mathcal{T}\}$. The action of $H_n(1, p-q-\mu N)$ is defined by \begin{align} &y_k w_T = (-cont_T(k)+\mathfrak{s}) w_T,\\ &s_i w_T =\frac{(1-cont_T(i)+cont_T(i+1))w_{s_i(T)}}{cont_T(i) -cont_T(i+1)}+\frac{1}{cont_T(i)-cont_T(i+1)}w_T,\\ &\gamma_n w_T = \dfrac{(p-q-\mu N -2cont_T(n)) w_{\gamma_n(T)}}{2cont_T(n)}+(p-q-\mu N) \dfrac{1}{2cont_T(n)}w_T. \end{align} \begin{thm} The representation $F_{n,p,\mu}(V^{\xi})$ is isomorphic to $L^{\varphi_{n,p,\mu}^{\xi}}$. \end{thm} \begin{proof} Fix a $T \in \mathcal{T}$. Define a map $f:F_{n,p,\mu}(V^{\xi}) \to L^{\varphi_{n,p,\mu}^{\xi}}$ by $$f(v_T) = w_T$$ and $f(\phi_i v_T) = (1-cont_T(i)+cont_T(i+1)) w_{s_i(T)}$. One checks directly that $f$ intertwines the two actions on the weight bases and is bijective, hence an isomorphism. \end{proof} \section{Combinatorial description} In this section, we first discuss some properties of a representation of the degenerate affine Hecke algebra $H_n(1,\kappa_2)$ obtained via the Etingof-Freund-Ma functor, where $\kappa_2=p-q-\mu N$, and then we show that any representation satisfying these properties is the image of some irreducible polynomial representation of $GL_N$ via the Etingof-Freund-Ma functor. \subsection{Some facts of $F_{n,p,\mu}(V^{\xi})$} Let $F = F_{n,p,\mu}(V^{\xi})$ be a representation of $H_n(1, p-q-\mu N)$ obtained through the Etingof-Freund-Ma functor and let $\zeta = (\zeta_1, \cdots, \zeta_n)$ be a weight of $F$ such that $F_{\zeta} \neq 0$. For $i=1, \cdots, n$, if there is an increasing sequence $i=i_0 < i_1< \cdots <i_m \leq n$ such that $|\zeta_{i_k} -\zeta_{i_{k+1}}|=1$ for $k=0, \cdots, m-1$ and $\zeta_{i_m}= \pm \frac{\kappa_2}{2}$, then we say the coordinate $\zeta_i$ is fixed. It is easy to observe the following properties. \begin{property} For $i=1, \cdots, n$, if $|\zeta_{i}| \leq \frac{|\kappa_2|}{2}$, then $\zeta_i$ is fixed, i.e. there is an increasing sequence $i=i_0 < i_1< \cdots <i_m \leq n$ such that $|\zeta_{i_k} -\zeta_{i_{k+1}}|=1$ for $k=0, \cdots, m-1$ and $\zeta_{i_m}= \pm \frac{\kappa_2}{2}$. \end{property} \begin{property} The parameter $\kappa_2$ is an integer. If $\kappa_2$ is even, then all $\zeta_i$'s, for $i=1, \cdots, n$, are integers. If $\kappa_2$ is odd, then all $\zeta_i$'s, for $i=1, \cdots, n$, are half integers. \end{property} Recall that the cell $(p,a)$ in $\varphi^{\xi}_{n,p,\mu}$ gives the eigenvalue $\frac{\kappa_2}{2}$ and that the cell $(q,b)$ gives the eigenvalue $-\frac{\kappa_2}{2}$. Then Property 2 follows.\\
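For instance, as a quick illustration of Properties 1 and 2: take $\kappa_2=-2$ and the weight $\zeta=[0,-1,-2,1,-5,-6,-4]$ appearing in the examples below. All coordinates are integers, as Property 2 predicts for even $\kappa_2$, and the coordinate $\zeta_1=0$ is fixed via the sequence $i_0=1<i_1=2$, since $|\zeta_1-\zeta_2|=1$ and $\zeta_2=-1=\frac{\kappa_2}{2}$.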
In \cite{R2}, Ram explored properties of the weights of a semisimple affine Hecke algebra representation. Now let us explore the corresponding facts about weights in the degenerate case. Let $L$ be an irreducible and $\mathcal{Y}$-semisimple representation of $H_n(1, \kappa_2)$ satisfying Property 1 and Property 2 above, and let $\zeta$ be a weight such that $L_{\zeta} \neq 0$. Then $\zeta$ satisfies the following property. \begin{prop} If there exist $1 \leq i<j \leq n$ such that $\zeta_i = \zeta_j$, then there exist $i<k_1<j$ such that $\zeta_{k_1} = \zeta_i+1$ and $i<k_2<j$ such that $\zeta_{k_2} = \zeta_i-1$. \end{prop} \begin{proof} Let $\zeta$ be a weight such that $L_{\zeta} \neq 0$. Suppose there exist $1 \leq i<j\leq n$ such that $\zeta_i =\zeta_j$ and there is no $i<k<j$ such that $\zeta_k=\zeta_i$. We prove the statement by induction on $j-i$.\\ First, if $j-i=1$, then $\zeta_i =\zeta_{i+1}$, which contradicts Theorem 7.9.\\ Second, if $j-i=2$, then by Theorem 7.9 and Lemma 7.7 it follows that $\zeta_{i+1} = \zeta_i \pm1 = \zeta_{i+2} \pm 1$. Let $v$ be a nonzero weight vector of weight $\zeta$. Proposition 7.16 implies $\phi_i v = \phi_{i+1} v = 0$. Combining this with the definition of the intertwining operators, it follows that $s_i v = \mp v$ and $s_{i+1}v = \pm v$, and hence $$\pm v = s_is_{i+1}s_iv = s_{i+1} s_i s_{i+1}v = \mp v,$$ which is a contradiction. \\ So the base case of the induction is $j-i=3$. If $\zeta_i \neq \zeta_{i+1} \pm 1$ or $\zeta_{j-1} \neq \zeta_j \pm 1$, then Lemma 7.7 produces a weight in which the two equal coordinates are at distance $2$, which is a contradiction by the case $j-i=2$. So it holds that $|\zeta_i - \zeta_{i+1}| =1$ and $|\zeta_{j-1} -\zeta_j| =1$. If $\zeta_i = \zeta_{i+1} + 1$ and $\zeta_{j-1} = \zeta_j + 1$, then $k_1 = j-1$ and $k_2=i+1$. Similarly, if $\zeta_i=\zeta_{i+1} -1$ and $\zeta_{j-1}= \zeta_j -1$, then $k_1=i+1$ and $k_2=j-1$. If $\zeta_i = \zeta_{i+1} \pm 1$ and $\zeta_{j-1} = \zeta_j \mp 1$, then $\zeta_{i+1} = \zeta_{i+2}$, which contradicts Theorem 7.9.\\ Suppose the statement is true for all $j-i<m$, and consider the case $j-i = m$.\\ Case 1. If $|\zeta_i - \zeta_{i +1}| \neq 1$ or $|\zeta_{j-1}-\zeta_j| \neq 1$, let $v$ be a nonzero weight vector of weight $\zeta$. Then $\phi_iv$ (respectively $\phi_{j-1}v$) is a nonzero weight vector of weight $s_i \zeta$ (respectively $s_{j-1} \zeta$), in which the two equal coordinates are at positions $i+1$ and $j$ (respectively $i$ and $j-1$). Then $k_1$ and $k_2$ exist by the inductive hypothesis.\\ Case 2. If $\zeta_i =\zeta_{i+1} \pm 1$ and $\zeta_{j-1} =\zeta_j \mp 1$, this implies $\zeta_{i+1} = \zeta_{j-1}$, and the statement holds by the inductive hypothesis.\\ Case 3. If $\zeta_i=\zeta_{i+1}+1$ and $\zeta_{j-1}=\zeta_j+1$, then $k_1=j-1$ and $k_2=i+1$.\\ Case 4. If $\zeta_i=\zeta_{i+1} -1$ and $\zeta_{j-1} = \zeta_j -1$, then $k_1=i+1$ and $k_2=j-1$. \end{proof} Next let us explore another fact of $L$. \begin{lemma} Let $\zeta = [\zeta_1, \cdots, \zeta_n]$ be a weight of $L$ such that $L_{\zeta} \neq 0$ and $\zeta$ satisfies $\zeta_i > \frac{|\kappa_2|}{2}$ for $i=k, \cdots, n$. Then there is a weight $$\zeta'=[\zeta_1, \cdots, \zeta_{k-1}, -\zeta_n, -\zeta_{n-1}, \cdots, -\zeta_{k+1}, -\zeta_k]$$ such that $L_{\zeta'} \neq 0$. \end{lemma} \begin{proof} Let $v$ be a nonzero weight vector of weight $\zeta$. Acting on $v$ by $$h= \phi_n (\phi_{n-1} \phi_n)\cdots(\phi_k \phi_{k+1} \cdots \phi_n),$$ the vector $h v \in L_{\zeta'}$ and $h v \neq 0$ by Lemma 7.7 and Lemma 7.8. \end{proof} \begin{definition} Let $\zeta = [\zeta_1, \cdots, \zeta_n]$ be a weight of $L$ such that $L_{\zeta} \neq 0$ and $\zeta$ satisfies the condition: if a coordinate $\zeta_{i} > 0 $, then $\zeta_i$ is fixed, i.e. there exists an increasing sequence $i=i_0<i_1 < \cdots < i_m \leq n$ such that $|\zeta_{i_{k}} -\zeta_{i_{k+1}}| = 1$ and $\zeta_{i_m}= \pm \frac{\kappa_2}{2}$. Then we call $\zeta$ a minimal weight of $L$. \end{definition} \begin{prop} There exists at least one minimal weight $\zeta = [\zeta_1, \cdots, \zeta_n]$ of $L$ such that $L_{\zeta} \neq 0$. \end{prop} \begin{proof} Let $\zeta$ be any weight such that $L_{\zeta} \neq 0$. If $0< \zeta_{i} \leq \frac{|\kappa_2|}{2}$, then $\zeta_i$ is fixed, since $L$ satisfies Property 1. So it suffices to consider the coordinates $\zeta_{i} > \frac{|\kappa_2|}{2}$.
We want to show that, starting with any weight $\zeta$ such that $L_{\zeta} \neq 0$, there is an algorithm producing a weight $\zeta'$ such that $L_{\zeta'} \neq 0$ and $\zeta'$ satisfies the condition: if a coordinate $\zeta'_i>0$, then $\zeta'_i$ is fixed. \\ Suppose $\{\zeta_{r_1}, \zeta_{r_2},\cdots, \zeta_{r_l}\}$ is the collection of all the coordinates such that $\zeta_{r_i}> \frac{|\kappa_2|}{2}$ and $\zeta_{r_i}$ is not fixed, where $1 \leq r_1 < r_2 < \cdots < r_l \leq n$. Let $v$ be a nonzero weight vector of weight $\zeta$. We start with the rightmost coordinate $\zeta_{r_l}$ in this collection. If $r_l \neq n$, there are only the following two cases.\\ Case 1. There exists an increasing sequence $r_l+1=j_0<j_1 < \cdots < j_l \leq n$ such that $|\zeta_{j_{k+1}} -\zeta_{j_k}| = 1$ and $\zeta_{j_l}= \pm \frac{\kappa_2}{2}$. Then $|\zeta_{r_l}-\zeta_{r_l+1}| \neq 1$, for otherwise there would be an increasing sequence $r_l=j_{-1}< j_0<j_1 < \cdots < j_l \leq n$ with the same properties, and $\zeta_{r_l}$ would be fixed. So $\phi_{r_l} v$ is a nonzero weight vector of weight $\zeta^{(1)}=s_{r_l}\zeta$.\\ Case 2. If $\zeta_{r_l+1}<-\frac{|\kappa_2|}{2}$, then $|\zeta_{r_l}-\zeta_{r_l+1}|>1$ and hence $\phi_{r_l} v$ is a nonzero weight vector of weight $\zeta^{(1)}=s_{r_l}\zeta$.\\ Then we consider $\zeta^{(1)}_{r_l+1}$ and we are in the same situation. Hence we repeat this process $(n-r_l)$ times and obtain a nonzero weight vector $(\phi_{n-1} \cdots \phi_{r_l+1} \phi_{r_l})v$ of weight $$\zeta^{(n-r_l)}=(s_{n-1}\cdots s_{r_l+1}s_{r_l}) \zeta.$$\\ Next, we deal with the second rightmost coordinate $\zeta_{r_{l-1}}=\zeta^{(n-r_l)}_{r_{l-1}}$ in the collection above and repeat the process above $(n-1-r_{l-1})$ times. We obtain a nonzero weight vector $$(\phi_{n-2} \cdots \phi_{r_{l-1}+1} \phi_{r_{l-1}})(\phi_{n-1} \cdots \phi_{r_l+1} \phi_{r_l})v$$ of weight $$\zeta^{(2n-1-r_{l-1} -r_l)}=(s_{n-2}\cdots s_{r_{l-1}+1}s_{r_{l-1}})(s_{n-1} \cdots s_{r_l+1}s_{r_l})\zeta.$$ Next, we continue to deal with the other coordinates in the collection, in the order $\zeta_{r_{l-2}}, \zeta_{r_{l-3}}, \cdots, \zeta_{r_1}$, repeating the process $(n-l+k-r_k)$ times for the coordinate $\zeta_{r_k}$, $k=1, \cdots, l$. We obtain a nonzero weight vector $$(\phi_{n-l} \cdots \phi_{r_1+1}\phi_{r_1})(\phi_{n-l+1} \cdots \phi_{r_2+1}\phi_{r_2}) \cdots (\phi_{n-1} \cdots \phi_{r_l+1}\phi_{r_l})v$$ of weight $$\zeta^{(ln-l(l-1)/2-r_1-r_2-\cdots-r_l)}=(s_{n-l} \cdots s_{r_1+1}s_{r_1})(s_{n-l+1} \cdots s_{r_2+1}s_{r_2})\cdots (s_{n-1} \cdots s_{r_l+1}s_{r_l})\zeta.$$ The weight $\zeta^{(ln-l(l-1)/2-r_1-r_2-\cdots-r_l)}$ satisfies $$\zeta^{(ln-l(l-1)/2-r_1-r_2-\cdots-r_l)}_i > \frac{|\kappa_2|}{2}$$ for $i=n-l+1, \cdots, n$. Moreover, for $i=1,\cdots,n-l$, either $$\zeta^{(ln-l(l-1)/2-r_1-r_2-\cdots-r_l)}_i<0$$ or the coordinate $\zeta^{(ln-l(l-1)/2-r_1-r_2-\cdots-r_l)}_i$ is fixed. Combining Lemma 10.2, we obtain a weight $$\zeta'=\gamma_n (s_{n-1} \gamma_n) \cdots (s_{n-l+1}\cdots s_{n-1}\gamma_n)\, \zeta^{(ln-l(l-1)/2-r_1-r_2-\cdots-r_l)}$$ such that $L_{\zeta'} \neq 0$ and $\zeta'$ satisfies the condition: if $\zeta'_i >0$, then $\zeta'_i$ is fixed, for any $i=1, \cdots, n$.
\end{proof} \begin{rmk} Lemma 10.2 and Proposition 10.4 indicate that for any weight $\zeta$ such that $L_{\zeta} \neq 0$ and any nonzero $v \in L_{\zeta}$, there is a nonzero weight vector $\phi_{w} v \in L_{\zeta'}$ such that $\zeta'$ satisfies the condition in Proposition 10.4. \end{rmk} \begin{eg} Let $\zeta=[-2,2,\textcolor{blue}{4},\textcolor{blue}{5},\textcolor{blue}{6},-3,1]$ and let $v \in L$ be a nonzero weight vector of weight $\zeta$. Locate the collection of all the coordinates which are positive and not fixed: $\{\zeta_3=4, \zeta_4=5, \zeta_5=6\}$, i.e. there are three such coordinates, with $r_1=3$, $r_2=4$ and $r_3=5$. We deal with these coordinates from right to left. First, we deal with the rightmost coordinate $\zeta_5=6$ in this collection and apply the process $(n-r_3)=2$ times. We obtain a nonzero weight vector $$(\phi_{n-1} \cdots\phi_{r_3})v=(\phi_6 \phi_5)v$$ of weight $$\zeta^{(n-r_3)}=\zeta^{(2)}=(s_6s_5)\zeta=[-2,2,\textcolor{blue}{4},\textcolor{blue}{5},-3,1,\textcolor{blue}{6}].$$ Then we work on the coordinate $\zeta_4=\zeta^{(2)}_4=5$ and apply the process $(n-1-r_2)=2$ times. We obtain a nonzero weight vector $$(\phi_{n-2}\cdots \phi_{r_2})(\phi_{n-1} \cdots \phi_{r_3})v=(\phi_5\phi_4)(\phi_6\phi_5)v$$ of weight $$\zeta^{(2n-1-r_2-r_3)}=\zeta^{(4)}=(s_5s_4)\zeta^{(2)}=(s_5s_4)(s_6s_5)\zeta=[-2,2,\textcolor{blue}{4},-3,1,\textcolor{blue}{5},\textcolor{blue}{6}].$$ Finally, we deal with the coordinate $\zeta_3=\zeta^{(4)}_3=4$ and apply the process $(n-2-r_1)=2$ times. We obtain a nonzero weight vector $$(\phi_{n-3}\cdots\phi_{r_1})(\phi_{n-2} \cdots \phi_{r_2})(\phi_{n-1}\cdots \phi_{r_3})v=(\phi_4\phi_3)(\phi_5\phi_4)(\phi_6\phi_5)v$$ of weight $$\zeta^{(3n-3-r_1-r_2-r_3)}=\zeta^{(6)}=(s_4s_3)\zeta^{(4)}=[-2,2,-3,1,\textcolor{blue}{4},\textcolor{blue}{5},\textcolor{blue}{6}].$$ Now the weight $\zeta^{(6)}$ satisfies the condition in Lemma 10.2 with $\zeta^{(6)}_i>\frac{|\kappa_2|}{2}$ for $i=5,6,7$. Moreover, for each $i=1,\cdots, 4$, either $\zeta^{(6)}_i<0$ or $\zeta^{(6)}_i$ is fixed.\\ Applying Lemma 10.2, we obtain a nonzero weight vector $$\phi_7(\phi_6\phi_7)(\phi_5\phi_6\phi_7)(\phi_4\phi_3)(\phi_5\phi_4)(\phi_6\phi_5)v$$ of weight $$\zeta'=\gamma_7(s_6 \gamma_7)(s_5s_6\gamma_7) \zeta^{(6)}=[-2,2,-3,1,\textcolor{red}{-6},\textcolor{red}{-5},\textcolor{red}{-4}].$$ \end{eg} \begin{eg} Let $\zeta = [0,4,-1,6,-2,5,1]$ and let $v \in L$ be a nonzero weight vector of weight $\zeta$. There are three coordinates $\zeta_2 =4$, $\zeta_4 =6$ and $\zeta_6=5$ satisfying the condition that, for $i=2,4,6$, there is no increasing sequence $i<i_1 < \cdots < i_l \leq n$ such that $|\zeta_{i_{k+1}} -\zeta_{i_k}| = 1$ and $\zeta_{i_l}= \pm \frac{\kappa_2}{2}$.
Starting with the coordinate with maximal index $i=6$ and applying the intertwining operators, it follows that \begin{center} \begin{tikzpicture}[scale=0.4] \draw (0,0) node {\tiny{$[0,\textcolor{blue}{4},-1,\textcolor{blue}{6},-2,\textcolor{blue}{5},1]$}}; \draw (10,0) node {\tiny{$[0,\textcolor{blue}{4},-1,\textcolor{blue}{6},-2,1,\textcolor{blue}{5}]$}}; \draw (20,0) node {\tiny{$[0,\textcolor{blue}{4},-1,-2,1,\textcolor{blue}{6},\textcolor{blue}{5}]$}}; \draw (30,0) node {\tiny{$[0,-1,-2,1,\textcolor{blue}{4},\textcolor{blue}{6},\textcolor{blue}{5}]$}}; \draw [->] (3.5,0)--(6.5,0); \draw [->] (13.5,0)--(16.5,0); \draw [->] (23.5,0)--(26.5,0); \draw (5,0.5) node {\tiny{$s_6$}}; \draw (15,0.5) node {\tiny{$s_5 s_4$}}; \draw (25,0.5) node {\tiny{$s_4 s_3 s_2$}}; \end{tikzpicture} \end{center} and by Lemma 10.2 \begin{center} \begin{tikzpicture}[scale=0.4] \draw (0,0) node {\tiny{$[0,-1,-2,1,\textcolor{blue}{4},\textcolor{blue}{6},\textcolor{blue}{5}]$}}; \draw (10,0) node {\tiny{$[0,-1,-2,1,\textcolor{red}{-5},\textcolor{blue}{4},\textcolor{blue}{6}]$}}; \draw (20.5,0) node {\tiny{$[0,-1,-2,1,\textcolor{red}{-5},\textcolor{red}{-6},\textcolor{blue}{4}]$}}; \draw (31,0) node {\tiny{$[0,-1,-2,1,\textcolor{red}{-5},\textcolor{red}{-6},\textcolor{red}{-4}]$}}; \draw [->] (3.5,0)--(6.5,0); \draw [->] (13.5,0)--(16.5,0); \draw [->] (24.5,0)--(27,0); \draw (5,0.5) node {\tiny{$s_5 s_6 \gamma_7$}}; \draw (15,0.5) node {\tiny{$s_6 \gamma_7$}}; \draw (26,0.5) node {\tiny{$\gamma_7$}}; \end{tikzpicture} \end{center} Let $\zeta' = [0,-1,-2,1,\textcolor{red}{-5},\textcolor{red}{-6},\textcolor{red}{-4}]$. Then there is a nonzero weight vector $$\phi_7 (\phi_6 \phi_7)(\phi_5 \phi_6 \phi_7)(\phi_4 \phi_3 \phi_2)(\phi_5 \phi_4)\phi_6 v \in L_{\zeta'}.$$ \end{eg} \begin{rmk} For any minimal weight $\zeta$ of $F=F_{n,p,\mu}(V^{\xi})$ such that $F_{\zeta} \neq 0$, let $T_{\zeta}$ be the corresponding standard tableau. Then $Im(T_{\zeta})$ is the minimal shape $\varphi^{\xi}_{n,p,\mu}$ of $F_{n,p,\mu}(V^{\xi})$. \end{rmk} Before introducing the third property of $F_{n,p,\mu}(V^{\xi})$, we need the following definition and lemma. \begin{definition} Let $\zeta=[\zeta_1, \cdots, \zeta_n]$ be a weight. If a coordinate $\zeta_i$, $i=1,2,\cdots,n$, satisfies the condition that there is no $i<k \leq n$ such that $\zeta_k=\zeta_i \pm 1$, then we call the coordinate $\zeta_i$ a corner of $\zeta$. \end{definition} \begin{rmk} Let $\zeta=[\zeta_1,\cdots, \zeta_n]$ and let $T_{\zeta}$ be the corresponding standard tableau. For $i=1,\cdots, n$, $\zeta_i$ is a corner of $\zeta$ if and only if $T_{\zeta}(i)$ is a southeastern corner of $Im(T_{\zeta})$. \end{rmk} \begin{eg} Let $\zeta=[0,-1,\textcolor{brown}{-2},\textcolor{brown}{1},-5,\textcolor{brown}{-6},\textcolor{brown}{-4}]$. Then $\zeta_3=-2$, $\zeta_4=1$, $\zeta_6=-6$ and $\zeta_7=-4$ are corners of $\zeta$. The corresponding standard tableau $T_{\zeta}$ has southeastern corners $3,4,6$ and $7$.
\begin{center} \begin{tikzpicture}[scale=0.5] \draw (2,1)--(2,-1)--(3,-1)--(3,0)--(5,0)--(5,1)--(6,1)--(6,2)--(7,2)--(7,3)--(5,3)--(5,1)--(2,1); \draw [dotted] (2,1) grid (3,-1); \draw [dotted] (3,1) grid (5,0); \draw [dotted] (5,3) grid (6,1); \draw [dotted] (6,3) grid (7,2); \draw (2.5,0.5) node {$1$}; \draw [dotted] (1,2)--(4,-1); \draw (4.5,-1) node {\tiny{$0$}}; \draw[brown] (2.5,-0.5) node {$4$}; \draw (3.5,0.5) node {$2$}; \draw[brown] (4.5,0.5) node {$3$}; \draw [brown] (5.5,1.5) node {$7$}; \draw (5.5,2.5) node {$5$}; \draw [brown] (6.5,2.5) node {$6$}; \end{tikzpicture} \end{center} \end{eg} \begin{lemma} Let $L$ be an irreducible and $\mathcal{Y}$-semisimple representation of $H_n(1, \kappa_2)$ satisfying Property 1. Let $\zeta$ be a minimal weight of $L$ such that $L_{\zeta} \neq 0$. For $i=1, \cdots, n$, if the coordinate $\zeta_i$ is a corner of $\zeta$, then $\zeta_i= \pm \frac{\kappa_2}{2}$ or $\zeta_i < -\frac{|\kappa_2|}{2}$. \end{lemma} \begin{proof} First, since $L$ satisfies Property 1, if $|\zeta_i|<\frac{|\kappa_2|}{2}$, then $\zeta_i$ is fixed, i.e. there is an increasing sequence $i=i_0 < i_1< \cdots <i_m \leq n$ such that $|\zeta_{i_k} -\zeta_{i_{k+1}}|=1$ for $k=0, \cdots, m-1$ and $\zeta_{i_m}= \pm \frac{\kappa_2}{2}$. This contradicts the fact that $\zeta_i$ is a corner of $\zeta$.\\ Second, suppose $\zeta_i> \frac{|\kappa_2|}{2}$. Since $\zeta$ is a minimal weight, $\zeta_i$ is fixed, which again contradicts the fact that $\zeta_i$ is a corner. \end{proof} Now we introduce the third property of $F_{n,p,\mu}(V^{\xi})$. \begin{property} Let $\zeta$ be a minimal weight such that $F_{\zeta} \neq 0$. If $\zeta_k$ is the rightmost coordinate equal to $\frac{|\kappa_2|}{2}$ and $\zeta_r$ is the rightmost coordinate equal to $-\frac{|\kappa_2|}{2}$, then at least one of these two coordinates is not a corner. \end{property} \begin{proof} Let $T_{\zeta}$ be the standard tableau corresponding to the weight $\zeta$. Since $\zeta$ is a minimal weight, the shape $Im(T_{\zeta})$ is the minimal shape $\varphi = \varphi_{n,p,\mu}^{\xi}$. So it suffices to show that it is impossible for $T_{\zeta}$ to have $T_{\zeta}(k)$ and $T_{\zeta}(r)$ at southeastern corners simultaneously; equivalently, it is impossible for $\varphi$ to have a southeastern corner on the diagonal of eigenvalue $\frac{\kappa_2}{2}$ and a southeastern corner on the diagonal of eigenvalue $-\frac{\kappa_2}{2}$ simultaneously. Assume $p \leq q$, and let $$a=\mu q+ \dfrac{|\xi|+n}{N}$$ and $$b=-\mu p+ \dfrac{|\xi|+n }{N}.$$ Suppose $\varphi$ simultaneously has southeastern corners on both diagonals; then $p<q$ and $a>b$ follow. In this case, $\varphi$ has the cell $(p,a)$ on the diagonal of eigenvalue $-\frac{|\kappa_2|}{2}$ and the cell $(q,b)$ on the diagonal of eigenvalue $\frac{|\kappa_2|}{2}$. Furthermore, the fact that the cell $(p,a)$ is a southeastern corner indicates $\xi^{(2)}_1=\xi_{q+1}=b$. The fact that the cell $(q,b) \in \varphi$ indicates $\xi^{(1)}_q=\xi_q < b$. This contradicts $\xi \in P^+_{\geq 0}$. \end{proof} \subsection{Combinatorial description of irreducible representations in $\mathcal{M}$} Let $\mathcal{M}(H_n(1,\kappa_2))$ be the collection of $\mathcal{Y}$-semisimple representations of $H_n(1, \kappa_2)$ satisfying Properties 1-3. In this subsection, we show that any irreducible representation in $\mathcal{M}(H_n(1, \kappa_2))$ is isomorphic to the image $F_{n, p,\mu}(V^{\xi})$ for a tuple $(n,p,\mu)$ and some $\xi \in P^+_{\geq 0}$.
Let $L \in \mathcal{M}(H_n(1, \kappa_2))$ be irreducible and let $\zeta$ be a minimal weight such that $L_{\zeta} \neq 0$. Recall that if $\zeta_i \geq 0$, then there is an increasing sequence $k_1< \cdots < k_m$ such that $\zeta_{k_{j+1}} = \zeta_{k_j} \pm 1$ and $\zeta_{k_m} = \pm \frac{\kappa_2}{2}$. The weight $\zeta$ gives a standard tableau $T_{\zeta}$ such that $\zeta_k = -cont_{T_{\zeta}}(k)+s$ for some fixed number $s$ such that $s-\frac{\kappa_2}{2}$ is an integer. Let $Im(T_{\zeta}) = \nu / \beta$ with $\beta_1<\nu_1$ and $\beta_{\ell(\nu)}<\nu_{\ell(\nu)}$. Let us explore the different cases, depending on the corners. According to Lemma 10.11, if $\zeta_i$ is a corner of $\zeta$, for some $i=1,\cdots,n$, then $\zeta_i=\pm \frac{\kappa_2}{2}$ or $\zeta_i < -\frac{|\kappa_2|}{2}$. For any minimal weight $\zeta$, there is at least one corner of $\zeta$. Let the coordinate $\zeta_{r_1}$ be the corner of $\zeta$ such that $\mathfrak{i}(r_1)$ is maximal in $\{\mathfrak{i}(i)|\zeta_{i} \text{ is a corner of } \zeta\}$, and let the coordinate $\zeta_{r_2}$ be the corner of $\zeta$ such that $\mathfrak{i}(r_2)$ is the second largest number in $\{\mathfrak{i}(i)|\zeta_{i} \text{ is a corner of } \zeta\}$, if it exists. It is obvious that $\zeta_{r_2}<\zeta_{r_1}$. There are the following cases. If $\zeta_{r_1}=\frac{|\kappa_2|}{2}$, then $\zeta_{r_2}<-\frac{|\kappa_2|}{2}$ or $\zeta_{r_2}$ does not exist: by Lemma 10.11, if $\zeta_{r_1}=\frac{|\kappa_2|}{2}$ and $\zeta_{r_2}=-\frac{|\kappa_2|}{2}$, then $\zeta$ violates Property 3. When $\zeta_{r_1}=-\frac{|\kappa_2|}{2}$, either $\zeta_{r_2}<-\frac{|\kappa_2|}{2}$ or there is no $\zeta_{r_2}$. When $\zeta_{r_1}<-\frac{|\kappa_2|}{2}$, either $\zeta_{r_2}<-\frac{|\kappa_2|}{2}$ or $\zeta_{r_2}$ does not exist. So let us discuss five cases.\\ \textbf{Case 1.} The corner $\zeta_{r_1}=\frac{|\kappa_2|}{2}$ and the corner $\zeta_{r_2}<-\frac{|\kappa_2|}{2}$.\\ Denote $T_{\zeta}(r_1)=(i_1,j_1)$ and $T_{\zeta}(r_2)=(i_2,\nu_{i_2})$. Let $j_2=i_2+s+\frac{|\kappa_2|}{2}$. In this case, set two rectangles $$(a^p)=((\nu_1-j_1)^{i_2})$$ and $$(b^q)=((\nu_1-j_2)^{i_1}).$$ \begin{claim} In the setting above, the number $\nu_{i_2} - j_1-j_2 \geq 0$. \end{claim} \begin{proof} Since $\zeta_{r_2}$ is a corner, there exists a weight $\tilde{\zeta}$ such that $L_{\tilde{\zeta}} \neq 0$, $Im(T_{\tilde{\zeta}})=Im(T_{\zeta})$ and $T_{\tilde{\zeta}}(n)=(i_2, \nu_{i_2})$, where $T_{\tilde{\zeta}}$ denotes the standard tableau given by the weight $\tilde{\zeta}$. Let $v$ be a nonzero weight vector of weight $\tilde{\zeta}$. Since $\tilde{\zeta}_n \neq \pm \frac{\kappa_2}{2}$, it follows that $\phi_n v$ is a nonzero weight vector of weight $\gamma_n \tilde{\zeta}$. Moreover, the standard tableau $T_{\gamma_n \tilde{\zeta}}$ given by $\gamma_n \tilde{\zeta}$ satisfies $$Im(T_{\gamma_n \tilde{\zeta}}) = Im(T_{\zeta}) \setminus \{(i_2, \nu_{i_2})\} \cup \{(i_1+1, j_1+j_2-\nu_{i_2}+1)\},$$ since $(\gamma_n \tilde{\zeta})_n=-\tilde{\zeta}_n$. Lemma 10.1 implies that $T_{\gamma_n \tilde{\zeta}}$ is a standard tableau and hence $Im(T_{\gamma_n \tilde{\zeta}})$ is a skew shape. This forces $j_1+j_2-\nu_{i_2}+1 \leq 1$ and thus $$\nu_{i_2}-j_1-j_2 \geq 0.$$ \end{proof} Set $\xi^{(1)}=(\xi^{(1)}_1,\cdots, \xi^{(1)}_{i_1})$ with $$\xi^{(1)}_k= \beta_k + \nu_1-j_1-j_2,$$ for $k=1, \cdots, i_1$, and $\xi^{(2)}=(\xi^{(2)}_1, \cdots, \xi^{(2)}_{i_2})$ with $$\xi^{(2)}_k=\nu_1-\nu_{i_2-k+1},$$ for $k=1, \cdots, i_2$.
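For instance, anticipating the data of the example below ($\nu=(5,4,3,1)$, $\beta=(3,3)$, $j_1=1$, $j_2=2$, $i_1=4$, $i_2=3$, $\nu_1=5$), these formulas give $\xi^{(1)}_k=\beta_k+5-1-2=\beta_k+2$, i.e. $\xi^{(1)}=(5,5,2,2)$, and $\xi^{(2)}=(5-\nu_3,5-\nu_2,5-\nu_1)=(2,1,0)$.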
Furthermore, set $\xi=(\xi_1, \cdots, \xi_{i_1+i_2})$ with $$\xi_k=\xi^{(1)}_k,$$ for $k=1, \cdots, i_1$, and $$\xi_{k}=\xi^{(2)}_{k-i_1},$$ for $k=i_1+1, \cdots, i_1+i_2$. \begin{rmk} Claim 10.13 implies the following two facts. \begin{enumerate} \item It follows that $\nu_1-j_1-j_2 \geq 0$. \item The inequality $\nu_1-\nu_{i_2}=\xi^{(2)}_1 \leq \xi^{(1)}_{i_1}=\beta_{i_1}+\nu_1-j_1-j_2$ holds, and hence $\xi$ is a well-defined Young diagram. \end{enumerate} \end{rmk} \begin{eg} We continue Example 10.7. For an irreducible representation $L$ in $\mathcal{M}(H_7(1,-2))$, we start with a minimal weight $\zeta=[0,-1,\textcolor{brown}{-2},\textcolor{brown}{1},-5,\textcolor{brown}{-6},\textcolor{brown}{-4}]$ and the standard tableau of $\zeta$, shown below. The corners of $\zeta$ are $\zeta_3=-2$, $\zeta_4=1$, $\zeta_6=-6$ and $\zeta_7=-4$. Furthermore, $\zeta_{r_1}=\zeta_4=1$ and $\zeta_{r_2}=\zeta_3=-2$. \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-8,4)}] \draw [blue] (2,1)--(5,1)--(5,3); \draw [dotted] (2,1) grid (3,-1); \draw [dotted] (3,1) grid (5,0); \draw [dotted] (5,3) grid (6,1); \draw [dotted] (6,3) grid (7,2); \draw [draw=none, fill=brown!20] (2,0) rectangle (3,-1); \draw [draw=none, fill=brown!20] (4,1) rectangle (5,0); \draw [draw=none, fill=brown!20] (5,2) rectangle (6,1); \draw [draw=none, fill=brown!20] (6,3) rectangle (7,2); \draw (2.5,0.5) node {$1$}; \draw (2.5,-0.5) node {$4$}; \draw [blue] (1.5,-0.5) node {\tiny{$i_1$}}; \draw [blue] (2.5,3.5) node {\tiny{$j_1$}}; \draw [blue] (1.5,0.5) node {\tiny{$i_2$}}; \draw [blue] (3.5,3.5) node {\tiny{$j_2$}}; \draw (3.5,0.5) node {$2$}; \draw (4.5,0.5) node {$3$}; \draw (5.5,1.5) node {$7$}; \draw (5.5,2.5) node {$5$}; \draw (6.5,2.5) node {$6$}; \draw [draw=blue, fill=gray!50] (2,3) rectangle (5,1); \draw [orange!50] (3,1)--(5,-1); \draw (5.3,-1) node {\tiny{$-1$}}; \draw [orange!50] (2,0)--(4,-2); \draw (4.2,-2) node {\tiny{$1$}}; \draw [dotted] (2,3) grid (5,1); \draw [orange!50] (4,1)--(6,-1); \draw (6.3,-1) node {\tiny{$-2$}}; \draw [blue] (2,3)--(2,-1)--(3,-1)--(3,0)--(5,0)--(5,1)--(6,1)--(6,2)--(7,2)--(7,3)--(2,3); \end{scope} \begin{scope}[scale=0.5, shift={(2,6)}] \draw [scale=0.5] (5,3) node {\tiny{$s=-2$}}; \draw [scale=0.5] (5,1.5) node {\tiny{$\nu=(5,4,3,1)$}}; \draw [scale=0.5] (5,0) node {\tiny{$\beta=(3,3)$}}; \draw [scale=0.5] (5,-2) node {\tiny{$\nu_1=5$}}; \draw [scale=0.5] (5,-3.5) node {\tiny{$i_1=4$, $j_1=1$}}; \draw [scale=0.5] (5,-5) node {\tiny{$i_2=3$, $j_2=2$}}; \end{scope} \end{tikzpicture} \end{center} Place the southeastern corner of $((\nu_1-j_2)^{i_1})$ at the cell $(i_1,j_1)$ and the northeastern corner of $((\nu_1-j_1)^{i_2})$ at the cell $(1,\nu_1)$. The gray part on the left forms $\xi^{(1)}$ and the gray part on the right forms \rotatebox[origin=c]{180}{$\xi^{(2)}$}.
\begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-10,-4)}] \draw [dotted] (2,1) grid (3,-1); \draw [dotted] (3,1) grid (5,0); \draw [dotted] (5,3) grid (6,1); \draw [dotted] (6,3) grid (7,2); \draw [draw=none, fill=yellow!20] (2,0) rectangle (3,-1); \draw [draw=none, fill=yellow!20] (6,3) rectangle (7,2); \draw [draw=none, fill=gray!50] (5,1) rectangle (6,0); \draw [draw=none, fill=gray!50] (6,2) rectangle (7,0); \draw [draw=none, fill=gray!50] (0,3) rectangle (2,-1); \draw [draw=blue, fill=gray!50] (2,3) rectangle (5,1); \draw [blue] (2,3)--(2,-1)--(3,-1)--(3,0)--(5,0)--(5,1)--(6,1)--(6,2)--(7,2)--(7,3)--(2,3); \draw [red,thick] (0,3) rectangle (3,-1); \draw [red,thick] (3,3) rectangle (7,0); \draw (2.5,0.5) node {$1$}; \draw (2.5,-0.5) node {$4$}; \draw [blue] (-0.5,-0.5) node {\tiny{$i_1$}}; \draw [blue] (2.5,3.5) node {\tiny{$j_1$}}; \draw [blue] (-0.5,2.5) node {\tiny{$1$}}; \draw [blue] (6.5,3.5) node {\tiny{$\nu_1$}}; \draw (3.5,0.5) node {$2$}; \draw (4.5,0.5) node {$3$}; \draw (5.5,1.5) node {$7$}; \draw (5.5,2.5) node {$5$}; \draw (6.5,2.5) node {$6$}; \draw [orange!50] (3,1)--(5,-1); \draw (5.3,-1) node {\tiny{$-1$}}; \draw [orange!50] (2,0)--(4,-2); \draw (4.2,-2) node {\tiny{$1$}}; \draw [dotted] (2,3) grid (5,1); \draw [orange!50] (4,1)--(6,-1); \draw (6.3,-1) node {\tiny{$-2$}}; \end{scope} \begin{scope}[scale=0.3, shift={(2,-5)}] \draw (5,3) node {\tiny{$(a^p)=(4^3)$}}; \draw (5,1.5) node {\tiny{$(b^q)=(3^4)$}}; \draw (5,0) node {\tiny{$\xi^{(1)}=(5,5,2,2)$}}; \draw (5,-1.5) node {\tiny{$\xi^{(2)}=(2,1,0)$}}; \end{scope} \end{tikzpicture} \end{center} Furthermore, we obtain the other parameters of the Etingof-Freund-Ma functor as $N=p+q=7$, $p=3$ and $\mu=\frac{a-b}{N}=\frac{1}{7}$. \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(2,-12)}] \draw [blue] (0,3)--(0,-1)--(2,-1)--(2,1)--(5,1)--(5,3)--(0,3); \draw [blue] (0,-1)--(0,-3)--(1,-3)--(1,-2)--(2,-2)--(2,-1); \draw (2,4) node {\tiny{$\xi=(5,5,2,2,2,1,0)$}}; \draw (2,2) node {\tiny{$\xi^{(1)}$}}; \draw (1,-1.7) node {\tiny{$\xi^{(2)}$}}; \draw [dotted] (0,3) grid (5,1); \draw [dotted] (0,1) grid (2,-2); \end{scope} \begin{scope}[scale=0.5, shift={(-10,-12)}] \draw [draw=none, fill=gray!30] (0,3) rectangle (5,1); \draw [draw=none, fill=gray!30] (0,1) rectangle (2,-1); \draw (2.8,2) node {\tiny{$\xi^{(1)}$}}; \draw [draw=none, fill=gray!30] (5,1) rectangle (7,0); \draw [draw=none, fill=gray!30] (6,2) rectangle (7,1); \draw (6.2,0.5) node {\tiny{\rotatebox{180}{$\xi^{(2)}$}}}; \draw [red] (0,3) rectangle (3,-1); \draw [red] (3,3) rectangle (7,0); \draw [dotted] (0,3) grid (3,-1); \draw [dotted] (3,3) grid (7,0); \draw (1.5,3.3) node {\tiny{$b$}}; \draw (5,3.3) node {\tiny{$a$}}; \draw (7,3)--(7,3.5); \draw (0,3)--(0,3.5); \draw (3,3)--(3,3.5); \draw[->] (1.2,3.3)--(0,3.3); \draw[->] (1.8,3.3)--(3,3.3); \draw[->] (4.7,3.3)--(3,3.3); \draw[->] (5.3,3.3)--(7,3.3); \draw (-0.3,1) node {\tiny{$q$}}; \draw (7.3,1.5) node {\tiny{$p$}}; \draw (0,3)--(-0.5,3); \draw (0,-1)--(-0.5,-1); \draw (7,0)--(7.5,0); \draw (7,3)--(7.5,3); \draw[->] (-0.3,1.3)--(-0.3,3); \draw[->] (-0.3,0.7)--(-0.3,-1); \draw[->] (7.3,1.8)--(7.3,3); \draw[->] (7.3,1.2)--(7.3,0); \end{scope} \end{tikzpicture} \end{center} \end{eg}
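\begin{rmk} As a quick consistency check on the example above: $j_2=i_2+s+\frac{|\kappa_2|}{2}=3-2+1=2$, in agreement with the figure, and the recovered parameters satisfy $\kappa_2=p-q-\mu N=3-4-\frac{1}{7}\cdot 7=-2$, the parameter of $H_7(1,-2)$ we started with. \end{rmk}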
\textbf{Case 2.} The corner $\zeta_{r_1}=-\frac{|\kappa_2|}{2}$ and the corner $\zeta_{r_2}<-\frac{|\kappa_2|}{2}$.\\ Denote $T_{\zeta}(r_1)=(i_1,j_1)$ and $T_{\zeta}(r_2)=(i_2,\nu_{i_2})$. Let $j_2=i_2+s-\frac{|\kappa_2|}{2}$. In this case, set two rectangles $$(a^p)=((\nu_1-j_1)^{i_2})$$ and $$(b^q)=((\nu_1-j_2)^{i_1}).$$ We have a claim similar to that in Case 1. \begin{claim} In the setting above, the number $\nu_{i_2} - j_1-j_2 \geq 0$. \end{claim} The proof is the same as that in Case 1.\\ Similarly, let $\xi^{(1)}=(\xi^{(1)}_1,\cdots, \xi^{(1)}_{i_1})$ with $$\xi^{(1)}_k= \beta_k + \nu_1-j_1-j_2,$$ for $k=1, \cdots, i_1$, and $\xi^{(2)}=(\xi^{(2)}_1, \cdots, \xi^{(2)}_{i_2})$ with $$\xi^{(2)}_k=\nu_1-\nu_{i_2-k+1},$$ for $k=1, \cdots, i_2$. Furthermore, set $\xi=(\xi_1, \cdots, \xi_{i_1+i_2})$ with $$\xi_k=\xi^{(1)}_k,$$ for $k=1, \cdots, i_1$, and $$\xi_{k}=\xi^{(2)}_{k-i_1},$$ for $k=i_1+1, \cdots, i_1+i_2$.\\ \begin{eg} Let $L$ be an irreducible representation in $\mathcal{M}(H_7(1,-2))$ with a minimal weight $\zeta=[-1,1,0,-2,\textcolor{brown}{-1},\textcolor{brown}{-5},\textcolor{brown}{-3}]$, and consider the standard tableau of $\zeta$, shown below. The corners of $\zeta$ are $\zeta_5=-1$, $\zeta_6=-5$ and $\zeta_7=-3$. Furthermore, $\zeta_{r_1}=\zeta_5=-1$ and $\zeta_{r_2}=\zeta_7=-3$. \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-8,4)}] \draw [blue] (6,3)--(6,2)--(3,2)--(3,1)--(2,1); \draw [dotted] (2,1) grid (5,0); \draw [dotted] (3,2) grid (6,1); \draw [draw=none, fill=brown!20] (5,2) rectangle (6,1); \draw [draw=none, fill=brown!20] (6,3) rectangle (7,2); \draw [orange!50] (5,2)--(7,0); \draw (7.3,0) node {\tiny{$-3$}}; \draw [orange!50] (4,1)--(6,-1); \draw (6.3,-1) node {\tiny{$-1$}}; \draw [orange!50] (-0.2,3.2)--(2,1); \draw (-0.4,3.2) node {\tiny{$1$}}; \draw [draw=none, fill=brown!20] (4,1) rectangle (5,0); \draw (3.5,1.5) node {$1$}; \draw (4.5,1.5) node {$4$}; \draw (2.5,0.5) node {$2$}; \draw (3.5,0.5) node {$3$}; \draw (5.5,1.5) node {$7$}; \draw (4.5,0.5) node {$5$}; \draw [blue] (0.5,0.5) node {\tiny{$i_1$}}; \draw [blue] (4.5,3.5) node {\tiny{$j_1$}}; \draw [dotted] (1,3) grid (2,0); \draw [blue] (0.5,1.5) node {\tiny{$i_2$}}; \draw [blue] (1.5,3.5) node {\tiny{$j_2$}}; \draw (6.5,2.5) node {$6$}; \draw [draw=none, fill=gray!50] (2,3) rectangle (6,2); \draw [draw=none, fill=gray!50] (2,2) rectangle (3,1); \draw [blue] (2,3)--(2,0)--(5,0)--(5,1)--(6,1)--(6,2)--(7,2)--(7,3)--(2,3); \end{scope} \begin{scope}[scale=0.5, shift={(3,6)}] \draw [scale=0.5] (5,3) node {\tiny{$s=-1$}}; \draw [scale=0.5] (5,1.5) node {\tiny{$\nu=(5,4,3)$}}; \draw [scale=0.5] (5,0) node {\tiny{$\beta=(4,1,0)$}}; \draw [scale=0.5] (5,-3.5) node {\tiny{$i_1=3$, $j_1=3$}}; \draw [scale=0.5] (5,-5) node {\tiny{$i_2=2$, $j_2=0$}}; \end{scope} \end{tikzpicture} \end{center} Place the southeastern corner of $(b^q)$ at the cell $(i_1,j_1)$ and the northeastern corner of $(a^p)$ at the cell $(1,\nu_1)$. The gray part on the left forms $\xi^{(1)}$ and the gray part on the right forms \rotatebox[origin=c]{180}{$\xi^{(2)}$}.
\begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-10,-4)}] \draw [blue] (6,3)--(6,2)--(3,2)--(3,1)--(2,1); \draw [dotted] (2,1) grid (5,0); \draw [dotted] (3,2) grid (6,1); \draw [draw=none, fill=yellow!20] (6,3) rectangle (7,2); \draw [draw=none, fill=yellow!20] (4,1) rectangle (5,0); \draw (3.5,1.5) node {$1$}; \draw (4.5,1.5) node {$4$}; \draw (2.5,0.5) node {$2$}; \draw (3.5,0.5) node {$3$}; \draw (5.5,1.5) node {$7$}; \draw (4.5,0.5) node {$5$}; \draw (6.5,2.5) node {$6$}; \draw [blue] (-0.5,0.5) node {\tiny{$i_1$}}; \draw [blue] (4.5,3.5) node {\tiny{$j_1$}}; \draw [blue] (-0.5,2.5) node {\tiny{$1$}}; \draw [blue] (6.5,3.5) node {\tiny{$\nu_1$}}; \draw [draw=none, fill=gray!50] (2,3) rectangle (6,2); \draw [draw=none, fill=gray!50] (2,2) rectangle (3,1); \draw [draw=none, fill=gray!50] (0,3) rectangle (2,0); \draw [draw=none, fill=gray!50] (6,2) rectangle (7,1); \draw [orange!50] (5,2)--(7,0); \draw (7.3,0) node {\tiny{$-3$}}; \draw [orange!50] (4,1)--(6,-1); \draw (6.3,-1) node {\tiny{$-1$}}; \draw [orange!50] (-0.5,3.5)--(2,1); \draw (-0.7,3.5) node {\tiny{$1$}}; \draw [dotted] (0,3) grid (2,0); \draw [dotted] (2,3) grid (5,2); \draw [blue] (2,3)--(2,0)--(5,0)--(5,1)--(6,1)--(6,2)--(7,2)--(7,3)--(2,3); \draw [red,thick] (0,3) rectangle (5,0); \draw [red,thick] (5,3) rectangle (7,1); \end{scope} \begin{scope}[scale=0.3, shift={(2,-5)}] \draw (5,3) node {\tiny{$(a^p)=(2^2)$}}; \draw (5,1.5) node {\tiny{$(b^q)=(5^3)$}}; \draw (5,0) node {\tiny{$\xi^{(1)}=(6,3,2)$}}; \draw (5,-1.5) node {\tiny{$\xi^{(2)}=(1,0)$}}; \end{scope} \end{tikzpicture} \end{center} Furthermore, we obtain the other parameters of the Etingof-Freund-Ma functor as $N=q+p=5$, $q=3$ and $\mu=\frac{b-a}{N}=\frac{3}{5}$. \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(2,-12)}] \draw [blue] (0,3)--(6,3)--(6,2)--(3,2)--(3,1)--(2,1)--(2,0)--(0,0)--(0,3); \draw [blue] (0,0)--(0,-1)--(1,-1)--(1,0); \draw (3.5,3.5) node {\tiny{$\xi=(6,3,2,1,0)$}}; \draw (1.5,2) node {\tiny{$\xi^{(1)}$}}; \draw (0.5,-0.6) node {\tiny{$\xi^{(2)}$}}; \draw [dotted] (0,3) grid (6,2); \draw [dotted] (0,2) grid (2,0); \end{scope} \begin{scope}[scale=0.5, shift={(-10,-12)}] \draw [draw=none,fill=gray!30] (0,3) rectangle (6,2); \draw [draw=none,fill=gray!30] (0,2) rectangle (3,1); \draw [draw=none,fill=gray!30] (0,1) rectangle (2,0); \draw [draw=none,fill=gray!30] (6,2) rectangle (7,1); \draw [red] (0,3) rectangle (5,0); \draw [red] (5,3) rectangle (7,1); \draw [dotted] (0,3) grid (5,0); \draw [dotted] (5,3) grid (7,1); \draw (2.5,3.3) node {\tiny{$b$}}; \draw (6,3.3) node {\tiny{$a$}}; \draw (7,3)--(7,3.5); \draw (0,3)--(0,3.5); \draw (5,3)--(5,3.5); \draw[->] (2.2,3.3)--(0,3.3); \draw[->] (2.8,3.3)--(5,3.3); \draw[->] (5.7,3.3)--(5,3.3); \draw[->] (6.3,3.3)--(7,3.3); \draw (-0.3,1.5) node {\tiny{$q$}}; \draw (7.3,2) node {\tiny{$p$}}; \draw (0,3)--(-0.5,3); \draw (0,0)--(-0.5,0); \draw (7,1)--(7.5,1); \draw (7,3)--(7.5,3); \draw[->] (-0.3,1.8)--(-0.3,3); \draw[->] (-0.3,1.2)--(-0.3,0); \draw[->] (7.3,2.3)--(7.3,3); \draw[->] (7.3,1.7)--(7.3,1); \draw (1.5,2) node {\tiny{$\xi^{(1)}$}}; \draw (6.5, 1.5) node {\tiny{\rotatebox{180}{$\xi^{(2)}$}}}; \end{scope} \end{tikzpicture} \end{center} \end{eg}
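\begin{rmk} Again as a quick sanity check on the example above: $j_2=i_2+s-\frac{|\kappa_2|}{2}=2-1-1=0$, in agreement with the figure. \end{rmk}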
Set two rectangles $$(a^p)=(j^1)$$ and $$(b^q)=(\nu_1^{\ell(\nu)+1}).$$ Moreover, $\xi=(\xi_1, \cdots, \xi_{\ell(\nu)+1})$ with $\xi_1=\nu_1+j$ and $\xi_k=\beta_{k-1}$ for $k=2, \cdots, \ell(\nu)+1$.\\ \begin{eg} Let $L$ be an irreducible representation in $\mathcal{M}(H_7(1,-2))$ with a minimal weight $\zeta=[-1,2,1,0,3,2,\textcolor{brown}{1}]$ such that $L_{\zeta} \neq 0$. There is only one corner $\zeta_{7}=1$. So $$\zeta_{r_1}=\zeta_7=1=\frac{|\kappa_2|}{2}.$$ The standard tableau of $\zeta$ is as follows. \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-8,4)}] \draw [blue] (2,3)--(2,2)--(0,2); \draw [draw=none, fill=brown!20] (2,1) rectangle (3,0); \draw [orange!50] (2,1)--(4,-1); \draw (4.2,-1) node {\tiny{$1$}}; \draw [orange!50] (0,5)--(2,3); \draw (-0.2,5) node {\tiny{$-1$}}; \draw (2.5,2.5) node {$1$}; \draw (2.5,1.5) node {$4$}; \draw (0.5,1.5) node {$2$}; \draw (1.5,1.5) node {$3$}; \draw (2.5,0.5) node {$7$}; \draw (0.5,0.5) node {$5$}; \draw (1.5,0.5) node {$6$}; \draw [draw=blue, fill=gray!50] (0,3) rectangle (2,2); \draw [blue] (0,3) rectangle (3,0); \draw [dotted] (0,4) grid (3,0); \draw [blue] (1.5,4.5) node {\tiny{$j$}}; \draw [blue] (-0.5,3.5) node {\tiny{$0$}}; \end{scope} \begin{scope}[scale=0.5, shift={(2,6)}] \draw [scale=0.5] (5,3) node {\tiny{$s=1$}}; \draw [scale=0.5] (5,1.5) node {\tiny{$\nu=(3,3,3)$}}; \draw [scale=0.5] (5,0) node {\tiny{$\beta=(2,0,0)$}}; \draw [scale=0.5] (5,-2) node {\tiny{$\ell(\nu)=3$}}; \draw [scale=0.5] (5,-3.5) node {\tiny{$j=2$}}; \end{scope} \end{tikzpicture} \end{center} The two rectangles are $(a^p)=(2^1)$ and $(b^q)=(3^4)$. Place the southeastern corner of $(b^q)$ at $T_{\zeta}(r_1)=T_{\zeta}(7)$ and the northwestern corner of $(a^p)$ at the cell $(0, \nu_1+1)$. The gray area forms $\xi$.\\ \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-10,4)}] \draw [draw =none, fill=yellow!20] (2,1) rectangle (3,0); \draw [blue] (1.5,4.5) node {\tiny{$j$}}; \draw [blue] (3.5,4.5) node {\tiny{$\nu_1+1$}}; \draw [blue] (3.5,4.2)--(3.5,3.9); \draw [blue] (-0.5,3.5) node {\tiny{$0$}}; \draw [draw=blue, fill=gray!50] (0,3) rectangle (2,2); \draw [draw=none, fill=gray!50] (0,4) rectangle (5,3); \draw [orange!50] (2,1)--(4,-1); \draw (4.2,-1) node {\tiny{$1$}}; \draw [orange!50] (0,5)--(2,3); \draw (-0.2,5) node {\tiny{$-1$}}; \draw [red,thick] (0,4) rectangle (3,0); \draw [red,thick] (3,4) rectangle (5,3); \draw [dotted] (0,3) grid (3,0); \draw [dotted] (0,4) grid (5,3); \draw [blue] (0,3) rectangle (3,0); \draw (2.5,2.5) node {$1$}; \draw (2.5,1.5) node {$4$}; \draw (0.5,1.5) node {$2$}; \draw (1.5,1.5) node {$3$}; \draw (2.5,0.5) node {$7$}; \draw (0.5,0.5) node {$5$}; \draw (1.5,0.5) node {$6$}; \end{scope} \begin{scope}[scale=0.5, shift={(2,4)}] \draw [scale=0.5] (5,6.5) node {\tiny{$(a^p)=(2^1)$}}; \draw [scale=0.5] (5,4.5) node {\tiny{$(b^q)=(3^4)$}}; \draw [scale=0.5] (5,2.5) node {\tiny{$\xi=(5,2,0,0,0)$}}; \end{scope} \end{tikzpicture} \end{center} Furthermore, we obtain other parameters of the Etingof-Freund-Ma functor as $N=p+q=5$, $p=1$ and $\mu=\frac{a-b}{N}=-\frac{1}{5}$. 
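As a quick cross-check, the Case 3a data can be assembled mechanically from $(s, \kappa_2, \nu, \beta)$. The following minimal Python sketch is our own illustration (the function name and the convention of padding $\xi$ with trailing zeros are ours, not part of the construction); it reproduces the numbers of this example.
\begin{verbatim}
# Case 3a (j >= 1): rectangles, xi, and functor parameters.
# Our own illustrative helper, not code from the paper.
def case_3a(s, kappa2, nu, beta):
    j = s + abs(kappa2) / 2        # cell (0, j) on the diagonal -|kappa_2|/2
    a, p = j, 1                    # (a^p) = (j^1)
    b, q = nu[0], len(nu) + 1      # (b^q) = (nu_1^(l(nu)+1))
    xi = [nu[0] + j] + list(beta)  # xi_1 = nu_1 + j, xi_k = beta_{k-1}
    N = p + q
    mu = (a - b) / N               # mu = (a - b)/N in Case 3a
    return (a, p), (b, q), xi, N, p, mu

# Example above: s = 1, kappa_2 = -2, nu = (3,3,3), beta = (2,0,0)
print(case_3a(1, -2, (3, 3, 3), (2, 0, 0)))
# -> ((2.0, 1), (3, 4), [5.0, 2, 0, 0], 5, 1, -0.2)
\end{verbatim}
Here $[5,2,0,0]$ agrees with $\xi=(5,2,0,0,0)$ up to trailing zeros, and $-0.2=-\frac{1}{5}$. Case 4a below uses the same rectangles and the same $\xi$, with $j=s-\frac{|\kappa_2|}{2}$ and $\mu=\frac{b-a}{N}$ instead.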
\begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(2,-12)}] \draw [blue] (0,4)--(5,4)--(5,3)--(2,3)--(2,2)--(0,2)--(0,4); \draw (2.5,5) node {\tiny{$\xi=(5,2,0,0,0)$}}; \draw [dotted] (0,4) grid (5,3); \draw [dotted] (0,3) grid (2,2); \end{scope} \begin{scope}[scale=0.5, shift={(-10,-12)}] \draw [red] (0,4) rectangle (3,0); \draw [red] (3,4) rectangle (5,3); \draw [draw=none, fill=gray!30] (0,4) rectangle (5,3); \draw [draw=none, fill=gray!30] (0,3) rectangle (2,2); \draw (1.5,3) node {\tiny{$\xi$}}; \draw [dotted] (0,4) grid (3,0); \draw [dotted] (3,4) grid (5,3); \draw (1.5,4.3) node {\tiny{$b$}}; \draw (4,4.3) node {\tiny{$a$}}; \draw (5,4)--(5,4.5); \draw (0,4)--(0,4.5); \draw (3,4)--(3,4.5); \draw[->] (1.2,4.3)--(0,4.3); \draw[->] (1.8,4.3)--(3,4.3); \draw[->] (3.7,4.3)--(3,4.3); \draw[->] (4.3,4.3)--(5,4.3); \draw (-0.3,2) node {\tiny{$q$}}; \draw (5.3,3.5) node {\tiny{$p$}}; \draw (0,4)--(-0.5,4); \draw (0,0)--(-0.5,0); \draw (5,4)--(5.5,4); \draw (5,3)--(5.5,3); \draw[->] (-0.3,2.3)--(-0.3,4); \draw[->] (-0.3,1.7)--(-0.3,0); \end{scope} \end{tikzpicture} \end{center} \end{eg} \textbf{Case 3b.} $j \leq 0$. Set two rectangles $$(a^p)=(1^1)$$ and $$(b^q)=((\nu_1-j+1)^{\ell(\nu)+1}).$$ Moreover, $\xi=(\xi_1, \cdots, \xi_{\ell(\nu)+1})$ with $\xi_1=\nu_1-j+2$ and $\xi_k=\beta_{k-1}-j+1$ for $k=2, \cdots, \ell(\nu)+1$.\\ \begin{eg} Let $L$ be an irreducible representation in $\mathcal{M}(H_7(1,-2))$ with a minimal weight $\zeta=[0,-2,-1,1,2,0,\textcolor{brown}{1}]$ such that $L_{\zeta} \neq 0$. There is only one corner $\zeta_{7}=1$. So $$\zeta_{r_1}=\zeta_7=1=\frac{|\kappa_2|}{2}.$$ The standard tableau of $\zeta$ is as follows.\\ \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-8,4)}] \draw [blue] (2,4)--(2,3)--(1,3); \draw [dotted] (0,5) grid (3,0); \draw [draw=none, fill=brown!20] (2,1) rectangle (3,0); \draw [orange!50] (2,1)--(4,-1); \draw (4.2,-1) node {\tiny{$1$}}; \draw [orange!50] (-1,6)--(1,4); \draw (-1.2,6) node {\tiny{$-1$}}; \draw [blue] (-0.5,4.5) node {\tiny{$0$}}; \draw [blue] (0.5,5.5) node {\tiny{$j$}}; \draw (1.5,2.5) node {$1$}; \draw (1.5,1.5) node {$4$}; \draw (2.5,3.5) node {$2$}; \draw (2.5,2.5) node {$3$}; \draw (2.5,0.5) node {$7$}; \draw (1.5,0.5) node {$5$}; \draw (2.5,1.5) node {$6$}; \draw [draw=blue, fill=gray!50] (1,4) rectangle (2,3); \draw [blue] (1,4) rectangle (3,0); \end{scope} \begin{scope}[scale=0.5, shift={(2,7)}] \draw [scale=0.5] (5,3) node {\tiny{$s=-1$}}; \draw [scale=0.5] (5,1.5) node {\tiny{$\nu=(2,2,2,2)$}}; \draw [scale=0.5] (5,0) node {\tiny{$\beta=(1,0,0,0)$}}; \draw [scale=0.5] (5,-2) node {\tiny{$\ell(\nu)=4$}}; \draw [scale=0.5] (5,-3.5) node {\tiny{$j=0$}}; \end{scope} \end{tikzpicture} \end{center} The two rectangles are $(a^p)=(1^1)$ and $(b^q)=(3^5)$. Place the southeastern corner of $(b^q)$ at $T_{\zeta}(r_1)=T_{\zeta}(7)$ and the northwestern corner of $(a^p)$ at the cell $(0,\nu_1+1)$. 
The gray area forms $\xi$.\\ \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-9,4)}] \draw [draw =none, fill=yellow!20] (2,1) rectangle (3,0); \draw [draw=blue, fill=gray!50] (1,4) rectangle (2,3); \draw [draw=none, fill=gray!50] (0,5) rectangle (1,0); \draw [draw=none, fill=gray!50] (1,5) rectangle (4,4); \draw [orange!50] (2,1)--(4,-1); \draw (4.2,-1) node {\tiny{$1$}}; \draw [orange!50] (-1,6)--(1,4); \draw (-1.2,6) node {\tiny{$-1$}}; \draw [blue] (-0.5,4.5) node {\tiny{$0$}}; \draw [blue] (0.5,5.5) node {\tiny{$j$}}; \draw [blue] (3.5,5.5) node {\tiny{$\nu_1+1$}}; \draw [blue] (3.5,5.2)--(3.5,4.9); \draw [red,thick] (0,5) rectangle (3,0); \draw [red,thick] (3,5) rectangle (4,4); \draw [dotted] (0,5) grid (3,0); \draw [blue] (1,4) rectangle (3,0); \draw (1.5,2.5) node {$1$}; \draw (1.5,1.5) node {$4$}; \draw (2.5,3.5) node {$2$}; \draw (2.5,2.5) node {$3$}; \draw (2.5,0.5) node {$7$}; \draw (1.5,0.5) node {$5$}; \draw (2.5,1.5) node {$6$}; \end{scope} \begin{scope}[scale=0.5, shift={(1,5)}] \draw [scale=0.5] (5,6.5) node {\tiny{$(a^p)=(1^1)$}}; \draw [scale=0.5] (5,4.5) node {\tiny{$(b^q)=(3^5)$}}; \draw [scale=0.5] (5,2.5) node {\tiny{$\xi=(4,2,1,1,1,0)$}}; \end{scope} \end{tikzpicture} \end{center} Furthermore, we obtain other parameters of the Etingof-Freund-Ma functor as $N=p+q=6$, $p=1$ and $\mu=\frac{a-b}{N}=-\frac{1}{3}$. \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(0,-12)}] \draw [blue] (0,5)--(4,5)--(4,4)--(2,4)--(2,3)--(1,3)--(1,0)--(0,0)--(0,5); \draw (2,5.5) node {\tiny{$\xi=(4,2,1,1,1,0)$}}; \draw [dotted] (0,5) grid (1,0); \draw [dotted] (1,5) grid (2,3); \draw [dotted] (2,5) grid (4,4); \end{scope} \begin{scope}[scale=0.5, shift={(-10,-12)}] \draw [red] (0,5) rectangle (3,0); \draw [red] (3,5) rectangle (4,4); \draw [draw=none, fill=gray!30] (0,5) rectangle (4,4); \draw [draw=none, fill=gray!30] (0,4) rectangle (2,3); \draw [draw=none, fill=gray!30] (0,3) rectangle (1,0); \draw (1,4) node {\tiny{$\xi$}}; \draw [dotted] (0,5) grid (3,0); \draw (1.5,5.3) node {\tiny{$b$}}; \draw (3.5,5.3) node {\tiny{$a$}}; \draw (0,5)--(0,5.5); \draw (3,5)--(3,5.5); \draw (4,5)--(4,5.5); \draw[->] (1.2,5.3)--(0,5.3); \draw[->] (1.8,5.3)--(3,5.3); \draw (-0.3,2.5) node {\tiny{$q$}}; \draw (4.3,4.5) node {\tiny{$p$}}; \draw (0,5)--(-0.5,5); \draw (0,0)--(-0.5,0); \draw (4,4)--(4.5,4); \draw (4,5)--(4.5,5); \draw[->] (-0.3,2.8)--(-0.3,5); \draw[->] (-0.3,2.2)--(-0.3,0); \end{scope} \end{tikzpicture} \end{center} \end{eg} \textbf{Case 4.} The corner $\zeta_{r_1}=-\frac{|\kappa_2|}{2}$ and there is no corner $\zeta_{r_2}$. Set $j=s-\frac{|\kappa_2|}{2}$. Then the cell $(0,j)$ is on the diagonal of eigenvalue $\frac{|\kappa_2|}{2}$. We discuss the following two subcases.\\ \textbf{Case 4a.} When $j \geq 1$. Set two rectangles $$(a^p)=(j^1)$$ and $$(b^q)=(\nu_1^{\ell(\nu)+1}).$$ Moreover, $\xi=(\xi_1, \cdots, \xi_{\ell(\nu)+1})$ with $\xi_1=\nu_1+j$ and $\xi_k=\beta_{k-1}$ for $k=2, \cdots, \ell(\nu)+1$.\\ \begin{eg} Let $L$ be an irreducible representation in $\mathcal{M}(H_7(1,-2))$ with a minimal weight $\zeta=[4,3,2,-2,1,0,\textcolor{brown}{-1}]$ such that $L_{\zeta} \neq 0$. There is only one corner $\zeta_{7}=-1$. So $$\zeta_{r_1}=\zeta_7=-1=-\frac{|\kappa_2|}{2}.$$ The standard tableau of $\zeta$ is as follows. 
\begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-10,3)}] \draw [blue] (5,2)--(5,1)--(0,1); \draw [draw=none, fill=brown!20] (5,1) rectangle (6,0); \draw (0.5,0.5) node {$1$}; \draw (5.5,1.5) node {$4$}; \draw (1.5,0.5) node {$2$}; \draw (2.5,0.5) node {$3$}; \draw (5.5,0.5) node {$7$}; \draw (3.5,0.5) node {$5$}; \draw (4.5,0.5) node {$6$}; \draw [draw=blue, fill=gray!50] (0,2) rectangle (5,1); \draw [dotted] (0,3) grid (6,0); \draw [orange!50] (5,1)--(7,-1); \draw (7.3,-1) node {\tiny{$-1$}}; \draw [orange!50] (0,4)--(2,2); \draw (-0.2,4) node {\tiny{$1$}}; \draw [blue] (1.5,3.5) node {\tiny{$j$}}; \draw [blue] (-0.5,2.5) node {\tiny{$0$}}; \draw [blue] (0,2) rectangle (6,0); \end{scope} \begin{scope}[scale=0.5, shift={(5,5)}] \draw [scale=0.5] (5,3) node {\tiny{$s=3$}}; \draw [scale=0.5] (5,1.5) node {\tiny{$\nu=(6,6)$}}; \draw [scale=0.5] (5,0) node {\tiny{$\beta=(5,0)$}}; \draw [scale=0.5] (5,-2) node {\tiny{$\ell(\nu)=2$}}; \draw [scale=0.5] (5,-3.5) node {\tiny{$j=2$}}; \end{scope} \end{tikzpicture} \end{center} The two rectangles are $(a^p)=(2^1)$ and $(b^q)=(6^3)$. Place the southeastern corner of $(b^q)$ at $T_{\zeta}(r_1)=T_{\zeta}(7)$ and the northwestern corner of $(a^p)$ at cell $(0,\nu_1+1)=(0,7)$. The gray area forms $\xi$. \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-11,4)}] \draw [draw =none, fill=yellow!20] (5,1) rectangle (6,0); \draw [draw=blue, fill=gray!50] (0,2) rectangle (5,1); \draw [draw=none, fill=gray!50] (0,3) rectangle (8,2); \draw [orange!50] (5,1)--(7,-1); \draw (7.3,-1) node {\tiny{$-1$}}; \draw [orange!50] (0,4)--(2,2); \draw (-0.2,4) node {\tiny{$1$}}; \draw [blue] (1.5,3.5) node {\tiny{$j$}}; \draw [blue] (6.5,3.5) node {\tiny{$\nu_1+1$}}; \draw [blue] (6.5,3.2)--(6.5,2.9); \draw [blue] (-0.5,2.5) node {\tiny{$0$}}; \draw [red,thick] (0,3) rectangle (6,0); \draw [red,thick] (6,3) rectangle (8,2); \draw [dotted] (0,3) grid (6,0); \draw [dotted] (6,3) grid (8,2); \draw [blue] (0,2) rectangle (6,0); \draw (0.5,0.5) node {$1$}; \draw (5.5,1.5) node {$4$}; \draw (1.5,0.5) node {$2$}; \draw (2.5,0.5) node {$3$}; \draw (5.5,0.5) node {$7$}; \draw (3.5,0.5) node {$5$}; \draw (4.5,0.5) node {$6$}; \end{scope} \begin{scope}[scale=0.5, shift={(4,3)}] \draw [scale=0.5] (5,6.5) node {\tiny{$(a^p)=(2^1)$}}; \draw [scale=0.5] (5,4.5) node {\tiny{$(b^q)=(6^3)$}}; \draw [scale=0.5] (5,2.5) node {\tiny{$\xi=(8,5,0,0)$}}; \end{scope} \end{tikzpicture} \end{center} Furthermore, we obtain other parameters of Etingof-Freund-Ma functor as $N=q+p=4$, $q=3$ and $\mu=\frac{b-a}{N}=1$. 
\begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(2,-12)}] \draw [blue] (0,3)--(8,3)--(8,2)--(5,2)--(5,1)--(0,1)--(0,3); \draw (4,3.5) node {\tiny{$\xi=(8,5,0,0)$}}; \draw [dotted] (0,3) grid (8,2); \draw [dotted] (0,2) grid (5,1); \end{scope} \begin{scope}[scale=0.5, shift={(-10,-12)}] \draw [red] (0,3) rectangle (6,0); \draw [red] (6,3) rectangle (8,2); \draw [draw=none, fill=gray!30] (0,3) rectangle (8,2); \draw [draw=none, fill=gray!30] (0,2) rectangle (5,1); \draw (3,2) node {\tiny{$\xi$}}; \draw [dotted] (0,3) grid (8,2); \draw [dotted] (0,2) grid (6,0); \draw (3,3.3) node {\tiny{$b$}}; \draw (7,3.3) node {\tiny{$a$}}; \draw (0,3)--(0,3.5); \draw (6,3)--(6,3.5); \draw (8,3)--(8,3.5); \draw[->] (2.7,3.3)--(0,3.3); \draw[->] (3.3,3.3)--(6,3.3); \draw[->] (6.7,3.3)--(6,3.3); \draw[->] (7.3,3.3)--(8,3.3); \draw (-0.3,1.5) node {\tiny{$q$}}; \draw (8.3,2.5) node {\tiny{$p$}}; \draw (0,3)--(-0.5,3); \draw (0,0)--(-0.5,0); \draw (8,3)--(8.5,3); \draw (8,2)--(8.5,2); \draw[->] (-0.3,1.8)--(-0.3,3); \draw[->] (-0.3,1.2)--(-0.3,0); \end{scope} \end{tikzpicture} \end{center} \end{eg} \textbf{Case 4b.} When $j \leq 0$. Set two rectangles $$(a^p)=(1^1)$$ and $$(b^q)=((\nu_1-j+1)^{\ell(\nu)+1}).$$ Moreover, $\xi=(\xi_1, \cdots, \xi_{\ell(\nu)+1})$ with $\xi_1=\nu_1-j+2$ and $\xi_k=\beta_{k-1}-j+1$ for $k=2, \cdots, \ell(\nu)+1$.\\ \begin{eg} Let $L$ be an irreducible representation in $\mathcal{M}(H_7(1,-2))$ with a minimal weight $\zeta=[0,-1,2,1,-2,0,\textcolor{brown}{-1}]$ such that $L_{\zeta} \neq 0$. There is only one corner $\zeta_{7}=-1$. So $$\zeta_{r_1}=\zeta_7=-1=-\frac{|\kappa_2|}{2}.$$ The standard tableau of $\zeta$ is as follows. \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-10,4)}] \draw [blue] (2,2)--(2,1)--(1,1); \draw [dotted] (0,3) grid (5,0); \draw [blue] (-0.5, 2.5) node {\tiny{$0$}}; \draw [blue] (0.5, 3.5) node {\tiny{$j$}}; \draw [draw=none, fill=brown!20] (4,1) rectangle (5,0); \draw [orange!50] (4,1)--(6,-1); \draw (6.3,-1) node {\tiny{$-1$}}; \draw (2.5,1.5) node {$1$}; \draw (2.5,0.5) node {$4$}; \draw (3.5,1.5) node {$2$}; \draw (1.5,0.5) node {$3$}; \draw (4.5,0.5) node {$7$}; \draw (4.5,1.5) node {$5$}; \draw (3.5,0.5) node {$6$}; \draw [draw=blue, fill=gray!50] (1,2) rectangle (2,1); \draw [blue] (1,2) rectangle (5,0); \end{scope} \begin{scope}[scale=0.5, shift={(2,5)}] \draw [scale=0.5] (5,3) node {\tiny{$s=1$}}; \draw [scale=0.5] (5,1.5) node {\tiny{$\nu=(4,4)$}}; \draw [scale=0.5] (5,0) node {\tiny{$\beta=(1,0)$}}; \draw [scale=0.5] (5,-2) node {\tiny{$\ell(\nu)=2$}}; \draw [scale=0.5] (5,-3.5) node {\tiny{$j=0$}}; \end{scope} \end{tikzpicture} \end{center} The two rectangles are $(a^p)=(1^1)$ and $(b^q)=(5^3)$. Place the southeastern corner of $(b^q)$ at $T_{\zeta}(r_1)=T_{\zeta}(7)$ and the northwestern corner of $(a^p)$ at the cell $(0,\nu_1+1)$. The gray area forms $\xi$. 
\begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-10,4)}] \draw [blue] (-0.5, 2.5) node {\tiny{$0$}}; \draw [blue] (0.5, 3.5) node {\tiny{$j$}}; \draw [blue] (5.5, 3.5) node {\tiny{$\nu_1+1$}}; \draw [blue] (5.5,3.2)--(5.5,2.7); \draw [draw =none, fill=yellow!20] (4,1) rectangle (5,0); \draw [orange!50] (4,1)--(6,-1); \draw (6.3,-1) node {\tiny{$-1$}}; \draw [draw=blue, fill=gray!50] (1,2) rectangle (2,1); \draw [draw=none, fill=gray!50] (0,3) rectangle (1,0); \draw [draw=none,fill=gray!50] (1,3) rectangle (6,2); \draw [red,thick] (0,3) rectangle (5,0); \draw [red,thick] (5,3) rectangle (6,2); \draw [dotted](0,3) grid (5,0); \draw [blue] (1,2) rectangle (5,0); \draw (2.5,1.5) node {$1$}; \draw (2.5,0.5) node {$4$}; \draw (3.5,1.5) node {$2$}; \draw (1.5,0.5) node {$3$}; \draw (4.5,0.5) node {$7$}; \draw (4.5,1.5) node {$5$}; \draw (3.5,0.5) node {$6$}; \end{scope} \begin{scope}[scale=0.5, shift={(2,3)}] \draw [scale=0.5] (5,6.5) node {\tiny{$(a^p)=(1^1)$}}; \draw [scale=0.5] (5,4.5) node {\tiny{$(b^q)=(5^3)$}}; \draw [scale=0.5] (5,2.5) node {\tiny{$\xi=(6,2,1,0)$}}; \end{scope} \end{tikzpicture} \end{center} Furthermore, we obtain other parameters of the Etingof-Freund-Ma functor as $N=q+p=4$, $q=3$ and $\mu=\frac{b-a}{N}=1$. \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(2,-12)}] \draw [blue] (0,3)--(6,3)--(6,2)--(2,2)--(2,1)--(1,1)--(1,0)--(0,0)--(0,3); \draw (2,3.5) node {\tiny{$\xi=(6,2,1,0)$}}; \draw [dotted] (0,3) grid (6,2); \draw [dotted] (0,2) grid (2,1); \end{scope} \begin{scope}[scale=0.5, shift={(-10,-12)}] \draw [red] (0,3) rectangle (5,0); \draw [red] (5,3) rectangle (6,2); \draw [draw=none,fill=gray!30] (0,3) rectangle (6,2); \draw [draw=none,fill=gray!30] (0,2) rectangle (2,1); \draw [draw=none,fill=gray!30] (0,1) rectangle (1,0); \draw (1,2) node {\tiny{$\xi$}}; \draw [dotted] (0,3) grid (5,0); \draw (2.5,3.3) node {\tiny{$b$}}; \draw (5.5,3.3) node {\tiny{$a$}}; \draw (0,3)--(0,3.5); \draw (5,3)--(5,3.5); \draw (6,3)--(6,3.5); \draw[->] (2.2,3.3)--(0,3.3); \draw[->] (2.8,3.3)--(5,3.3); \draw (-0.3,1.5) node {\tiny{$q$}}; \draw (6.3,2.5) node {\tiny{$p$}}; \draw (0,3)--(-0.5,3); \draw (0,0)--(-0.5,0); \draw (6,2)--(6.5,2); \draw (6,3)--(6.5,3); \draw[->] (-0.3,1.8)--(-0.3,3); \draw[->] (-0.3,1.2)--(-0.3,0); \end{scope} \end{tikzpicture} \end{center} \end{eg} \textbf{Case 5.} The corner $\zeta_{r_1}<-\frac{|\kappa_2|}{2}$. Let $j_1=\nu_{\ell(\nu)}+\frac{|\kappa_2|}{2}+\zeta_{r_1}$ and $j_2=\nu_{\ell(\nu)}-\frac{|\kappa_2|}{2}+\zeta_{r_1}$. Set two rectangles $$(a^p)=((\nu_1-j_1)^{\ell(\nu)})$$ and $$(b^q)=((\nu_1-j_2)^{\ell(\nu)}).$$ \begin{claim} According to the setting above, the number $\nu_{\ell(\nu)}-j_1-j_2 \geq 0$. \end{claim} \begin{proof} There exists a weight $\tilde{\zeta}$ such that $L_{\tilde{\zeta}} \neq 0$, $Im(T_{\tilde{\zeta}})=Im(T_{\zeta})$ and $T_{\tilde{\zeta}}(n)=(\ell(\nu), \nu_{\ell(\nu)})$. Let $v$ be a nonzero weight vector of weight $\tilde{\zeta}$. Since $\zeta_{r_1}<-\frac{|\kappa_2|}{2}$, we obtain a nonzero weight vector $\phi_n v$ of weight $\gamma_n \tilde{\zeta}$. Moreover, $$Im(T_{\gamma_n \tilde{\zeta}})=Im(T_{\zeta}) \setminus \{(\ell(\nu), \nu_{\ell(\nu)})\} \cup \{(\ell(\nu)+1, 2\ell(\nu)-\nu_{\ell(\nu)}+2s+1)\}.$$ Since $Im(T_{\gamma_n \tilde{\zeta}})$ is a skew shape, it follows that $2\ell(\nu)-\nu_{\ell(\nu)}+2s+1 \leq 1$. Applying $j_1=\nu_{\ell(\nu)}+\frac{|\kappa_2|}{2}+\zeta_{r_1}$ and $j_2=\nu_{\ell(\nu)}-\frac{|\kappa_2|}{2}+\zeta_{r_1}$, the statement $\nu_{\ell(\nu)}-j_1-j_2 \geq 0$ follows. 
\end{proof} Set $\xi^{(1)}=(\xi^{(1)}_1, \cdots, \xi^{(1)}_{\ell(\nu)})$ with $$\xi^{(1)}_k=\beta_k+\nu_1-j_1-j_2$$ for $k=1, \cdots, \ell(\nu)$, $\xi^{(2)}=(\xi^{(2)}_1, \cdots, \xi^{(2)}_{\ell(\nu)})$ with $$\xi^{(2)}_k=\nu_1-\nu_{\ell(\nu)-k+1}$$ for $k=1, \cdots, \ell(\nu)$ and $\xi=(\xi_1,\cdots, \xi_{2\ell(\nu)})$ with $$\xi_k=\xi^{(1)}_k$$ for $k=1,\cdots, \ell(\nu)$ and $$\xi_k=\xi^{(2)}_{k-\ell(\nu)}$$ for $k=\ell(\nu)+1, \cdots, 2\ell(\nu)$. \begin{rmk} Claim 10.22 implies the following two facts. \begin{enumerate} \item It follows that $\nu_1-j_1-j_2 \geq 0$. \item The inequality $\nu_1-\nu_{\ell(\nu)}=\xi^{(2)}_1 \leq \xi^{(1)}_{\ell(\nu)}=\nu_1-j_1-j_2$ holds and hence $\xi$ is a well-defined Young diagram. \end{enumerate} \end{rmk} \begin{eg} Let $L$ be an irreducible representation in $\mathcal{M}(H_7(1,-2))$ with a minimal weight $\zeta=[-2,-1,-5,\textcolor{brown}{-6},-3,\textcolor{brown}{-4},\textcolor{brown}{-2}]$ such that $L_{\zeta} \neq 0$. The corners of $\zeta$ are $\zeta_4=-6$, $\zeta_6=-4$ and $\zeta_7=-2$. So $\zeta_{r_1}=\zeta_7=-2$. The standard tableau of $\zeta$ is as follows. \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-10,4)}] \draw [dotted] (3,2) grid (5,0); \draw [dotted](5,3) grid (6,1); \draw [draw =none, fill=brown!20] (4,1) rectangle (5,0); \draw [draw =none, fill=brown!20] (5,2) rectangle (6,1); \draw [orange!50] (4,1)--(6,-1); \draw (6.3, -1) node {\tiny{$-2$}}; \draw [orange!50] (3,1)--(5,-1); \draw (5.3, -1) node {\tiny{$-1$}}; \draw (3.5,1.5) node {$1$}; \draw (6.5,2.5) node {$4$}; \draw (3.5,0.5) node {$2$}; \draw (4.5,0.5) node {$7$}; \draw (5.5,1.5) node {$6$}; \draw (4.5,1.5) node {$5$}; \draw (5.5,2.5) node {$3$}; \draw [draw=blue, fill=gray!50] (3,3) rectangle (5,2); \draw [blue] (3,2)--(5,2)--(5,3); \draw [blue] (3,3)--(3,0)--(5,0)--(5,1)--(6,1)--(6,2)--(7,2)--(7,3)--(3,3); \end{scope} \begin{scope}[scale=0.5, shift={(2,5)}] \draw [scale=0.5] (5,3) node {\tiny{$s=-3$}}; \draw [scale=0.5] (5,1.5) node {\tiny{$\nu=(4,3,2)$}}; \draw [scale=0.5] (5,0) node {\tiny{$\beta=(2,0,0)$}}; \draw [scale=0.5] (5,-1.5) node {\tiny{$\ell(\nu)=3$}}; \end{scope} \end{tikzpicture} \end{center} The two rectangles $(a^p)=(3^3)$ and $(b^q)=(5^3)$ follow. Place the northeastern corner of $(a^p)=(3^3)$ at the cell $(1, \nu_1)$ and the southeastern corner of $(b^q)=(5^3)$ at the cell $(\ell(\nu), \ell(\nu)+\frac{|\kappa_2|}{2}+s)$. The gray area on the left forms $\xi^{(1)}$ and the gray area on the right forms \rotatebox[origin=c]{180}{$\xi^{(2)}$}. 
\begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5, shift={(-10,4)}] \draw [orange!50] (1,1)--(3,-1); \draw (3.2, -1) node {\tiny{$1$}}; \draw [orange!50] (3,1)--(5,-1); \draw (5.3, -1) node {\tiny{$-1$}}; \draw [draw =none, fill=yellow!20] (3,1) rectangle (4,0); \draw [draw =none, fill=yellow!20] (6,3) rectangle (7,2); \draw [draw=blue, fill=gray!50] (3,3) rectangle (5,2); \draw [draw=none, fill=gray!50] (-1,3) rectangle (3,0); \draw [draw=none,fill=gray!50] (5,1) rectangle (7,0); \draw [draw=none,fill=gray!50] (6,2) rectangle (7,1); \draw [blue] (-1.5,0.5) node {\tiny{$\ell(\nu)$}}; \draw [blue] (3.5,3.5) node {\tiny{$\ell(\nu)+s+\frac{|\kappa_2|}{2}$}}; \draw [blue] (-1.5,2.5) node {\tiny{$1$}}; \draw [blue] (6.5,3.5) node {\tiny{$\nu_1$}}; \draw [blue] (3.5,3.2)--(3.5,2.7); \draw [dotted] (-1,3) grid (7,0); \draw [red, thick] (-1,3) rectangle (4,0); \draw [red,thick] (4,3) rectangle (7,0); \draw [blue] (3,2)--(5,2)--(5,3); \draw [blue] (3,3)--(3,0)--(5,0)--(5,1)--(6,1)--(6,2)--(7,2)--(7,3)--(3,3); \draw (3.5,1.5) node {$1$}; \draw (6.5,2.5) node {$4$}; \draw (3.5,0.5) node {$2$}; \draw (4.5,0.5) node {$7$}; \draw (5.5,1.5) node {$6$}; \draw (4.5,1.5) node {$5$}; \draw (5.5,2.5) node {$3$}; \end{scope} \begin{scope}[scale=0.5, shift={(2,4)}] \draw [scale=0.5] (5,6.5) node {\tiny{$(a^p)=(3^3)$}}; \draw [scale=0.5] (5,4.5) node {\tiny{$(b^q)=(5^3)$}}; \draw [scale=0.5] (5,2.5) node {\tiny{$\xi^{(1)}=(6,4,4)$}}; \draw [scale=0.5] (5,0.5) node {\tiny{$\xi^{(2)}=(2,1,0)$}}; \end{scope} \end{tikzpicture} \end{center} So the three shapes $(a^p)$, $(b^q)$ and $\xi$ are set as follows. The other parameters of Etingof-Freund-Ma functor are set as $N=6$, $p=3$ and $\mu=1/3$. \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.5,shift={(-10,3)}] \draw [red,thick] (-1,3) rectangle (4,0); \draw [red,thick] (4,3) rectangle (7,0); \draw [draw=none,fill=gray!30] (-1,3) rectangle (5,2); \draw [draw=none,fill=gray!30] (-1,2) rectangle (3,0); \draw (1,1.5) node {\tiny{$\xi^{(1)}$}}; \draw [draw=none,fill=gray!30] (6,2) rectangle (7,1); \draw [draw=none,fill=gray!30] (5,1) rectangle (7,0); \draw (6.3,0.5) node {\tiny{\rotatebox{180}{$\xi^{(2)}$}}}; \draw [dotted] (-1,3) grid (7,0); \draw (1.5,3.3) node {\tiny{$b$}}; \draw (5.5,3.3) node {\tiny{$a$}}; \draw (-1.3,1.5) node {\tiny{$q$}}; \draw (7.3,1.5) node {\tiny{$p$}}; \draw (-1,3)--(-1,3.5); \draw (4,3)--(4,3.5); \draw (7,3)--(7,3.5); \draw[->] (1.2,3.3)--(-1,3.3); \draw[->] (1.8,3.3)--(4,3.3); \draw[->] (5.2,3.3)--(4,3.3); \draw[->] (5.8,3.3)--(7,3.3); \draw (-1,3)--(-1.5,3); \draw (-1,0)--(-1.5,0); \draw (7,3)--(7.5,3); \draw (7,0)--(7.5,0); \draw[->] (-1.3,1.8)--(-1.3,3); \draw[->] (-1.3,1.2)--(-1.3,0); \draw[->] (7.3,1.8)--(7.3,3); \draw[->] (7.3,1.2)--(7.3,0); \end{scope} \begin{scope}[scale=0.5, shift={(2,4)}] \draw [blue] (-1,3)--(-1,0)--(3,0)--(3,2)--(5,2)--(5,3)--(-1,3); \draw [blue] (-1,0)--(-1,-2)--(0,-2)--(0,-1)--(1,-1)--(1,0); \draw (2,1.5) node {\tiny{$\xi^{(1)}$}}; \draw (0,-0.7) node {\tiny{$\xi^{(2)}$}}; \draw [dotted] (-1,3) grid (5,2); \draw [dotted] (-1,2) grid (3,0); \draw [dotted] (-1,0) grid (1,-1); \draw (2,4) node {\tiny{$\xi=(6,4,4,2,1,0)$}}; \end{scope} \end{tikzpicture} \end{center} \end{eg} \begin{rmk} When we fix the number $n$, for different input $(\xi, N, p, \mu)$, we could actually get isomorphic $H_n$-modules. Consider the following example of representations of $H_3(1, -1)$. 
\\ Let $\xi = (3,3,2)$, $N = 4$, $p = 1$ and $\mu = -\frac{1}{4}$.\\ In this case, $a=\mu q + \frac{|\xi| + n}{N} = 2$ and $b= -\mu p + \frac{|\xi| + n}{N} = 3$. Then the image $F=F_{3,1,- \frac{1}{4}}(V^{\xi})$ is an $H_3(1,-1)$-module with the following minimal shape $\varphi_{3,1,-\frac{1}{4}}^{\xi}=(5,3,3)/(3,3,2)$.\\ \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.4,shift={(0,0)}] \draw [red,thick] (0,3) rectangle (3,0); \draw [red,thick] (3,3) rectangle (5,2); \draw [draw=none, fill= gray!50] (0,3) rectangle (3,1); \draw (1.5,2) node {\tiny{$\xi$}}; \draw [draw=none, fill= gray!50] (0,1) rectangle (2,0); \draw [dotted] (0,3) grid (3,0); \draw [dotted] (3,3) grid (5,2); \draw (1.5,3.3) node {\tiny{$b$}}; \draw (4,3.3) node {\tiny{$a$}}; \draw (-0.3,1.5) node {\tiny{$q$}}; \draw (5.3,2.5) node {\tiny{$p$}}; \draw (0,3)--(0,3.5); \draw (3,3)--(3,3.5); \draw (5,3)--(5,3.5); \draw[->] (1.2,3.3)--(0,3.3); \draw[->] (1.8,3.3)--(3,3.3); \draw[->] (4.2,3.3)--(5,3.3); \draw[->] (3.8,3.3)--(3,3.3); \draw (0,3)--(-0.5,3); \draw (0,0)--(-0.5,0); \draw (5,3)--(5.5,3); \draw (5,2)--(5.5,2); \draw[->] (-0.3,1.8)--(-0.3,3); \draw[->] (-0.3,1.2)--(-0.3,0); \draw [orange!50] (2,1)--(4,-1); \draw (4.2,-1) node {\tiny{$\frac{1}{2}$}}; \end{scope} \end{tikzpicture} \end{center} Then the basis is indexed by the standard tableaux on the skew shapes: $(5,3,3)/\xi$, $(4,3,3,1)/\xi$ and $(3,3,3,2)/\xi$. There is a minimal weight $\zeta=[\frac{1}{2}, -\frac{5}{2}, -\frac{7}{2}]$ such that $F_{\zeta} \neq 0$. Now let us recover a functor $F_{n,p',\mu'}$ such that $F_{n,p',\mu'}(V^{\xi'})$ is an $H_3(1,-1)$-module with a minimal weight $\zeta=[\textcolor{brown}{\frac{1}{2}}, -\frac{5}{2}, \textcolor{brown}{-\frac{7}{2}}]$. According to Case 1, $(a'^{p'})=(3^1)$, $(b'^{q'})=(3^2)$, $\xi'=(4,2,0)$ and $\mu'=0$.\\ \begin{center} \begin{tikzpicture} \begin{scope}[scale=0.4,shift={(0,0)}] \draw [draw=none, fill= gray!50] (0,2) rectangle (2,0); \draw [draw=none, fill= gray!50] (0,2) rectangle (4,1); \draw (1,1) node {\tiny{$\xi'$}}; \draw [dotted] (0,2) grid (6,1); \draw [dotted] (0,1) grid (3,0); \draw [draw=none,fill=brown!20] (2,1) rectangle (3,0); \draw [draw=none,fill=brown!20] (5,2) rectangle (6,1); \draw [red,thick] (0,2) rectangle (3,0); \draw [red,thick] (3,2) rectangle (6,1); \draw (2.5,0.5) node {\tiny{$1$}}; \draw (4.5,1.5) node {\tiny{$2$}}; \draw (5.5,1.5) node {\tiny{$3$}}; \draw (1.5,2.3) node {\tiny{$b'$}}; \draw (4.5,2.3) node {\tiny{$a'$}}; \draw (-0.3,1) node {\tiny{$q'$}}; \draw (6.4,1.5) node {\tiny{$p'$}}; \draw (0,2)--(0,2.5); \draw (3,2)--(3,2.5); \draw (6,2)--(6,2.5); \draw[->] (1.2,2.3)--(0,2.3); \draw[->] (1.8,2.3)--(3,2.3); \draw[->] (4.8,2.3)--(6,2.3); \draw[->] (4.2,2.3)--(3,2.3); \draw (0,2)--(-0.5,2); \draw (0,0)--(-0.5,0); \draw (6,2)--(6.5,2); \draw (6,1)--(6.5,1); \draw[->] (-0.3,1.3)--(-0.3,2); \draw[->] (-0.3,0.7)--(-0.3,0); \draw [orange!50] (2,1)--(4,-1); \draw (4.2,-1) node {\tiny{$\frac{1}{2}$}}; \draw [orange!50] (5,2)--(7,0); \draw (7.3,0) node {\tiny{$-\frac{7}{2}$}}; \end{scope} \end{tikzpicture} \end{center} \end{rmk} \subsection{Other $\mathcal{Y}$-semisimple representations} The image of the Etingof-Freund-Ma functor does not exhaust all the $\mathcal{Y}$-semisimple representations. The following are two examples of $\mathcal{Y}$-semisimple $H_n(1,\kappa_2)$ representations that are not in $\mathcal{M}(H_n(1,\kappa_2))$. 
\begin{eg} Obviously, the representation obtained under the Etingof-Freund-Ma functor does not contain a weight vector of weight $\zeta$ with $ -\frac{|\kappa_2|}{2}< \zeta_n < \frac{|\kappa_2|}{2}$. Consider the representation of $H_3(1, -6)$ generated by the weight vector of weight $[1,2,-3]$. This representation has the following characters:\\ \begin{center} \begin{tikzpicture}[scale=0.5] \begin{scope}[shift = {(8,0)}] \draw (0,0) node {\tiny{$[-3,-2,-1]$}}; \draw [->] (0,-0.3)--(0, -1.7); \draw (-0.5,-1) node {\tiny{$\mathfrak{m}_3$}}; \draw (0,-2) node {\tiny{$[-3,-2,1]$}}; \draw [->](0,-2.3)--(0, -3.7); \draw (-0.5,-3) node {\tiny{$\mathfrak{m}_2$}}; \draw (0,-4) node {\tiny{$[-3,1,-2]$}}; \draw [->](0.3,-4.4)--(2.7, -5.7); \draw (2.0,-5) node {\tiny{$\mathfrak{m}_3$}}; \draw (4,-6) node {\tiny{$[-3,1,2]$}}; \draw [->](-0.3,-4.4)--(-2.7, -5.7); \draw (-2,-5) node {\tiny{$\mathfrak{m}_1$}}; \draw (-4,-6) node {\tiny{$[1,-3,-2]$}}; \draw [->](-2.7,-6.4)--(-0.3, -7.6); \draw (-2,-7) node {\tiny{$\mathfrak{m}_3$}}; \draw (0,-8) node {\tiny{$[1,-3,2]$}}; \draw [->](2.7,-6.4)--(0.3, -7.6); \draw (2,-7) node {\tiny{$\mathfrak{m}_1$}}; \draw (0,-10) node {\tiny{$[1,2,-3]$}}; \draw [->](0,-8.3)--(0, -9.7); \draw (-0.5,-9) node {\tiny{$\mathfrak{m}_2$}}; \end{scope} \end{tikzpicture} \end{center} \end{eg}
\section{Introduction} Fermi surface instability in itinerant electron systems gives rise to abundant quantum states of matter in various fields of condensed matter physics, such as superconductivity and magnetism. For example, the nesting property of the Fermi surface has been identified as the origin of charge and/or spin density waves in metallic alloys, organic conductors, and other itinerant magnets~\cite{Gruner_RevModPhys.60.1129, Gruner_RevModPhys.66.1}. As such a Fermi surface nesting can ubiquitously occur for various lattice geometries depending on the electronic band structure, itinerant electron systems are an optimal platform to realize further exotic electronic ordered states. In particular, when the Fermi surfaces are nested by multiple different wave vectors, there is a chance of inducing the multiple-$Q$ states accompanied by noncollinear and noncoplanar spin configurations~\cite{hayami2021topological}. The most well-known examples are the triple-$Q$ noncoplanar (double-$Q$ coplanar) states found in the triangular and pyrochlore (checkerboard) lattice systems, where the perfect nesting of the Fermi surface occurs at a particular electron filling~\cite{Martin_PhysRevLett.101.156402, Chern_PhysRevLett.105.226403, Venderbos_PhysRevLett.109.166405}. Subsequently, similar multiple-$Q$ states have been revealed under various lattice structures when ($d-2$)-dimensional portions of the Fermi surfaces are connected by the multiple-$Q$ wave vectors in the extended Brillouin zone ($d$ is the spatial dimension): triangular~\cite{Akagi_JPSJ.79.083711, Kato_PhysRevLett.105.266405, Akagi_PhysRevLett.108.096401,Hayami_PhysRevB.90.060402,Hayami_PhysRevB.94.024424}, square~\cite{Agterberg_PhysRevB.62.13816,hayami_PhysRevB.91.075104,Hayami_PhysRevB.94.024424}, cubic~\cite{Hayami_PhysRevB.89.085124}, kagome~\cite{Barros_PhysRevB.90.245119, Ghosh_PhysRevB.93.024401}, honeycomb~\cite{Jiang_PhysRevLett.114.216402, Venderbos_PhysRevB.93.115108}, and Shastry-Sutherland lattices~\cite{Shahzad_PhysRevB.96.224402}. More recently, the concept of the multiple-$Q$ states induced by the Fermi surface instability has been extended to long-period magnetic structures, such as the double-$Q$ stripe state~\cite{Ozawa_doi:10.7566/JPSJ.85.103703,batista2016frustration} and the triple-$Q$ skyrmion crystal (SkX)~\cite{Ozawa_PhysRevLett.118.147205, Hayami_PhysRevB.99.094420, Eto_PhysRevB.104.104425, Eto_PhysRevLett.129.017201,kobayashi2022skyrmion}. The emergence of the multiple-$Q$ states in itinerant electron systems is intuitively understood from the competition between the negative bilinear exchange interaction and the positive biquadratic interaction in momentum space: The former originates from the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction to induce the single-$Q$ spiral instability~\cite{Ruderman, Kasuya, Yosida1957} and the latter originates from the higher-order RKKY interaction to lead to the multiple-$Q$ instability~\cite{Akagi_PhysRevLett.108.096401, Hayami_PhysRevB.95.224424}, both of which are characterized by effective long-range spin interactions in real space. Especially, an effective spin model incorporating the effect of the above interactions describes a similar multiple-$Q$ instability to that in the original itinerant electron model. 
As the computational cost for the effective spin model is much cheaper compared to that for the itinerant electron model, it can be used to investigate the multiple-$Q$ instabilities in various situations with different lattice structures and magnetic anisotropy. In fact, it was clarified that a variety of multiple-$Q$ states appear as the ground state by analyzing the effective spin model, such as the SkX in the hexagonal~\cite{hayami2020multiple,Hayami_PhysRevB.103.054422,Hayami_PhysRevB.105.014408,Hayami_PhysRevB.105.184426,Hayami_PhysRevB.105.224411,Hayami_PhysRevB.105.224423}, tetragonal~\cite{Hayami_PhysRevLett.121.137202,hayami2018multiple,Su_PhysRevResearch.2.013160,Hayami_PhysRevB.103.024439}, trigonal~\cite{yambe2021skyrmion,hayami2022skyrmion}, and orthorhombic~\cite{Hayami_doi:10.7566/JPSJ.91.093701} lattice systems, the hedgehog crystal in the cubic lattice system~\cite{Okumura_PhysRevB.101.144416,Shimizu_PhysRevB.103.054427,hayami2021field,Kato_PhysRevB.104.224405,Shimizu_PhysRevB.103.184421,Okumura_doi:10.7566/JPSJ.91.093702}, and the meron--antimeron crystal in the hexagonal lattice system~\cite{Hayami_PhysRevB.104.094425}. These systematic investigations might be useful to understand the origin of the SkXs in the hexagonal compounds Gd$_2$PdSi$_3$~\cite{Saha_PhysRevB.60.12162,kurumaji2019skyrmion,sampathkumaran2019report,Kumar_PhysRevB.101.144440,paddison2022magnetic,Bouaziz_PhysRevLett.128.157206} and Gd$_3$Ru$_4$Al$_{12}$~\cite{chandragiri2016magnetic,Nakamura_PhysRevB.98.054410,hirschberger2019skyrmion,Hirschberger_10.1088/1367-2630/abdef9} and the tetragonal compounds GdRu$_2$Si$_2$~\cite{khanh2020nanometric,Yasui2020imaging,khanh2022zoology} and EuAl$_4$~\cite{Shang_PhysRevB.103.L020405,kaneko2021charge,takagi2022square,Zhu_PhysRevB.105.014423}, the hedgehog crystal in the cubic compounds MnSi$_{1-x}$Ge$_{x}$~\cite{tanigaki2015real,kanazawa2017noncentrosymmetric,fujishiro2019topological,Kanazawa_PhysRevLett.125.137202,Kanazawa_doi:10.7566/JPSJ.91.101002} and SrFeO$_3$~\cite{Mostovoy_PhysRevLett.94.137205,Ishiwata_PhysRevB.84.054427,Ishiwata_PhysRevB.101.134406,Rogge_PhysRevMaterials.3.084404,Onose_PhysRevMaterials.4.114420,yambe2020double}, the vortex crystal in the hexagonal compound Y$_3$Co$_8$Sn$_4$~\cite{takagi2018multiple}, and the bubble state in the tetragonal compound CeAuSb$_2$~\cite{Marcus_PhysRevLett.120.097201,Park_PhysRevB.98.024426,Seo_PhysRevX.10.011035,seo2021spin}. On the other hand, studies of the multiple-$Q$ instability against thermal fluctuations are still limited compared to those for the ground state. As it was demonstrated that thermal fluctuations tend to enhance the stability of the multiple-$Q$ states in the localized spin model~\cite{Muhlbauer_2009skyrmion, Okubo_PhysRevLett.108.017206, Buhrandt_PhysRevB.88.195137, Hayami_PhysRevB.93.184413, Laliena_PhysRevB.96.134420, Laliena_PhysRevB.98.224407} and that they induce the finite-temperature phase transitions between the multiple-$Q$ states in the itinerant electron model~\cite{Chern_PhysRevLett.109.156801, Barros_PhysRevB.88.235101, Hayami_10.1088/1367-2630/ac3683, hayami2021phase}, the appearance of further intriguing multiple-$Q$ states is also expected based on the effective spin model~\cite{Kato_PhysRevB.105.174413}. In addition, it is important to construct the magnetic phase diagram against the magnetic field not only at low temperatures but also at higher temperatures in order to further clarify the validity of the effective spin model for real materials. 
In the present study, we examine the magnetic field--temperature phase diagram of the effective spin model consisting of the momentum-resolved interactions with a particular emphasis on the stabilization of the square-lattice SkX in centrosymmetric itinerant magnets. To this end, we perform numerical calculations based on the steepest descent method, which enables us to efficiently find the optimal spin configurations in the thermodynamic limit~\cite{Kato_PhysRevB.105.174413}. We focus on two mechanisms of the square-lattice SkX: One is the positive biquadratic interaction~\cite{Hayami_PhysRevB.103.024439} and the other is the high-harmonic wave-vector interaction~\cite{Hayami_doi:10.7566/JPSJ.89.103702, Hayami_PhysRevB.105.174437}. By carrying out the calculations over a wide range of the two interaction parameters, we clarify their similarities and differences in the magnetic field and temperature dependence. The mechanism based on the high-harmonic wave-vector interaction tends to favor both the single-$Q$ and double-$Q$ states depending on the field, while that based on the biquadratic interaction tends to favor the double-$Q$ states irrespective of the magnetic field. Furthermore, we show that the former mechanism can induce the SkX only at finite temperatures by tuning the interaction. We also discuss the relevance to the skyrmion-hosting material GdRu$_2$Si$_2$. The results of our systematic investigation will serve as a reference for understanding the microscopic mechanism of the SkX-hosting tetragonal magnets over a wide range of temperatures, such as GdRu$_2$Si$_2$~\cite{khanh2020nanometric,Yasui2020imaging,khanh2022zoology}, EuAl$_4$~\cite{Shang_PhysRevB.103.L020405,kaneko2021charge,takagi2022square,Zhu_PhysRevB.105.014423}, EuGa$_4$~\cite{zhang2022giant,Zhu_PhysRevB.105.014423}, EuGa$_2$Al$_2$~\cite{moya2021incommensurate}, Mn$_{2-x}$Zn$_x$Sb~\cite{Nabi_PhysRevB.104.174419}, and MnPtGa~\cite{ibarra2022noncollinear}. The rest of this paper is organized as follows. In Sec.~\ref{sec: Model and method}, we introduce the effective spin model of the itinerant electron model on a square lattice, which has anisotropic bilinear and biquadratic interactions in momentum space. We describe two mechanisms to stabilize the SkX: the high-harmonic wave-vector interaction and the biquadratic interaction. We also outline the numerical scheme based on the steepest descent method~\cite{Kato_PhysRevB.105.174413}. Then, we present the magnetic field--temperature phase diagram while changing the high-harmonic wave-vector interaction and the biquadratic interaction under the out-of-plane field in Sec.~\ref{sec: Phase diagram under out-of-plane field} and the in-plane field in Sec.~\ref{sec: Phase diagram under in-plane field}. Finally, we compare the phase diagrams in the effective spin model with that in GdRu$_2$Si$_2$ in Sec.~\ref{sec: Comparison with skyrmion-hosting materials}. We conclude this paper in Sec.~\ref{sec: Summary}. \section{Model and method} \label{sec: Model and method} \subsection{Model} \begin{figure}[t!] \begin{center} \includegraphics[width=1.0 \hsize ]{fig1.pdf} \caption{ \label{fig: model} Momentum-resolved interactions $\bm{\Gamma}_{\bm{Q}_\nu}$ at $\bm{Q}_1=(Q,0)$, $\bm{Q}_2=(0,Q)$, $\bm{Q}_3=(Q,Q)$, and $\bm{Q}_4=(-Q,Q)$ with $Q=\pi/3$. 
} \end{center} \end{figure} The square SkX in centrosymmetric magnets can emerge from the multi-spin interaction~\cite{Christensen_PhysRevX.8.041022, Hayami_PhysRevB.103.024439} and the high-harmonic wave-vector interaction~\cite{Hayami_doi:10.7566/JPSJ.89.103702, hayami2022multiple, Hayami_PhysRevB.105.104428, Hayami_PhysRevB.105.174437} in the effective spin model that originates from the itinerant electron model, or from the bond-dependent anisotropy~\cite{Wang_PhysRevB.103.104408}, the dipolar interaction~\cite{Utesov_PhysRevB.103.064414}, and the staggered Dzyaloshinskii--Moriya (DM) interaction~\cite{hayami2022square} combined with the frustrated exchange interaction in the localized spin model. Among them, we focus on the stabilization of the square SkX in the former situation based on the effective spin model. Specifically, we consider the following effective spin model on a two-dimensional square lattice under the point group $D_{4\rm h}$: \begin{align} \label{eq: Ham} \mathcal{H}= &-J \sum_{\nu,\alpha,\beta}\Gamma^{\alpha\beta}_{\bm{Q}_\nu}S^{\alpha}_{\bm{Q}_\nu}S^{\beta}_{-\bm{Q}_\nu} \nonumber \\ &+\frac{K}{N} \sum_{\nu}\left(\sum_{\alpha,\beta}\Gamma^{\alpha\beta}_{\bm{Q}_\nu}S^{\alpha}_{\bm{Q}_\nu}S^{\beta}_{-\bm{Q}_\nu}\right)^2 - \sum_{j} \bm{H} \cdot \bm{S}_{j}, \end{align} where $S^\alpha_{\bm{Q}_\nu}$ is characterized by the wave vector $\pm\bm{Q}_1, \pm\bm{Q}_2, \cdots, \pm\bm{Q}_{N_{\bm{Q}}}$ and the spin component $\alpha,\beta=x,y,z$, which corresponds to the Fourier transformation of the classical localized spin $\bm{S}_{j}$ with $|\bm{S}_{j}|=1$: \begin{align} \bm{S}_{\bm Q} = \frac{1}{\sqrt{N}} \sum_j \bm{S}_j e^{-i {\bm Q}\cdot{\bm r}_j}, \end{align} where $N$ represents the total number of sites and $\bm{r}_j=(r^x_j, r^y_j)$ denotes the position vector at site $j$. We set the lattice constant as unity, and $r^x_j$ and $r^y_j$ are integers. The first and second terms in Eq.~(\ref{eq: Ham}) represent the bilinear and biquadratic spin interactions in momentum space, respectively; $\Gamma^{\alpha\beta}_{\bm{Q}_\nu}$ represents the anisotropic form factor depending on the wave vector $\bm{Q}_\nu$ and spin component $\alpha,\beta$. The third term represents the Zeeman coupling under an external magnetic field $\bm{H}$; we consider the $z$-directional field $\bm{H}=(0,0,H)$ in Sec.~\ref{sec: Phase diagram under out-of-plane field} and the $x$-directional field $\bm{H}=(H,0,0)$ in Sec.~\ref{sec: Phase diagram under in-plane field}. The effective spin model in Eq.~(\ref{eq: Ham}) is derived from the perturbation theory for the Kondo lattice model in the weak-coupling regime~\cite{Hayami_PhysRevB.95.224424,yambe2022effective}. The coupling constants in the first and second terms, $J>0$ and $K>0$, correspond to the lowest- and second-lowest-order contributions in terms of the Kondo coupling $J_{\rm K}$, respectively; for example, $J$ ($K$) is proportional to the second (fourth) power of $J_{\rm K}$. We set $J=1$ as the energy unit of the model and treat $K$ as a phenomenological parameter. It is noted that we neglect the other four-spin interactions, e.g., $(S^{\alpha}_{\bm{Q}_\nu}S^{\beta}_{-\bm{Q}_\nu})(S^{\alpha'}_{\bm{Q}_{\nu'}}S^{\beta'}_{-\bm{Q}_{\nu'}})$ for $\nu \neq \nu'$, for simplicity~\cite{Akagi_PhysRevLett.108.096401, Ozawa_doi:10.7566/JPSJ.85.103703, Hayami_PhysRevB.95.224424}. The anisotropic form factor $\Gamma^{\alpha\beta}_{\bm{Q}_\nu}$ is determined by the spin--orbit coupling and the lattice symmetry in addition to the electronic band structure. 
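For concreteness, the momentum-resolved spin components defined above can be evaluated directly from the Fourier sum. The following minimal Python sketch is our own illustration (not code from the references); it assumes the spin texture is stored as an $L \times L$ array of unit vectors with integer site coordinates.
\begin{verbatim}
import numpy as np

def spin_fourier(S, Q):
    """S_Q = N^{-1/2} sum_j S_j exp(-i Q . r_j); S has shape (L, L, 3)."""
    L1, L2, _ = S.shape
    x, y = np.meshgrid(np.arange(L1), np.arange(L2), indexing="ij")
    phase = np.exp(-1j * (Q[0] * x + Q[1] * y))
    return (S * phase[..., None]).sum(axis=(0, 1)) / np.sqrt(L1 * L2)
\end{verbatim}
For a single-$Q$ proper-screw spiral $\bm{S}_j=(0,\cos\bm{Q}_1\cdot\bm{r}_j,\sin\bm{Q}_1\cdot\bm{r}_j)$, this returns $|S^y_{\bm{Q}_1}|=|S^z_{\bm{Q}_1}|=\sqrt{N}/2$, as expected from the normalization above.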
For the dominant interaction channel at $\bm{Q}_\nu$, we consider the interactions at fourfold-symmetric wave vectors $\{\pm \bm{Q}_1=\pm(Q,0), \pm \bm{Q}_2=\pm(0,Q)\}$ with $Q=\pi/3$ by supposing that the nesting at $\bm{Q}_1$ and $\bm{Q}_2$ is important; $\bm{Q}_1$ and $\bm{Q}_2$ are related by the fourfold rotational symmetry of the tetragonal lattice structure. Then, $\Gamma^{\alpha\beta}_{\bm{Q}_\nu}$ is given as follows: $\bm{\Gamma}_{\bm{Q}_1}\equiv \Gamma^{\alpha\alpha}_{\bm{Q}_1}=(\Gamma_x, \Gamma_y, \Gamma_z)$ and $\bm{\Gamma}_{\bm{Q}_2}=(\Gamma_y, \Gamma_x, \Gamma_z)$, where $\Gamma^{\alpha\beta}_{\bm{Q}_\nu}=0$ for $\alpha \neq \beta$. We set $\Gamma_x=0.855$, $\Gamma_y=0.95$, and $\Gamma_z=1$ unless otherwise stated~\cite{khanh2022zoology}; $\Gamma_z> \Gamma_x, \Gamma_y$ means the easy-axis anisotropic interaction, which tends to favor the SkX, and $\Gamma_y> \Gamma_x$ means the bond-dependent anisotropic interaction, which fixes the spiral plane onto the $yz$ ($xz$) plane for $\bm{Q}_1$ ($\bm{Q}_2$). Moreover, we consider the contribution from the high-harmonic wave vectors, i.e., $\pm \bm{Q}_3=\pm (\bm{Q}_1+\bm{Q}_2)$ and $\pm\bm{Q}_4=\pm(-\bm{Q}_1+\bm{Q}_2)$, since it can lower the energy to form the multiple-$Q$ state compared to the single-$Q$ state within the RKKY level~\cite{hayami2022multiple}. As we suppose that the interactions at $\bm{Q}_3$ and $\bm{Q}_4$ are smaller than those at $\bm{Q}_1$ and $\bm{Q}_2$, we set the isotropic form factor for simplicity; $\bm{\Gamma}_{\bm{Q}_3}=\bm{\Gamma}_{\bm{Q}_4}=(\Gamma', \Gamma', \Gamma')$. In the following, we investigate the instability toward the SkX while changing $K$ and $\Gamma'$. The wave vectors $\bm{Q}_1$--$\bm{Q}_4$ and their interactions are presented in Fig.~\ref{fig: model}. \subsection{Method} We investigate the optimal spin configurations of the effective spin model in Eq.~(\ref{eq: Ham}) at finite temperatures based on the steepest descent method with a set of self-consistent equations, which has been recently formulated~\cite{Kato_PhysRevB.105.174413}. In general, the effect of thermal fluctuations, i.e., the entropic effect, leads to the shrinking of the localized spin moment. As such fluctuations are brought about by the spatial correlations between spins separated by the magnetic period in the classical spin model, we define an averaged spin for each sublattice in an $L \times L$ periodic array of the magnetic unit cell consisting of a $\Lambda \times \Lambda$-site square cluster~\cite{comment_muc} as \begin{align} \bar{\bm{S}}_{\eta}= \frac{1}{L^2}\sum_{l} \bm{S}_{l,\eta}, \end{align} where the site index ($j$) is redefined by a pair of numbers ($l,\eta$); $l$ and $\eta$ denote the indices of the magnetic unit cell and the sublattice site within the magnetic cell, respectively. With this setup, the linear dimension of the entire system is $L\Lambda$, and the total number of sites is $N=(L\Lambda)^2$; the position vector $\bm{r}_\eta =(r^x_\eta, r^y_\eta)$ of sublattice $\eta$ is restricted within a magnetic unit cell: $r^x_\eta$, $r^y_\eta \in [0,\Lambda-1]$. In this paper, we consider the case of $Q=\pi/3$, which corresponds to $\Lambda=6$. 
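To make the model definitions concrete, the following minimal sketch (again our own illustration, reusing the \texttt{spin\_fourier} helper above) evaluates the interaction and Zeeman terms of Eq.~(\ref{eq: Ham}) for a spin texture given on one $\Lambda \times \Lambda$ magnetic unit cell; it assumes the texture is periodic on that cell and that the sum over $\nu$ counts each $\pm\bm{Q}_\nu$ pair once.
\begin{verbatim}
import numpy as np

Q0 = np.pi / 3
Q_VECS = [(Q0, 0.0), (0.0, Q0), (Q0, Q0), (-Q0, Q0)]   # Q_1 ... Q_4

def form_factors(gx=0.855, gy=0.95, gz=1.0, gp=0.3):
    # Diagonal form factors Gamma_{Q_nu} quoted in the text; gp stands
    # for Gamma', and its default value here is an arbitrary choice.
    return [np.array([gx, gy, gz]), np.array([gy, gx, gz]),
            np.array([gp, gp, gp]), np.array([gp, gp, gp])]

def energy(S, J=1.0, K=0.0, H=(0.0, 0.0, 0.0), gp=0.3):
    # Bilinear, biquadratic, and Zeeman terms of Eq. (1) per magnetic cell
    N = S.shape[0] * S.shape[1]
    E = -np.dot(S.sum(axis=(0, 1)), np.asarray(H))      # Zeeman coupling
    for Qv, G in zip(Q_VECS, form_factors(gp=gp)):
        SQ = spin_fourier(S, Qv)
        bil = np.real(np.sum(G * SQ * np.conj(SQ)))     # Gamma S_Q S_{-Q}
        E += -J * bil + (K / N) * bil**2
    return E
\end{verbatim}
Since the spins are real, $\bm{S}_{-\bm{Q}_\nu}=\bm{S}^*_{\bm{Q}_\nu}$, which is why the bilinear contraction reduces to $\sum_\alpha \Gamma_\alpha |S^\alpha_{\bm{Q}_\nu}|^2$ in the sketch.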
Then, the partition function is calculated by~\cite{Kato_PhysRevB.105.174413} \begin{align} Z= \int \left[ \prod_{\eta} d\bar{\bm{S}}_{\eta} \rho (\bar{\bm{S}}_{\eta}) \right] e^{-\mathcal{H}/T}, \end{align} where $\int d\bar{\bm{S}}_{\eta}$ means an integral over the unit ball ($|\bar{\bm{S}}_{\eta}| \leq 1$), $\rho (\bar{\bm{S}}_{\eta})$ is the density of states for $\bar{\bm{S}}_{\eta}$, and $T$ is the temperature (the Boltzmann constant is set to be unity). By taking the thermodynamic limit of $L \to \infty$ and using the steepest descent method, the resultant partition function is given by~\cite{Kato_PhysRevB.105.174413} \begin{align} Z &\sim e^{L^2 G(\{\overline{\bar{S}^\alpha_{\eta}}\})}, \\ G(\{\bar{S}^\alpha_{\eta}\}) & = -\frac{\beta}{L^2} \mathcal{H} + \sum_\eta V_\eta \end{align} with $\beta = 1/T$ and \begin{align} \label{eq: V} V_\eta = \ln \left[ \frac{4\pi \sinh v_0 (|\bar{\bm{S}}_{\eta}|)}{v_0 (|\bar{\bm{S}}_{\eta}|)}\right] - v_0 (|\bar{\bm{S}}_{\eta}|)|\bar{\bm{S}}_{\eta}|, \end{align} where $\{\overline{\bar{S}^\alpha_{\eta}}\}$ represents the saddle point that gives the maximum of $G(\{\bar{S}^\alpha_{\eta}\})$ and directly corresponds to the expectation value of each spin in the thermodynamic limit. In Eq.~(\ref{eq: V}), $v_0(|\bar{\bm{S}}|)$ is determined by \begin{align} \coth v_0(|\bar{\bm{S}}|) - \frac{1}{v_0(|\bar{\bm{S}}|)}=|\bar{\bm{S}}|. \end{align} Once the saddle-point solution is obtained, the free energy is calculated via $-T \ln Z$. When several stable solutions are obtained for different initial spin configurations, we adopt the state with the lowest free energy. \section{Phase diagram under out-of-plane field} \label{sec: Phase diagram under out-of-plane field} \begin{figure*}[t!] \begin{center} \includegraphics[width=1.0 \hsize ]{fig2.pdf} \caption{ \label{fig: PD_Hz} Magnetic field ($H$)--temperature ($T$) phase diagrams of the model in Eq.~(\ref{eq: Ham}) for $\bm{H} \parallel \hat{\bm{z}}$ for various values of the biquadratic interaction $K$ and the high-harmonic wave-vector interaction $\Gamma'$. The schematic spin configurations appearing in the phase diagrams are presented in the bottom panel; the arrows represent the direction of the spin moments and their color shows the $z$-spin component. $1Q$ and $2Q$ represent the single-$Q$ and double-$Q$ states, respectively. PS, C, S, CS, and SkX mean proper-screw, conical, sinusoidal, chiral stripe, and skyrmion crystal, respectively. } \end{center} \end{figure*} In this section, we discuss the case in which the magnetic field is applied along the $z$ direction, i.e., $\bm{H}=(0,0,H)$. Figure~\ref{fig: PD_Hz} shows a collection of the magnetic field--temperature phase diagrams of the model in Eq.~(\ref{eq: Ham}) with $K$ varied in steps of $\Delta K =0.1$ and $\Gamma'$ in steps of $\Delta \Gamma' = 0.1$. Over this wide range of $K$ and $\Gamma'$, we obtain seven magnetic phases in addition to the paramagnetic (PM) state at high temperatures. We present the real-space spin configurations in each phase in the bottom panel of Fig.~\ref{fig: PD_Hz}, where the arrows represent the direction of the spin moments and their color shows the $z$-spin component. 
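As a practical aside on the method described above, the relation $\coth v_0 - 1/v_0 = |\bar{\bm{S}}|$ is the Langevin function, which has no closed-form inverse, so $v_0$ must be obtained numerically at each update. A minimal bisection sketch (our own illustration, not the implementation of Ref.~\cite{Kato_PhysRevB.105.174413}) is:
\begin{verbatim}
import numpy as np

def v0_of_m(m, tol=1e-12):
    # Solve coth(v) - 1/v = m for 0 <= m < 1 by bisection.
    if m < 1e-8:
        return 3.0 * m                       # L(v) ~ v/3 for small v
    langevin = lambda v: 1.0 / np.tanh(v) - 1.0 / v
    lo, hi = 1e-12, 1.0
    while langevin(hi) < m:                  # bracket the root
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if langevin(mid) < m else (lo, mid)
    return 0.5 * (lo + hi)
\end{verbatim}
The value $v_0(|\bar{\bm{S}}_{\eta}|)$ obtained in this way enters $V_\eta$ in Eq.~(\ref{eq: V}).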
We also list nonzero scalar chirality $\chi^{\rm sc}$ and nonzero $\bm{Q}_\nu$ components of the magnetic moments $m^{\alpha}_{\bm{Q}_\nu}$ in each phase in Table~\ref{table: OP_Hz}, which are given by \begin{align} \chi^{\rm sc}&= \frac{1}{2 \Lambda^2} \sum_{\eta} \sum_{\delta,\delta'= \pm1} \delta \delta' \bar{\bm{S}}_{\eta} \cdot (\bar{\bm{S}}_{\eta+\delta\hat{x}} \times \bar{\bm{S}}_{\eta+\delta'\hat{y}}), \\ m^{\alpha}_{\bm{Q}_\nu}&= \frac{1}{\Lambda^2} \sqrt{\sum_{\eta,\eta'} \bar{S}^{\alpha}_{\eta}\bar{S}^{\alpha}_{\eta'}e^{i \bm{Q}_\nu \cdot (\bm{r}_\eta-\bm{r}_{\eta'})}}, \end{align} where $\hat{x}$ ($\hat{y}$) represents a shift by the lattice constant in the $x$ ($y$) direction. \begin{table}[htb!] \centering \caption{ Scalar chirality $\chi^{\rm sc}$ and momentum-resolved magnetic moments $\bm{m}_{\bm{Q}_{\nu}}$ for $\nu=1$--$4$, in each magnetic phase for $\bm{H} \parallel \hat{\bm{z}}$. The subscript for $Q$ represents the index for the ordering vector. In the 2$Q$ S II phase, the spin configuration with $m^{x}_{\bm{Q}_{2}}$ and $m^{z}_{\bm{Q}_{1}}$ also gives the same energy. \label{table: OP_Hz}} \renewcommand{\arraystretch}{2} \begin{tabular}{lccccccc}\hline \hline Phase &$\chi^{\rm sc}$ & $m^{x}_{\bm{Q}_{1,2}}$ & $m^{y}_{\bm{Q}_{1,2}}$ & $m^{z}_{\bm{Q}_{1,2}}$ & $m^{x}_{\bm{Q}_{3,4}}$ & $m^{y}_{\bm{Q}_{3,4}}$ & $m^{z}_{\bm{Q}_{3,4}}$ \\ \hline 1$Q$ PS & -- & -- & $1Q$ & $1Q$ & -- & -- & -- \\ 1$Q$ S & -- & -- & -- & $1Q$ & -- & -- & --\\ 1$Q$ C & -- & $1Q$ & $1Q$ & -- & -- & -- & --\\ 2$Q$ CS I & -- & $1Q_2$ & $1Q_1$ & $1Q$ & $2Q$ & -- & -- \\ 2$Q$ S I & -- & $1Q_2$ & $1Q_1$ & -- & -- & -- & -- \\ 2$Q$ S II & -- & -- & $1Q_1$ & $1Q_2$ & -- & -- & --\\ SkX & $\checkmark$ & $1Q_2$ & $1Q_1$ & $2Q$ & $2Q$ & $2Q$ & $2Q$ \\ \hline \hline \end{tabular} \end{table} When $K=0$ and $\Gamma'=0$, the SkX does not appear in the phase diagram, as shown in the upper-left panel of Fig.~\ref{fig: PD_Hz}. Meanwhile, there are several double-$Q$ states in addition to the single-$Q$ states. In the low-temperature region, the single-$Q$ proper-screw spiral (1$Q$ PS) state appears for low $H$, whose spiral plane lies on the plane perpendicular to $\bm{Q}_\nu$. For example, this state has nonzero components of $m^y_{\bm{Q}_1}$ and $m^z_{\bm{Q}_1}$ or $m^x_{\bm{Q}_2}$ and $m^z_{\bm{Q}_2}$. With increasing $H$, the 1$Q$ PS state continuously turns into the double-$Q$ chiral stripe I (2$Q$ CS I) state with the additional sinusoidal modulation at $\bm{Q}_2$ ($\bm{Q}_1$) for the spiral state with $\bm{Q}_1$ ($\bm{Q}_2$). Thus, this state is characterized by nonzero $m^y_{\bm{Q}_1}$, $m^z_{\bm{Q}_1}$, and $m^x_{\bm{Q}_2}$ or $m^x_{\bm{Q}_2}$, $m^z_{\bm{Q}_2}$, and $m^y_{\bm{Q}_1}$. In addition, this 2$Q$ CS I state has small but nonzero amplitudes of $m^x_{\bm{Q}_3}$ and $m^x_{\bm{Q}_4}$ due to a superposition of the spin density waves at $\bm{Q}_1$ and $\bm{Q}_2$. Reflecting a noncoplanar spin texture, this state is accompanied by a density wave of the scalar chirality, although the uniform component of the chirality vanishes. The 2$Q$ CS I state changes into the single-$Q$ conical (1$Q$ C) state with a jump of $m^\alpha_{\bm{Q}_\nu}$, whose spiral plane lies on the $xy$ plane, i.e., $m^x_{\bm{Q}_1} = m^y_{\bm{Q}_1} \neq 0$ or $m^x_{\bm{Q}_2} = m^y_{\bm{Q}_2} \neq 0$. 
With further increasing $H$, the 1$Q$ C state is replaced by the double-$Q$ sinusoidal I (2$Q$ S I) state with a jump of $m^\alpha_{\bm{Q}_\nu}$, whose spin configuration consists of the two sinusoidal waves of the $y$-spin ($x$-spin) component along the $\bm{Q}_1$ ($\bm{Q}_2$) direction with the same amplitude; $m^y_{\bm{Q}_1} = m^x_{\bm{Q}_2}$. Meanwhile, there is no $\bm{Q}_\nu$ component in the $z$ spin. The 2$Q$ S I state continuously turns into the fully-polarized state denoted as PM in the phase diagram in Fig.~\ref{fig: PD_Hz}. At finite temperatures, two characteristic features are found. One is that the 1$Q$ C state is rapidly destabilized compared to the other three states, the 1$Q$ PS, 2$Q$ CS I, and 2$Q$ S I states. In particular, one finds that the region of the 1$Q$ C state is replaced by that of the 2$Q$ S I state with increasing $T$, which implies that the entropic effect tends to favor the sinusoidal superposition rather than the single spiral. The other is the appearance of the single-$Q$ sinusoidal (1$Q$ S) state in the low-field and high-temperature region. This is attributed to the easy-axis exchange interaction in the model, i.e., $\Gamma_z> \Gamma_x, \Gamma_y$. It is noted that these two features have also been found in the frustrated spin model with the dipolar interaction~\cite{Utesov_PhysRevB.103.064414}, which suggests that they are generic irrespective of whether the interactions are short ranged or long ranged. The appearance of various magnetic phases in the phase diagram is due to the presence of the easy-axis and bond-dependent anisotropic exchange interactions at $\bm{Q}_1$ and $\bm{Q}_2$. Indeed, only the 1$Q$ C state appears in the phase diagram when setting $\Gamma_x=\Gamma_y=\Gamma_z$. However, the anisotropic exchange interactions alone are not enough to stabilize the SkX, at least for the present parameters, $\Gamma_x=0.855$, $\Gamma_y=0.95$, and $\Gamma_z=1$. In the following, we show that the SkX emerges by additionally taking into account $\Gamma'$ and $K$ in Secs.~\ref{sec: Case of high-harmonic wave-vector interaction_Hz} and \ref{sec: Case of biquadratic interaction_Hz}, respectively. We also discuss the magnetic field--temperature phase diagram under both $\Gamma'$ and $K$ in Sec.~\ref{sec: Case of both interactions_Hz}. \subsection{Case of high-harmonic wave-vector interaction} \label{sec: Case of high-harmonic wave-vector interaction_Hz} The phase diagrams at $K=0$ for $\Gamma'=0.1$--$0.5$ are shown in the top panel of Fig.~\ref{fig: PD_Hz}. When introducing $\Gamma'$, the SkX appears in the region where the $2Q$ CS I, $1Q$ C, and $2Q$ S I phases meet, and its stability region extends with increasing $\Gamma'$. The SkX is characterized by a double-$Q$ superposition of two spiral waves along the $\bm{Q}_1$ and $\bm{Q}_2$ directions, as shown in the bottom panel of Fig.~\ref{fig: PD_Hz}. Although the in-plane spin configuration is similar to that in the $2Q$ S I state, the SkX exhibits the additional $z$-spin modulation, which results in a nonzero uniform scalar chirality $\chi^{\rm sc}$ causing the topological Hall effect. In addition, the SkX has the intensities in both in-plane and $z$ spin components at the high-harmonic wave vectors $\bm{Q}_3$ and $\bm{Q}_4$. Remarkably, the instability toward the SkX is found at finite temperatures rather than at zero temperature. In particular, the SkX appears only at finite temperatures for small $\Gamma'=0.1$ and $\Gamma'=0.2$. 
This indicates that thermal fluctuations tend to favor the SkX in the effective spin model with the momentum-resolved interactions~\cite{Kato_PhysRevB.105.174413}, similar to the case of the frustrated spin model with the real-space competing interactions~\cite{Okubo_PhysRevLett.108.017206,Mitsumoto_PhysRevB.104.184432, Mitsumoto_PhysRevB.105.094427}. On the other hand, the present SkX phase is replaced by the other phases before entering the paramagnetic phase with increasing $T$, which is in contrast to the frustrated spin model, where the SkX region touches the paramagnetic region. This result suggests that the entropy in the SkX phase is larger than that of the 2$Q$ CS I phase, while it is smaller than that of the 2$Q$ S I state. \begin{figure*}[t!] \begin{center} \includegraphics[width=1.0 \hsize ]{fig3.pdf} \caption{ \label{fig: Mq_Hz} $T$ dependence of (first row) the magnetization $M^z$, (second row) the scalar chirality $\chi^{\rm sc}$, (third row) the in-plane magnetic moments at $\bm{Q}_{1,2}$, $m^{x,y}_{\bm{Q}_{1,2}}$, (fourth row) the out-of-plane magnetic moments at $\bm{Q}_{1,2}$, $m^{z}_{\bm{Q}_{1,2}}$, (fifth row) the in-plane magnetic moments at $\bm{Q}_{3,4}$, $m^{x,y}_{\bm{Q}_{3,4}}$, and (sixth row) the out-of-plane magnetic moments at $\bm{Q}_{3,4}$, $m^{z}_{\bm{Q}_{3,4}}$ at (a) $K=0$, $\Gamma'=0.2$, and $H=0.7$, (b) $K=0$, $\Gamma'=0.3$, and $H=0.75$, and (c) $K=0.3$, $\Gamma'=0$, and $H=0.6$. The vertical dashed lines represent the phase boundaries. } \end{center} \end{figure*} \begin{figure*}[t!] \begin{center} \includegraphics[width=1.0 \hsize ]{fig4.pdf} \caption{ \label{fig: PD_Hz_diff} $H$--$T$ phase diagram for several values of $\Gamma_x$ and $\Gamma_y$ at (a) $K=0$ and $\Gamma'=0.3$, and (b) $K=0.3$ and $\Gamma'=0$. } \end{center} \end{figure*} We present the $T$ dependence of the magnetization $M^z$, the scalar chirality $\chi^{\rm sc}$, and the $x$, $y$, and $z$ components of magnetic moments at $\bm{Q}_1$--$\bm{Q}_4$, $m^\alpha_{\bm{Q}_\nu}$, at $K=0$, $\Gamma'=0.2$, and $H=0.7$ in Fig.~\ref{fig: Mq_Hz}(a), and $K=0$, $\Gamma'=0.3$, and $H=0.75$ in Fig.~\ref{fig: Mq_Hz}(b). The SkX only appears at finite temperatures in Fig.~\ref{fig: Mq_Hz}(a), while it is stabilized from zero to finite temperatures in Fig.~\ref{fig: Mq_Hz}(b). In both figures, there is a clear jump in each quantity between the SkX and the $2Q$ CS I (or 2$Q$ S I) phases, which indicates a first-order phase transition. The emergence of the SkX is attributed to the interplay between $\Gamma'$, the easy-axis anisotropy $\Gamma_z > \Gamma_{x,y}$, and the bond-dependent anisotropy $\Gamma_x \neq \Gamma_y$. To demonstrate this, we show the $H$--$T$ phase diagrams at fixed $K=0$ and $\Gamma'=0.3$ but different $\Gamma_x=0.868$, $0.88$, and $0.9$ in Fig.~\ref{fig: PD_Hz_diff}(a). With increasing $\Gamma_x$, the region of the 1$Q$ C phase is extended, while those of the SkX and 2$Q$ S I phases shrink. This is because the isotropic Heisenberg interaction tends to favor the 1$Q$ C state without the intensities at higher-harmonic wave vectors due to $m^x_{\bm{Q}_1}=m^y_{\bm{Q}_1}$ or $m^x_{\bm{Q}_2}=m^y_{\bm{Q}_2}$. Furthermore, one finds that the SkX in the ground state is rapidly replaced by the 1$Q$ C state with increasing $\Gamma_x$, which also indicates that the instability toward the SkX is larger at finite temperatures than at zero temperature. From the energetic viewpoint, the stabilization of the SkX by $\Gamma'$ is reasonable. 
This is understood from the spin configuration of the SkX, which is approximately given by
\begin{align}
\label{eq:2QSkX}
\bm{S}_j \propto \left(
\begin{array}{c}
\cos \bm{Q}_1 \cdot \bm{r}_{j}+ \cos \bm{Q}_2 \cdot \bm{r}_{j} \\
\cos \bm{Q}_1 \cdot \bm{r}_{j} - \cos \bm{Q}_2 \cdot \bm{r}_{j} \\
a_z (\sin \bm{Q}_1\cdot \bm{r}_{j} +\sin \bm{Q}_2\cdot \bm{r}_{j})+ \tilde{M}_z
\end{array}
\right)^{\rm T},
\end{align}
where $a_z$ and $\tilde{M}_z$ are variational parameters that depend on the model parameters, such as the magnetic anisotropy and the magnetic field. Owing to the normalization constraint on the spin length, i.e., $|\bm{S}_j| = 1$, there are intensities not only at $\bm{Q}_1$ and $\bm{Q}_2$ but also at the high-harmonic wave vectors $\bm{Q}_3$ and $\bm{Q}_4$: squaring the components in Eq.~(\ref{eq:2QSkX}) generates cross terms proportional to $\cos[(\bm{Q}_1 \pm \bm{Q}_2)\cdot \bm{r}_j]$, i.e., Fourier weight at $\bm{Q}_3$ and $\bm{Q}_4$, which the site-by-site normalization transfers back into the spin texture. This means that the interactions in the $\bm{Q}_3$ and $\bm{Q}_4$ channels tend to favor the SkX. In addition, it is noteworthy that the contribution from such high-harmonic wave vectors couples to that from $\bm{Q}_1$ and $\bm{Q}_2$ through free-energy terms such as $(\bm{S}_{\bm{0}}\cdot \bm{S}_{-\bm{Q}_3})(\bm{S}_{\bm{Q}_1}\cdot \bm{S}_{\bm{Q}_2})$ and $(\bm{S}_{\bm{0}}\cdot \bm{S}_{-\bm{Q}_4})(\bm{S}_{-\bm{Q}_1}\cdot \bm{S}_{\bm{Q}_2})$, which are allowed in the presence of the magnetic field because $\bm{Q}_1+\bm{Q}_{2}-\bm{Q}_3=0$ and $-\bm{Q}_1+\bm{Q}_2-\bm{Q}_4=0$. From the orientations of ${\bm m}_{{\bm Q}_\nu}$ in Figs.~\ref{fig: Mq_Hz}(a) and \ref{fig: Mq_Hz}(b), we conclude that couplings of the form $S^z_{\bm{0}} S^z_{-\bm{Q}_3} (\bm{S}_{\bm{Q}_1}\cdot \bm{S}_{\bm{Q}_2})+{\rm c.c.}$ and $S^z_{\bm{0}} S^z_{-\bm{Q}_4} (\bm{S}_{-\bm{Q}_1}\cdot \bm{S}_{\bm{Q}_2})+{\rm c.c.}$, for example, are important for the stabilization. Despite the importance of the high-harmonic channels, we refer to the SkX as a double-$Q$ state rather than a four-$Q$ state, since this phase is continuously connected to the one obtained for nonzero $K$ without $\Gamma'$ [Fig.~\ref{fig: PD_Hz}], as discussed in the subsequent section.

\subsection{Case of biquadratic interaction}
\label{sec: Case of biquadratic interaction_Hz}

For $\Gamma'=0$, the $H$--$T$ phase diagrams for $K=0.1$--$0.4$ are shown in the leftmost panel of Fig.~\ref{fig: PD_Hz}. For nonzero $K$, the SkX appears in the intermediate-field region, similarly to the case of nonzero $\Gamma'$, while the two single-$Q$ states (1$Q$ C and 1$Q$ PS) tend to be destabilized for $K>0$; the 1$Q$ C state vanishes for $K \gtrsim 0.1$ and the 1$Q$ PS state for $K \gtrsim 0.3$. In particular, the zero-field phase at low temperatures becomes the 2$Q$ CS I state instead of the 1$Q$ PS state for $K>0$, which indicates that a double-$Q$ instability at zero field signals the importance of the biquadratic interaction $K$. In addition, a double-$Q$ state denoted as 2$Q$ S II appears only at finite $T$ for $K \gtrsim 0.3$, whose region is sandwiched between the 2$Q$ CS I and $1Q$ S states. The spin configuration of the 2$Q$ S II state is described by a linear combination of sinusoidal waves at $\bm{Q}_1$ and $\bm{Q}_2$, whose spin components are given by the $y$ ($z$) and $z$ ($x$) components, respectively.

The SkX obtained for $K>0$ exhibits spin and scalar chirality textures similar to those for $\Gamma'>0$. For example, the $T$ dependence of the magnetic moments and the spin scalar chirality at $K=0.3$, $\Gamma'=0$, and $H=0.6$ in Fig.~\ref{fig: Mq_Hz}(c) is similar to that in Fig.~\ref{fig: Mq_Hz}(b). Nevertheless, there are two differences in their $H$--$T$ phase diagrams.
One is that the instability toward the SkX occurs at finite temperatures for nonzero $\Gamma'$, while such a clear feature is not found for nonzero $K$. Except for the $H$--$T$ phase diagram for $K=0.1$ and $\Gamma'=0$, where the SkX appears only in a narrow field region, the region of the SkX becomes smaller with increasing $T$. Such a tendency is also found when changing the anisotropic exchange interactions $\Gamma_x$ and $\Gamma_y$; the high-temperature region of the SkX becomes narrower for larger $\Gamma_x$ and $\Gamma_y$ while the ground-state SkX remains, as shown in Fig.~\ref{fig: PD_Hz_diff}(b).

The other is the enhancement of the SkX phase when increasing $K$ and $\Gamma'$. Compared to $\Gamma'$, the $K$ dependence of the SkX region against $H$ is small. Such a difference is accounted for by the different origins of the SkX. In the case of $K$, all the double-$Q$ states at low temperatures (2$Q$ CS I, SkX, and 2$Q$ S I) have an energy gain from $K$, since $K$ brings about an energy loss for the single-$Q$ spin configuration~\cite{Hayami_PhysRevB.95.224424}. Meanwhile, in the case of $\Gamma'$, only the 2$Q$ CS I state and the SkX show an energy gain from $\Gamma'$, reflecting their nonzero amplitudes of $\bm{m}_{\bm{Q}_{3,4}}$. In other words, there is no energy gain from $\Gamma'$ in the 2$Q$ S I state. In fact, one finds that the SkX region is extended to the high-field region with increasing $\Gamma'$. One also notices that the amplitudes of $\bm{m}_{\bm{Q}_{3,4}}$ tend to be larger for nonzero $\Gamma'$ in Fig.~\ref{fig: Mq_Hz}(b) than for nonzero $K$ in Fig.~\ref{fig: Mq_Hz}(c). This suggests that effective couplings like $S^z_{\bm{0}} S^z_{-\bm{Q}_3} (\bm{S}_{\bm{Q}_1}\cdot \bm{S}_{\bm{Q}_2})+{\rm c.c.}$ and $S^z_{\bm{0}} S^z_{-\bm{Q}_4} (\bm{S}_{-\bm{Q}_1}\cdot \bm{S}_{\bm{Q}_2})+{\rm c.c.}$ play an important role in stabilizing the SkX in the presence of $K$, in the same way as for $\Gamma'$ described in Sec.~\ref{sec: Case of high-harmonic wave-vector interaction_Hz}.

\subsection{Case of both interactions}
\label{sec: Case of both interactions_Hz}

When both interactions are taken into account, the SkX tends to be more stable than in either individual case, as shown in Fig.~\ref{fig: PD_Hz}. Thus, the two interactions stabilize the SkX in an additive way. The overall behavior of each phase is common to that in Secs.~\ref{sec: Case of high-harmonic wave-vector interaction_Hz} and \ref{sec: Case of biquadratic interaction_Hz}. With increasing $\Gamma'$, the phase boundary between the SkX and 2$Q$ S I phases moves to the high-field region so as to make the SkX phase more robust, while there is almost no $\Gamma'$ dependence in the other phase boundaries. On the other hand, the single-$Q$ states are replaced by the double-$Q$ states with increasing $K$ due to the energy gain discussed in the previous section. In addition, the SkX region is slightly extended for larger $K$.

\section{Phase diagram under in-plane field}
\label{sec: Phase diagram under in-plane field}

\begin{figure*}[t!]
\begin{center}
\includegraphics[width=1.0 \hsize ]{fig5.pdf}
\caption{
\label{fig: PD_Hx}
$H$--$T$ phase diagrams of the model in Eq.~(\ref{eq: Ham}) for $\bm{H} \parallel \hat{\bm{x}}$, corresponding to Fig.~\ref{fig: PD_Hz}. TC represents transverse conical.
}
\end{center}
\end{figure*}

\begin{table}[htb!]
\centering
\caption{
Scalar chirality $\chi^{\rm sc}$ and momentum-resolved magnetic moments $\bm{m}_{\bm{Q}_{\nu}}$ for $\nu=1$--$4$ in each magnetic phase for $\bm{H} \parallel \hat{\bm{x}}$. The subscript of $Q$ represents the index of the ordering vector. The prime symbol on $Q$ represents different magnitudes at $\bm{Q}_1$ and $\bm{Q}_2$.
\label{table: OP_Hx}}
\renewcommand{\arraystretch}{2}
\begin{tabular}{lccccccc}\hline \hline
Phase &$\chi^{\rm sc}$ & $m^{x}_{\bm{Q}_{1,2}}$ & $m^{y}_{\bm{Q}_{1,2}}$ & $m^{z}_{\bm{Q}_{1,2}}$ & $m^{x}_{\bm{Q}_{3,4}}$ & $m^{y}_{\bm{Q}_{3,4}}$ & $m^{z}_{\bm{Q}_{3,4}}$ \\ \hline
1$Q$ TC' & -- & -- & $1Q_1$ & $1Q_1$ & --& -- & -- \\
1$Q$ S' & -- & -- & -- & $1Q$ & -- & -- & --\\
2$Q$ CS' I & -- & $1Q_2$ & $1Q_1$ & $1Q_1$ & -- & $2Q$ & $2Q$\\
2$Q$ CS' II & -- & $1Q_2$ & $1Q_1$ & $1Q_2$ & -- & $2Q$ & --\\
2$Q$ CS' III & -- & -- & $1Q_1$ & $2Q'$ & $2Q$ & -- & --\\
2$Q$ S' II & -- & -- & $1Q_1$ & $1Q_2$ & -- & -- & --\\
2$Q$ S' III & -- & -- & -- & $2Q$ & $2Q$ & -- & --\\
SkX' & $\checkmark$ & $2Q'$ & $1Q_1$ & $1Q_2$ & $2Q$ & $2Q$ & $2Q$\\
\hline \hline
\end{tabular}
\end{table}

We present the $H$--$T$ phase diagrams when the magnetic field is applied along the $x$ direction, i.e., $\bm{H}=(H,0,0)$. We show the same plots as in Fig.~\ref{fig: PD_Hz} under the in-plane field in Fig.~\ref{fig: PD_Hx}. In the $H$--$T$ phase diagrams obtained by varying the interactions in steps of $\Delta K=0.1$ and $\Delta \Gamma'=0.1$, eight magnetic phases emerge on decreasing the temperature from the high-temperature paramagnetic state. The nonzero components of the magnetic moments at $\bm{Q}_1$--$\bm{Q}_4$ in each phase are summarized in Table~\ref{table: OP_Hx}. In addition, the real-space spin configuration in each phase is shown in the bottom panel of Fig.~\ref{fig: PD_Hx}. In contrast to the out-of-plane field in Sec.~\ref{sec: Phase diagram under out-of-plane field}, the instability toward the SkX, denoted as SkX', occurs only at $K=0.4$ and $\Gamma'=0.4$. Here and hereafter, the prime symbol indicates a magnetic phase under the in-plane field. Thus, larger biquadratic and high-harmonic wave-vector interactions are required to stabilize topological spin textures under the in-plane field.

At $\Gamma'=K=0$, there are only two magnetic phases in the phase diagram: One is the single-$Q$ transverse conical (1$Q$ TC') state and the other is the single-$Q$ sinusoidal (1$Q$ S') state. The 1$Q$ TC' state is characterized by a spiral wave along the $\bm{Q}_1$ direction, whose spiral plane lies in the $yz$ plane. With increasing $H$ and $T$, the $y$-spin modulation vanishes while the $z$-spin modulation remains, which signals the appearance of the 1$Q$ S' state. Thus, in this case there is no double-$Q$ instability under the in-plane field, in contrast to the out-of-plane field. Qualitatively the same phase diagrams are obtained even for finite $\Gamma'$, at least up to $\Gamma'=0.4$, as shown in the top panel of Fig.~\ref{fig: PD_Hx}.

When considering nonzero $K$ with $\Gamma'=0$, the double-$Q$ chiral stripe (2$Q$ CS' I) state appears in the low-field region for $K \gtrsim 0.1$, as shown in the leftmost panel of Fig.~\ref{fig: PD_Hx}. Similar to the 2$Q$ CS I state under the out-of-plane field, the 2$Q$ CS' I state has double-$Q$ modulations consisting of a spiral wave along the $\bm{Q}_1$ direction and a sinusoidal wave along the $\bm{Q}_2$ direction.
With further increasing $K$, the 1$Q$ TC' phase in the intermediate-to-high-field region is replaced by the double-$Q$ sinusoidal II (2$Q$ S' II) state for $K \gtrsim 0.3$. The spin configuration in this state consists of two sinusoidal waves with $m^y_{\bm{Q}_1}$ and $m^z_{\bm{Q}_2}$. In addition, in the low-field region at low temperatures, the 2$Q$ CS' I state is replaced by the 2$Q$ CS' II state, where the sinusoidal modulation changes from $\bm{Q}_2$ to $\bm{Q}_1$. At $K=0.4$, the 2$Q$ CS' I state is completely replaced by the 2$Q$ CS' II state.

For both nonzero $\Gamma'$ and $K$, the overall $H$--$T$ phase diagrams are qualitatively similar when changing $\Gamma'$ for fixed small $K$. At $K=0.3$, the 2$Q$ CS' II state is replaced by the 2$Q$ CS' I state when $\Gamma'$ is increased. At $\Gamma'=0.4$ and $K=0.4$, in addition to the SkX', two double-$Q$ states denoted as 2$Q$ CS' III and 2$Q$ S' III appear in the high-field region; they are characterized by different double-$Q$ superpositions, as summarized in Table~\ref{table: OP_Hx}. Among all the obtained phases, only the SkX' exhibits a nonzero scalar chirality.

\section{Comparison with skyrmion-hosting materials}
\label{sec: Comparison with skyrmion-hosting materials}

Finally, let us compare the $H$--$T$ phase diagrams of the effective spin model over the wide range of model parameters with those observed in the SkX-hosting tetragonal material GdRu$_2$Si$_2$~\cite{khanh2020nanometric, Yasui2020imaging,khanh2022zoology}. In experiments, there are three magnetic phases, denoted as Phase I, Phase II, and Phase III, in the out-of-plane field direction, and four magnetic phases, denoted as Phase I, Phase IV, Phase III', and Phase V, in the in-plane field direction~\cite{khanh2022zoology}. Based on resonant x-ray scattering and spectroscopic-imaging scanning tunneling microscopy measurements, each phase was identified as follows~\cite{khanh2020nanometric, Yasui2020imaging,khanh2022zoology}: 2$Q$ CS I (and 2$Q$ CS' I) for Phase I, the SkX for Phase II, 2$Q$ S I for Phase III, 1$Q$ TC' for Phase IV, 2$Q$ S' II for Phase III', and 1$Q$ S' for Phase V.

First, let us consider the case under the out-of-plane field. As the zero-field state at low temperatures corresponds to the 2$Q$ CS I state, the biquadratic interaction should be nonzero in this compound. Indeed, for nonzero $K$, the emergence of three phases with changing $H$ at low temperatures is consistent with the experimental observations. Moreover, the fragility of the SkX against thermal fluctuations compared to the 2$Q$ CS I and 2$Q$ S I states is well reproduced in the effective spin model. It is noted that a large value of $K$ is not necessary to realize such a phase sequence once $\Gamma'$ is taken into account. For example, the stability region of the SkX phase at $K=0.3$ and $\Gamma'=0$ is similar to that at $K=0.1$ and $\Gamma'=0.1$. In addition, focusing on the high-temperature region for nonzero $K$, additional phases denoted as 1$Q$ S and 2$Q$ S II appear in the effective spin model depending on $K$, as discussed above. Notably, magnetization and resonant x-ray scattering measurements implied the emergence of the 2$Q$ S II phase~\cite{khanh2020nanometric, khanh2022zoology}. Although the appearance of the 1$Q$ S phase has not been clarified, our results based on the effective spin model indicate that the 1$Q$ S phase might additionally appear in the higher-temperature region next to the 2$Q$ S II phase.
Next, let us compare the case under the in-plane field. As shown in the phase diagrams in Fig.~\ref{fig: PD_Hx}, we obtain instabilities toward the magnetic states observed in GdRu$_2$Si$_2$, i.e., 2$Q$ CS' I, 1$Q$ TC', 2$Q$ S' II, and 1$Q$ S', with changing $K$ and $\Gamma'$. Thus, the effective spin model roughly reproduces the $H$--$T$ phase diagram of GdRu$_2$Si$_2$. Meanwhile, within the present model-parameter range we could not obtain the low-temperature phase sequence from the 1$Q$ TC' to the 2$Q$ S' II state that was observed in experiments~\cite{khanh2022zoology}. Thus, additional interactions and anisotropies might be required to realize such a phase sequence under the in-plane field.

\section{Summary}
\label{sec: Summary}

To summarize, we have investigated the magnetic field--temperature phase diagram of the effective spin model of itinerant centrosymmetric tetragonal magnets. Focusing on the two mechanisms that induce the SkX, the higher-harmonic wave-vector interaction and the biquadratic interaction, we have constructed the phase diagrams over a wide range of model parameters based on the efficient steepest-descent method. As a result, we have shown how the stability of the SkX depends on these interactions as well as on the magnetic field and temperature. In particular, we have found that the instability toward the SkX under the out-of-plane magnetic field occurs at finite temperatures for the higher-harmonic wave-vector interaction, while it occurs already in the ground state for the biquadratic interaction. Furthermore, we have revealed the tendencies of the other single-$Q$ and double-$Q$ phases in both in-plane and out-of-plane magnetic fields, which provides information about the important microscopic interactions. We have also discussed the relevance of our results to the experimental phase diagram of GdRu$_2$Si$_2$. Based on the obtained phase diagram of the effective spin model, we conclude that the biquadratic interaction plays an important role, and we propose an additional phase at high temperatures. Our systematic investigation of the magnetic field--temperature phase diagrams should be a useful reference for constructing effective spin models for materials hosting multiple-$Q$ states in centrosymmetric tetragonal magnets, such as EuAl$_4$~\cite{Shang_PhysRevB.103.L020405,kaneko2021charge,takagi2022square,Zhu_PhysRevB.105.014423}, EuGa$_4$~\cite{zhang2022giant, Zhu_PhysRevB.105.014423}, EuGa$_2$Al$_2$~\cite{moya2021incommensurate}, Mn$_{2-x}$Zn$_x$Sb~\cite{Nabi_PhysRevB.104.174419}, and MnPtGa~\cite{ibarra2022noncollinear}.

\begin{acknowledgments}
S.H. thanks S. Seki for fruitful discussions. S.H. acknowledges Y. Motome for enlightening discussions in the early stage of this study. This research was supported by JSPS KAKENHI Grant Numbers JP21H01037, JP22H04468, JP22H00101, JP22H01183, JP22K03509, and by JST PRESTO (JPMJPR20L8). Parts of the numerical calculations were performed using the supercomputing systems at ISSP, the University of Tokyo.
\end{acknowledgments}

\bibliographystyle{apsrev}
\section{Computer Vision Using Convolutional Neural Networks}
\label{sec:background}

\subsection{CNNs for Object Recognition and Detection}

Convolutional Neural Networks (CNN) are a specific class of neural networks that are often used as deep architectures, which means that the networks contain several so-called ``hidden'' layers (i.e., more layers, increased depth) in addition to the input and output ones. CNNs are used to extract features from images, which are subsequently used in recognition or detection tasks. One of their biggest advantages compared to many other computer vision techniques is the absence of tedious manual feature engineering.

\textit{Object recognition}, sometimes also called image classification, is among the most studied tasks commonly done using CNNs. Such networks output labels corresponding to classes of recognized objects, along with confidence levels, when given an image as input. The vast majority of work on object recognition has focused on improving the recognition accuracy, for which the state-of-the-art CNNs include Inception~\cite{Szegedy_2015_CVPR,pmlr-v37-ioffe15,Szegedy_2016_CVPR}, VGG, and ResNet. However, some work also exists on providing faster inference with slightly reduced accuracy. A prominent example is MobileNets~\cite{howard17mobilenets}, a set of CNNs for different latency vs. accuracy trade-offs, achieved by using two hyperparameters, namely width and resolution multipliers, to control the number of parameters and the required multiply-accumulate operations in the resulting CNN. MobileNets are mainly targeted for mobile and embedded computing.

Unlike CNNs for object recognition, \textit{object detectors} can also tell where in the picture the objects reside by outputting bounding box coordinates in addition to class labels. Detection is a more complex task than recognition, but CNN-based object detectors can leverage the same CNNs for feature extraction that are used by recognition models, as explained in detail in~\cite{Huang2017CVPR}. The more traditional types of detectors, such as Faster RCNN~\cite{ren17frcnn}, work in two stages: first, a set of bounding box proposals is generated, and in the second stage a class and class-specific box adjustments are predicted for each proposal. Although accurate, these detectors have been reported to be relatively slow. To overcome that limitation, detectors that work in a ``single shot'' manner, where bounding box proposals are not generated in a separate stage, have been developed. Examples include SSD~\cite{liu16ssd} and Yolo~\cite{redmon16CVPR}. An important part of object detectors is the last step, where detections that might correspond to the same object are merged, also known as \textit{non-maximum suppression}~\cite{HosangBS17}.

\subsection{Software Tools and Frameworks}

Convolutional Neural Networks can be implemented with various machine learning frameworks. Each framework has its own format for describing the structure of a CNN. Despite some efforts, a single universal language for describing CNN models does not yet exist. The frameworks do, however, offer conversion tools between different formats, with varying support.

We mostly use TensorFlow to build and benchmark the tested networks. TensorFlow is an open-source software library for dataflow programming used by Google both in research and in production.

\textit{TensorFlow Serving} is a high-performance serving system for machine learning models, particularly for inference. We use it to benchmark both object recognition and object detection models in Sections~\ref{sec:remote} and \ref{sec:detection}. It has out-of-the-box integration for TensorFlow models and is designed for production-ready systems. More specifically, we utilize the gRPC model server implementation of the prediction service protocol buffer available in TensorFlow Serving. gRPC is an open-source remote procedure call (RPC) system that uses HTTP/2 for transport.
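To make the interface concrete, the following is a minimal sketch of a Python client querying such a gRPC model server; the model name, signature name, and input/output keys are hypothetical and must match how the served model was actually exported.

\begin{verbatim}
# Minimal TensorFlow Serving gRPC client sketch. The model name,
# signature name, and tensor keys below are hypothetical and must
# match the model exported on the server.
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:8500')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = 'inception'
request.model_spec.signature_name = 'predict_images'

# One 224x224 RGB image; a larger first dimension would form a batch.
image = np.random.rand(1, 224, 224, 3).astype(np.float32)
request.inputs['images'].CopyFrom(tf.make_tensor_proto(image))

result = stub.Predict(request, 10.0)  # 10 second timeout
print(result.outputs['scores'])
\end{verbatim}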
\textit{TensorRT} is an inference model runtime by NVidia~\cite{tensorrt}. It is used to optimize and execute inference models on different GPU platforms, from datacenter GPUs to portable embedded systems with GPU acceleration. We use TensorRT to characterize object recognition and detection models on the embedded Jetson TX2 platform as well as on desktop GPUs, to contrast the results obtained with TensorFlow Serving.

\textit{Snapdragon Neural Processing Engine}~\cite{snpe} is a software development kit produced by Qualcomm for running neural network inference on their Snapdragon 800 and 600 series chips. The SDK has tools for converting TensorFlow and Caffe machine learning models into its custom Deep Learning Container (DLC) format. Snapdragon NPE can utilize Adreno GPUs for hardware acceleration if an OpenCL library is available on the device. We utilize the Snapdragon NPE in Section~\ref{sec:mobile} together with the mobile implementation of TensorFlow.

\subsection{Experiments and Performance Metrics}

We focus on inference with CNN-based models for mobile computer vision. On one hand, we study on-device situations where typically a single model at a time is loaded and initialized on a mobile device for a specific application. On the other hand, we study remote inference scenarios in which a potentially large number of clients send inference jobs to a server serving one or multiple models. In the latter case, we also study scenarios where there are more CNN inference models running than there are available accelerators (GPUs). Such scenarios stress the system and reveal its performance characteristics, but they call for mechanisms for resource sharing. These experiments use the Linux process abstraction to access shared GPU resources. We also take a look at the effect of the NVidia Multi-Process Service (MPS)~\cite{mps} on the latency and throughput behavior of concurrent processes.

The most important metrics for a computer vision system are accuracy, inference latency, and system throughput. We do not focus on accuracy in this paper and refer the reader to~\cite{Huang2017CVPR} for a study of the accuracy of object detectors. Considering on-device vs. remote inference, the key differences in end-to-end latencies are the added network latency in the remote case and the additional latency due to model loading and initialization in the on-device case, as mobile devices would keep the potentially large models in RAM all the time only in specific cases.

A common way to increase system throughput is to introduce \textit{batching}. Batching executes the CNN in parallel for several different input images, which reduces overall processing time by improving the reuse of filter weights in convolution operations. However, batching increases latency, because running a batch of images instead of a single one through the CNN takes more time. Hence, one possible throughput optimization strategy is to set an upper bound for latency and increase the batch size until that bound is reached.
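A minimal sketch of this strategy is shown below; \texttt{run\_batch} is a hypothetical stand-in for a framework-specific call that executes one inference with a given batch size.

\begin{verbatim}
import time

def largest_batch_within_bound(run_batch, latency_bound_s,
                               max_batch=64, repeats=10):
    """Return the largest batch size whose average measured latency
    stays under latency_bound_s. run_batch(n) is a hypothetical
    callable that executes one inference with batch size n."""
    run_batch(1)  # warm-up run to exclude one-time setup costs
    best = 1
    for n in range(1, max_batch + 1):
        start = time.time()
        for _ in range(repeats):
            run_batch(n)
        avg_latency = (time.time() - start) / repeats
        if avg_latency <= latency_bound_s:
            best = n   # throughput grows while the bound still holds
        else:
            break      # bound exceeded; stop searching
    return best
\end{verbatim}

As the measurements later in the paper show, latency is not always monotone in the batch size, so in practice scanning the whole range instead of stopping at the first violation can be worthwhile.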
\section{Conclusion}
\label{sec:conclusion}

This paper described our study of the latency and throughput characteristics of CNNs for mobile computer vision, based on measurements with real workloads and real processing platforms. On the platform side, we concentrated especially on TensorFlow and TensorRT, but we also used several types of hardware and related driver software. Our measurements included both embedded processors found on mobile devices and high-performance processors that can be used on the network side of mobile systems.

Our main finding was that CNNs for mobile computer vision have significant latency--throughput trade-offs, but the behavior is very complex. A number of different factors affect the performance, yielding the complex behavior. This makes the development of automatic optimization mechanisms challenging. However, in many cases the system behavior makes it possible to get significant performance improvements by tuning the systems.

The systems supporting CNN-based computer vision are rapidly evolving. Along with this development, we see two important directions for future research. On one hand, attention is needed on the performance characteristics to make system tuning easier. On the other hand, automated means of optimization are needed to make usage of these systems feasible for the large body of mobile system programmers.

\bibliographystyle{ACM-Reference-Format}

\section{Discussion}
\label{sec:discussion}

The machine learning landscape is rapidly evolving. The inference performance of terminal devices (smartphones, IoT devices, etc.) is constantly increasing through specialized hardware, vendor-optimized runtimes, and tailored lightweight inference models\footnote{E.g., Android 8.1 introduced the Neural Networks API (https://developer.android.com/ndk/guides/neuralnetworks/index.html), which can be used with TensorFlow Lite (https://www.tensorflow.org/mobile/tflite/).}. Similarly, hardware manufacturers are developing more capable CPUs and GPUs, and new types of accelerators such as the TPU and Intel Nervana\footnote{https://ai.intel.com}. The still immature state of tools and frameworks poses substantial challenges in model conversion and portability. Execution of the same models on different runtimes is not always possible, because different runtimes support acceleration of different operations.

Predicting CNN inference model runtime performance is difficult. The interplay of heterogeneous computing hardware, optimizing runtimes, software libraries, and (possibly) multiple different inference models sharing the same resources forms a multi-dimensional optimization problem. Usually the optimization space is narrowed by fixing initial parameters such as the GPU architecture or the inference model version with the desired accuracy. These choices already limit many later optimizations. For example, different GPUs behave differently with larger batch sizes or multiple execution contexts (e.g., threads, processes, or CUDA contexts). As seen in Section~\ref{sec:android_recognition}, Android throughput is increased with two model instances, but with the Jetson TX2 the situation is the opposite.

The accuracy of a model is affected by many things, such as the input size or the number and complexity of layers. All of these also affect how the model utilizes the underlying resources. In Section~\ref{sec:object_detection_jetson}, different object detection models were characterized on the Jetson TX2 platform.
Using the TensorRT runtime, which has good support for accelerating operations on the underlying hardware, a computationally more complex model was able to attain better performance than computationally simpler models on the TensorFlow runtime. However, changing the operational mode of the hardware changed the relative performance of the models.

Regarding performance, there are always trade-offs between accuracy, throughput, and latency. While machine learning frameworks implement different tools and methods to automatically optimize their configurations, many parameters still require manual tuning. For example, finding an optimal batch size to maximize throughput under latency constraints requires direct measurements of the inference performance on the execution platform (see Figures~\ref{fig:android_inc2_tf_batch},~\ref{fig:jetson_lat},~\ref{fig:concurrency_combined_class},~\ref{fig:inception_tensorrt},~\ref{fig:concurrency_latency_frcnninc},~\ref{fig:fr_tesla}). Manual placement of operations between GPU and CPU can also significantly improve the execution performance of an inference model (Section~\ref{sec:TF_serving}).

Our results indicate that the available tools and methods lack performance portability. Regarding performance characterization, it is difficult to reproduce measurements in full detail because of the many contributing factors. Additionally, in real-world deployments the dynamic and usually unpredictable nature of the system input is never the same as in ``controlled'' measurements.

\section{Introduction}

Computer vision has many important mobile applications, both in the consumer realm, such as mobile Augmented Reality (AR) and intelligent traffic, and in the Industrial Internet realm, such as perception and control in robotics and intelligent video surveillance. The basis for computer vision is object recognition and detection from images and videos. Usually these tasks need to complete with low latency, either for the sake of good human user experience or because the actions they trigger have latency requirements.

In recent years, deep learning, particularly in the form of deep Convolutional Neural Networks (CNN), has become the dominant approach for implementing computer vision algorithms~\cite{lecun15nature}. CNNs have proven to be a powerful and efficient way to implement, e.g., object detection and video scene recognition. CNNs are mainly used for supervised learning, in which the network is trained for a specific task. While training a network usually requires a large amount of time and computational resources, a single inference operation (e.g., a detection) can be performed in just a few milliseconds~\cite{redmon16CVPR}. However, the computation time depends largely on the combination of hardware and software used. The availability of GPUs or other accelerators can speed up the execution of the underlying mathematical operations tremendously, but the software on top of them must be able to reap these advantages properly.

Despite the recent progress with deep learning on terminal devices (e.g., \cite{lane16deepx} for mobile devices), mobile applications often rely on remote servers with specific computing capabilities for fast and energy-efficient inference. Real-time operation~\cite{nishihara2017realtime} is required also on the server side. The facilitation of the related cloud computing is undergoing a change~\cite{yi15mobidata}, and this change also affects the way software utilizes cloud capabilities~\cite{maas2017cloud30}.
As for computing platforms, the landscape has become very diverse. In addition to the rich variety of GPUs applicable to CNN computations, a number of specific accelerators have been developed. The scale varies from small low-power devices (e.g., \cite{snpe}) to warehouse-scale computing (e.g., \cite{jouppi17tpu}). Meanwhile, CPU development~\cite{lee2010debunk} has continued, and many CPUs offer acceleration for CNN computations. The same diversity applies to runtime systems~\cite{nguyen2017notanother}. As a result, the computational behavior and performance of CNNs for computer vision are not yet well understood.

In this paper, we study the computational behavior and performance of object recognition and detection from images using CNNs. Our focus is specifically on inference, and we use trained networks on mobile devices and remote servers, with and without hardware acceleration. Our contribution is a performance characterization of mobile CNN-based object recognition and detection with several different hardware platforms and software frameworks, both for local on-device and for remote inference. We mainly focus on system throughput and latency and the trade-offs between them. We also examine the impact on performance of the parameterization of the software tools for model inference, as well as of some characteristics of the models themselves. Our results show that using CNN-based models yields complex performance behavior, but also that these systems exhibit characteristics that enable performance tuning. Currently, though, human expertise is required to get the most out of the hardware. We believe that our results provide valuable input for work towards self-adapting distributed systems for mobile computer vision that would not need such manual performance tuning.

The structure of the paper is as follows. Section~\ref{sec:background} introduces the concept of CNNs, presents the tools and frameworks used in the measurements, and justifies why latency and system throughput are important metrics for CNN-based computer vision. Mobile object detection and recognition are measured in Section~\ref{sec:mobile}, both for a mobile phone and for an embedded computing device. Remote measurements are divided into remote object recognition measurements (Section~\ref{sec:remote}) and remote object detection measurements (Section~\ref{sec:detection}). We discuss the findings in Section~\ref{sec:discussion} and the related work in Section~\ref{sec:relatedwork} before concluding the paper in Section~\ref{sec:conclusion}.

\section{Mobile On-Device Object Recognition and Detection}
\label{sec:mobile}

In this section, we present results from experimentation with on-device object recognition and detection using CNNs. We use two mobile platforms: a state-of-the-art Android smartphone and the Nvidia Jetson TX2, a GPU-powered embedded computing device. The smartphone represents a regular consumer use case, while the Jetson represents an IoT use case.

\subsection{Object Recognition on Smartphone}
\label{sec:android_recognition}

On the smartphone, we focus on object recognition, because detection models have special operations that are not well supported by the mobile inference frameworks.

\subsubsection{Experiment setup}

We use a Nokia 8 smartphone equipped with a Qualcomm SoC (Snapdragon 835 with 8-core Kryo CPU and Adreno 540 GPU) and Android 8.0 (Oreo).
The frameworks chosen for CNN inference are the TensorFlow 1.5-rc1 Java API and the Qualcomm Snapdragon Neural Processing Engine (NPE) 1.10.1, which has both CPU and GPU runtime modes. At the time of our experimentation, TensorFlow had only a CPU runtime available for Android. Two object recognition models are extracted from the TensorFlow-Slim image classification library: Inception V2 and full-width (1.0) MobileNet V1, both with input resolution 224x224. We freeze the models into the TensorFlow protobuf format and convert them with the Snapdragon NPE SDK conversion tool into its special DLC format. The disk space requirements for each format are similar but depend on the model: Inception V2 takes up 45 megabytes and MobileNet 17 megabytes in the static asset files of the Android app.

In our application, images are captured as 480x640 preview-quality JPEG images by Android's Camera2 API, with the exposure time fixed to 1/60 seconds to ensure a consistent supply of input images. They are then preprocessed into input tensors (resize, crop, and pixel normalization) before being fed to the neural networks. The performance of the different frameworks and models was evaluated by measuring execution latencies in two different cases: the total latency of inference on a single image (Figures~\ref{fig:android_latency_inc2} and~\ref{fig:android_latency_mob}), and the throughput of continuous inference with repeated camera capture (Figures~\ref{fig:android_throughput_inc2} and~\ref{fig:android_throughput_mob}).

\subsubsection{Results}

\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{android_latency_inc2_edit.png}
\caption{Total latency of single image Inception V2 object recognition on Android}
\label{fig:android_latency_inc2}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{android_latency_mob_edit.png}
\caption{Total latency of single image MobileNet object recognition on Android}
\label{fig:android_latency_mob}
\end{figure}

The results in Figures~\ref{fig:android_latency_inc2} and~\ref{fig:android_latency_mob} show that for single-image object recognition the dominating delay is the model file load and neural network setup, with the actual inference being comparatively fast on all frameworks and with both models. Launching the device's camera also takes more time than actually capturing a picture and preparing it for inference. The TensorFlow Java API (TensorFlowInferenceInterface) has quite a fast network setup, resulting in approximately 750ms total latency for MobileNet, whereas Snapdragon NPE takes more than 1250ms. Interestingly, the NPE's GPU-accelerated runtime is of no benefit in this case, because the increase in network setup time is longer than the time saved by faster inference when compared to the CPU runtime.

For the case of continuous inference, Figures~\ref{fig:android_throughput_inc2} and~\ref{fig:android_throughput_mob} show frames per second after the initial network and camera setups have completed. In our application, a frame consists of preprocessing the latest captured image and running inference on a neural network instance, with the camera capturing images repeatedly in its own background thread. The results show that the throughput of all frameworks can be increased by loading more than one instance of the neural network API object, each in its own thread, thus providing more concurrency, even when using GPU acceleration, which is already highly parallel. However, using multiple network instances increases the initial setup latency and system memory usage.
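Although our measurement application is written in Java, the multi-instance setup can be sketched with the TensorFlow 1.x Python API as follows; the frozen graph file and tensor names are hypothetical, and the point is only that each thread owns a fully separate graph and session.

\begin{verbatim}
# Sketch: two independently loaded model instances, each consuming
# frames in its own thread. File and tensor names are hypothetical.
import threading
import numpy as np
import tensorflow as tf

GRAPH_PB = 'frozen_inception_v2.pb'

def load_instance():
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(GRAPH_PB, 'rb') as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
    return graph, tf.Session(graph=graph)

def worker(n_frames=100):
    graph, sess = load_instance()
    inp = graph.get_tensor_by_name('input:0')
    out = graph.get_tensor_by_name('predictions:0')
    for _ in range(n_frames):
        # A stand-in for a preprocessed camera frame.
        frame = np.random.rand(1, 224, 224, 3).astype(np.float32)
        sess.run(out, feed_dict={inp: frame})

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
\end{verbatim}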
Figure~\ref{fig:android_inc2_tf_batch} shows that the throughput of TensorFlow can be further increased by feeding multiple images at a time as a batch. At the time of our experimentation, batching was not available for Snapdragon NPE. In many ways TensorFlow appears to be better optimized for CPU performance than Snapdragon NPE. For example, with a batch size of five or more and two networks running in parallel, TensorFlow achieves sub-200ms frame time with Inception and 88ms with MobileNet, the latter being twice as fast as its 175ms inference-part latency in the single-image case. However, in GPU-accelerated mode Snapdragon NPE achieves a 56ms frame time with Inception and 36ms with MobileNet, which is fast enough for many real-time applications.

\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth,height=0.6\columnwidth]{android_throughput_inc2.png}
\caption{Throughput of continuous Inception V2 inference on Android, with one or two neural network instances running simultaneously}
\label{fig:android_throughput_inc2}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth,height=0.6\columnwidth]{android_throughput_mob.png}
\caption{Throughput of continuous MobileNet inference on Android, with one or two neural network instances running simultaneously}
\label{fig:android_throughput_mob}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{android_inc2_tf_batch.png}
\caption{TensorFlow 1.5 Inception V2 throughput vs batch latency on Android, with one or two neural network instances running simultaneously, taking the same number of images per batch. Batch size is varied from 1 to 33.}
\label{fig:android_inc2_tf_batch}
\end{figure}

\subsection{Object Recognition Using Jetson TX2}
\label{sec:jetson_recognition}

In this section, we characterize the throughput and latency trade-offs of object recognition and detection on the NVIDIA Jetson TX2 embedded computing device. The TX2 is an embedded system-on-module (SoM) with a dual-core NVIDIA Denver2 and a quad-core ARM Cortex-A57, 8GB of 128-bit LPDDR4, and an integrated 256-core Pascal GPU. The GPU has two streaming multiprocessors.

\subsubsection{Experiment setup}

We measure the throughput and latency of the Inception v2 inference model when increasing the image batch size. The experiment is done first with one inference process and then with two concurrently running processes. With one process we adjust the image batch size from 1 to 34 in increments of 1, and with two processes we adjust the batch size from 1 to 16. Input is generated by feeding 300 random batches of images of size 224x224 to each process. The Inception v2 model, retrieved from the TensorFlow-Slim image classification library, is executed on top of the TensorFlow 1.5 runtime. We also measure the average core utilizations and clock speeds using the \textit{tegrastats} tool during continuous inference with a batch size of one image. During the experiments, the device is in the \textit{Max-N} power mode, in which all cores of the TX2 can run at maximum clock speeds, but the clock speed is throttled by the Dynamic Voltage and Frequency Scaling (DVFS) of the Jetson TX2 SoM.

\subsubsection{Results}

Figure~\ref{fig:jetson_lat} shows the object recognition latency with the TX2, and Figure~\ref{fig:jetson_perf_inception_p2} shows the average core utilizations and clock speeds. The inference latency of one Inception v2 model instance with TensorFlow 1.5 on the TX2 is on average 33ms with a batch size of one and 540ms with a batch size of 32.
With two concurrently running model instances, the average inference latency is 82ms with a batch size of one and 620ms with a batch size of 16.

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{jetson_inceptionv2_lat.png}
\caption{Object recognition latency and throughput on Jetson TX2 with the Inception V2 model on TensorFlow 1.5. Images are batched together for higher throughput using one and two model instances. Numerical sub-indexing denotes the number of concurrently processed images.}
\label{fig:jetson_lat}
\end{figure}

In Figure~\ref{fig:jetson_lat}, the latency-throughput curve of one model instance shows a decline in throughput between batch sizes 16 and 17 and between batch sizes 32 and 33. The observed behavior is due to the TensorFlow runtime choosing different kernels for several inference operations when the batch size increases beyond 16 and 32. For an optimal latency-throughput balance, some batch sizes are thus better than others.

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{jetson_perf_inception_b1_2p.png}
\caption{Jetson TX2 average utilization and core clock speeds during continuous inference with one and two concurrent Inception v2 model processes.}
\label{fig:jetson_perf_inception_p2}
\end{figure}

In Figure~\ref{fig:jetson_perf_inception_p2}, the average GPU utilization with one process is 86\%; both Denver cores are idle, and the A57 cores run at an average utilization of 25\%. With two processes, the average GPU utilization is 90\%; the Denver cores are again idle, and the A57 cores run at an average utilization of 34\%. Thus, with two concurrently running Inception v2 inference processes the Jetson TX2 has slightly higher GPU and A57 core utilizations; however, at the same time the A57 cores run at lower clock speeds.

In general, Figure~\ref{fig:jetson_lat} shows that executing two inference model instances concurrently induces execution overhead, and the overall latency increases compared to executing a single model. This is mostly due to the context-switch overhead of scheduling the two processes, as described in~\cite{Amert2017GPUSO}. The observed behavior is the opposite of the results from the Android experiment presented in Figure~\ref{fig:android_inc2_tf_batch}, where two model instances yield better throughput than one instance. With both Android and the Jetson, the latency and throughput changes are not linearly dependent on the batch size. Instead, sometimes an increase of one image in the batch size leads to better throughput with a minimal increase in latency, and on other occasions the increase results in both worse throughput and increased latency. Depending on the inference model and the computation platform, certain batch sizes are thus better than others. The optimal batch size is difficult to estimate without actual measurements.

\subsection{Object Detection Using Jetson TX2}
\label{sec:object_detection_jetson}

We now turn our attention to object detection models and study the inference performance of three different detectors on the Jetson TX2 embedded computing platform.

\subsubsection{Experiment setup}

The three object detectors used in this experiment are SSD Mobilenet v1 COCO, SSD Inception v2 COCO, and VGG16 FasterRCNN PASCAL VOC. The SSD ones are from the TensorFlow Object Detection library, while the FasterRCNN model comes included in the Jetson TX JetPack 3.2RC SDK. The SSD models are executed on top of the TensorFlow 1.5 runtime using the Python API. The FasterRCNN model uses the TensorRT 3.0 runtime via the C++ API. We feed 500 random JPEG images to each model and measure the average throughput.
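The throughput figures reported below come from a simple measurement loop of the following form, sketched here with a hypothetical \texttt{detect} callable that wraps a single inference in either runtime.

\begin{verbatim}
import time

def average_throughput(detect, images):
    """Feed images one by one (batch size one) through detect()
    and return the average frames per second over the whole run.
    detect is a hypothetical wrapper around one inference call."""
    detect(images[0])  # warm-up to exclude one-time setup costs
    start = time.time()
    for img in images:
        detect(img)    # one inference per image
    elapsed = time.time() - start
    return len(images) / elapsed
\end{verbatim}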
\subsubsection{Results}

Figure~\ref{fig:jetson_perf_obj_det} shows the average throughput of the SSD Mobilenet, SSD Inception, and VGG16 FasterRCNN models. In the figure, MaxN-powermode represents the performance mode of the Jetson TX2. For comparison, the measurement is also done in full clock mode, where DVFS is disabled and all cores run at maximal speed all the time. This represents the theoretical maximum attainable with the system. The average throughputs in MaxN-powermode are 2.7fps for SSD Mobilenet v1, 1.1fps for SSD Inception v2, and 3.2fps for FasterRCNN, while in full clock mode the respective numbers are 4.3fps, 3.3fps, and 3.2fps.

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{jetson_perf_obj_det_b.png}
\caption{Jetson TX2 average inference throughput with TensorFlow SSD Inception v2, TensorFlow SSD Mobilenet v1, and TensorRT VGG16 FasterRCNN using a batch size of one image. The related core utilizations and clock speeds for the MaxN-powermode are presented in Figure~\ref{fig:jetson_perf_3}.}
\label{fig:jetson_perf_obj_det}
\end{figure}

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{jetson_perf_3.png}
\caption{Jetson TX2 average core utilizations and clock speeds when running inference using the MaxN-powermode with TensorFlow SSD Inception v2, TensorFlow SSD Mobilenet v1, and TensorRT VGG16 FasterRCNN using a batch size of one image. The related inference throughputs are presented in Figure~\ref{fig:jetson_perf_obj_det}.}
\label{fig:jetson_perf_3}
\end{figure}

Figure~\ref{fig:jetson_perf_3} shows the measured average core utilizations and clock speeds in the MaxN-powermode for the throughput measurements in Figure~\ref{fig:jetson_perf_obj_det}. During continuous inference with the FasterRCNN detector, the GPU runs at an average utilization of 95\% with an average clock speed of 1300MHz. The Denver cores are not active, and the A57 cores are in practice idle. During inference with the SSD detectors, the GPU utilization is 30\% on average, and the CPU cores are all active with utilizations ranging from 6\% to 52\%. These results reveal that the SSD detectors on the TensorFlow runtime are unable to fully utilize the GPU for accelerated inference, whereas the FasterRCNN detector on the TensorRT runtime utilizes the GPU for practically all of the inference operations.

Of the models used in this experiment, SSD Mobilenet v1 is generally reported to execute with the highest throughput and the FasterRCNN model with the lowest~\cite{Huang2017CVPR}. In practice, however, many factors in the interplay of the inference model implementation, runtime support, and hardware dictate the actual performance of a model on given hardware. The FasterRCNN model used in this experiment runs on top of the TensorRT runtime and is able to use nearly 100\% of the GPU capacity for inference. The SSD models contain operations that have no GPU support on the TensorFlow runtime for the Jetson TX2 and thus cannot be executed solely on the GPU; instead, the operations are divided on the fly between the GPU and the CPUs. With DVFS enabled, the SSD models perform worse than the FasterRCNN model, but when DVFS is disabled and the Jetson SoM runs at full clock speeds, the SSD models perform better than the FasterRCNN model.
Different object detectors or platform configurations would likely lead to yet different behavior than what was observed in our experiments. Finding an optimal system configuration is not a simple task and requires characterization of a large parameter space.

\section{Remote Object Detection}
\label{sec:detection}

We characterize the computational behavior of object detectors using the same two platforms, TensorFlow Serving and TensorRT, that we used for object recognition. In these experiments, we use two recent detectors: 1) the SSD meta-architecture combined with the Inception V2 feature extractor, which has been shown to be fast with relatively high accuracy, and 2) Faster R-CNN using either the Inception V2 or the VGG16 feature extractor. We use the model implementations introduced in~\cite{Huang2017CVPR}, as they include the full pipeline from image pre-processing to post-processed object detections.

\subsection{TensorFlow Serving}
\label{sec:TF_serving}

\subsubsection{Experiment setup}

The setup and measurement method in the experiments described in this section are the same as in the object recognition experiments with TensorFlow Serving.

\subsubsection{Device placement}

The previous object detection measurements with the Jetson TX2 (Section~\ref{sec:object_detection_jetson}) were performed using TensorFlow's automatic operation device placement. However, the optimal computing device (e.g., CPU, GPU, TPU, Tensor Core) placement of individual operations in a model is an open research question~\cite{mirhoseini17icml}. The operations are still usually assigned to devices by human experts. In this section we experiment with three strategies: 1) using only the CPU, 2) automatic assignment by the software (TensorFlow), and 3) manual placement. In automatic mode, TensorFlow maps an operation to the GPU if the operation has a GPU implementation. In the manual strategy, we assign the pre- and post-processing (mainly non-maximum suppression) stages to the CPU and handle the convolutional network part of the model on the GPU.

Figure~\ref{fig:barchart_combined} shows how the detection latency rises as we increase the batch size of a single inference using three different GPUs (Nvidia Titan V, Nvidia GTX 1080, and GTX 1050 Ti) and CPUs (Intel i7 8700K, Intel i7 7700K, and Intel i5 2500K). The results indicate that TensorFlow by default does not place the operations optimally, as the manual placement strategy outperforms the automatic placement by roughly 20-50\%. Even the high-end CPUs cannot outperform the GPU strategies. We placed the GTX 1050 Ti GPU in PCs equipped with an older CPU (Intel i5-2500K) and a newer CPU (Intel i7-7700K) to see the combined effect of the GPU and CPU on performance. The less powerful CPU unsurprisingly runs the model much slower on its own and also slows down the GPU-powered scenarios by 10-30\%. The Nvidia Titan V GPU shows its power when the batch size is raised; with the batch size set to one, the Titan V GPU and GTX 1080 have similar latencies.

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{barchart_combined-crop.pdf}
\caption{Inference latency with SSD Inception V2 using TensorFlow Serving with different CPU/GPU combinations and batch sizes.}
\label{fig:barchart_combined}
\end{figure}

\subsubsection{Split model}

In order to understand why the manual device placement runs the model faster, we divided the model into three parts, separating the pre- and postprocessing from the rest of the model.
We then run the split model parts separately. Figure~\ref{fig:breakdown} shows how the post-processing part runs over two times slower when TensorFlow assigns the operations. Preprocessing is equally fast on the CPU and the GPU. The rest of the model includes mainly convolutional network operations, which run considerably faster when executed on the GPU.

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{breakdown_combined-crop.pdf}
\caption{Inference latency breakdown with SSD Inception V2 trained with the COCO dataset (left) and the OID dataset (right) using TensorFlow Serving with Intel i7 7700K CPU and Nvidia GTX 1080 GPU.}
\label{fig:breakdown}
\end{figure}

Table~\ref{tab:opcounts} shows the number of operations run on the CPU and the GPU in the different parts of the model using automatic device placement. The detection part is almost completely run on the GPU, with just a couple of memory copies before and after it, which leads to efficient inference when run on the GPU. Preprocessing has almost an equal number of CPU and GPU operations. However, there are again only a couple of memory copy operations between the CPU and the GPU, which leads to efficient inference when accelerated with a GPU.

\begin{table}[]
\centering
\caption{Device placement and memory copy counts for SSD Inception V2 with automatic placement using TensorFlow. (H=Host, D=Device)}
\label{tab:opcounts}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l|c|c|c|}
\cline{2-4}
& \multicolumn{1}{l|}{\textbf{Preprocessing}} & \multicolumn{1}{l|}{\textbf{Detection}} & \multicolumn{1}{l|}{\textbf{Postprocessing}} \\ \hline
\multicolumn{1}{|l|}{\textbf{CPU ops}} & 89 & 9 & 506 \\ \hline
\multicolumn{1}{|l|}{\textbf{GPU ops}} & 106 & 433 & 5356 \\ \hline
\multicolumn{1}{|l|}{\textbf{MemCpy (HtoD)}} & 6 & 6 & 270 \\ \hline
\multicolumn{1}{|l|}{\textbf{MemCpy (DtoH)}} & 6 & 8 & 734 \\ \hline
\multicolumn{1}{|l|}{\textbf{MemCpy (DtoD)}} & 2 & 40 & 32 \\ \hline
\end{tabular}}
\end{table}

The postprocessing part, on the other hand, shows a high number of memory copies between the CPU and the GPU. This explains the inefficiency of this part of the model when accelerated with a GPU. The memory copies are the result of a non-maximum suppression function that does not have a GPU implementation. This leads to a lot of context switches between the CPU and the GPU, as TensorFlow automatically prefers the GPU implementations of the other operations. The results show that, for models with non-trivial CPU loads, it is crucial to separate the parts of the model that cannot be effectively accelerated on a GPU; otherwise the CPU part might become the bottleneck for the entire model. We further demonstrate this by training the same version of the inference model with a dataset with a larger number of trainable classes, since with SSD the number of classes directly affects the postprocessing load (the number of detections fed to the non-maximum suppression is a function of the number of classes). Figure~\ref{fig:breakdown} also shows how the postprocessing (CPU load) becomes an even more dominant part of the overall latency when the model is trained with the Open Images Dataset (546 classes) instead of the COCO dataset (90 classes).
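For illustration, a minimal sketch of such manual placement with the TensorFlow 1.x Python API is shown below; \texttt{tf.device} is the actual placement mechanism, while the stage-building helpers are hypothetical stand-ins (reduced here to trivial ops) for the corresponding subgraphs of the detection model.

\begin{verbatim}
import tensorflow as tf

# Hypothetical stand-ins for the real subgraph builders of the
# detection model; each is reduced to a trivial op here.
def build_preprocessing(images):
    return tf.image.convert_image_dtype(images, tf.float32)

def build_detector(x):
    return tf.layers.conv2d(x, 4, 3, padding='same')  # placeholder

def build_postprocessing(x):
    return tf.reduce_max(x, axis=[1, 2])  # placeholder for NMS etc.

images = tf.placeholder(tf.uint8, [None, 224, 224, 3])

with tf.device('/cpu:0'):
    preprocessed = build_preprocessing(images)

with tf.device('/gpu:0'):
    # The convolutional part runs efficiently on the GPU.
    raw_detections = build_detector(preprocessed)

with tf.device('/cpu:0'):
    # Non-maximum suppression has no GPU kernel in this setting, so
    # pinning the whole postprocessing stage to the CPU avoids the
    # repeated host-device memory copies seen with automatic placement.
    detections = build_postprocessing(raw_detections)

sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
\end{verbatim}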
We use the manual placement strategy in the next section, where we measure the throughput in relation to the inference latency.

\subsubsection{Impact of Concurrency}

The concurrency of the inference can be adjusted in TensorFlow Serving by increasing the batch size and the number of batching threads. The number of concurrent detections can be calculated by multiplying the batch size by the number of batching threads. Figure~\ref{fig:concurrency_combined} demonstrates how the concurrency affects both the latency and the throughput on both tested systems. The figure shows that the system throughput can be doubled with the right batching configuration. However, the average latency for object detection grows quickly, which is a problem for real-time applications. Similarly to the object recognition benchmarks, the Titan V GPU can handle more concurrent detections than the GTX 1080 within the same latency limit. Depending on the system, a specific batch size must be set to match the latency limit of the application.

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{concurrency_combined-crop.pdf}
\caption{Inference latency/throughput trade-off when increasing concurrency with SSD Inception V2 using TensorFlow Serving with either Intel i7 7700K CPU and Nvidia GTX 1080 GPU or Intel i7 8700K CPU with Nvidia Titan V GPU using manual device placement. (bt = batching thread count)}
\label{fig:concurrency_combined}
\end{figure}

Figure~\ref{fig:concurrency_latency_frcnninc} shows the behavior of another object detection model (FasterRCNN InceptionV2) when increasing the concurrency. This model does not have a significant CPU load, and increasing the batching thread count does not raise the throughput. The throughput of the model also does not grow much when increasing the batch size.

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{concurrency_combined_frcnnincoco-crop.pdf}
\caption{Inference latency/throughput trade-off when increasing concurrency with FasterRCNN Inception V2 using TensorFlow Serving with Intel i7 7700K CPU and Nvidia GTX 1080 GPU or i7 8700K with Nvidia Titan V. (bt = batching thread count)}
\label{fig:concurrency_latency_frcnninc}
\end{figure}

\subsection{TensorRT}

In the previous section, we used threading together with batching in TensorFlow Serving to increase the concurrency when serving a single model. In this section, we contrast this by performing two experiments with TensorRT to characterize the behavior of model inference when the underlying GPU hardware is shared by different processes that could potentially serve different models. In the first experiment, we measure the latency-throughput performance of multiple inference model processes on different GPU hardware. In the second experiment, we measure the latency-throughput performance of multiple inference model processes with and without the MPS server. In both experiments we use 1000 random JPEG images as input to each process.

\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fr_1_rel.png}
\caption{Relative throughput speedup with different GPU hardware when batching images using one process context. Batch size is marked on the figure using subindex numbering. Throughput is normalized individually for each GPU by dividing the throughput values by the throughput of batch size one.}
\label{fig:fr_1_rel}
\end{figure}

\subsubsection{Resource sharing}

In this experiment we compare executing one and two processes on different GPUs, namely the GTX 1050, Titan V, Jetson TX2, and Tesla K40. Figure~\ref{fig:fr_1_rel} shows the relationship between the mean inference latency and the relative throughput of the FasterRCNN detector on the different GPUs with varying image batch size. The batch size varies from 1 to 20 for the GTX 1050, Titan V, and Tesla K40, and from 1 to 8 for the Jetson TX2.
The newest Titan V is able to achieve the best relative increase in throughput from batching, with a relatively small increase in batch inference latency. The Jetson TX2 and the GTX 1050 also benefit from batching, but only to a small degree, and their latency increases rapidly with the batch size. The Tesla K40 attains its highest throughput with a batch size of two, after which the performance degrades. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fr_2_rel.png} \caption{Relative throughput speedup with different GPU hardware when batching images using two process contexts. The total number of concurrent images in processing is marked on the figure using subindex numbering. Throughput is normalized individually for each GPU by dividing the throughput values by the throughput of batch size one.} \label{fig:fr_2_rel} \end{figure} In Figure~\ref{fig:fr_2_rel} the relationship between the inference latency and the relative throughput of two concurrent FasterRCNN processes is measured on different GPUs by varying the image batch size. The batch size is varied from 1 to 20 for the GTX 1050, Titan V, and Tesla K40 GPUs, and from 1 to 4 for the Jetson TX2. With two concurrent processes, the total number of concurrently processed images is double the batch size. The number of concurrently processed images is marked in Figure~\ref{fig:fr_2_rel} using subindex numbering. With two concurrently running processes, the behavior of the relative throughput gain is similar to that with one process. The Tesla K40 achieves the best results with a batch size of two per process. The Jetson TX2 gets a small benefit from batching, but its mean inference time increases rapidly. The GTX 1050 gets a slightly better throughput speedup from batching with two processes than with one process, but the gain is minimal and the latency also increases rapidly. The Titan V performs better with one context. GPU architecture and performance have a remarkable effect on the concurrent execution of inference models. It appears that the new TensorRT runtime is not efficient on the older Tesla K40 GPU. The batching support of the FasterRCNN model implementation is not very good, and the relative speedups from batching remain small on all the GPUs used in the experiment. \subsubsection{MPS} The MPS (Multi-Process Service) server acts as a proxy process between user processes and the GPU~\cite{mps}. API requests from different host processes go through MPS and use the GPU as if they came from one process. This way, for example, some kernels that would otherwise be serialized can run concurrently. In this experiment we measure the latency-throughput performance of multiple processes running the FasterRCNN model on top of the TensorRT runtime on a Tesla K40 GPU. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{fr_tesla2.png} \caption{Throughput and latency of a single CNN process compared to two concurrent processes. The behavior of two concurrent processes is measured with and without the Nvidia Multi-Process Service (MPS).} \label{fig:fr_tesla} \end{figure} Figure~\ref{fig:fr_tesla} presents the measurement results of batching images for inference using two concurrent processes. The measurement is done with and without the MPS server, using the Tesla K40 GPU. As a baseline, the behavior of a single process is also included. Subindex numbering along the measurement points denotes the total number of concurrently processed images. 
The figure shows that the best throughput for multiple contexts is achieved using two execution contexts with batch sizes of two images and without the MPS server. With higher and lower batch sizes, using the MPS server with two contexts yields better results. The lowest latency is achieved with two contexts and without batching (if the one-process baseline is omitted from the comparison). Figures~\ref{fig:fr_1_rel}, \ref{fig:fr_2_rel} and \ref{fig:fr_tesla} show that the behavior of FasterRCNN is not trivial. In Figure~\ref{fig:fr_1_rel} the Titan V GPU achieves a throughput speedup from image batching at the cost of a relatively small latency overhead. With the GTX 1050 and Jetson TX2 the throughput gain via batching is minimal, and the maximum throughput is already reached at small batch sizes. The Tesla K40 GPU attains its maximum throughput with a batch size of two; larger batch sizes give lower throughput with increasing latency. In general, the MPS server helps increase the overall throughput of the system, with the exception of two FasterRCNN processes with a batch size of two images, where the behavior is the opposite. The optimal number of processes and batch sizes depends on the underlying hardware as well as on the inference models and inference runtimes executed in the processes. With small batch sizes, even the question of whether to use the MPS server is not straightforward to answer. Instead, finding the optimal latency-throughput configuration for a system requires profiling its performance. \section{Related work} \label{sec:relatedwork} Some CNN frameworks support mobile devices. Recently Qualcomm announced hardware acceleration support for TensorFlow using their latest Snapdragon SoC~\cite{qualcomm}. Some research prototypes that leverage mobile device special purpose processors (e.g., DSP, GPU) also exist~\cite{lane15ubicomp,lane16deepx,lane16mobicase,LatifiOskouei16mm,huynh2017deepmon}. Other recent research has looked at the computational behavior of CNNs and the impact of the neural network architecture, such as the number of layers, depth, etc., on it~\cite{qi17paleo,dong17dnnmark,howard17mobilenets}. Some research has also looked at how to tune CNNs through model adaptation and optimizations, for instance, in order to tailor them to a specific mobile context and to constrained devices~\cite{li16deepcham,han16mobisys,bhattacharya16sensys,huynh2017deepmon}. The custom solutions presented in these papers for utilizing the GPUs and/or DSPs on mobile devices are likely to become obsolete with the introduction of new tools, such as the Android Neural Network API and the SDKs provided by hardware manufacturers (e.g., Qualcomm SNPE). However, their algorithmic optimizations remain valuable. In this paper, we only used the ``standard'' CNN optimization methods provided by the platforms. Our results also suggest that purely on-device deployment strategies are not desirable in most cases due to memory constraints and long model load and initialization latency. Concerning CNN inference optimization in distributed systems, Han et al.~\cite{han16mobisys} design a cloud-based system for mobile computer vision that executes algorithms either on the device or on a remote server. Since that work was published, much has changed in a short time in terms of state-of-the-art CNNs for computer vision as well as in terms of the software frameworks and hardware acceleration available for neural network inference. 
That work also focused solely on object recognition, while our results suggest that the computational behavior of object detection differs substantially from that of recognition. Zhang et al.~\cite{zhang17nsdi} studied how to make video analytics systems adaptive in order to manage resources efficiently and with controlled degradation of the quality of analytics. However, the only neural network model used in that work was a DNN classifier, and the paper specifies neither its architecture nor the runtime used to execute the model. The paper also does not present any results with GPUs or any other type of accelerated computing. Crankshaw et al.~\cite{crankshaw17nsdi} developed Clipper, a general-purpose prediction serving system designed for low latency, which is comparable to TensorFlow Serving. Mobile on-device inference is not discussed in the paper, and the computer vision part includes only object recognition algorithms. One of the most closely related works is the paper by Huang et al.~\cite{Huang2017CVPR}. The difference from our work is that they focus only on object detection, but they also examine the accuracy of the detectors. They did not study the computational behavior on different computing hardware and runtimes. \section{Remote Object Recognition} \label{sec:remote} The memory requirements, together with the long delay caused by model file loading and neural network setup, limit the attractiveness of on-device inference with mobile devices. Instead, it is often more convenient and even faster to do remote inference, especially when applications need to use several different neural networks. We next study remote object recognition from images using two different standalone server deployments: the TensorFlow Serving system and the Nvidia TensorRT runtime. \subsection{TensorFlow Serving} \subsubsection{Experiment setup} With TensorFlow Serving, we study the throughput and latency of object recognition using the Inception V2 model on high-end desktop servers. We use two machines, one equipped with an Intel i7 7700K CPU and an Nvidia GTX 1080 GPU (Pascal architecture) and another equipped with an Intel i7 8700K CPU and an Nvidia Titan V GPU (the most recent Volta architecture, including Tensor Cores). TensorFlow Serving is built using CUDA 9.1 and CUDNN 7.0. We vary both the batch size and the number of batching threads, the latter of which controls the maximum number of batches processed concurrently. To measure latency, we modified TensorFlow Serving's gRPC code to measure the time spent during each executed inference call at the server side, hence neglecting the client side and network delays. Throughput with a specific batch size is obtained by dividing the batch size by the per-batch latency. In each experiment, the number of input images corresponding to the total of concurrently processed images, i.e., the batch size multiplied by the number of batching threads, is sent to Serving in one shot, and the next set of inputs is sent only after all responses to the previous set have been received, in order not to overload the serving system. In total, 800 images were input in each experiment. \subsubsection{Results} Figure~\ref{fig:concurrency_combined_class} shows how the inference latency and throughput grow when increasing concurrency. The numbers next to the plots indicate the concurrency, i.e., the batch size multiplied by the number of batching threads. As expected, the lowest latency is achieved with only one concurrent detection. 
Both the overall latency and the throughput are optimized using four batching threads on the i7 7700K / GTX 1080 system. The CPU has four cores, so additional threading does not help beyond this point. The i7 8700K on the other system has six cores, which shows in the results. With the Titan V, the system benefits from batching up to significantly higher batch sizes than with the GTX 1080. With a latency limit of 100 ms, the GTX 1080 has a throughput of 850 images per second, while the Titan V can go up to 4000 images per second. The variation of latency was, however, very high on the Titan V with 8 batching threads (coefficient of variation of latency ranging from 0.2 to 2). This variation can be problematic when designing a production system with latency requirements. Compared to the mobile scenario in Section~\ref{sec:mobile}, the server deployment can perform roughly 40 concurrent image classifications in the same time the mobile deployment takes in its fastest scenario. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{concurrency_combined_classification-crop.pdf} \caption{Object recognition latency/throughput trade-off when increasing concurrency with Inception V2 using TensorFlow Serving with an Intel i7 7700K CPU and an Nvidia GTX 1080 GPU (top) compared to an Intel i7 8700K and an Nvidia Titan V GPU (bottom).} \label{fig:concurrency_combined_class} \end{figure} \subsection{TensorRT} \subsubsection{Experiment setup} Similar to TensorFlow Serving, we examine the latency-throughput behavior with the Inception V2 model using the TensorRT inference optimizer runtime on the same two desktop machines. We use TensorRT 3.0.1 with CUDA 9.0 and CUDNN 7.0. In the experiment, we vary the image batch size from 1 to 160 and perform the measurements using full-precision and half-precision floating point representations for the computations. The GTX 1080 has no acceleration support for half-precision floating point operations (they execute at the same speed as full-precision operations). In this experiment we use random input to the model. The throughput and inference time for each batch size are measured by averaging the execution time over 100 batched inputs. \subsubsection{Results} Figure~\ref{fig:inception_tensorrt} presents the results of the experiment. On the Titan V with full-precision representation, the latency starts increasing more rapidly after a batch size of about 20. The increase in throughput from batch size 50 to 100 is minimal, but the latency grows by almost 100\%. With half-precision representation, the throughput increases steadily up to a batch size of 100 images. After that, the throughput does not increase further, but the overall latency grows. The GTX 1080 has a slightly lower latency for single-image inference, but the latency grows quickly when the batch size is increased. TensorRT's half-precision support is able to almost triple the throughput of Inception V2 on the Titan V at the higher batch sizes. Also, with half-precision floats the latency remains relatively low up to higher batch sizes than when using the full-precision representation. The behavior of the latency--throughput relationship is similar in Figures~\ref{fig:inception_tensorrt} and~\ref{fig:concurrency_combined_class}. Increases in batch size lead to a better throughput until the system saturates, after which the throughput no longer increases but the latency grows. The actual saturation point depends on many factors, such as the inference model, the underlying hardware, and the runtime and its configuration. 
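For reference, the sketch below shows how half-precision inference is requested at engine build time. It follows the builder-config style of newer TensorRT Python APIs, which differs from the 3.0.1 release benchmarked here, so the exact calls should be read as assumptions rather than as the interface we used.
\begin{verbatim}
import tensorrt as trt

# Sketch: requesting FP16 at build time, in the style of newer
# TensorRT Python APIs (TensorRT 3.0.1, used in our experiments,
# exposed a different interface; treat these calls as assumptions).
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
config = builder.create_builder_config()
if builder.platform_has_fast_fp16:  # true on Titan V, not on GTX 1080
    config.set_flag(trt.BuilderFlag.FP16)
# The network is then populated (e.g., via an ONNX parser) and
# serialized with builder.build_serialized_network(network, config).
\end{verbatim}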
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{inception_tensorrt.png} \caption{Object recognition latency/throughput trade-off when increasing batch size with Inception V2 on TensorRT with GTX 1080 and Titan V GPUs, using full-precision (float32) and half-precision (float16) representation for inference on the Titan V.} \label{fig:inception_tensorrt} \end{figure}
\section{Introduction} \label{sec:1} The abundance of gas observable in galaxies today can be expressed with dimensionless numbers, normalised to the critical density of the universe. While stars in galaxies account for $\Omega_* = 3\times 10^{-3}$ (e.g. Fukugita {\rm et\ts al. } 1998), the HI gas contributes $\Omega_{HI} \sim 3.5\times 10^{-4}$ (Zwaan {\rm et\ts al. } 2005), and the molecular gas $\Omega_{H_2} \sim 1.2\times 10^{-4}$ (Young {\rm et\ts al. } 1995, Sauty {\rm et\ts al. } 2003, Keres {\rm et\ts al. } 2003, Saintonge {\rm et\ts al. } 2011). Theoretical considerations and semi-analytical models predict that the molecular-to-atomic gas ratio decreases regularly with cosmic time in galaxies (Obreschkow \& Rawlings, 2009, Obreschkow {\rm et\ts al. } 2009). The phase transition to molecular hydrogen can be formulated in terms of pressure (Blitz \& Rosolowsky 2006), and the surface density, and consequently the pressure, is higher in high-z galaxies. The modeling leads to a dependence of H$_2$/HI varying as (1+z)$^{1.6}$. This is essentially due to the expectation that the size of galaxies grows as (1+z)$^{-1}$ with cosmic time. The evolution with z of $\Omega_{HI}$ in galaxies is not yet known from emission, but can be derived from the damped Lyman-$\alpha$ absorption in front of quasars, since these systems are thought to correspond to galaxies. Albeit with large error bars, the abundance of HI appears to be about constant from z=4 to z=0 (Zwaan {\rm et\ts al. } 2005). It is however expected to vary strongly at higher z, when cold gas settles in galaxies, through accretion and cooling, mergers, etc. At these early epochs, molecules might have difficulties forming, since metals and dust build up only slowly, but the exact processes are not yet well known. What is better known is the cosmic evolution of the star formation rate density, from UV to far-infrared light, and its decrease by a factor of 20 since z=2 (e.g. Hopkins \& Beacom 2006, Bouwens {\rm et\ts al. } 2011). How does this SFR evolution relate to the cosmic cold gas evolution? Is it linked to the HI or H$_2$ density, and/or to the star formation efficiency (SFE)? \section{High-z molecular observations} \label{sec:2} For about 20 years now, molecular gas has been observed in high-redshift galaxies. Due to the lack of sensitivity, mostly lensed galaxies were discovered first (cf. the review by Solomon \& Vanden Bout 2005). More and more ``normal'' objects, on the main sequence of star-forming galaxies, are observed now, and their number will increase considerably with ALMA. The detection of CO lines at high redshift is made easier by the existence of the rotational ladder, where the flux of the higher transitions can be much higher than that of the fundamental line. This is not the case for the HI gas with its single 21cm line, which will have to await SKA to be detectable at high redshift. \subsection{Starbursts and ULIRGs} \label{ULIRG} Until very recently, only very luminous galaxies in the far-infrared (LIRGs and ULIRGs) were detected in the CO lines at high redshift, due to limited sensitivity. In the local universe, it is now well established that ULIRGs are starbursts triggered by galaxy interactions and mergers (e.g. Solomon {\rm et\ts al. } 1997). At high redshift, the global star formation rate increases rapidly, and even ULIRGs are not all starbursts. It is now thought that the starburst mode is likely to represent only 10\% of the stars formed at z=2, the cosmic peak of the star formation activity (Rodighiero {\rm et\ts al. 
} 2011). Greve {\rm et\ts al. } (2005) had already shown that the SFE (defined as the ratio of the FIR luminosity, taken as an indicator of SFR, to the CO luminosity, an indicator of the gas mass) increases significantly with redshift, reaching maxima around z=2 for submillimeter galaxies (SMGs), with an SFE up to 2 orders of magnitude higher than for local LIRGs. The gas consumption time-scale, being the inverse of the SFE, could then fall to 20 Myr, instead of the local average of 2 Gyr. \begin{figure}[ht] \includegraphics[width=5.5cm]{SFE-fill.ps} \includegraphics[width=5.5cm]{GASF-fill.ps} \caption{{\bf Left}: Evolution of the star formation efficiency (SFE) with redshift. {\bf Right}: Cosmic evolution of the gas to stellar mass ratio, for the LIRG and ULIRG compilation of Combes {\rm et\ts al. } (2013). The green area corresponds to the CO-detected points, and the hatched area also includes the 3$\sigma$ upper limits. The width of the shaded regions corresponds to the statistical scatter in N$^{-1/2}$. The red curve is indicative of the logarithmic variations of the cosmic star formation rate density (Hopkins \& Beacom 2006).} \label{fig:galevol} \end{figure} The redshift range between z=0.2 and z=1 is very important for the cosmic gas evolution, since it is the period when the cosmic star formation density drops by a factor of 10, and it corresponds to 40\% of the age of the universe. Unfortunately, this domain was not easily observed, because of atmospheric lines and the need for sensitive 2mm instruments. A sample of 69 ULIRGs was observed in different CO lines with the IRAM-30m precisely in this redshift range (Combes {\rm et\ts al. } 2011, 2013). From the galaxies where the gas excitation is known, and from the dust masses derived from the continuum emission, the adoption of the ULIRG CO-to-H$_2$ conversion factor is justified (e.g. Solomon {\rm et\ts al. } 1997). This ratio is 5.7 times smaller than the standard ratio adopted for Milky Way-like galaxies. The average molecular mass is however $1.45\times 10^{10}$ M$_\odot$, an order of magnitude higher than in the Milky Way. Compiling this sample with other LIRGs and ULIRGs, both the molecular gas to stellar mass ratio and the SFE significantly increase with redshift, by factors of $\sim$ 3 from z = 0 to 1, as shown in Figure \ref{fig:galevol}, suggesting that both factors play an important role and complement each other in the cosmic star formation evolution. \begin{figure}[ht] \centerline{ \includegraphics[width=9cm]{h2z-walter.ps}} \caption{Evolution of the cosmic H$_2$ mass density versus redshift, comparing observational limits obtained from blind detections in the Hubble Deep Field North by Decarli {\rm et\ts al. } (2014), shown in blue-shaded areas, to predictions from semi-analytical cosmological models (Obreschkow {\rm et\ts al. } 2009; Lagos {\rm et\ts al. } 2011) and empirical predictions by Sargent {\rm et\ts al. } 2014 (grey-shaded areas). The red upper limit corresponds only to galaxies selected via optical spectroscopic redshifts. The evolution of the atomic gas mass density ($\rho_{HI}$) and of the stellar mass density ($\rho$(M$_*$)) are also plotted (from Walter {\rm et\ts al. } 2014).} \label{fig:h2z-walter} \end{figure} \subsection{Main sequence galaxies} \label{MS} Not all star-forming galaxies at z=1-2 have a high SFE. Some galaxies, selected only from their optical colors, were detected in the CO lines with surprisingly high CO luminosities (Daddi {\rm et\ts al. } 2008). 
These galaxies, although still in the ULIRG range, have a low gas excitation (Dannerbauer {\rm et\ts al. } 2009), and are relatively extended. They are interpreted as disk-like galaxies with a steady star formation rate, while the most excited ULIRGs are nuclear starbursts. It is possible that the Milky Way-like conversion ratio applies to these objects, which would further lower their SFE. However, the adoption of a bimodal conversion ratio leads to an artificial bimodal star formation regime, separating the starbursts from the more quiescent disks with a gap of an order of magnitude in gas consumption time-scales. In reality, there must exist a continuous conversion ratio, depending on gas density, temperature, and other factors like metallicity, and the SF regimes are certainly continuous too. A continuity of galaxy properties between the two modes of star formation, main sequence and starburst, is developed further by Daddi {\rm et\ts al. } (2013) and Sargent {\rm et\ts al. } (2014). Although starbursts have a larger SFE, it is not easy to know whether the cause is a lack of gas (maybe the consequence of a short boost of star formation), or an excess of young stars. If the starburst is triggered by a merger, numerical simulations show that gas is driven inwards by gravity torques from the outer reservoir, and more gas is then observable (e.g. Di Matteo {\rm et\ts al. } 2007, Montuori {\rm et\ts al. } 2010). An excess of fresh gas in star-forming galaxies is also supported by the fundamental mass-metallicity relation, which depends precisely on the SFR (Mannucci {\rm et\ts al. } 2010). Starbursts also have a larger molecular gas to stellar mass ratio, so their elevated SFR is due both to a larger gas content and to a larger SFE. The latter could be due to the larger central concentration of the gas, and this will be clarified through resolved SFR/gas density studies. The PHIBSS large program on the IRAM interferometer (Tacconi {\rm et\ts al. } 2010, 2013, see also this conference) has targeted a sample of massive star-forming galaxies, likely to be on the main sequence as defined in the stellar mass-SFR diagram (e.g. Wuyts {\rm et\ts al. } 2011). For the 52 CO-detected objects at z=1-3, the gas mass fraction is found to increase with z, up to 50\%, and to decrease with mass. Most of the objects look like disks with regular rotation, and are steady star-forming disks rather than starbursts, without any interaction or merger. Since the molecular gas depletion time-scale is typically 0.7 Gyr and varies as (1+z)$^{-1}$, the star formation must be fueled by gas accretion episodes, which are frequent at high and moderate redshift (e.g. Combes 2014). The resolved Kennicutt-Schmidt relation obtained in a few objects is compatible with a linear relation, with a lower depletion time-scale at high z (Freundlich {\rm et\ts al. } 2013, Genzel {\rm et\ts al. } 2013). In all these massive star-forming galaxies, atomic gas cannot dominate the cold gas, since the sum of the molecular and stellar masses is so close to the dynamical mass. Unless the CO-to-H$_2$ conversion factor is largely in error, the H$_2$/HI ratio has indeed increased with z, as predicted by models. Another recent study supports this conclusion: Decarli {\rm et\ts al. } (2014) have carried out a blind molecular line survey in the Hubble Deep Field North, scanning the whole 3mm band with the IRAM interferometer. 
Their blind detection of 17 CO lines, together with the upper limit obtained by stacking the observations towards spectroscopically identified objects, constrains the CO luminosity functions at the corresponding redshifts. They deduce that optical/MIR bright galaxies contribute less than 50\% of the star formation rate density at 1 $<$ z $<$ 3. Their derived evolution of the H$_2$ mass density is compared to models in Figure \ref{fig:h2z-walter}. A recent 870$\mu$m continuum survey with ALMA of SMGs in the Extended Chandra Deep Field South (Swinbank {\rm et\ts al. } 2014) has found that the well-detected sources (S$_{870}>$ 2mJy) are on average ULIRGs with SFR=300 M$_\odot$/yr. The extrapolation of the counts down to S$_{870}>$ 1mJy shows that these sources contribute 20\% of the cosmic star formation density over z=1-4. Deriving H$_2$ masses from dust masses, the average SFE is found to be rather high, with a depletion time-scale of 130 Myr. This is to be compared to the compilation by Bauermeister {\rm et\ts al. } (2013), who observed normal star-forming galaxies in the redshift range z=0.05-0.5. They find a depletion time-scale of 760 Myr for normal galaxies, and of 60 Myr for starbursts. Their derived molecular gas to stellar mass ratio is plotted in Figure \ref{fig:EGNOG}, and is compatible with the model-expected behavior. \begin{figure}[ht] \centerline{ \includegraphics[width=8cm]{EGNOG.ps}} \caption{Evolution of the molecular gas to stellar mass ratio (r$_{mgas}$) versus z, from the compilation of Bauermeister {\rm et\ts al. } (2013). Symbols are filled for main sequence galaxies, and empty for starbursts. The 7 bold black triangles are the averages for the different redshift bins. The shaded grey zone indicates the expected region for normal galaxies, with the solid curve being the average.} \label{fig:EGNOG} \end{figure} \section{Models and simulations} \label{sec:3} Semi-analytical models (SAM) have been run, within the standard $\Lambda$CDM model, to compute the cosmic evolution of the cold gas content in galaxies. Lagos {\rm et\ts al. } (2011) show that the best recipe to control the phase transition from atomic to molecular gas is the pressure model (Blitz \& Rosolowsky 2006), rather than the theory-based model of Krumholz {\rm et\ts al. } (2009), which takes into account the UV-dissociation of molecules and their reformation on grains. In their best-fit model, the H$_2$/HI ratio rises above one at high redshift, as in Obreschkow {\rm et\ts al. } (2009). Fu {\rm et\ts al. } (2010, 2012) claim that the Krumholz {\rm et\ts al. } (2009) recipe is better, but over a limited mass range. Their best fit requires that the depletion time-scale remain 1-2 Gyr at high redshift. Using a simple phenomenological model, Feldmann (2013) claims that the relation between SFR and H$_2$ content is likely to be linear at all redshifts. This assumption provides the best fit to the data, i.e. the cosmic star formation history, the evolution of the mass-metallicity relation, and the gas-to-stellar mass ratio in galaxies. This means that the variation of the SFE with redshift might be too small to be detected. Models where the SFR relation is non-linear with gas density produce too many stars and metals early on to be compatible with the observations. 
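In explicit notation, the linear star formation law discussed above reads \[ \mathrm{SFR} = \frac{M_{\rm H_2}}{t_{\rm dep}}, \qquad \mathrm{SFE} \equiv \frac{\mathrm{SFR}}{M_{\rm H_2}} = t_{\rm dep}^{-1}, \] with a depletion time-scale $t_{\rm dep}$ independent of the gas content, while a non-linear Kennicutt-Schmidt law corresponds to a density-dependent SFE.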
To obtain the right star formation histories, gas accretion must be limited to a halo mass range between a critical minimum mass M$_c$(z), below which photoionisation limits the baryon fraction, and an upper limit M$_{shock} \sim 2\times 10^{12}$ M$_\odot$, above which the gas is heated by shocks when entering the galaxy (Birnboim \& Dekel, 2003). At early epochs, for redshifts higher than 2, the gas accretion time-scale is very short, and the SFR is not high enough to consume the accreted gas, which accumulates in galaxies. After z=2, the SFR has increased to its maximum; within the halo mass range between M$_c$ and M$_{shock}$, the depletion time-scale is comparable to the accretion time-scale, and the SFR is limited by gas accretion (Bouch\'e {\rm et\ts al. } 2010). In this global model, an equilibrium is established between gas inflow, outflow, and the star formation rate, equating the depletion time to the accretion time. Stellar masses, metallicity, and the cosmic gas evolution are moderated by this equilibrium. The relation between SFR and stellar mass on the main sequence has been examined in detail from 25 studies in the literature, and the corresponding slope is a decreasing function of cosmic time (Speagle {\rm et\ts al. } 2014). The star formation histories derived from these are delayed-$\tau$ models, where the SFR first increases linearly with time during the first half of the age of the universe, and then decreases exponentially. With a SAM approach, Popping {\rm et\ts al. } (2012, 2014) also tested several recipes for the molecular gas and star formation evolution; both pressure-based and metallicity-based models represent the observations rather well, with some variations for low-mass galaxies. To compare with high-z observations, they deduce the gas content from the SFR, through inversion of the Kennicutt-Schmidt (KS) relation, but in this case the best fit is obtained for a density-dependent SFE. Also, the CO-to-H$_2$ conversion factor should be continuous, as a function of the physical properties of galaxies. That the SFE should depend on the gas surface density (a non-linear KS relation) is certainly a solution to explain why the SFE varies with redshift. Galaxies were more compact at high z (Newman {\rm et\ts al. } 2012, Morishita {\rm et\ts al. } 2014), so not only was their surface density higher for a given gas content, but their dynamical time was also shorter, which favors dynamical triggers. Another feature is the volume density dependence, which could play a role even for a linear KS relation. It has already been observed that the SFE declines with radius in galaxy disks at z=0, possibly due to the flaring of gas disks (Bigiel {\rm et\ts al. } 2010, Dessauges-Zavadsky {\rm et\ts al. } 2014). \bigskip There are still large uncertainties in the key factors determining the cosmic evolution of the cold gas content in galaxies: not only the SFR laws as a function of density and the phase transition between atomic and molecular gas, but also the star formation efficiency, regulated by feedback mechanisms due to supernovae or AGN, and the environmental quenching that slows down gas accretion. We are just at the beginning of the ALMA era, and our knowledge of these physical processes will progress exponentially. \begin{acknowledgement} My great thanks to the organisers, David Block, Ken Freeman and Bruce Elmegreen, for this wonderful meeting with wide scientific interests. The European Research Council is gratefully acknowledged for the Advanced Grant Program Num 267399-Momentum. 
\end{acknowledgement} \parindent=0pt {\bf References} \parindent=0pt {\small Bauermeister A., Blitz L., Bolatto A., {\rm et\ts al. } 2013, ApJ, 768, 132 \\ Bigiel, F., Leroy, A., Walter, F., {\rm et\ts al. } 2010, AJ, 140, 1194 \\ Birnboim Y., Dekel A.: 2003, MNRAS 345, 349 \\ Blitz L., Rosolowsky E.: 2006, ApJ 650, 933 \\ Bouch\'e N., Dekel A., Genzel R. {\rm et\ts al. } 2010, ApJ 718, 1001 \\ Bouwens, R. J., Illingworth, G. D., Oesch, P. A. {\rm et\ts al. }: 2011, ApJ 737, 90 \\ Combes F., Garc{\'{\i}}a-Burillo S., Braine J. {\rm et\ts al. } 2011, A\&A, 528, A124 \\ Combes F., Garc{\'{\i}}a-Burillo S., Braine J. {\rm et\ts al. } 2013, A\&A, 550, A41 \\ Combes F., 2014, Arkansas conf., arXiv:1309.1603 \\ Daddi E., Dannerbauer, H., Elbaz, D. {\rm et\ts al. } 2008, ApJ 673, L21 \\ Daddi E., Sargent M.~T., B{\'e}thermin M., Magdis G., 2013, IAUS, 295, 64 \\ Dannerbauer, H., Daddi, E., Riechers, D. A. {\rm et\ts al. } 2009, ApJ 698, L178 \\ Decarli, R., Walter, F., Carilli, C. {\rm et\ts al. } 2014, ApJ 782, 78 \\ Dessauges-Zavadsky M., Verdugo C., Combes F., Pfenniger D.: 2014, A\&A in press \\ Di Matteo, P., Combes, F., Melchior A-L., Semelin, B.: 2007, A\&A 468, 61 \\ Feldmann R. 2013, MNRAS 433, 1910 \\ Freundlich J., Combes F., Tacconi L. {\rm et\ts al. } 2013, A\&A 553, A130 \\ Fu, J., Guo, Q., Kauffmann, G., Krumholz, M. R. 2010, MNRAS 409, 515 \\ Fu, J., Kauffmann, G., Li, C., Guo, Q. 2012, MNRAS 424, 2701 \\ Fukugita M., Hogan C. J., Peebles P. J. E., 1998, ApJ, 503, 518 \\ Genzel, R., Tacconi, L. J., Kurk J. {\rm et\ts al. } 2013, ApJ 773, 68 \\ Greve, T. R., Bertoldi, F., Smail, I. {\rm et\ts al. } 2005, MNRAS, 359, 1165 \\ Hopkins A. M., Beacom J. F.: 2006, ApJ 651, 142 \\ Keres, D., Yun, M. S., Young, J. S. 2003, ApJ, 582, 659 \\ Krumholz M. R., McKee C. F., Tumlinson J.: 2009, ApJ 699, 850 \\ Lagos C. d P., Baugh C. M., Lacey C. G. {\rm et\ts al. } 2011, MNRAS 418, 1649 \\ Mannucci, F., Cresci, G., Maiolino, R. {\rm et\ts al. } 2010, MNRAS 408, 2115 \\ Montuori M., Di Matteo, P., Lehnert, M. D., Combes, F., Semelin, B.: 2010, A\&A 518, A56 \\ Morishita T., Ichikawa, T., Kajisawa, M.: 2014, ApJ 785, 18 \\ Newman A. B., Ellis, R. S., Bundy, K., Treu, T.: 2012, ApJ 746, 162 \\ Obreschkow D., Croton D., de Lucia G. {\rm et\ts al. } 2009, ApJ 698, 1467 \\ Obreschkow D., Rawlings S.: 2009, ApJ 696, L129 \\ Popping G., Caputi K.~I., Somerville R.~S., Trager S.~C., 2012, MNRAS, 425, 2386 \\ Popping G., Somerville R.~S., Trager S.~C., 2014, arXiv:1308.6764 \\ Rodighiero, G., Daddi, E., Baronchelli, I. {\rm et\ts al. } 2011, ApJ 739, L40 \\ Saintonge, A., Kauffmann, G., Kramer, C. {\rm et\ts al. } 2011, MNRAS 415, 32 \\ Sargent, M. T., Daddi, E., Bethermin, M. {\rm et\ts al. }: 2014, ApJ in press, arXiv:1303.4392 \\ Sauty S., Casoli, F., Boselli, A. {\rm et\ts al. } 2003, A\&A, 411, 381 \\ Solomon P., Downes D., Radford S., Barrett J.: 1997, ApJ 478, 144 \\ Solomon, P. M., Vanden Bout, P. A.: 2005, ARA\&A 43, 677 \\ Speagle J. S., Steinhardt C. L., Capak P. L., Silverman J. D.: 2014, ApJ submitted, arXiv:1405.2041 \\ Swinbank A.~M., Simpson J. M., Smail I., {\rm et\ts al. } 2014, MNRAS, 438, 1267 \\ Tacconi L. J., Genzel R., Neri R. {\rm et\ts al. } 2010, Nature 463, 781 \\ Tacconi L. J., Neri R., Genzel R. {\rm et\ts al. } 2013, ApJ 768, 74 \\ Walter F., Decarli R., Sargent M. {\rm et\ts al. } 2014, ApJ, 782, 79 \\ Wuyts, S., F\"orster Schreiber, N. M., van der Wel, A. {\rm et\ts al. } 2011, ApJ 742, 96 \\ Young, J. S., Xie, S., Tacconi, L. {\rm et\ts al. } 1995, ApJS 98, 219 \\ Zwaan, M. 
A., Meyer, M. J., Staveley-Smith, L., Webster, R. L.: 2005, MNRAS 359, L30 } \end{document}
\section{Introduction}\label{Sec:Intro} The natural problem of the computation of continuous cohomologies for non-commutative structures on manifolds has proven to be a subject of great geometrical interest \cite{BS, Fei, Fuks, Wag}. For Riemann surfaces, and even for higher-dimensional complex manifolds, the classical cohomology of holomorphic vector fields is often trivial \cite{Kaw, Wag}. In \cite{Fei}, Feigin obtained various results concerning the (co)homology of cosimplicial objects associated to holomorphic vector fields $Lie(M)$. The vertex algebra \cite{BZF, FHL, K} theory of automorphic forms \cite{Fo} goes back to the celebrated Moonshine problem \cite{MT}. Most $n$-point characteristic functions \cite{FS, FHL, KZ, MT, Zhu} for vertex algebras deliver examples of modular forms with respect to appropriate groups attached to the geometry of the corresponding underlying manifolds. $n$-point functions are subject to the action of differential operators with specific analytical behavior \cite{BKT, GK, GN, Ob}. In this paper we develop ideas and previous results on the cohomology of Jacobi forms originating from algebraic and geometrical procedures in conformal field theory \cite{FS, TUY}. This paper aims at developing algebraic, differential-geometric, and topological methods for the investigation of cohomology theories of Jacobi forms generated by vertex algebras, with applications in algebraic topology, number theory and mathematical physics. In most cases of lower-genus Riemann surfaces, there exist algebraic formulas relating $n$-point functions with $(n-1)$-point functions in a linear way for fixed genus $g$ \cite{Zhu, MT, MTZ}. The reduction cohomology is defined via reduction formulas \cite{Zhu, BKT} relating $n$-point characteristic functions with $(n-1)$-point functions. Our new algebraic and geometrical approach to the computation of the reduction (co)homology involves vertex algebras and applications of techniques \cite{Huang, Y} used in conformal field theory. The computation of the reduction cohomology of modular forms is useful in further studies of constructions in algebraic topology, and of the analytical and geometrical structure of spaces of modular forms originating from the description of vertex algebras by means of characteristic functions on manifolds. The main aim of the reduction cohomology is to describe non-commutative structures in terms of commutative ones. In contrast to the more geometrical methods in the classical cohomology of Lie algebras \cite{Fuks}, the reduction cohomology pays more attention to the differential, analytical, and automorphic structure of chain complex elements constructed by means of characteristic functions for non-commutative elements of vertex algebras with complex parameters. Computational methods involving reduction formulas have proved their effectiveness in conformal field theory \cite{KMI, KMII, MT, MT1, MTZ, TZ, DLM, Miy}. Though the Zhu reduction formulas were obtained for ordinary $n$-point functions of vertex operators, they also work for multi-parametric automorphisms inserted into traces written for the torus case. The coefficients in the reduction formulas are then expressed in terms of quasi-modular forms. Since quasi-modular forms are holomorphic on the complex upper half-plane $\mathbb{H}$, it follows that $n$-point Jacobi functions are also holomorphic. The plan of this paper is the following. We define the reduction cohomology, the chain condition, and the coboundary operator for complexes of Jacobi forms. 
Specific examples of coboundary operators are provided, subject to various conditions on vertex algebra elements. A statement relating the $n$-th reduction cohomology to analytic extensions of solutions of a counterpart of the Knizhnik--Zamolodchikov equation is proven, and its geometrical meaning is found. In the appendices we recall the notions of quasi-modular forms, reduction formulas for Jacobi functions, and vertex algebras. Quasi-Jacobi forms have found applications in vertex algebra theory in \cite{HE}, for characteristic functions of topological $N=2$ vertex algebras, Gromov-Witten potentials \cite{Kaw}, the computation of elliptic genera \cite{Lib} related to Jacobi zero-point functions, and Landau-Ginzburg orbifolds \cite{KYY}. \section{Chain complex for vertex algebra $n$-point functions} \label{chain} In this section we give the definition of a chain complex associated to the space of Jacobi forms \cite{BKT} defined by vertex algebras. \subsection{Spaces of $n$-point Jacobi functions via vertex operators} Let us fix a vertex algebra $V$. We denote by ${\bf v}_n=(v_1, \ldots, v_n) \in V^{\otimes n}$ a tuple of vertex algebra elements (see Appendix \ref{vosa} for the definition of a vertex algebra). Mark $n$ points ${\bf p}_n=(p_1, \ldots, p_n)$ on the torus $\T$. Denote by ${\bf z}_n=(z_1, \ldots, z_n)$ local coordinates around ${\bf p}_n \in \T$. Let us introduce the notation ${\bf x}_n= \left({\bf v}_n, {\bf z}_n \right)$. In \cite{MTZ} we considered the orbifold Jacobi $n$-point functions associated with a vertex operator superalgebra \cite{K} (see Appendix \ref{vosa}), with an automorphism inserted in traces. Let $\sigma \in \mathrm{Aut}(V)$ denote the parity automorphism \begin{equation} \sigma a=(-1)^{p(a)}a. \label{sigma} \end{equation} Let $g\in \mathrm{Aut}(V)$ denote any other automorphism which commutes with $\sigma $. Let $W$ be a $V$-module. Assume that $W$ is stable under both $\sigma $ and $g$, i.e., $\sigma$ and $g$ act on $W$. \begin{definition} The $n$-point Jacobi function on $W$ for ${\bf x}_{n} \in V^{\otimes n}\times \C^n$, and $g\in \mathrm{Aut}(V)$, is defined by \begin{eqnarray} \mathcal Z_{W}^J({\bf x}_n; g, \tau ) =\mathrm{STr}_{W}\left( {\bf Y}_{W}\left({\bf q}^{L_V(0)} {\bf v}, {\bf q}\right)_n \; g \; q^{L_V(0)-c/24}\right), \label{npointfunction} \end{eqnarray} $q=\exp (2\pi i\tau )$, $q_{i}=\exp (z_{i})$, $1\leq i\leq n$. \end{definition} Here $\mathrm{STr}_{W}$ denotes the supertrace defined by \begin{equation} \mathrm{STr}_{W}(X)=Tr_{W}(\sigma X)=Tr_{W_{\bar{0}}}(X)-Tr_{W_{\bar{1}}}(X). \label{Supertrace} \end{equation} The orbifold Jacobi zero-point function for general $g$ is then \begin{equation} \mathcal Z_{W}(g, \tau)=\mathrm{STr}_{W}\left( g \; q^{L_V(0)-c/24}\right) . \label{ZMg} \end{equation} In particular, let $V$ be a vertex operator algebra with Virasoro vector $\omega$ of central charge $c_V$, and consider an element $J\in V_{1}$ such that $J(0)$ acts semisimply on $V$. For ${\bf v}_n\in V^{\otimes n}$, on $\T$, and a weak $V$-module $W$ \cite{MT}, the Jacobi $n$-point function, $n \ge 0$, is \begin{align} \mathcal Z_W^J \left( {\bf x}_n; B \right) ={\rm Tr}_{W} \left( {\bf Y}\left( e^{{\bf z} \;L_V(0)} {\bf v}, e^{\bf z} \right)_n \zeta^{J(0)} q^{L(0)}\right), \label{eq:npt} \end{align} where $B$ denotes the parameters of $\mathcal Z_W^J$, including $z$ and $\tau$, and \[ \zeta=q_z=e^{\tpi z}. 
\] The Jacobi one-point function, for $v_1 \in V$, is given by \begin{align} \mathcal Z_W^J\left (x_1; B \right) ={\rm Tr}_{W} \left( o_{\,0}(v_1) \; \zeta^{J(0)}\; q^{L(0)} \right), \notag \label{eq:1pt} \end{align} which does not depend on $z_1$. Here $o_{\,0}(v_1)=v_1(\wt \;v_1 - 1)$ (see Appendix \ref{vosa}), with $z\in \C$, and $\tau$ being the modular parameter of $\T$. \begin{definition} For a $V$-module $W$, we consider the spaces of $n$-point Jacobi forms \[ C^n(W)= \left\{ \mathcal Z_W^J \left({\bf x}_n; B \right) \right\}, \quad n \ge 0. \] \end{definition} The coboundary operator $\delta^n ({\bf x}_{n+1})$ on the space $C^n(W)$ is defined according to the reduction formulas (see Section \ref{Sec:Zhu} and Appendix \ref{redya}) \cite{BKT, MTZ} for $V$-module $W$ Jacobi forms. \begin{definition} For $n \ge 0$, and any $x_{n+1} \in V\times \C$, define \begin{eqnarray} \label{delta_operator} \delta^n({\bf x}_{n+1}): C^n(W)&{\rightarrow }& C^{n+1}(W), \end{eqnarray} which, in terms of the non-commutative operators $T_k(v[m].)$, $k \ge 0$, is given by the reduction formula \begin{eqnarray} \label{poros} \delta^n \left({\bf x}_{n+1}\right) \mathcal Z_W^J \left( {\bf x}_n; B \right) &=& \sum\limits_{k=0 \atop m \ge 0}^{n} f_{k, m}({\bf x}_{n+1}; B) \; T_k(v_{n+1}[m].) \mathcal Z^J_W \left( {\bf x}_n; B \right). \end{eqnarray} \end{definition} The operators $T_k(v[m].)$ are insertion operators of vertex algebra modes $v[m].$, $m \ge 0$, into $\mathcal Z^J_W \left( {\bf x}_n; B\right)$ at the $k$-th entry: \begin{eqnarray} T_k(v[m].)\; \mathcal Z^J_W \left( {\bf x}_n; B \right) &=& \mathcal Z^J_W \left( T_{k} \left(v[m]\right). {\bf x}_{n}; B \right), \nonumber \end{eqnarray} where we use the notation \[ (\Gamma.)_k\; {\bf x}_{n} = \left(x_1, \ldots, \Gamma.x_k, \ldots, x_n \right), \] for an operator $\Gamma$ acting on the $k$-th entry. \begin{remark} The reduction formulas have an interpretation in terms of torsors \cite{BZF} (Chapter 6). In such a formulation, ${\bf x}_n$ is a torsor with respect to the group of transformations of the space $V^{\otimes \; n}\times \C^n$. In particular, from \eqref{poros} we see that the $T_k\left(u[m].\right)$-operators act on the $V^{\otimes \; n}$-entries of ${\bf x}_n$, while the $f_{k, m}({\bf x}_{n+1}; B)$-functions act on ${\bf z}_n$ of $\mathcal Z^J_W ({\bf x}_n; B)$ as complex functions. \end{remark} For $n \ge 0$, let us denote by ${\mathfrak V}_n$ the subset of all ${\bf x}_{n} \in V^{\otimes n}\times \C^n$ such that the chain condition \begin{equation} \label{torba} \delta^{n+1}({\bf x}_{n+2})\; \delta^n ({\bf x}_{n+1}) \; \mathcal Z^J_{W} \left({\bf x}_n; B \right)=0, \end{equation} for the coboundary operators \eqref{poros} for the complexes $C^n(W)$ is satisfied. Explicitly, the chain condition \eqref{torba} leads to an infinite ($n \ge 0$) set of equations involving the functions $f_{k, m}\left({\bf x}_{n+1}; B \right)$ and $\mathcal Z^J_W \left({\bf x}_n; B \right)$: \begin{eqnarray} \label{conditions} \left( \sum\limits_{k', k=0 \atop m', m \ge 0}^{n+1, n} f_{k', m'} \left({\bf x}_{n+2}; B \right) f_{k, m} \left( {\bf x}_{n+1}; B\right) T_{k'}(v_{n+2}[m'].) \; T_k(v_{n+1}[m].) \right) \mathcal Z^J_W \left( {\bf x}_n; B \right)=0. \nn \end{eqnarray} \begin{remark} Conditions \eqref{conditions} contain finite series and narrow the space of compatible $n$-point functions. It follows from the considerations of \cite{BKT} that the subspaces of $C^n(W)$, $n \ge 0$, of $n$-point Jacobi forms such that the condition \eqref{conditions} is fulfilled for the reduction cohomology complexes are non-empty. 
Indeed, the conditions \eqref{conditions} represent an infinite ($n \ge 0$) set of functional-differential equations (with a finite number of summands) for convergent complex functions $\mathcal Z^J_W ({\bf x}_n; B )$ defined for $n$ local complex variables on $\T$, with functional coefficients $f_{k, m} \left({\bf x}_{n+1}; B \right)$ on $\T$ (in our examples in subsections \ref{main}--\ref{vosec}, these are generalizations of elliptic functions). Note that the vertex algebra elements ${\bf v}_n\in V^{\otimes n}$, as non-commutative parameters, are not present in the final form of the functional-differential equations, since they are incorporated into matrix elements, traces, etc. According to the theory of such equations \cite{FK, Gu}, each equation in the infinite set \eqref{conditions} always has a solution in the domains where it is defined. Thus, there always exist solutions of \eqref{conditions} defining $\mathcal Z^J_W \in C^n(W)$, and these spaces are not empty. \end{remark} \begin{definition} The spaces with conditions \eqref{conditions} constitute a semi-infinite chain complex \begin{equation} \label{buzova} 0 \rmap C^0 \stackrel {\delta^{0}( x_{1})} {\rmap} C^1 \stackrel {\delta^{1}({\bf x}_2)} {\rmap} \ldots \; \; \stackrel{\delta^{n-2} ({\bf x}_{n-1})}{\rmap} C^{n-1} \stackrel{\delta^{n-1} ({\bf x}_{n})}{\rmap} C^n \stackrel{\delta^{n} ({\bf x}_{n+1})}{\rmap} \ldots. \end{equation} For $n \ge 1$, we call the corresponding cohomology \begin{equation} \label{pusto} H^n_J(W)={\rm Ker}\; \delta^{n}({\bf x}_{n+1})/{\rm Im}\; \delta^{n-1}({\bf x}_n), \end{equation} the $n$-th reduction cohomology of a vertex algebra $V$-module $W$ on $\T$. \end{definition} \section{Reduction formulas and examples of coboundary operators for Jacobi $n$-point functions} \label{Sec:Zhu} \subsection{The main formula for the coboundary operator} \label{main} In this subsection, using Propositions \ref{prop:apnpt} and \ref{prop:apnpt0} of \cite{BKT} (see Appendix \ref{redya}), we introduce the definition of a coboundary operator associated to the most general (up to certain assumptions) reduction formulas available for Jacobi forms. Recall the definition of square bracket vertex operators from Appendix \ref{squa}. Summing \eqref{eq:2ZhuRed} for $Z_{W}^J \left( v_{n+1}[-l].x_{1}, {\bf x}_{2, n}; B\right)$ over $l$, multiplied by $z^{l-1}_{n+1}$, and using the associativity of vertex operators, we formulate the following definition of the coboundary operator. \begin{definition} Let $v_{n+1}\in V$ be such that \[ v_{n+1}[l].v_{k}=0, \] for $l \ge 1$, $1 \le k \le n$, and such that \[ J(0)v_{n+1}=\alpha v_{n+1}, \] with $\alpha \in \C$. Then the coboundary operator is given by \eqref{poros} with the summation extended over $l \in \Z$, i.e., \begin{eqnarray} \label{poros1} \delta^n \left({\bf x}_{n+1}\right) \mathcal Z_W^J \left( {\bf x}_n; z, \tau \right) = \sum\limits_{l \in \Z \atop {m \ge 0, \atop k=0} }^{n} f_{k, m}({\bf x}_{n+1}; B) \; T_k(v_{n+1}[m].)\; \mathcal Z^J_W \left( {\bf x}_n; z, \tau \right), \end{eqnarray} \begin{align} & f_0({\bf x}_{n+1}; B)\; T_0(v[m].) = \sum\limits_{l \in \Z} (-1)^{l+1} \; \delta_{ \alpha z, {\mathbb{Z}\tau}+\Z }\; \frac{\lambda^{l-1}}{(l-1)!} \; z_{n+1}^{l-1} \; T_0(o_\lambda(v_{n+1})), \notag \\ &f_{k, m} ({\bf x}_{n+1}; B) = \sum\limits_{l \in \Z} (-1)^{m+1}\binom{m+l-1}{m} \; z_{n+1}^{l-1} \; F_{k, m}({\bf x}_{n+1}; l, \alpha z, \tau), \end{align} where \begin{eqnarray*} && F_{k, m}({\bf x}_{n+1}; l, \alpha z, \tau) = \notag \delta_{0,m} \; T^{1-\delta_{\alpha z, {\mathbb{Z}\tau}+\Z} }. \widetilde{E}_{m+l,\lambda} \left( (1-\delta_{\alpha z, {\mathbb{Z}\tau}+\Z} )\; \alpha z, \tau\right) \nn && \qquad + T^{ 1- \delta_{\alpha z, {\mathbb{Z}\tau}+\Z}}.\widetilde{P}_{m+l, \left(1- \delta_{\alpha z, {\mathbb{Z}\tau}+\Z} \right) \lambda} \left(\frac{z_1 -z_k}{\tpi}, (1- \delta_{\alpha z, {\mathbb{Z}\tau}+\Z} ) \;\alpha z, \tau \right), \notag \end{eqnarray*} where the tilde denotes application of the operator $T$, i.e., \begin{eqnarray*} T.{E}_{m+l,\lambda} &=& \widetilde{E}_{m+l,\lambda}, \nn T.{P}_{m+l,\lambda} &=& \widetilde{P}_{m+l,\lambda}, \end{eqnarray*} and $\widetilde {E}_{m+k, \lambda}( \alpha z, \tau)$, $\widetilde{P}_{m+l, \lambda}(z', \alpha z, \tau)$ are given by \eqref{eq:Gkl} and \eqref{eq:PellPm}, respectively. \end{definition} \subsection{The simplest coboundary operator} \label{simplest} Under a certain further restriction on $v_{n+1}$, we are able to define the simplest version of the coboundary operator for the reduction cohomology. Using Propositions \ref{prop:Zhured} and \ref{prop:Zhured0} proven in \cite{BKT} (see Appendix \ref{redya}), we obtain \begin{definition} For $v_{n+1}$ with $J(0) v_{n+1} =\alpha v_{n+1}$, $\alpha \in \C$, the coboundary operator is defined by \eqref{poros} with \begin{align} &f_0({\bf x}_{n+1}; \alpha z, \tau) \; T_0(v_{n+1}[m]) =\delta_{\alpha z, \lambda\tau+\mu\in {\mathbb{Z}\tau+\Z}}\; e^{-{z_{n+1}}\lambda} \; T_0(o_\lambda(v_{n+1})), \nn & f_{k, m}({\bf z}_{n+1}; \lambda, k, \alpha z, \tau ) = T^{1-\delta_{\alpha z, \lambda\tau+\mu\in {\mathbb{Z}\tau+\Z}}} {P}_{m+1,\lambda} \left( \frac{z_{n+1}-z_k}{\tpi}, (1- \delta_{\alpha z, \lambda\tau+\mu\in {\mathbb{Z}\tau+\Z}}) \; \alpha z, \tau\right), \label{eq:ZhuRed0} \end{align} with $\widetilde P_{m+1,\lambda}\left( {\bf z}_{n+1}, \alpha z ,\tau\right)$ defined in \eqref{eq:PellPm}. \end{definition} \subsection{Coboundary operator for a shifted Virasoro vector} \label{shifted} Suppose that $J(0)a=\alpha a$ for $\alpha\not \in \mathbb{Z}\setminus \{0\}$, and define a $V$-automorphism $g\in\Aut(V)$ by \begin{align} g=e^{ \tpi \frac{\mu}{\alpha}J(0)}, \notag \label{eq:gaut} \end{align} for $\mu\in \Z$ for which $ga=a$. Then Corollary~\ref{cor:ZeroRes} follows from Proposition~6 of \cite{MTZ}, which states that \begin{equation} \sum_{k=1}^{n} \tr_{W} \left( T_k (a[0].) {\bf Y} \left( e^{ {\bf z} L_V(0)} v, e^{\bf z} \right)_n g\; q^{L(0)} \right)=0. \label{eq:Zg0} \end{equation} For \[ J(0)v_k=\alpha_k v_k, \] $k=1,\dots,n$, and for the case of a shifted Virasoro vector \cite{MTZ} (see Appendix \ref{vosa}), we can in a similar fashion relate Proposition~\ref{prop:Zhured0} to Theorem~2 of \cite{MTZ} for the shifted Virasoro grading $L_h(0)$ and with $g=e^{\tpi \frac{\mu}{\alpha}J(0)}$. Using Theorem~2 of \cite{MTZ} we give the following \begin{definition} The shifted coboundary operator for the shifted Jacobi form \begin{align} &\mathcal Z_{W}^J({\bf x}_{n+1}; h, \mu, \alpha, z, \tau ) = \tr_{W} \left( {\bf Y} \left( e^{\bf z \; L_{h}(0)} {\bf v}, e^{\bf x} \right)_{n+1}\;g \; q^{L_{h}(0)}\right), \end{align} is given by \eqref{poros} with \begin{align} & f_0({\bf x}_{n+1}; B) \; T_0(v_{n+1}[m].)= T_0(o_{h}(v_{n+1})), \\ & f_{k, m}({\bf x}_{n+1}; B)= P_{m+1} \left(\frac{z_{n+1}-z_k}{\tpi} ,\tau \right) \label{eq:Zhuorb} \end{align} and $T_k(v_{n+1}[m]_h.)$, where \begin{equation} \label{torsa} o_{h}(v_{n+1})=v_{n+1}(\wt_{h}(v_{n+1})-1)=v_{n+1}(\wt(v_{n+1})-1+\mu)= o_{\mu}(v_{n+1}). \end{equation} \end{definition} \subsection{Vertex operator superalgebra case} \label{vosec} For the case of orbifold Jacobi $n$-point functions, we have the following. Let $v_{n+1}$ be homogeneous of weight $\wt(v_{n+1})\in \mathbb{R}$ and define $\phi \in U(1)$ by \begin{equation} \phi =\exp (2\pi i \; \wt(v_{n+1})). \label{phi} \end{equation} We also take $v_{n+1}$ to be an eigenfunction under $g$ with \begin{equation} gv_{n+1}=\theta ^{-1}v_{n+1} \label{theta} \end{equation} for some $\theta \in U(1)$, so that \begin{equation} g^{-1}v_{n+1}(k)g=\theta v_{n+1}(k). \label{gv(k)} \end{equation} Then we have \begin{definition} Let $v,\theta $ and $\phi $ be as above. Then the coboundary operator is defined by \begin{eqnarray} && f_0({\bf x}_{n+1}; B)\; T_0(v_{n+1}[m])= \delta _{\theta ,1} \delta _{\phi ,1} T_0(o_0(v_{n+1})), \nn && f_{k, m} ({\bf x}_{n+1}; B)= p(v_{n+1}, {\bf v}_{k-1}) \; P_{m+1}\left[ \begin{array}{c} \theta \\ \phi \end{array} \right] (z_{n+1}-z_{k},\tau ) \end{eqnarray} where the deformed Weierstrass functions are defined in (\ref{Pkuv}) (see Appendix \ref{defo}). \end{definition} Note that, as shown in \cite{MTZ}, the orbifold Jacobi function case is related to the shifted Virasoro vector case above. \section{Cohomology} \label{cohomology} In this section we compute the reduction cohomology defined by \eqref{buzova}--\eqref{pusto}. \subsection{The $n$-th cohomology and analytic extensions of solutions to Knizhnik-Zamolodchikov equations} The main result of this paper is the following. \begin{proposition} Under the assumptions of subsections \ref{main}--\ref{vosec}, the $n$-th reduction cohomology of the space of Jacobi forms for a $V$-module $W$ is given by the space of analytic continuations of solutions $\mathcal Z_{W}^J \left({\bf x}_n; B \right)$ of the Knizhnik-Zamolodchikov equation \begin{equation} \label{poroserieroj} \sum_{k=0}^{n}\sum_{m\ge 0} f_{k, m}\left({\bf x}_n; B\right) T_k(v_{n+1}[m]_\beta.)\; \mathcal Z_M^J\left({\bf x}_n; B\right)=0, \end{equation} with $x_i \notin {\mathfrak V}_{i}$ for $1 \le i \le n$, and $\beta=h$ for a shifted Virasoro element and zero otherwise. These are given by the spaces of quasi-modular forms in terms of series of deformed Weierstrass functions, defined in Appendix \ref{defo}, recursively generated by the reduction formulas \eqref{poros}. \end{proposition} \begin{proof} The $n$-th reduction cohomology is defined by the subspace of $C^n(W)$ of functions $\mathcal Z^J_W \left({\bf x}_n; B \right)$ satisfying \eqref{poroserieroj}, modulo the subspace of $C^{n}(W)$ of $n$-point functions $\mathcal Z^J_W \left({\bf x}'_n; B\right)$ resulting from \begin{eqnarray} \label{poroserieroj_2} \mathcal Z^J_W \left( {\bf x}'_n; B \right) &=& \left( \sum\limits_{k=1}^{n-1} \sum\limits_{m \ge 0} f_{k, m} \left( {\bf x}_n; B \right) \; T^{(g)}_k(v'_n[m]_\beta) \right) \; \mathcal Z^J_W\left( {\bf x}'_{n-1}; B\right). \nn&& \end{eqnarray} Subject to the other fixed parameters, $n$-point functions are completely determined by all choices of ${\bf x}_n \in V^{\otimes n}\times \C^n$ which do not belong to $\mathfrak V$. Thus, the reduction cohomology can be treated as depending only on the set of ${\bf x}_n$, with an appropriate action of endomorphisms generated by $x_{n+1}$. Consider a non-vanishing solution $\mathcal Z^J_W \left({\bf x}_n; B \right)$ of \eqref{poroserieroj} for some ${\bf x}_n$. 
Let us use the reduction formulas \eqref{poros} recursively for each $x_i$, $1 \le i \le n$, of ${\bf x}_n$ in order to express $\mathcal Z^J_W \left({\bf x}_n; B \right)$ in terms of the partition function $\mathcal Z^J_W\left( B\right)$, i.e., we obtain \begin{equation} \label{topaz} \mathcal Z^J_W \left({\bf x}_n; B \right)= {\mathcal D}({\bf x}_n; B) \; \mathcal Z^J_W\left( B\right), \end{equation} as in \cite{MT, MTZ, TZ}. Thus, we require $x_i \notin {\mathfrak V}_{i}$ for $1 \le i \le n$ at each stage of the recursion procedure reproducing \eqref{topaz}; otherwise $\mathcal Z^J_W \left({\bf x}_n; B \right)$ is zero. Therefore, $\mathcal Z^J_W \left({\bf x}_n; B \right)$ is explicitly known and is represented as a series of auxiliary functions ${\mathcal D}({\bf x}_n; B)$ depending on $V$. Consider now $\mathcal Z^J_W \left({\bf x}'_n; B \right)$ given by \eqref{poroserieroj_2}. It either vanishes, when $v_{n-i} \in {\mathfrak V}_{n-i}$, $2 \le i \le n$, or is given by \eqref{topaz} with ${\bf x}'_n$ arguments. The way the reduction relations \eqref{poros} were derived in \cite{Y} is exactly the same as in the vertex algebra derivation \cite{KZ, TK} of the Knizhnik-Zamolodchikov equations. Namely, one considers a double integration of $\mathcal Z^J_W \left({\bf x}_n; B \right)$ along small circles around two auxiliary variables, with the action of appropriate reproduction kernels inserted. This procedure then leads to recursion formulas relating $\mathcal Z^J_W({\bf x}_{n+1}; B)$ and $\mathcal Z^J_W({\bf x}_{n}; B)$, with functional coefficients depending on the nature of the vertex algebra $V$. Thus, \eqref{poroserieroj} can be seen as a version of the Knizhnik-Zamolodchikov equation. In \cite{Y, MT, MTZ, BKT}, formulas for $n$-point functions in various specific examples of $V$ and configurations of Riemann surfaces were explicitly obtained. In terms of $x_{n+1}$, by using \eqref{Ydefn}, one transfers in \eqref{poroserieroj} the action of $v_{n+1}$-modes into an analytic continuation of the multi-valued holomorphic functions $\mathcal Z^J_W \left({\bf x}_n; B\right)$ to domains $\T_{n} \subset \T$ with $z_{i} \neq z_{j}$ for $i\ne j$. Namely, in \eqref{poroserieroj}, the operators $T_k(v_{n+1}[m]_\beta.)$ act by certain modes $v_{n+1}[m].$ of a vertex algebra element $v_{n+1}$ on ${\bf v}_n \in V^{\otimes n}$. Using vertex algebra associativity, we express the action of the operators $T_k(v_{n+1}[m].)$, given in terms of modes $v_{n+1}[m]$ inside vertex operators, in terms of actions of $V$-modes on the whole vertex operator, at the expense of a shift of their formal parameters ${\bf z}_n$ by $z_{n+1}$, i.e., ${\bf z}'_n= {\bf z}_n + z_{n+1}$. Note that under such associativity transformations the $v$-part of ${\bf x}_n$, i.e., ${\bf v}_n$, does not change. Thus, the $n$-th reduction cohomology of a $V$-module $W$ is given by the space of analytic continuations of $n$-point functions $\mathcal Z^J_W \left({\bf x}_n; B \right)$ with ${\bf x}_{n-1} \notin {\mathfrak V}_{n-1}$ that are solutions of the Knizhnik-Zamolodchikov equations \eqref{poroserieroj}. These are the analytic extensions for the Knizhnik-Zamolodchikov equations generated by $x_{n+1}$, with coefficients provided by the functions $f_{k, m}\left( {\bf x}_{n+1}; B \right)$ on $\T$. \end{proof} One can make a connection with the first cohomology of grading-restricted vertex algebras in terms of derivations, and with the second cohomology in terms of square-zero extensions of $V$ by $W$ \cite{Huang}. 
In certain cases of coboundary operators, we are able to compute the $n$-th cohomology even more explicitly by using reduction formulas in terms of generalized elliptic functions. In particular, for orbifold $n$-point Jacobi functions associated to a vertex operator superalgebra described in Appendix \ref{vosa}, we obtain from \cite{MTZ} \begin{corollary} For ${\bf v}_n \notin {\mathfrak V}_n$, the $n$-th cohomology is given by the space of determinants of $n \times n$-matrices containing deformed elliptic functions depending on $z_i-z_j$, $1 \le i, j \le n$, for all possible combinations of ${\bf v}_n$-modes. \end{corollary} \subsection{Geometrical meaning of reduction formulas and conditions \eqref{conditions}} In this section we show that the Jacobi forms reduction formulas \eqref{poros} appear as multipoint connections on a vector bundle over $\T$, generalizing ordinary holomorphic connections on complex curves \cite{BZF}. Let us recall the notion of a multipoint connection, which will be useful for identifying the reduction cohomology in this subsection. Motivated by the definition of a holomorphic connection for a vertex algebra bundle (cf. Section 6, \cite{BZF} and \cite{Gu}) over a smooth complex curve, we introduce the definition of the multiple point connection over $\T$. \begin{definition} \label{mpconnection} Let $\V$ be a holomorphic vector bundle over $\T$, and $\mathcal T_0 \subset \T$ be its subdomain. Denote by ${\mathcal S \mathcal V}$ the space of sections of $\V$. A multi-point connection $\mathcal G$ on $\V$ is a $\C$-multi-linear map \[ \mathcal G: \T^{\times n} \times V^{\otimes n} \to \C, \] such that for any holomorphic function $f$, and two sections $\phi(p)$ and $\psi(p')$ of $\V$ at points $p$ and $p'$ on $\mathcal T_0 \subset \T$ respectively, we have \begin{equation} \label{locus} \sum\limits_{q, q' \in \mathcal T_0 \subset \T } \mathcal G\left( f(\psi(q)).\phi(q') \right) = f(\psi(p')) \; \mathcal G \left(\phi(p) \right) + f(\phi(p)) \; \mathcal G\left(\psi(p') \right), \end{equation} where the summation on the left-hand side is performed over loci of points $q$, $q'$ on $\mathcal T_0$. We denote by ${\mathcal Con}^{n}$ the space of $n$-point connections defined over $\T$. \end{definition} Geometrically, for a vector bundle $\V$ defined over $\T$, a multi-point connection \eqref{locus} relates two sections $\phi$ and $\psi$ at points $p$ and $p'$ with a number of sections on $\mathcal T_0 \subset \T$. \begin{definition} We call \begin{equation} \label{gform} G(\phi, \psi) = f(\phi(p)) \; \mathcal G\left(\psi(p') \right) + f(\psi(p')) \; \mathcal G \left(\phi(p) \right) - \sum\limits_{q, q' \in \mathcal T_0 \subset \mathcal T} \mathcal G\left( f(\psi(q')).\phi(q) \right) \end{equation} the form of an $n$-point connection $\mathcal G$. The space of $n$-point connection forms will be denoted by $G^n$. \end{definition} Here we prove the following \begin{lemma} \label{pisaka} Jacobi $n$-point forms \eqref{npointfunction} generated by the reduction formulas \eqref{poros} are $n$-point connections on the space of $g$-deformed sections of the vertex algebra bundle $\V$ associated to $V$. For $n\ge 0$, the $n$-th reduction cohomology of Jacobi forms is given by \begin{equation} \label{chek} H^n_J(W) = H^n_J(\mathcal S\V_g)= {\mathcal Con}^{n}/G^{n-1}, \end{equation} which is isomorphic to the cohomology of the space of deformed $\V$-sections.
\end{lemma} \begin{remark} Lemma \ref{pisaka} is a deformed-section vertex algebra version of the main proposition of \cite{BS, Wag}, i.e., of the Bott--Segal theorem for Riemann surfaces. \end{remark} \begin{proof} In \cite{BZF} (Chapter 6, subsection 6.5.3) the vertex operator bundle $\V$ was explicitly constructed. It is easy to see that $n$-point connections are holomorphic connections on the bundle $\V$ with the following identifications. For non-vanishing $f(\phi(p))$ let us set \begin{eqnarray} \label{identifications} &\mathcal G =\mathcal Z^J_W \left({\bf x}_n; B \right), \nn &\psi(p')=\left({\bf x}_{n+1} \right), \nn &\phi(p)=\left({\bf x_n} \right), \nn & \mathcal G\left( f(\psi(q)).\phi(q') \right) = T_k(v[m]_\beta.)\; \mathcal Z^J_W \left( {\bf x}_n; B \right), \nn &- \frac{f(\psi(p'))}{ f(\phi(p))} \; \mathcal G \left(\phi(p) \right)= f_0\left({\bf x}_{n+1}; B \right) \; T_0(o_\lambda(v_{n+1})) \; \mathcal Z^J_W \left( {\bf x}_n; B \right), \nn &f^{-1}(\phi(p)) \sum\limits_{{q}_n, { q'}_n \atop \in {\mathcal T}_0 \subset \T} \mathcal G\left( f(\psi(q)).\phi(q') \right) = \sum\limits_{k=1 \atop m \ge 0}^{n} f_{k, m} \left({\bf x}_{n+1}; B \right) T_k(v[m]_\beta.)\; \mathcal Z^J_W \left( {\bf x}_n; B\right). \nn & \end{eqnarray} Thus, the formula \eqref{identifications} gives \eqref{poros}. Recall \cite{BZF} the construction of the vertex algebra bundle $\V$. Here we use a shifted Virasoro vector version of it. According to Proposition 6.5.4 of \cite{BZF}, one canonically (i.e., coordinate independently) associates ${\rm End} \; \V$-valued sections $\mathcal Y_p$ of the $g$-twisted bundle $\V^*$ (the bundle dual to $\V$). The intrinsic, i.e., coordinate independent, vertex algebra operators are defined in \cite{BZF} by \[ \langle u, \left({\mathcal Y}_{\bf p}^*(i({\bf v}_n) )\right)_n \; g \; v \rangle = \langle u, {\bf Y}({\bf x}_n) v \rangle, \] relating them to matrix elements of a number of vertex operators on appropriate punctured disks around points with local coordinates ${\bf z}_n$ on $\T$. The space of such $\V$-sections for each $n$ is described by the identifications \eqref{identifications}. Taking into account the construction of Section 6 (subsection 6.6.1, in particular, construction 6.6.4, and Proposition 6.6.7) of \cite{BZF}, we see that $n$-point functions are connections on the space of sections of $\V$, and the reduction cohomology \eqref{pusto} is represented by \eqref{chek}. \end{proof} The geometrical meaning of \eqref{conditions} consists in the following. Since in \eqref{poros} the operators act on vertex algebra elements only, we can interpret it as a relation on modes of $V$ with functional coefficients. In particular, all operators $T$ change vertex algebra elements by the action either of $o(v)=v_{{\wt} v - 1 }$, or of positive modes $v[m].$, $m \ge 0$. Recall that $n$-point Jacobi forms are quasi-modular forms. Moreover, the reduction formulas \eqref{poros} can be used to prove modular invariance for higher-$n$ Jacobi $n$-point functions. Due to the automorphic properties of $n$-point functions, \eqref{conditions} can also be interpreted as relations among modular forms. It also defines a complex variety in ${\bf z}_n \in \C^{n}$ with non-commutative parameters ${\bf v}_n \in V^{\otimes n}$. As with most identities for $n$-point functions (e.g., the trisecant identity \cite{Fay, Mu} and the triple product identity \cite{K, MTZ}), \eqref{conditions} has its own algebraic-geometric meaning.
The condition \eqref{conditions} relates finite series of vertex algebra correlation functions on $\T$ with elliptic functions \cite{Zhu, MT, MTZ}. Since $n$-point Jacobi forms are quasi-modular forms, we treat \eqref{conditions} as a source of new identities on such forms. \section*{Acknowledgments} The author would like to thank H. V. L\^e and A. Lytchak for related discussions. \section{Appendix: Quasi-Jacobi forms}\label{Sec:Quasi} In this appendix we recall definitions and properties of Jacobi and quasi-Jacobi forms \cite{BKT}. First, we provide the definition of ordinary Jacobi forms \cite{EZ}. Let $\HH$ be the upper-half plane. \begin{definition} Let $k$, $m\in\N_0$, and $\chi$ be a rational character for a one dimensional representation of the Jacobi group $\SL(2,\Z)\ltimes \Z^2$. A {holomorphic Jacobi form} of weight $k$ and index $m$ on $\SL_2(\Z)$ with rational multiplier $\chi$ is a holomorphic function \[ \phi: \C \times \HH \to\C \] which satisfies the following conditions. Let \[ \gamma= \begin{psmallmatrix}a&b\\c&d\end{psmallmatrix} \in \SL_2(\Z), \qquad \gamma.\tau = \frac{a\tau+b}{c\tau+d}. \] Then, for $(\lambda,\mu)\in\Z\times\Z$, \begin{equation} \label{eq:Jactr} \phi\Big|_{k,m}\left( \gamma, (\lambda,\mu)\right)= \chi \left(\gamma, (\lambda,\mu)\right)\phi, \end{equation} where for a function $\phi: \C \times \HH \to\C$, \begin{multline*} \phi\Big|_{k,m}\left(\gamma,(\lambda,\mu)\right)(z,\tau) \\ = (c\tau+d)^{-k}e\left(-\frac{cm(z+\lambda\tau+\mu)^2}{c\tau+d}+m\left(\lambda^2\tau+2\lambda z\right)\right) \phi\left(\frac{z+\lambda\tau+\mu}{c\tau+d}, \gamma. \tau \right), \end{multline*} with $e(w)=e^{2\pi iw}$. For a multiplier $\chi$, \[ \chi\begin{pmatrix}a&b\\c&d\end{pmatrix} =\chi\left(\begin{pmatrix}a&b\\c&d\end{pmatrix}, (0, 0)\right),\quad \chi(\lambda, \mu)=\chi\left(\begin{pmatrix}1&0\\0&1\end{pmatrix}, (\lambda, \mu)\right), \] and $N_1, N_2\in\N$ are uniquely defined by \[ \chi\begin{pmatrix} 1&1\\0&1\end{pmatrix}=e^{2\pi i\frac{a_1}{N_1}}, \quad\chi(0, 1)=e^{2\pi i\frac{a_2}{N_2}}, \] where $a_j \in\N$ with $\gcd (a_j, N_j)=1$. The function $\phi$ has a Fourier expansion of the form \begin{align} \phi \left(z ,\tau \right)= \sum_{n\in \N_0+\rho_{1}} \sum_{r\in\Z+\rho_{2}\atop{r^2\leq 4nm}} c(n,r)q^{n}\zeta^{r}, \label{eq:phiFourier} \end{align} with $q=e(\tau)$, $\zeta =e(z)$, where $\rho_{j}=\frac{a_j}{N_j}\pmod \Z$ with $0\le \rho_j<1$. \end{definition} We next consider quasi-Jacobi forms as introduced in \cite{Lib}. \begin{definition} An almost meromorphic Jacobi form of weight $k$, index $0$, and depth $(s,t)$ is a meromorphic function in $\C\{q,\zeta\}[z^{-1},\frac{z_2}{\tau_2},\frac{1}{\tau_2}]$ (with $z=z_1+iz_2$, $\tau=\tau_1+i\tau_2$) satisfying \eqref{eq:Jactr}, and which has degree at most $s$, $t$ in $\frac{z_2}{\tau_2}$, $\frac{1}{\tau_2}$, respectively. \end{definition} \begin{definition} A { quasi-Jacobi form} of weight $k$, index $0$, and depth $(s,t)$ is defined by the constant term of an almost meromorphic Jacobi form of index $0$ considered as a polynomial in $\frac{z_2}{\tau_2}$, $\frac{1}{\tau_2}$. \end{definition} \subsection{Modular and elliptic functions} For a variable $x$, set \[ D_x = \frac{1}{\tpi} \frac{\partial}{\partial x}, \] and $q_x = e^{2\pi i x}$.
Define for \[ m\in\mathbb{N}=\{\ell\in \mathbb{Z}: \ell>0\}, \] the elliptic Weierstrass functions \begin{equation} \label{eq:Pm} \begin{aligned} P_{1}(w,\tau) =&-\sum_{n\in\Z\backslash \{0\}}\frac{q_w^n}{1-q^{n}}-\frac{1}{2}, \\ P_{m+1}(w,\tau) =&\frac{\left(-1\right)^m}{m!} D_w^m \left(P_{1}(w,\tau)\right) =\frac{(-1)^{m+1}}{m!}\sum_{n\in\Z\backslash \{0\}}\frac{n^m q_w^n}{1-q^{n}}. \end{aligned} \end{equation} Next, we have \begin{definition} The modular Eisenstein series $E_{k}(\tau)$ are defined by $E_{k}=0$ for $k$ odd, and for even $k\ge 2$ by \begin{align} E_{k}(\tau)&=-\frac{ B_{k}}{k!}+\frac{2}{(k-1)!} \sum\limits_{n\geq 1}\frac{n^{k-1}q^{n}}{1-q^{n}}, \notag \label{eq:Eisen} \end{align} where $B_{k}$ is the $k$-th Bernoulli number defined by \[ (e^z-1)^{-1} = \displaystyle{\sum\limits_{k\geq 0}\frac{B_{k}}{k!}z^{k-1}}. \] \end{definition} It is convenient to define $E_{0}=-1$. $E_{k}$ is a modular form for $k>2$ and a quasi-modular form for $k=2$. In particular, \begin{align} E_{k}(\gamma \tau )=(c\tau +d)^{k} E_{k}(\tau )-\delta_{k,2} \frac{ c(c\tau +d)}{\tpi}. \notag \label{eq:Engam} \end{align} \begin{definition} For $w$, $z\in\C$, and $\tau \in \HH$ let us define \begin{align*} \widetilde{P}_1(w, z,\tau) =-\sum_{n\in\Z}\frac{q_w^n}{1-q_z q^n}. \end{align*} \end{definition} We also have \begin{definition} \begin{equation} \widetilde{P}_{m+1}(w,z,\tau) =\frac{(-1)^{m}}{m!} D_w^m \left(\widetilde{P}_1(w,z,\tau)\right) =\frac{(-1)^{m+1} }{m!} \sum_{n\in\Z}\frac{n^m q_w^n}{1-q_zq^n}. \label{eq:Pmtilde} \end{equation} \end{definition} It is thus useful to give \begin{definition} For $m\in\mathbb{N}_0$, let \begin{eqnarray} \label{eq:PellPm} P_{m+1, \lambda}\left(w,\tau\right) &=& \frac{(-1)^{m+1}}{m!}\sum_{n\in \Z\backslash \{-\lambda\}}\frac{n^mq_w^n}{1-q^{n+\lambda}}. \end{eqnarray} \end{definition} One notes that \[ P_{1,\lambda}\left(w,\tau\right)=q_w^{-\lambda}(P_1(w,\tau)+1/2), \] with \begin{align} P_{m+1,\lambda}\left(w,\tau\right)&=\frac{(-1)^m}{m!} D_w^m \left(P_{1,\lambda}\left(w,\tau\right)\right). \notag \end{align} We also consider the expansion \begin{align} \notag P_{1,\lambda}(w,\tau)=\frac{1}{\tpi w}-\sum_{k\ge 1} E_{k,\lambda}(\tau )(\tpi w)^{k-1}, \end{align} where we find \cite{Zag} \begin{align} E_{k,\lambda}(\tau )&=\sum_{j=0}^{k}\frac{\lambda^j}{j!} E_{k-j}(\tau). \label{eq:Gkl} \end{align} \begin{definition} We define another generating set $\widetilde{E}_k(z,\tau)$ for $k\ge 1$, together with $E_2(\tau)$, given by \cite{Ob} \begin{align} \widetilde{P}_1(w,z,\tau) = \frac{1}{\tpi w}-\sum_{k\ge 1}\widetilde{E}_k(z,\tau) (\tpi w)^{k-1}, \label{eq:P1Gn} \end{align} where we find that for $k\ge 1$, \begin{equation} \label{eq:Gktild} \begin{aligned} \widetilde{E}_k(z,\tau) =& -\delta_{k,1}\frac{q_z}{q_z-1} -\dfrac{B_{k}}{k!} +\frac{1}{(k-1)!} \sum_{m,n\ge 1}\left(n^{k-1} q_z^{m}+(-1)^{k}n^{k-1}q_z^{-m} \right)q^{mn}, \end{aligned} \end{equation} and $\widetilde{E}_0(z,\tau)=-1$. \end{definition} \subsection{Deformed elliptic functions} \label{defo} In this subsection we recall the definition of deformed elliptic functions \cite{DLM, MTZ}. Let $(\theta ,\phi )\in U(1)\times U(1)$ denote a pair of modulus-one complex parameters with $\phi =\exp (2\pi i\lambda )$ for $0\leq \lambda <1$.
\begin{definition} For $z\in \mathbb{C}$, $\tau \in \mathbb{H}$ we define deformed Weierstrass functions for $k\geq 1$ as \begin{equation} P_{k}\left[ \begin{array}{c} \theta \\ \phi \end{array} \right] (z, \tau )=\frac{(-1)^{k}}{(k-1)!}\sum\limits_{n\in \mathbb{Z}+\lambda }^{\prime }\frac{n^{k-1}q_{z}^{n}}{1-\theta ^{-1}q^{n}}, \label{Pkuv} \end{equation} for $q=q_{\tau}=e^{2\pi i\tau}$, where $\sum\limits^{\prime }$ means we omit $n=0$ if $(\theta ,\phi )=(1,1)$. \end{definition} The functions (\ref{Pkuv}) converge absolutely and uniformly on compact subsets of the domain $\left\vert q\right\vert <\left\vert q_{z}\right\vert <1$ \cite{DLM}. For $k\geq 1$, \begin{equation} P_{k}\left[ \begin{array}{c} \theta \\ \phi \end{array}\right] (z,\tau )=\frac{(-1)^{k-1}}{(k-1)!}\frac{d^{k-1}}{dz^{k-1}}P_{1}\left[ \begin{array}{c} \theta \\ \phi \end{array}\right] (z,\tau ). \label{Pkplus1} \end{equation} \section{Appendix: Reduction formulas for Jacobi $n$-point functions} \label{redya} In this appendix we recall the reduction formulas derived in \cite{MTZ, BKT}. \subsection{Vertex operator superalgebra case} Recall from \cite{MTZ} the following \begin{proposition} \label{Propa[0]comm} Suppose that $v_{n+1}\in V$ is homogeneous of integer weight $\wt(v_{n+1})\in \mathbb{Z}$. Then we have \begin{equation} \sum\limits_{k=1}^{n} p(v_{n+1},{\bf v}_{k-1}) \; \mathcal Z^J_{W} \left( (v_{n+1}[0].)_k {\bf v}_n; B \right)=0, \label{v[0]comm0} \end{equation} with $p(v_{n+1},{\bf v}_{k-1})$ given by \begin{equation} p(A,B_{1}\ldots B_{r-1})=\left\{ \begin{array}{cc} 1\text{ } & \text{for }r=1 \\ (-1)^{p(A)[p(B_{1})+...+p(B_{r-1})]} & \text{ for }r>1 \end{array} \right. . \label{parityAB} \end{equation} \end{proposition} Let $v_{n+1}$ be homogeneous of weight $\wt(v_{n+1})\in \mathbb{R}$ and define $\phi \in U(1)$ by \begin{equation} \phi =\exp (2\pi i \; \wt(v_{n+1})). \label{phi0} \end{equation} We also take $v_{n+1}$ to be an eigenfunction under $g$ with \begin{equation} gv_{n+1}=\theta ^{-1}v_{n+1} \label{theta0} \end{equation} for some $\theta \in U(1)$ so that \begin{equation} g^{-1}v_{n+1}(k)g=\theta v_{n+1}(k). \label{gv(k)0} \end{equation} Then we obtain the following generalization of Zhu's Proposition 4.3.2 \cite{Zhu} for the $n$-point function: \begin{theorem} \label{Theorem_npt_rec0} Let $v_{n+1},\theta $ and $\phi $ be as above. Then for any ${\bf v}_{n}\in V^{\otimes n}$ we have \begin{eqnarray} &&\mathcal Z_{W}^J \left({\bf x}_{n+1}; B \right)= \delta _{\theta ,1}\delta _{\phi ,1} \mathrm{STr}_{W}\left( o(v_{n+1}) \; {\bf Y}_{W} \left({\bf q}^{L_V(0)}{\bf v}, {\bf q}\right)_n \; g \; q^{L(0)-c/24}\right) \notag \\ &&+\sum\limits_{k=1 \atop m \geq 0}^{n} p(v_{n+1},{\bf v}_{k-1}) \; P_{m+1}\left[ \begin{array}{c} \theta \\ \phi \end{array} \right] (z_{n+1}-z_{k},\tau) \; \mathcal Z_{W}^J \left( (v_{n+1}[m].)_k\, {\bf v}_{n}; B\right). \label{nptrec0} \end{eqnarray} The deformed Weierstrass function is defined in (\ref{Pkuv}). \end{theorem} \subsection{The first reduction formula} Suppose that $v_{n+1}\in V$ with \[ L(0)v_{n+1}=\wt(v_{n+1})v_{n+1}, \] \[ J(0)v_{n+1}=\alpha v_{n+1}, \] for $\alpha \in \mathbb{C}$.
The simplest case of reduction formulas, for the modes \[ v_{n+1}(\wt(v_{n+1})-1 + \beta)= o_\beta(v_{n+1}), \] with $\beta \in \Z$, is given in \cite{BKT}: \begin{lemma} \label{lem:Rec1} For all $\beta\in \Z$, we have \begin{align} &\left(1-\zeta^{-\alpha} q^\beta \right) \; T_0(o_\beta(v_{n+1})) \; \mathcal Z^J_W({\bf x}_n; B) \nn & \qquad = \sum\limits_{k=1}^{n}\sum_{m\ge 0} \mathcal Z_W^J \left( \left(e^{z_k \beta} \frac{\beta^m}{m!}v_{n+1}[m].\right)_k {\bf x}_n ; B \right). \label{eq:Rec1} \end{align} \end{lemma} Lemma~\ref{lem:Rec1} implies the following corollary. \begin{corollary} \label{cor:ZeroRes} Let $J(0)v_{n+1}=\alpha v_{n+1}$. If $\alpha z=\lambda\tau+\mu \in {\mathbb{Z}\tau}+\Z$, then \begin{equation} \sum_{k=1}^{n} \sum_{m\ge 0} \mathcal Z_W^J\left( \left(e^{z_k \lambda} \frac{\lambda^m}{m!}v_{n+1}[m].\right)_k {\bf v}_n; B\right)=0. \label{eq:ZeroRes} \end{equation} \end{corollary} We now provide the following reduction formula for formal Jacobi $n$-point functions \cite{BKT}. For eigenstates $v_{n+1}$ with respect to $J(0)$ we obtain: \begin{proposition} \label{prop:Zhured} Let ${\bf x}_{n+1}\in V^{\otimes (n+1)} \times \C^{n+1}$, with $J(0)v_{n+1}=\alpha v_{n+1}$, $\alpha \in \mathbb{C}$. For $\alpha z\notin {\mathbb{Z}\tau} +\mathbb{Z}$, we have \begin{align} \mathcal Z_W^J\left ({\bf x}_{n+1}; B \right) =\sum_{k=1}^{n}\sum_{m\ge 0} \widetilde{P}_{m+1} \left(\frac{z_{n+1}-z_k}{\tpi}, \alpha z, \tau \right) \mathcal Z_W^J\left ( (v_{n+1}[m].)_k \; {\bf x}_n; B \right). \label{eq:ZhuRed} \end{align} \end{proposition} \begin{proposition} \label{prop:Zhured0} Let ${\bf x}_{n+1} \in V^{\otimes (n+1)} \times \C^{n+1}$, with $J(0)v_{n+1}=\alpha v_{n+1}$. For $\alpha z=\lambda\tau+\mu\in {\mathbb{Z}\tau+\Z}$, we have \begin{align} &\mathcal Z_W^J\left( {\bf x}_{n+1}; B \right)\notag \\ &\quad = e^{-z_{n+1}\lambda}\tr_{W} \left( v_{n+1}( \wt(v_{n+1})-1+\lambda ) \; {\bf Y} \left(e^{{\bf z} L(0)} {\bf v}, e^{{\bf z}}\right)_n \zeta^{J(0)} q^{L(0)}\right) \notag \\ &\quad \quad +\sum_{k=1}^{n} \sum_{m\ge 0}P_{m+1,\lambda} \left( \frac{z_{n+1}-z_k}{\tpi}, \tau\right) \mathcal Z_W^J \left( (v_{n+1}[m].)_k {\bf x}_n; B \right), \label{eq:ZhuRed0} \end{align} with $P_{m+1,\lambda}\left(w ,\tau\right)$ defined in \eqref{eq:PellPm}. \end{proposition} Next we provide the reduction formula for Jacobi $n$-point functions. \begin{proposition} \label{prop:apnpt} Let ${\bf x}_{n+1} \in V^{\otimes (n+1)} \times \C^{n+1}$, with $J(0)v_{n+1}=\alpha v_{n+1}$. For $l\ge 1$ and $\alpha z \notin {\mathbb{Z}\tau}+\Z $, we have \begin{align} &\mathcal Z_W^J\left (v_{n+1}[-l]. x_1, {\bf x}_{2, n}; B \right) \notag \\ &\quad = \sum_{m\ge 0}(-1)^{m+1}\binom{m+l-1}{m}\widetilde{E}_{m+l}(\alpha z,\tau) \mathcal Z_W^J\left (v_{n+1}[m].x_1, {\bf x}_{2, n}; B \right)\notag \\ &\quad \quad+ \sum_{k=2}^{n}\sum_{m\ge 0} (-1)^{l+1} \binom{m+l-1}{m} \widetilde{P}_{m+l} \left(\frac{z_1-z_k}{\tpi}, \alpha z, \tau \right) \mathcal Z_W^J\left( (v_{n+1}[m].)_k\, {\bf x}_n; B \right). \label{eq:2ZhuRed} \end{align} \end{proposition} Propositions~\ref{prop:Zhured0} and \ref{prop:apnpt} imply the next result \cite{BKT}: \begin{proposition} \label{prop:apnpt0} Let ${\bf x}_{n+1} \in V^{\otimes (n+1)} \times \C^{n+1}$, with $J(0)v_{n+1}=\alpha v_{n+1}$.
For $l \geq 1$ and $\alpha z = \lambda\tau+\mu\in {\mathbb{Z}\tau}+\Z$, we have \begin{align} &\mathcal Z_W^J\left( v_{n+1}[-l].x_1, {\bf x}_{2, n}; B \right)\notag \\ &\quad = (-1)^{l+1}\frac{\lambda^{l-1}}{(l-1)!} {\rm Tr}_{W} \left( v_{n+1}(\lambda+\wt(v_{n+1})-1) {\bf Y } \left(e^{{\bf z} L_V(0)}{\bf v}, e^{{\bf z}}\right)_n \zeta^{J(0)}q^{L(0)} \right) \notag \\ &\quad \quad +\sum\limits_{m\ge 0}(-1)^{m+1}\binom{m+l-1}{m} {E}_{m+l,\lambda}(\tau) \; \mathcal Z_W^J\left (v_{n+1}[m].x_1, {\bf x}_{2, n}; B \right)\notag \\ &\quad \quad + \sum\limits_{k=2}^{n}\sum_{m\ge 0} (-1)^{l+1} \binom{m+l-1}{m} P_{m+l,\lambda}\left(\frac{z_1-z_{k}} {\tpi},\tau \right)\; \mathcal Z_W^J\left( (v_{n+1}[m].)_k\, {\bf x}_n; B \right), \label{eq:2ZhuRed0} \end{align} for ${E}_{k, \lambda}$ given in \eqref{eq:Gkl}. \end{proposition} \begin{remark} In the case $\alpha=0$ we have $\lambda=\mu=0$, and Propositions~\ref{prop:Zhured0} and \ref{prop:apnpt0} imply the standard results of \cite{Zhu} or \cite{MTZ} with $a(\lambda+\wt(a)-1)=o(a)$. \end{remark} \subsection{Vertex operator (super)algebras} \label{vertex} In this subsection we recall the notion of vertex operator (super)algebras \cite{B, FHL, FLM, K, MN}. Let $V$ be a superspace, i.e., a complex vector space \[ V=V_{\bar{0}}\oplus V_{\bar{1}}=\oplus _{\alpha }V_{\alpha }, \] with index label $\alpha $ in $\mathbb{Z}/2\mathbb{Z}$ so that each $a\in V$ has a parity $p(a)\in \mathbb{Z}/2\mathbb{Z}$. A $\C$-graded vertex operator superalgebra is defined by $(V,Y,\mathbf{1}_V,\omega)$ where $V$ is a superspace with a $\mathbb{C}$-grading \begin{equation*} V=\oplus _{r\geq r_{0}}V_{r}, \end{equation*} for some $r_{0}$ and with parity decomposition \[ V_{r}=V_{\bar{0},r}\oplus V_{\bar{1},r}. \] $\mathbf{1}_V\in V_{\bar{0},0}$ is the vacuum vector and $\omega \in V_{\bar{0},2}$ the conformal vector with properties described below. The vertex operator $Y$ is a linear map \[ Y: V\rightarrow (\mathrm{End}V)[[z,z^{-1}]], \] for a formal variable $z$, so that for any vector $x = (a,z) \in V\times \C$, \begin{equation} Y(x)=\sum_{n\in \mathbb{Z}} a(n) z^{-n-1}. \label{Ydefn} \end{equation} The component operators (modes) $a(n)\in \mathrm{End}V$ are such that \[ a(n) \mathbf{1}_V=\delta _{n,-1}a, \] for $n\geq -1$ and \begin{equation} a(n)V_{\alpha }\subset V_{\alpha +p(a)}, \label{a(n)parity} \end{equation} for $a$ of parity $p(a)$. The vertex operators satisfy the locality property for all $x_i=(v_i, z_i) \in V\times \C$, $i=1,2$, \begin{equation} (z_1-z_2)^{N}[Y(x_1),Y(x_2)]=0, \label{locality} \end{equation} for $N\gg 0$, where the commutator is defined in the graded sense, i.e., \begin{equation*} \lbrack Y(x_1),Y(x_2)]=Y(x_1)Y(x_2)-(-1)^{p(v_1)p(v_2)}Y(x_2)Y(x_1). \end{equation*} The vertex operator for the vacuum is \[ Y(\mathbf{1}_V, z)={\rm Id}_{V}, \] whereas that for $\omega $ is \begin{equation} Y(\omega ,z)=\sum_{n\in \mathbb{Z}}L(n)z^{-n-2}, \label{Yomega} \end{equation} where $L_V(n)$ forms a Virasoro algebra for central charge $c$, \begin{equation} \lbrack L_V(m),L_V(n)]=(m-n)L_V(m+n)+\frac{c}{12}(m^{3}-m)\delta _{m,-n} {\rm Id}_V. \label{Virasoro} \end{equation} $L_V(-1)$ satisfies the translation property \begin{equation} Y\left(L_V(-1)x \right)= \frac{d}{dz}Y(x). \label{YL(-1)} \end{equation} $L_V(0)$ describes the $\C$-grading with \[ L_V(0)a=\wt(a)a, \] for weight $\wt(a)\in \C$ and \begin{equation} V_{r}=\{a\in V| \wt(a)=r\}.
\label{Vdecomp} \end{equation} We quote the standard commutator property of vertex operator superalgebras, e.g., \cite{K, FHL, MN}: for $x_1=(a, z_1)$, $x_2=(b, z_2)$, \begin{equation} \lbrack a(m),Y(x_2)]=\sum\nolimits_{j\geq 0}\binom{m}{j}Y(a(j).x_2)z_1^{m-j}. \label{aYcomm} \end{equation} Taking $a=\omega $, this implies for $b$ of weight $\wt(b)$ that \begin{equation} \lbrack L_V(0),b(n)]=(\wt(b)-n-1)b(n), \label{L0b} \end{equation} so that \begin{equation} b(n)V_{r}\subset V_{r+\wt(b)-n-1}. \label{bnVr} \end{equation} In particular, we define for $a$ of weight $\wt(a)$ the zero mode \begin{equation} o_\lambda(a)=\left\{ \begin{array}{c} a(\wt(a)-1+\lambda), \text{ \ for } \wt(a)\in \mathbb{Z} \\ 0\text{, \ otherwise,} \end{array} \right. \label{o(v)} \end{equation} which is then extended by linearity to all $a\in V$. \subsection{Square bracket formalism} \label{squa} Define the square bracket operators for $V$ by \begin{align} Y[x] = Y\left(e^{z\;L(0)} v, e^z -1 \right) = \sum_{n\in \mathbb{Z}} v[n] z^{-n-1}. \label{eq:Ysq} \end{align} For $v$ of weight $\wt(v)$ and $k\in \Z$ (see \cite[Lemma~4.3.1]{Zhu}), we have \begin{align} \sum_{j\geq 0}\binom{k+\wt(v)-1}{j} v(j)= \sum_{m\ge 0}\frac{k^m}{m!}v[m]. \label{eq:asqround} \end{align} The square bracket operators form an isomorphic vertex operator algebra with Virasoro vector \begin{align} \widetilde{\omega} = \omega-\frac{c}{24} \vac_V. \notag \label{eq:ometilde} \end{align} Let us now introduce \cite{DMs} the shifted Virasoro vector \begin{equation} \omega_{h}=\omega +h(-2)\vac_V, \notag \label{eq:omh} \end{equation} where \[ h=-\frac{\lambda}{\alpha}J, \] for $\lambda\in \Z$. Then the shifted grading operator is \begin{align*} L_{h}(0)=L(0)-h(0)=L(0)+\frac{\lambda}{\alpha}J(0). \label{eq:L0h} \end{align*} Denote the square bracket vertex operator for the shifted Virasoro vector by \begin{equation} Y[x]_h =Y\left(e^{z \; L_h(0)}v, e^z -1\right) = \sum_{n\in \Z} v[n]_h \; z^{-n-1}. \notag \end{equation} Therefore, \[ Y[a,z]_h=e^{z\lambda} Y[a,z], \] or equivalently, \begin{align} a[n]_h=\sum_{m\ge 0}\frac{\lambda^m}{m!}a[n+m]. \label{eq:anh} \end{align}
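To make the $q$-series of Appendix \ref{defo} concrete, the following minimal numerical sketch (our own illustration, not code from \cite{DLM, MTZ}) evaluates the deformed Weierstrass functions (\ref{Pkuv}) by direct truncation of the sum, and checks the specialization $P_1\left[{1 \atop 1}\right](z,\tau)=P_1(z,\tau)+\frac{1}{2}$ obtained by comparing (\ref{Pkuv}) with \eqref{eq:Pm}. The truncation order $N$ and the sample point are arbitrary choices inside the convergence domain $|q|<|q_z|<1$.
\begin{verbatim}
import math
import numpy as np

def P_deformed(k, theta, lam, z, tau, N=50):
    # P_k[theta, phi](z, tau) of (Pkuv) with phi = exp(2*pi*i*lam),
    # truncated to n in (Z + lam) with |n - lam| <= N.
    q, qz = np.exp(2j*np.pi*tau), np.exp(2j*np.pi*z)
    ns = np.arange(-N, N + 1) + lam
    if theta == 1 and lam == 0:   # primed sum: omit n = 0 when (theta, phi) = (1, 1)
        ns = ns[ns != 0]
    terms = ns**(k - 1) * qz**ns / (1.0 - q**ns / theta)
    return (-1)**k / math.factorial(k - 1) * terms.sum()

def P1_ordinary(w, tau, N=50):
    # P_1(w, tau) of (eq:Pm)
    q, qw = np.exp(2j*np.pi*tau), np.exp(2j*np.pi*w)
    ns = np.arange(-N, N + 1)
    ns = ns[ns != 0]
    return -(qw**ns / (1.0 - q**ns)).sum() - 0.5

z, tau = 0.1 + 0.2j, 1.0j              # a point with |q| < |q_z| < 1
print(P_deformed(1, 1.0, 0.0, z, tau)) # should agree with the line below
print(P1_ordinary(z, tau) + 0.5)
\end{verbatim}
Both tails of the truncated sum decay geometrically in this domain, so moderate $N$ already reproduces the identity to machine precision.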
\section{Introduction} It is known that reheating is a crucial epoch which connects inflation to the hot big-bang phase~\cite{Turner:1983he}. This era is conceptually very important, but it is observationally poorly known. The physics of this phase transition is thought to be highly non-linear~\cite{Amin:2014eta}. Also, the physics of reheating has turned out to be very complicated~\cite{Traschen:1990sw,Kofman:1994rk,Kofman:1996mv,Kofman:1997yn}. Since the first CMB constraints on the reheating temperature were obtained from WMAP7 data~\cite{Martin:2010kz}, the current Planck satellite measurements of the CMB anisotropy constrain the kinematic properties of the reheating era for almost 200 inflationary models~\cite{Martin:2014nya}. The nonminimal derivative coupling (NDC)~\cite{Amendola:1993uh,Sushkov:2009hk} was constructed by coupling the inflaton kinetic term to the Einstein tensor such that the friction is enhanced gravitationally~\cite{Germani:2010gm}. The gravitationally enhanced friction mechanism has been considered as an alternative way to increase the friction of an inflaton rolling down its own potential. Actually, the NDC makes a steep (non-flat) potential adequate for inflation without introducing higher time-derivative terms (ghost states)~\cite{Germani:2011ua,Germani:2011mx}. This implies that the NDC increases friction and thus flattens the potential effectively. It is worth noting that there was a difference in the whole dynamics between the canonical coupling (CC) and the NDC even for the same potential~\cite{Myung:2015tga}. A clear difference appears after the end of inflation. We note that there are three phases in the CC case~\cite{Donoghue:2007ze}: i) Initially, kinetic energy dominates. ii) Due to the rapid decrease of the kinetic energy, the trajectory runs into the inflationary attractor line (potential energy dominated). All initial trajectories are attracted to this line, which is the key feature of slow-roll inflation. iii) At the end of inflation, the inflaton velocity decreases. Then, there is inflaton decay and reheating [the appearance of a spiral sink in the phase portrait $(\phi,\dot{\phi})$]. On the other hand, the three stages of the NDC are as follows: i) Initially, potential energy dominates. ii) Due to the gravitationally enhanced friction (restriction on the inflaton velocity $\dot{\phi}$), all initial trajectories are attracted quickly to the inflationary attractor. iii) At the end of inflation, the inflaton velocity increases. Then, there is inflaton decay followed by reheating. Importantly, there exist oscillations of the inflaton velocity without damping, due to violent oscillations of the Hubble parameter. This provides stable limit cycles in the phase portrait $(\phi,\dot{\phi})$, instead of the spiral sink of the CC. However, it was shown that analytic expressions for the inflaton and Hubble parameter after inflation could be found by applying the averaging method to the NDC~\cite{Ghalee:2013ada}. The inflaton oscillates with a time-dependent frequency, while the Hubble parameter does not oscillate. Introducing an interacting Lagrangian ${\cal L}_{\rm int}=-\frac{1}{2}g^2\phi^2\chi^2$, the authors claimed that the parametric resonance instability is absent, implying a crucial difference compared to the CC. This requires a complete solution obtained by solving the NDC-equations numerically. Recently, the authors in~\cite{Ema:2015oaa} have investigated particle production after inflation by considering the combined model of CC+NDC.
They have insisted that the violent oscillation of the Hubble parameter causes particle production, even though the Lagrangian instability appears due to oscillations of the sound speed squared $c_s^2$, which also appears in the generalized Galileon theory~\cite{Ohashi:2012wf}. One usually assumes that the field mode is frozen (time-independent) at late times after it crosses outside the horizon. Therefore, it was accepted that the perturbation during reheating is less important than that during inflation. However, in exploring the effects of reheating on the cosmological perturbations in the CC case, one has to face the breakdown of the curvature perturbation $\zeta$ at $\dot{\phi}=0$ when choosing the comoving gauge of $\varphi=0$. This issue may be bypassed by replacing $\dot{\phi}^2$ by its time average $\langle \dot{\phi}^2\rangle$ over the inflaton oscillation~\cite{Finelli:1998bu,Jedamzik:2010dq,Easther:2010mr}. Recently, it was proposed that the breakdown of the comoving gauge $\varphi=0$ at $\dot{\phi}=0$ could be resolved by introducing the $cd$-gauge, which eliminates $\varphi$ in the Hamiltonian formalism of the CC model and thus provides a well-behaved curvature perturbation $\zeta$~\cite{Algan:2015xca}. However, it turned out that choosing the Newtonian gauge is necessary to study the perturbation during the oscillating period, since the comoving gauge is not suitable for performing the perturbation analysis during reheating~\cite{Germani:2015plv}. In this work, we find a complete solution for the inflaton and Hubble parameter by solving the NDC-equations numerically in Section 2. The NDC model may be dangerous because the inflaton becomes strongly coupled when the Hubble parameter tends towards zero. Hence, we wish to obtain a complete solution for the inflaton and Hubble parameter by solving the CC+NDC-equations numerically in Section 3. Here, we can control the mutual importance of the CC and NDC by adjusting two coefficients. In Section 4, we investigate the curvature perturbation $\zeta$ during reheating by considering the NDC with the chaotic potential and choosing the comoving gauge. We find that violent oscillations of the Hubble parameter induce oscillations of the sound speed squared, implying the Lagrangian instability of the curvature perturbation. More seriously, we show that the curvature perturbation blows up at $\dot{\phi}=0$, implying that the curvature perturbation is ill-defined under the comoving gauge of $\varphi=0$. This suggests choosing a different gauge without problems at $\dot{\phi}=0$. Hence, we choose the Newtonian gauge to perform the perturbation analysis, where the Newtonian potential is considered as a physical variable, in Section 5. \section{NDC with chaotic potential} We introduce an inflation model including the NDC of a scalar field $\phi$ with the chaotic potential~\cite{Feng:2014tka,Myung:2015tga} \begin{eqnarray} \label{mact} S_{\rm}=\frac{1}{2}\int d^4x \sqrt{-g}\Big[M_{\rm P}^2R+\frac{1}{\tilde{M}^2}G_{\mu\nu}\partial^{\mu}\phi\partial^{\nu}\phi-2V(\phi)\Big],~~V=V_0\phi^2, \end{eqnarray} where $M_{\rm P}$ is the reduced Planck mass, $\tilde{M}$ is a mass parameter and $G_{\mu\nu}$ is the Einstein tensor. Here, we do not include a canonical coupling (CC) term, as in the conventional combination CC+NDC [$(g_{\mu\nu}-G_{\mu\nu}/\tilde{M}^2)\partial^\mu\phi\partial^\nu \phi$]~\cite{Tsujikawa:2012mk,Skugoreva:2013ooa}, because this combination would not make the whole analysis transparent.
From the action (\ref{mact}), we derive the Einstein and inflaton equations \begin{eqnarray} \label{einseq} &&G_{\mu\nu} =\frac{1}{M_{\rm P}^2} T_{\mu\nu},\\ \label{einseq-1}&&\frac{1}{\tilde{M}^2}G^{\mu\nu}\nabla_{\mu}\nabla_{\nu}\phi+V'=0, \end{eqnarray} where $T_{\mu\nu}$ takes the complicated form \begin{eqnarray} T_{\mu\nu}&=&\frac{1}{\tilde{M}^2}\Big[\frac{1}{2}R\nabla_{\mu}\phi\nabla_{\nu}\phi -2\nabla_{\rho}\phi\nabla_{(\mu}\phi R_{\nu)}^{\rho} +\frac{1}{2}G_{\mu\nu}(\nabla\phi)^2-R_{\mu\rho\nu\sigma}\nabla^{\rho}\phi\nabla^{\sigma}\phi \nonumber \\&&\hspace*{5em}-\nabla_{\mu}\nabla^{\rho}\phi\nabla_{\nu}\nabla_{\rho}\phi +(\nabla_{\mu}\nabla_{\nu}\phi)\nabla^2\phi\nonumber\\ && \label{em1}-g_{\mu\nu}\Big(-R^{\rho\sigma}\nabla_{\rho}\phi\nabla_{\sigma}\phi+\frac{1}{2}(\nabla^2\phi)^2 -\frac{1}{2}(\nabla^{\rho}\nabla^{\sigma}\phi)\nabla_{\rho}\nabla_{\sigma}\phi \Big)\Big]. \end{eqnarray} Considering a flat FRW spacetime with cosmic time $t$, \begin{eqnarray} \label{deds1} ds^2_{\rm FRW}~=~\bar{g}_{\mu\nu}dx^\mu dx^\nu~=~-dt^2+a^2(t)\delta_{ij}dx^idx^j, \end{eqnarray} the two Friedmann equations and the inflaton equation (NDC-equations) derived from (\ref{einseq}) and (\ref{einseq-1}) are given by \begin{eqnarray} H^2&=&\frac{1}{3M_{\rm P}^2}\Big[\frac{9H^2}{2\tilde{M}^2}\dot{\phi}^2+V\Big],\label{Heq}\\ &&\nonumber\\ \dot{H}&=&-\frac{1}{2M_{\rm P}^2}\Big[\dot{\phi}^2\Big(\frac{3H^2}{\tilde{M}^2}-\frac{\dot{H}}{\tilde{M}^2}\Big)-\frac{2H}{\tilde{M}^2}\dot{\phi}\ddot{\phi} \Big],\label{dHeq}\\ &&\nonumber\\ &&\hspace*{-4em}\frac{3H^2}{\tilde{M}^2}\ddot{\phi}+3H\Big(\frac{3H^2}{\tilde{M}^2}+\frac{2\dot{H}}{\tilde{M}^2}\Big)\dot{\phi}+V'=0.\label{seq} \end{eqnarray} Here $H=\dot{a}/a$ is the Hubble parameter and the overdot ($\dot{}$) denotes the derivative with respect to time $t$. It is evident from (\ref{Heq}) that the energy density for the NDC is positive (ghost-free). \begin{figure}[t!] \begin{center} \begin{tabular}{cc} \includegraphics[width=.9 \linewidth,origin=tl]{fig1.eps} \end{tabular} \end{center} \caption{The whole evolution of $\phi(t)$ [left] and $\dot{\phi}(t)$ [right] with respect to time $t$ for the chaotic potential $V=V_0\phi^2$ with $V_0=0.1$. The left figure shows that the inflaton varies little during the long inflationary period ($0\le t\le 200$) for the NDC, while it varies quickly during the short inflationary period ($0\le t\le 45$) for the CC. After inflation (see the inset), $\phi$ decays with oscillation for the CC, while it oscillates rapidly for the NDC. The right figure indicates that for large $t$, $\dot{\phi}$ oscillates without damping for the NDC, while it oscillates with damping for the CC. The inset shows the initially kinetic-energy-dominated phase for the CC and the initially potential-energy-dominated phase for the NDC. } \end{figure} At this stage, the CC model with kinetic term $-g_{\mu\nu}\partial^\mu \phi \partial^\nu \phi$ is introduced for comparison with the NDC case. In this case, the CC-equations are given by \begin{eqnarray} H^2&=&\frac{1}{3M_{\rm P}^2}\Big[\frac{1}{2}\dot{\phi}^2+V\Big],\label{Heqc}\\ \dot{H}&=&-\frac{1}{2M_{\rm P}^2}\dot{\phi}^2,\label{dHeqc}\\ \ddot{\phi}&+&3H\dot{\phi}+m^2\phi=0 \label{seqc} \end{eqnarray} with $m^2=2V_0$. Fig. 1 shows the whole evolution of $\phi$ and $\dot{\phi}$ based on numerical computation. When the universe evolves according to (\ref{Heqc})-(\ref{seqc}), there are three phases in the CC case~\cite{Donoghue:2007ze}: i) Initially, kinetic energy dominates [see Fig. 1 (right)].
ii) Due to the rapid decrease of the kinetic energy, the trajectory runs quickly to the inflationary attractor line. All initial trajectories are attracted to this line, which is the key feature of slow-roll inflation. iii) Finally, after the end of inflation, there is inflaton decay and reheating, which corresponds to a spiral sink in the phase portrait ($\phi,\dot{\phi}$). Explicitly, (\ref{Heqc}) can be parameterized by using the Hubble parameter $H$ and the angular variable $\theta$ as \begin{eqnarray} \label{hat1} \dot{\phi}&=&\sqrt{6}HM_{\rm P}\cos \theta,\\ \label{hat2}m\phi&=&\sqrt{6}HM_{\rm P}\sin \theta, \end{eqnarray} while (\ref{dHeqc}) and (\ref{seqc}) imply \begin{eqnarray}\label{hat3} \dot{H}&=&-3H^2\cos^2 \theta,\\ \label{hat4}\dot{\theta}&=&-m-\frac{3}{2}H\sin(2\theta). \end{eqnarray} For $m\gg H$, (\ref{hat4}) reduces to $\dot{\theta} \simeq -m$, which implies a solution of $\theta\simeq-mt$. Plugging the latter into (\ref{hat2}) indicates that $\phi$ oscillates with frequency $\omega\simeq m\simeq0.45$ for $V_0=0.1$. Solving (\ref{hat3}) leads to \begin{equation} \label{hat5} H(t)\simeq\frac{2}{3t}\Big[1+\frac{\sin(2mt)}{2mt}\Big]^{-1}, \end{equation} which shows small oscillations around $\frac{2}{3t}$. Actually, its time rate is given by \begin{equation} \label{hat6} \dot{H}(t)\simeq-\frac{16m^2 \cos^2(mt)}{3\Big[2mt +\sin(2mt)\Big]^2}=-\frac{8m^2[1+ \cos(2mt)]}{3\Big[2mt +\sin(2mt)\Big]^2}, \end{equation} whose amplitude approaches zero (along $-\frac{2}{3t^2}$) with oscillations as $t$ increases. Its frequency is given by $\omega^{\rm CC}_{\dot{H}}=2m$. Substituting (\ref{hat5}) into (\ref{hat2}) provides the scalar \begin{equation}\label{hat7} \phi(t)\simeq\sqrt{\frac{8}{3}}\frac{M_{\rm P}}{mt}\sin(mt)\Bigg[1-\frac{\sin(2mt)}{2mt}\Bigg], \end{equation} which implies that after the end of inflation, the friction becomes subdominant and thus, $\phi(t)$ becomes an oscillator whose amplitude gets damped due to the expansion of the universe. The time rate is given by \begin{equation}\label{hat8} \dot{\phi}(t)\simeq\sqrt{\frac{8}{3}}\frac{M_{\rm P}}{t}\cos(mt)\Bigg[1-\frac{\sin(2mt)}{2mt}\Bigg]. \end{equation} We observe that $\omega^{\rm CC}_\phi=\omega^{\rm CC}_{\dot{\phi}}=m$. The scale factor can be extracted from (\ref{hat5}) as \begin{equation} a(t)\simeq t^{\frac{2}{3}}, \end{equation} while the energy density of $\phi$ decreases in the same way as the energy density of non-relativistic particles of mass $m$, \begin{equation} \rho_{\phi}=\frac{1}{2}\Big[\dot{\phi}^2+m^2\phi^2\Big] \sim \frac{1}{a^3}. \end{equation} This indicates that the inflaton oscillations can be interpreted as a collection of scalar particles, independent of each other, oscillating coherently at the same frequency $m$. In contrast to the CC model, an upper limit on $\dot{\phi}^2$ is set for the NDC model, \begin{eqnarray} \label{cond-NDC} 0 <\dot{\phi}^2 \le \phi_{c}^2\equiv\frac{2}{3}M_{\rm P}^2\tilde{M}^2, \end{eqnarray} which comes from Eq.(\ref{Heq}), showing that $3M^2_{\rm P}H^2(1-\dot{\phi}^2/\phi^2_c)=V\ge0$. Based on (\ref{Heq})-(\ref{seq}), we can figure out the whole picture numerically [see Fig.1 (left)]. The three stages of the NDC are: i) Initially, potential energy dominates. ii) Due to the gravitationally enhanced friction, all initial trajectories are attracted quickly to the inflationary attractor. iii) At the end of inflation, the inflaton velocity increases. Then, there is inflaton decay followed by reheating.
However, there exist oscillations of the inflaton velocity without damping. This provides stable limit cycles in the phase portrait $(\phi,\dot{\phi})$, instead of a spiral sink. We stress that an analytic solution for the NDC is not yet known because equations (\ref{Heq})-(\ref{seq}) are too complicated to be solved analytically. However, a would-be analytic solution was proposed in~\cite{Ghalee:2013ada}. \begin{figure}[t!] \begin{center} \begin{tabular}{c} \includegraphics[width=.90\linewidth,origin=tl]{fig2.eps} \end{tabular} \end{center} \caption{After the end of inflation, behaviors of the inflaton $\phi$ (blue) and Hubble parameter $H$ (red) with respect to time $t$. The left picture is for the CC, while the right one represents the NDC case. We observe violent oscillations of $H$ for the NDC. Here, the angular frequency of $H$ is given by $\omega_H(t)=2\omega_\phi(t)$ for the NDC, while the frequency of $\phi$ is $\omega^{\rm CC}_{\phi}=m$ for the CC. } \end{figure} Now we are in a position to focus on the reheating period after the end of inflation (the post-inflationary phase). We remind the reader that the friction term dominates during the slow-roll inflation period, while it becomes subdominant in the reheating process. Therefore, the inflaton becomes an oscillator whose amplitude gets damped due to the universe expansion. Fig. 2 shows the behaviors of the inflaton $\phi$ and Hubble parameter $H$ with respect to time $t$. The left figure is designed for the CC [(\ref{hat7}) and (\ref{hat5})], while the right one represents the NDC case. We observe violent oscillations of $H$ for the NDC. Here, the oscillation frequency of $H$ is given by $\omega_H(t)=2\omega_\phi(t)$ for the NDC. Fig. 3 indicates the behaviors of the inflaton velocity $\dot{\phi}$ (blue) and Hubble parameter $H$ (red) with respect to time $t$. The left picture is for the CC [(\ref{hat8}) and (\ref{hat5})], while the right one represents the NDC case. We find violent oscillations of $H$ for the NDC. Here, the oscillation frequency of $H$ is still given by $\omega_H(t)=2\omega_{\dot{\phi}}(t)$ for the NDC. Importantly, we observe a sizable difference: $\dot{\phi}$ oscillates with damping (CC), while it oscillates without damping and with increasing frequency (NDC). \begin{figure}[t!] \begin{center} \begin{tabular}{c} \includegraphics[width=.90\linewidth,origin=tl]{fig3.eps} \end{tabular} \end{center} \caption{After the end of inflation, behaviors of the inflaton velocity $\dot{\phi}$ (blue) and Hubble parameter $H$ (red) with respect to time $t$. The left picture is for the CC, while the right one represents the NDC case. We observe violent oscillations of $H$ for the NDC. The oscillation frequency of $H$ is given by $\omega_H(t)=2\omega_{\dot{\phi}}(t)$ for the NDC, while the frequency of $\dot{\phi}$ is $\omega^{\rm CC}_{\dot{\phi}}=m$ for the CC.} \end{figure} \begin{figure}[t!] \begin{center} \begin{tabular}{c} \includegraphics[width=.90\linewidth,origin=tl]{fig4.eps} \end{tabular} \end{center} \caption{Oscillations of $\dot{H}$ after the end of inflation: Left (CC) and Right (NDC). Here we observe the difference between the CC and NDC: $\dot{H}\le 0$ for the CC, while $-0.01 \le \dot{H} \le 0.01$ for the NDC.} \end{figure} Here, we mention that the different behaviors of $\phi$ and $\dot{\phi}$ between the CC and NDC arise from the different oscillations of their Hubble parameters $H$. Their rates of change $\dot{H}$ are depicted in Fig. 4, which will be used to obtain the sound speed squared $c^2_s$.
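For orientation, a minimal numerical sketch of this computation is given below (our own illustration, not the code used to produce the figures). It integrates the NDC-equations (\ref{Heq})--(\ref{seq}) in units $M_{\rm P}=1$, solving (\ref{dHeq}) and (\ref{seq}) at each step as a linear $2\times 2$ system for $(\dot{H},\ddot{\phi})$; the value of $\tilde{M}$ and the initial data are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

V0, Mt = 0.1, 1.0                # potential scale and M-tilde (illustrative)
V  = lambda p: V0 * p**2         # chaotic potential of (mact)
dV = lambda p: 2.0 * V0 * p

def rhs(t, y):
    phi, dphi, H = y
    # (dHeq) and (seq) are linear in (Hdot, phidd); with M_P = 1:
    #   (2 Mt^2 - dphi^2) Hdot - 2 H dphi phidd = -3 H^2 dphi^2
    #   6 H dphi Hdot     + 3 H^2 phidd         = -9 H^3 dphi - Mt^2 V'(phi)
    A = np.array([[2.0 * Mt**2 - dphi**2, -2.0 * H * dphi],
                  [6.0 * H * dphi,         3.0 * H**2    ]])
    b = np.array([-3.0 * H**2 * dphi**2,
                  -9.0 * H**3 * dphi - Mt**2 * dV(phi)])
    Hdot, phidd = np.linalg.solve(A, b)
    return [dphi, phidd, Hdot]

phi0, dphi0 = 14.0, 0.0          # slow-roll-like initial data (assumed)
H0 = np.sqrt(V(phi0) / (3.0 * (1.0 - 1.5 * dphi0**2 / Mt**2)))  # from (Heq)
sol = solve_ivp(rhs, (0.0, 400.0), [phi0, dphi0, H0], max_step=0.01)
phi, dphi, H = sol.y             # cf. Figs. 1-4: undamped dphi, violent H
\end{verbatim}
The Friedmann constraint (\ref{Heq}) fixes $H$ only at the initial time; monitoring its violation along the trajectory provides a simple accuracy check.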
It is quite interesting to note the difference that $\dot{H}$ of the CC [(\ref{hat6})] approaches zero (along $-\frac{2}{3t^2}$) with oscillations ($\omega^{\rm CC}_{\dot{H}}=2m$), while $\dot{H}$ of the NDC oscillates between $-0.01$ and 0.01 with frequency $\omega_{\dot{H}}(t)$. At this stage, we note that an approximate analytic solution might be obtained by using the averaging method~\cite{Ghalee:2013ada}. It is given by \begin{eqnarray} \label{analtic-sol2} &&H_{\rm a}(t)=\frac{2}{3(2-\sqrt{2})t},\\ \label{analtic-sol3}&&\phi_{\rm a}(t)=\frac{\sqrt{6}M_{\rm P}H_{\rm a}(t)}{ m} \cos\Big[\frac{m\tilde{M}}{2}(2-\sqrt{2})(\sqrt{2}-\frac{1}{2})t^2\Big], \end{eqnarray} where $\phi_{\rm a}(t)$ oscillates with a time-dependent frequency. Their time rates are given by \begin{eqnarray} &&\dot{H}_{\rm a}(t)=-\frac{2}{3(2-\sqrt{2})t^2},\\ &&\dot{\phi}_{\rm a}(t)=-\frac{(4-\sqrt{2})M_{\rm P}\tilde{M}}{\sqrt{3}}\sin\Big[\frac{m\tilde{M}}{2}(2-\sqrt{2})(\sqrt{2}-\frac{1}{2})t^2\Big]+\cdots. \end{eqnarray} We wish to comment here that even though $\dot{\phi}_{\rm a}(t)$ could mimic $\dot{\phi}$ in the right picture of Fig. 3, $\dot{H}_{\rm a}(t)$ cannot describe the oscillations of $\dot{H}$ in the right picture of Fig. 4. This implies that the analytic solution (\ref{analtic-sol2}) is not a proper solution to the NDC-equations because $H_{\rm a}(t)$ does not show the violent oscillations of the Hubble parameter. Also, we observe the difference in frequency between the CC and NDC: $\omega^{\rm CC}_{\phi}=\omega^{\rm CC}_{\dot{\phi}}=m$, $ \omega^{\rm CC}_{\dot{H}}=2m$ (time-independent) versus $\omega_H(t)=2\omega_{\phi}=2\omega_{\dot{\phi}}$, $\omega_{\dot{H}}(t)$ (time-dependent). Hence, it is not proven that the parametric resonance is absent for the NDC when considering the decay of the inflaton into a relativistic field (${\cal L}_{\rm int}=-\frac{1}{2}g^2\phi^2\chi^2$), whereas the parametric resonance is present for the CC. \section{CC + NDC with chaotic potential} \begin{figure}[t!] \begin{center} \begin{tabular}{c} \includegraphics[width=.90\linewidth,origin=tl]{fig5.eps} \end{tabular} \end{center} \caption{The whole evolution of $\phi(t)$ [left] and $\dot{\phi}(t)$ [right] with respect to time $t$ for the chaotic potential $V=V_0\phi^2$ with $V_0=0.1$. In these figures, the CC-dominant (blue) and NDC-dominant (red) cases correspond to $\sigma=10^4\gg1$ and $\sigma=10^{-4}\ll1$, respectively. } \end{figure} \begin{figure}[t!] \begin{center} \begin{tabular}{c} \includegraphics[width=.90\linewidth,origin=tl]{fig6.eps} \end{tabular} \end{center} \caption{After the end of inflation, behaviors of the inflaton $\phi$ (blue) and Hubble parameter $H$ (red) with respect to time $t$. The left picture is for the CC-dominant case ($\sigma=10^{4}\gg1$), while the right one represents the NDC-dominant case ($\sigma=10^{-4}\ll1$).} \end{figure} \begin{figure}[t!] \begin{center} \begin{tabular}{c} \includegraphics[width=.90\linewidth,origin=tl]{fig7.eps} \end{tabular} \end{center} \caption{After the end of inflation, behaviors of the inflaton velocity $\dot{\phi}$ (blue) and Hubble parameter $H$ (red). The left picture is for the CC-dominant case ($\sigma=10^{4}\gg1$), while the right one represents the NDC-dominant case ($\sigma=10^{-4}\ll1$).} \end{figure} In this section, we wish to study the homogeneous evolution of the CC$+$NDC model. It is noted that the NDC (\ref{mact}) without the CC term might be dangerous when the Hubble parameter tends to zero. That is, the Hubble parameter tending to zero may induce a strongly coupled inflaton\footnote{We thank the anonymous referee for pointing out this.}.
To this end, we start with the CC$+$NDC action with chaotic potential, \begin{eqnarray} \label{mactt} S_{\rm CC+NDC}=\frac{1}{2}\int d^4x \sqrt{-g}\Big[M_{\rm P}^2R-\Big(\sigma_{\rm C}g_{\mu\nu}-\sigma_{\rm N}G_{\mu\nu}\Big)\partial^{\mu}\phi\partial^{\nu}\phi-2V(\phi)\Big],~V=V_0\phi^2. \end{eqnarray} The CC$+$NDC-equations are given by \begin{eqnarray} H^2&=&\frac{1}{3M_{\rm P}^2}\Big[\frac{1}{2}(\sigma_{\rm C}+9H^2\sigma_{\rm N})\dot{\phi}^2+V\Big],\label{Heqf}\\ &&\nonumber\\ \dot{H}&=&-\frac{1}{2M_{\rm P}^2}\Big[\dot{\phi}^2(\sigma_{\rm C}+3H^2\sigma_{\rm N}-\dot{H}\sigma_{\rm N})-2H\sigma_{\rm N}\dot{\phi}\ddot{\phi} \Big],\label{dHeqf}\\ &&\nonumber\\ &&\hspace*{-4em}(\sigma_{\rm C}+3H^2\sigma_{\rm N})\ddot{\phi}+3H(\sigma_{\rm C}+3H^2\sigma_{\rm N}+2\dot{H}\sigma_{\rm N})\dot{\phi}+V'=0\label{seqf}, \end{eqnarray} where $\sigma_{\rm N}=1/\tilde{M}^2$ and $\sigma_{\rm C}$ is introduced to denote a new coefficient for the CC term. Now we can solve Eqs.(\ref{Heqf})-(\ref{seqf}) numerically. Denoting $\sigma\equiv \sigma_{\rm C}/\sigma_{\rm N}$, we obtain the CC-dominant case by taking $\sigma\gg1$ and the NDC-dominant case by taking $\sigma\ll1$. Fig. 5 shows the whole evolution of $\phi$ (left) and $\dot\phi$ (right), while Figs. 6 and 7 indicate the evolution after the end of inflation for ($\phi,H)$ and ($\dot\phi,H)$, respectively. Also, after the end of inflation, $\dot{H}(t)$ is depicted in Fig. 8. Importantly, we note that the evolutions given in Figs. 1-4 correspond to those in Figs. 5-8, respectively. We observe that they are very similar to each other. Therefore, it is clear that the evolution of the NDC-equations (\ref{Heq})-(\ref{seq}) could be recovered from the NDC-dominant case of the CC$+$NDC-equations (\ref{Heqf})-(\ref{seqf}), while the CC-equations (\ref{Heqc})-(\ref{seqc}) could be recovered from the CC-dominant case of the CC$+$NDC-equations. \begin{figure}[t!] \begin{center} \begin{tabular}{c} \includegraphics[width=.90\linewidth,origin=tl]{fig8.eps} \end{tabular} \end{center} \caption{Oscillations of $\dot{H}$ after the end of inflation: Left (CC-dominance:~$\sigma=10^{4}\gg1$) and right (NDC-dominance:~$\sigma=10^{-4}\ll1$). } \end{figure} \section{Curvature perturbation in the comoving gauge } In order to find what happens in the post-inflationary phase, it is better to analyze the perturbation. We use the ADM formalism to resolve the mixing between the scalar modes of the metric and the inflaton, \begin{eqnarray} ds_{\rm ADM}^2=-N^2dt^2+\gamma_{ij}(dx^i+\beta^i dt)(dx^j+\beta^j dt), \end{eqnarray} where $N$, $\beta_i$, and $\gamma_{ij}$ denote the lapse, shift vector, and spatial metric tensor. In this case, the action (\ref{mact}) can be written as \begin{eqnarray}\label{mact1} S=\int d^4x\sqrt{-g}\Big[\frac{M_{\rm P}^2}{2}R+\frac{G^{00}}{\tilde{M}^2}\frac{\dot{\phi}^2}{2}-V\Big], \end{eqnarray} where \begin{eqnarray} R&=&R^{(3)}+\frac{1}{N^2}(E^{ij}E_{ij}-E^2)-2\nabla_{\mu}(Kn^{\mu})-\frac{2}{N}\Delta^{(3)}N,\\ G^{00}&=&\frac{1}{2N^2}\Big[R^{(3)}+\frac{1}{N^2}(E^2-E^{ij}E_{ij})\Big]. \end{eqnarray} Here $E_{ij}$ is related to the extrinsic curvature $K_{ij}$, and $n^\mu$ is the (timelike) unit normal vector to the constant-$t$ hypersurfaces, as \begin{eqnarray} E_{ij}=NK_{ij}=\frac{1}{2}(\nabla_{i}^{(3)}\beta_j+\nabla_{j}^{(3)}\beta_i-\dot{\gamma}_{ij}),~~n^{\mu}=\frac{1}{N}(1,-\beta^i).
\end{eqnarray} Then, we express the action (\ref{mact1}) as \begin{eqnarray}\label{mact2} S&=&\frac{M_{\rm P}^2}{2}\int d^4x\sqrt{\gamma}\Big[R^{(3)}\Big(N+\frac{\dot{\phi}^2}{2NM_{\rm P}^2\tilde{M}^2}\Big)\nonumber\\ &&\hspace*{5em}+(E^{ij}E_{ij}-E^2)\Big(\frac{1}{N}-\frac{\dot{\phi}^2}{2N^3M_{\rm P}^2\tilde{M}^2}\Big)-\frac{2NV}{M_{\rm P}^2}\Big]. \end{eqnarray} Varying (\ref{mact2}) with respect to $N$ and $\beta_j$ leads to two constraints, \begin{eqnarray} &&\hspace*{-3em}R^{(3)}\Big(1-\frac{\dot{\phi}^2}{2N^2M_{\rm P}^2\tilde{M}^2}\Big)-(E^{ij}E_{ij}-E^2)\Big(\frac{1}{N^2}-\frac{3\dot{\phi}^2}{2N^4M_{\rm P}^2\tilde{M}^2}\Big)-\frac{2V}{M_{\rm P}^2}=0,\label{hceq}\\ &&\hspace*{10em}\nabla_i^{(3)}\Big[\Big(\frac{1}{N}-\frac{\dot{\phi}^2}{2N^3M_{\rm P}^2\tilde{M}^2}\Big)(E^{i}_j-\delta^{i}_j E)\Big]=0.\label{mceq} \end{eqnarray} Hereafter, we choose the comoving gauge for the inflaton ($\phi=\phi(t)+\varphi$), \begin{equation} \label{comoving-g}\varphi=0. \end{equation} For simplicity, we consider the scalar perturbations \begin{equation} N=1+\alpha,~\beta_i=\partial_i\psi,~\gamma_{ij}=a^2e^{2\zeta}\delta_{ij}, \end{equation} where $\zeta$ denotes the curvature perturbation. Solving (\ref{hceq}) and (\ref{mceq}), we find the perturbed relations \begin{eqnarray} \alpha=\frac{A_1}{H}\dot{\zeta},~~\psi=-\frac{A_1}{H}\zeta+\chi,~~\partial_i^2\chi=\frac{a^2}{H^2}\frac{A_1^2 A_2}{1-\epsilon_{N}^H/3}\dot{\zeta}, \end{eqnarray} where $A_{1,2}$ and the slow-roll parameter $\epsilon_N^H$ are given by \begin{eqnarray}\label{a12} A_1=\frac{1-\epsilon_N^H/3}{1-\epsilon_N^H},~~A_2=\frac{\epsilon_N^H H^2(1+\epsilon_N^H)}{1-\epsilon_N^H/3},~~\epsilon_N^H=\frac{3\dot{\phi}^2}{2M_{\rm P}^2\tilde{M}^2}. \end{eqnarray} Now we wish to expand (\ref{mact2}) to second order to obtain its bilinear action. After some integrations by parts, we find the bilinear action for $\zeta$ as \begin{eqnarray}\label{s2f} \delta S_{(2)}=M_{\rm P}^2\int d^4x a^3\frac{A_1^2 A_2}{H^2}\Big[\dot{\zeta}^2-\frac{c_s^2} {a^2}(\partial_i\zeta)^2\Big]. \end{eqnarray} Here the sound speed squared $c_s^2$ is given by \begin{eqnarray} c_s^2&=&\frac{H^2}{A_1^2A_2}A_3\label{csss}\\ \label{sss}&=&1+\frac{4}{9A_1}\frac{\epsilon_N^H}{1+\epsilon_N^H}+\frac{2\dot{H}}{H^2}\frac{1-\epsilon_N^H/3}{1+\epsilon_N^H} \end{eqnarray} with \begin{eqnarray}\label{a3e} A_3=\frac{d}{adt}\Big[\frac{aA_1}{H}\Big(1-\frac{\epsilon_N^H}{3}\Big)\Big]-1-\frac{\epsilon_N^H}{3}. \end{eqnarray} We have $A_1^2 A_2/H^2\ge0$, which means the perturbation is ghost-free. Unfortunately, we find from Fig. 9 that $c_s^2$ (NDC) oscillates with increasing frequency after the end of inflation, while it is constant for the CC. The former arises from the presence of $\dot{H}$ in (\ref{sss}) and may induce the Lagrangian instability (gradient instability), which makes the curvature perturbation $\zeta$ grow violently~\cite{Ema:2015oaa}. \begin{figure}[t!] \begin{center} \begin{tabular}{cc} \includegraphics[width=.9 \linewidth,origin=tl]{fig9.eps} \end{tabular} \end{center} \caption{Sound speed squared $c_s^2$ for the curvature perturbation $\zeta$ after the end of inflation: Left (CC) is constant, while right (NDC) oscillates with increasing frequency after the end of inflation. } \end{figure} On the other hand, it is known that in the CC case, the curvature perturbation $\zeta$ diverges during reheating when $\dot\phi=0$~\cite{Finelli:1998bu,Jedamzik:2010dq,Easther:2010mr}.
Furthermore, it is apparent that in the NDC case, $\zeta$ is divergent when $\dot\phi=\pm\phi_c$ [$\epsilon_N^H=1$] as well as when $\dot\phi=0$ [$\epsilon_N^H=0$] during reheating. To see this more closely, we write the equation for $\zeta$ from the action (\ref{s2f}) as \begin{eqnarray}\label{ddzeta} \ddot{\zeta}+[3H+F(\epsilon_N^H)]\dot{\zeta}-\frac{c_s^2}{a^2}\partial^2\zeta=0, \end{eqnarray} where \begin{eqnarray} F(\epsilon_N^H)=\frac{\dot{\epsilon}_N^H}{\epsilon_N^H}\times\frac{(\epsilon_N^H)^3-3(\epsilon_N^H)^2+7\epsilon_N^H+3}{(\epsilon_N^H+1)(\epsilon_N^H-1)(\epsilon_N^H-3)}. \end{eqnarray} We observe that $F$ behaves as \begin{eqnarray}\label{exF} F\simeq\left\{\begin{array}{ll} \frac{1}{\epsilon_N^H}\simeq\frac{1}{\dot{\phi}^2}, ~~({\rm at}~\dot\phi=0)\\ \frac{1}{\epsilon_N^H-1}\simeq\frac{1}{\dot\phi^2-\phi_c^2} ~~({\rm at}~\dot\phi=\pm\phi_c)\end{array}\right. \end{eqnarray} which implies that equation (\ref{ddzeta}) becomes singular either at $\dot{\phi}=0$ [$\epsilon_N^H=0$] or at $\dot{\phi}=\pm\phi_c$ [$\epsilon_N^H=1$]. However, these singular behaviors must be checked at the solution level in the superhorizon limit. For this purpose, we consider the Fourier mode $\zeta_k$; then equation (\ref{ddzeta}) becomes \begin{eqnarray}\label{ddzetak} \ddot{\zeta}_k+[3H+F(\epsilon_N^H)]\dot{\zeta}_k+\frac{c_s^2k^2}{a^2}\zeta_k=0. \end{eqnarray} In the case of $k^2/a^2\gg 1$, equation (\ref{ddzetak}) reduces to \begin{eqnarray}\label{ddsub} \ddot{\zeta}_k+\frac{c_s^2k^2}{a^2}\zeta_k\simeq0. \end{eqnarray} Since $c^2_s$ oscillates in Fig. 9, the curvature perturbation $\zeta_k$ undergoes an exponential destabilization at small scales, which is called the gradient instability on subhorizon scales. For $k^2/a^2\ll 1$, the superhorizon mode $\zeta_k$ can be illustrated by \begin{equation} \zeta_k(t)\simeq \label{chi-sol0} \zeta^{(0)}_k+c_k\int^{\infty}_tdt'\frac{H^2(t')}{A_1^2(t') A_2(t') a^3(t')}, \end{equation} where $\zeta^{(0)}_k$ and $c_k$ are constants which are determined by choosing the vacuum and the horizon-crossing time $t_k$. Here, the mode proportional to $c_k$ is not safe. A correction to the superhorizon mode up to $k^2$-order leads to~\cite{Weinberg:2005vy} \begin{eqnarray}\label{chi-sol1} \zeta_k&\simeq&\zeta_{k}^{(0)}\Bigg[1-k^2\int_{t}^\infty dt'\frac{H^2(t')}{A_1^2(t') A_2(t') a^3(t')} \int^{t'}_{-\infty}dt''c_s^2(t'')a(t'')\frac{A_1^2(t'') A_2(t'')}{H^2(t'')}\Bigg]. \end{eqnarray} Plugging $A_1 $, $A_2$ in (\ref{a12}) and $c_s^2$ in (\ref{csss}), together with $A_3$ (\ref{a3e}), into the mode (\ref{chi-sol1}), the first and second integrals of the last term in (\ref{chi-sol1}) are given by \begin{eqnarray} \int_t^\infty dt'\frac{H^2}{A_1^2 A_2 a^3}&=&\int_t^\infty dt'\frac{1}{\epsilon_N^H}\frac{(1-\epsilon_N^H)^2}{(1-\epsilon_N^H/3)(1+\epsilon_N^H)a^3}\nonumber\\ &=&\frac{2M_{\rm P}^2\tilde{M}^2}{3}\int_t^\infty dt'\frac{1}{\dot{\phi}^2}\frac{[1-3\dot{\phi}^2/(2M_{\rm P}^2\tilde{M}^2)]^2}{[1-\dot{\phi}^2/(2M_{\rm P}^2\tilde{M}^2)][1+3\dot{\phi}^2/(2M_{\rm P}^2\tilde{M}^2)]a^3}\label{int1} \end{eqnarray} and \begin{eqnarray} \int^{t'}_{-\infty} dt''c_s^2a\frac{A_1^2 A_2}{H^2}&=&\int^{t'}_{-\infty} dt''\Bigg\{\frac{d}{dt''}\Big[\frac{a(1-\epsilon_N^H/3)^2}{H(1-\epsilon_N^H)}\Big]-a\Big(1+\frac{\epsilon_N^H}{3}\Big)\Bigg\}\nonumber\\ &=&\frac{a[1-\dot{\phi}^2/(2M_{\rm P}^2\tilde{M}^2)]^2}{H[1-3\dot{\phi}^2/(2M_{\rm P}^2\tilde{M}^2)]}\Bigg|^{t'}_{-\infty}-\int^{t'}_{-\infty} dt''a\Big(1+\frac{\dot{\phi}^2}{2M_{\rm P}^2\tilde{M}^2}\Big), \label{int2} \end{eqnarray} respectively.
Substituting (\ref{int1}) and (\ref{int2}) into (\ref{chi-sol1}) leads to \begin{eqnarray} \label{zetas2} \zeta_k&\simeq&\zeta_{k}^{(0)}\Bigg[1-k^2\frac{2M_{\rm P}^2\tilde{M}^2}{3}\Bigg\{\int_{t}^{\infty}dt'\frac{1}{\dot{\phi}^2} \frac{[1-\dot{\phi}^2/(2M_{\rm P}^2\tilde{M}^2)][1-3\dot{\phi}^2/(2M_{\rm P}^2\tilde{M}^2)]}{[1+3\dot{\phi}^2/(2M_{\rm P}^2\tilde{M}^2)]a^2H}\nonumber\\ &&\hspace*{3em}-\int_{t}^{\infty} dt'\frac{1}{\dot{\phi}^2}\frac{[1-3\dot{\phi}^2/(2M_{\rm P}^2\tilde{M}^2)]^2}{[1-\dot{\phi}^2/(2M_{\rm P}^2\tilde{M}^2)][1+3\dot{\phi}^2/(2M_{\rm P}^2\tilde{M}^2)]a^3}\times\nonumber\\ &&\Bigg(\frac{a(-\infty)[1-\dot{\phi}^2(-\infty)/(2M_{\rm P}^2\tilde{M}^2)]^2}{H(-\infty)[1-3\dot{\phi}^2(-\infty)/(2M_{\rm P}^2\tilde{M}^2)]}+\int_{-\infty}^{t'} dt''a\Big(1+\frac{\dot{\phi}^2}{2M_{\rm P}^2\tilde{M}^2}\Big)\Bigg)\Bigg\}\Bigg], \end{eqnarray} which implies that the integrand in (\ref{zetas2}) diverges at $\dot{\phi}=0$ [$\epsilon_N^H=0$], while it is finite at $\dot{\phi}=\pm\phi_c$ [$\epsilon_N^H=1$]. We note that even though the singular behavior at $\dot{\phi}=\pm \phi_c$ disappears at the solution level, one cannot avoid the blow-up of the curvature perturbation $\zeta_k$ when $\dot{\phi}=0$. This means that $\zeta$ is unphysical, and thus one has to reanalyze the perturbation during reheating by looking for a physical gauge~\cite{Germani:2015plv}. This is the Newtonian gauge. Finally, we would like to mention that the homogeneous evolution of the CC+NDC is not affected by the CC-term in Section 3, provided the coefficient $\sigma_C$ is taken to be a small value. Since we were carrying out the perturbation analysis on the NDC-background evolution, it is not clear how the CC-term influences the perturbation equations. Hence, one should check whether the evolution induced by this term can be neglected in the perturbation analysis. To see this, one relevant quantity is the sound speed squared $c^2_s$, because it may reveal a difference in the evolution between the NDC-dominant case of the CC+NDC and the pure NDC. As was shown in Eq.(\ref{ddzetak}), this quantity plays an important role in the perturbed equation for the curvature perturbation mode $\zeta_k$. From Fig. 9, we remind the reader that $c^2_s$ is constant for the CC, while it oscillates for the NDC. We have computed $c^2_s$ for the CC+NDC and depicted it in Fig. 10. In this computation, a term $\frac{\sigma_C\dot{\phi}^2}{2M^2_{\rm P}}$ is added to $A_2$ in the definition of $c^2_s$ (\ref{csss}), while the rest is kept unchanged. Comparing Fig. 9 with Fig. 10 shows that the CC [NDC] panel of $c^2_s$ is very similar to the CC-dominant [NDC-dominant] panel of $c^2_s$ for the CC+NDC. This indicates that the oscillating behavior of $c^2_s$ in the NDC persists in the NDC-dominant $c^2_s$ of the CC+NDC. Hence, we may neglect the CC-term in the perturbation analysis of the NDC. \begin{figure}[t!] \begin{center} \begin{tabular}{cc} \includegraphics[width=.9 \linewidth,origin=tl]{fig10.eps} \end{tabular} \end{center} \caption{Sound speed squared $c_s^2$ for the curvature perturbation $\zeta_k$ after the end of inflation: Left (CC-dominant case of CC+NDC) is nearly constant, while right (NDC-dominant case of CC+NDC) oscillates with increasing frequency after the end of inflation. } \end{figure} \section{Perturbation analysis in the Newtonian gauge } As was shown in the previous section, the comoving gauge is not suitable for analyzing the perturbation during reheating. This is so because the curvature perturbation $\zeta$ blows up at $\dot\phi=0$ on superhorizon scales.
\section{Perturbation analysis in the Newtonian gauge} As was shown in the previous section, the comoving gauge is not suitable for analyzing the perturbation during reheating, because the curvature perturbation $\zeta$ blows up at $\dot\phi=0$ on superhorizon scales. We therefore re-analyze the perturbations by choosing a different gauge that is free of problems at $\dot{\phi}=0$ \cite{Germani:2015plv}. To this end, we consider the scalar perturbation around the background ($\phi=\phi(t)+\varphi(t,{\bf x})$) in the Newtonian gauge~\cite{Mukh}. The cosmological metric then takes the form \begin{eqnarray} ds^2_{\rm NG}=-(1+2\Psi)dt^2+a^2(t)(1-2\Phi)dx^i dx^j \delta_{ij}. \end{eqnarray} Here $\Psi$ is the Newtonian potential, while $\Phi$ is the Bardeen potential~\cite{Motta:2013cwa}. We note that $\Psi=\Phi$ in the CC, but for Horndeski theories including the NDC, $\Psi$ is not the same as $\Phi$~\cite{Motta:2013cwa}. It is instructive to note that the Bellini-Sawicki parametrization~\cite{Bellini:2014fua} is very useful for describing the perturbation compactly on superhorizon scales, including the reheating period. It turns out that for the NDC model (\ref{mact}), the Newtonian potential $\Psi$ is related to $\Phi$ as \begin{eqnarray}\label{psieq} \Psi=\Phi(1+\alpha_{\rm T})\left[1-\frac{\alpha_{\rm M} -\alpha_{\rm T}}{\epsilon+\alpha_{\rm M}-\alpha_{\rm T}}\right] -\frac{\alpha_{\rm M}-\alpha_{\rm T}}{H[\epsilon+\alpha_{\rm M} -\alpha_{\rm T}]}\dot{\Phi} \end{eqnarray} with $\epsilon=-\dot{H}/H^2$. Considering the NDC model (\ref{mact}), one has $K=V=V_0\phi^2,~G_4=M^2_{\rm P}/2,~G_5=-\phi/2\tilde{M}^2$, which determine the two parameters $\alpha_{\rm M}$ and $\alpha_{\rm T}$ as \begin{eqnarray} \alpha_{\rm M}=-\frac{\dot{\phi}\ddot{\phi}}{H(\tilde{M}^2M_{\rm P}^2 -\dot{\phi}^2/2)},~~~~~\alpha_{\rm T}= \frac{\dot{\phi}^2}{\tilde{M}^2M_{\rm P}^2-\dot{\phi}^2/2}. \end{eqnarray} Also, for the NDC model, the Hamiltonian constraint on superhorizon scales reduces to \begin{eqnarray} \label{con-law} \partial_t \left(\frac{HQ}{\dot{\phi}}\right)=0, \end{eqnarray} where $Q$ is the Mukhanov-Sasaki variable (the gauge-invariant combination) \begin{equation} Q=\varphi+\frac{\dot{\phi}}{H}\Phi. \end{equation} Eq.(\ref{con-law}) can be recast in terms of the Bardeen potential $\Phi$ as \begin{eqnarray}\label{phieq} \dot{\Phi}+(1+\epsilon+\alpha_{\rm M})H\Phi=CH[\epsilon+\alpha_{\rm M} -\alpha_{\rm T}], \end{eqnarray} where the constant $C$ depends on the initial condition $\zeta_c$ set during inflation, obtained when transforming from the Newtonian gauge to the comoving gauge. \begin{figure}[t!] \begin{center} \begin{tabular}{cc} \includegraphics[width=.9 \linewidth,origin=tl]{fig11.eps} \end{tabular} \end{center} \caption{The behaviors of $(\Phi,\dot{\phi})$ [left] and $(\Psi,\dot{\phi})$ [right] with respect to time $t$, after the end of inflation. Both figures show that the evolutions of $\Phi$ and $\Psi$ are regular at $\dot{\phi}=0$.} \end{figure} In the CC+NDC model, one has $K=V=\lambda\phi^4/4,~G_3=-\phi/2,~G_4=M_{\rm P}^2/2,~G_5=-\phi/2M^2$~\cite{Germani:2015plv}, where it was shown that the curvature perturbation $\zeta$ on superhorizon scales is not generally conserved, but the rescaled Mukhanov-Sasaki variable is conserved, implying a constraint equation for the Newtonian potential. This implies that the superhorizon perturbations of $\Phi$ and $\Psi$ are well behaved, with the warning that $\Psi$ could become very large. Coming back to the NDC model, we solve Eq.(\ref{phieq}) for $\Phi$ numerically, taking into account the reheating period; making use of Eq.(\ref{psieq}) then yields the numerical solution for the Newtonian potential $\Psi$.
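For concreteness, the following minimal Python sketch carries out this procedure; the background functions $H(t)$, $\dot\phi(t)$ and the constants $C$, $M_{\rm P}$, $\tilde{M}$ are toy placeholders standing in for the numerical NDC background of Eqs.(\ref{Heq})-(\ref{seq}), so the output is only qualitative.

\begin{verbatim}
# Sketch: integrate Eq. (phieq) for the Bardeen potential Phi, then
# obtain Psi from Eq. (psieq). H(t), dphi(t) and the constants below
# are toy placeholders, not the actual NDC background solution.
import numpy as np
from scipy.integrate import solve_ivp

Mp = Mt = 1.0                       # placeholder M_P and tilde-M
C  = 1.0                            # set by inflationary initial data
H     = lambda t:  2.0/(3.0*t)      # toy Hubble rate
dH    = lambda t: -2.0/(3.0*t**2)
dphi  = lambda t:  0.1*np.cos(5.0*t)   # toy oscillating dot-phi
ddphi = lambda t: -0.5*np.sin(5.0*t)

def alphas(t):
    aM = -dphi(t)*ddphi(t)/(H(t)*(Mt**2*Mp**2 - dphi(t)**2/2))
    aT =  dphi(t)**2/(Mt**2*Mp**2 - dphi(t)**2/2)
    eps = -dH(t)/H(t)**2
    return aM, aT, eps

def rhs(t, y):
    aM, aT, eps = alphas(t)
    return [-(1 + eps + aM)*H(t)*y[0] + C*H(t)*(eps + aM - aT)]

sol = solve_ivp(rhs, [1.0, 50.0], [0.1], rtol=1e-8, max_step=0.01)
Phi = sol.y[0]
aM, aT, eps = alphas(sol.t)
dPhi = np.array([rhs(t, [p])[0] for t, p in zip(sol.t, Phi)])
Psi = (Phi*(1 + aT)*(1 - (aM - aT)/(eps + aM - aT))
       - (aM - aT)/(H(sol.t)*(eps + aM - aT))*dPhi)   # Eq. (psieq)
print("Phi range:", Phi.min(), Phi.max())
print("Psi range:", Psi.min(), Psi.max())
\end{verbatim}

Even in this toy setting, the first-order equation (\ref{phieq}) produces a $\Phi$ that remains finite as $\dot\phi$ crosses zero, in line with Fig. 11.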
Fig. 11 shows that after the end of inflation, the behaviors of $\Phi$ (left) and $\Psi$ (right) are regular at $\dot\phi=0$. Also, we observe that the Newtonian potential $\Psi$ grows to very large values. Since the subhorizon mode $\zeta_k$ of the curvature perturbation suffered from the gradient instability in the comoving gauge, it is important to check whether this instability is present in the Newtonian gauge. For this purpose, we use the second-order evolution equation for the Bardeen potential mode $\Phi_k$, \begin{equation}\label{phi-evol} \ddot{\Phi}_k+\frac{\beta_1\beta_2+\beta_3\alpha^2_{\rm B} \frac{k^2}{a^2}}{\beta_1+\alpha^2_{\rm B} \frac{k^2}{a^2}} \dot{\Phi}_k + \frac{\beta_1\beta_4+\beta_1\beta_5 \frac{k^2}{a^2}+c^2_s\alpha^2_{\rm B} (\frac{k^2}{a^2})^2}{\beta_1+\alpha^2_{\rm B} \frac{k^2}{a^2}}\Phi_k=0, \end{equation} where the parameters $\beta_i(\alpha_i,H)$ were defined in Appendix B of Ref. \cite{Bellini:2014fua}. Here the oscillating sound speed squared $c^2_s$ (\ref{csss}) shown in Fig. 9 enters again. This equation was derived by eliminating the inflaton perturbation through the combination $\varphi/\dot{\phi}$, which is not an observable in the Newtonian gauge. In the subhorizon regime of $k^2/a^2\gg1$, Eq.(\ref{phi-evol}) reduces to \begin{equation}\label{phi-evol2} \ddot{\Phi}_k+(3+\alpha_{\rm M})H\dot{\Phi}_k + \Big(\frac{\beta_1\beta_5}{\alpha^2_{\rm B}} +c^2_s\frac{k^2}{a^2}\Big)\Phi_k\simeq0, \end{equation} which is rewritten by introducing the Compton mass scale $k_{\rm C}$ $ [k^2_{\rm C}c^2_s/a^2\equiv\beta_1\beta_5/\alpha^2_{\rm B}]$~\cite{DeFelice:2010aj} as \begin{equation}\label{phi-evol3} \ddot{\Phi}_k+(3+\alpha_{\rm M})H\dot{\Phi}_k + c^2_s\Big(\frac{k^2_{\rm C}}{a^2} +\frac{k^2}{a^2}\Big)\Phi_k\simeq0. \end{equation} In the case of $k^2/a^2\gg k^2_{\rm C}/a^2,(3+\alpha_{\rm M})H$, the evolution equation (\ref{phi-evol3}) takes the form \begin{equation}\label{phi-evol4} \ddot{\Phi}_k+ c^2_s \frac{k^2}{a^2}\Phi_k\simeq0, \end{equation} which leads to the gradient instability for the oscillating $c_s^2$. However, we would like to mention that in this case, the gradient instability emerges only when taking the extreme quasi-static limit of the dynamics ($k\to \infty$) in the Newtonian gauge. This contrasts with the comoving gauge, where the gradient instability already appears under the condition $k^2/a^2\gg1$, as was shown in Eq.(\ref{ddsub}). \section{Summary and Discussions} First of all, we have studied the difference between the NDC and CC during reheating after the end of inflation. We have observed a sizable difference: the inflaton velocity $\dot{\phi}$ oscillates with damping for CC, while it oscillates without damping for NDC. We have confirmed that this difference arises from the different time dependence of their Hubble parameters ($\dot{H}$). Analytic expressions for the inflaton and Hubble parameter obtained by applying the averaging method to the NDC-equations (\ref{Heq})-(\ref{seq})~\cite{Ghalee:2013ada} are not suitable for describing the violent oscillations of the Hubble parameter. Hence their argument for the disappearance of parametric resonance is not established for the NDC. Now we turn to the perturbations generated during reheating in the NDC. We have studied the curvature perturbation $\zeta$ in the comoving gauge ($\varphi=0$). This gauge is certainly applicable at the stage of inflation, but it may be incompatible with $\dot{\phi}=0$ during reheating.
As was shown in Eq.(\ref{ddsub}) in the subhorizon regime ($k^2/a^2\gg1$), the Laplacian (gradient) instability arises readily because the sound speed squared $c^2_s$ oscillates during reheating. This presumed instability arises because the authors of~\cite{Ema:2015oaa} neglected the second term of (\ref{ddzetak}). However, this is not the case in the superhorizon limit ($k^2/a^2\ll1$), as was shown in (\ref{chi-sol0}). Also, this instability never occurs even for the correction to the superhorizon mode up to $k^2$-order [see (\ref{chi-sol1})]. But this case is problematic, since the second term of (\ref{ddzetak}) is singular at $\dot{\phi}=0,\pm\phi_c$. Here, it is noted that the apparent singular behavior at $\dot{\phi}=\pm \phi_c$ disappeared at the solution level. Importantly, we comment on the incompatibility of the comoving gauge ($\varphi=0$) with $\dot{\phi}=0$ during reheating in the NDC model. We remind the reader that the blow-up of $\zeta$ at $\dot{\phi}=0$ happens because the comoving gauge is not suitable for describing the oscillating period, especially near $\dot{\phi}=0$. This indicates that the curvature perturbation cannot be regarded as a physical variable describing the relevant perturbation during reheating; hence, it should not be used to draw any physical conclusion. Instead, the Bardeen potential $\Phi$ and the Newtonian potential $\Psi$ have been employed as physical perturbations by choosing the Newtonian gauge. The superhorizon perturbations are well behaved, with the warning that the Newtonian potential may become large. Finally, we note that the gradient instability of the Bardeen potential mode $\Phi_k$ appears only when taking the extreme quasi-static limit of the dynamics ($k \to \infty$) in the Newtonian gauge; in this limit, the NDC model would become unviable during the reheating period. \newpage
\section{Introduction} In binary classification, one observes multiple realizations of two different classes, \begin{align*} X_0^1, \ldots, X_0^m & \stackrel{iid}{\sim} P_0, \\ X_1^1, \ldots, X_1^n & \stackrel{iid}{\sim} P_1, \end{align*} where $P_0$ and $P_1$, the class-conditional distributions, are probability distributions on a measurable space $({\mathcal X}, \mathfrak{S})$. The feature vector $X_i^y \in {\mathcal X}$ denotes the $i$-th realization from class $y \in \{0,1\}$. The general goal is to construct a classifier from this data. There are several kinds of noise that can affect a classification problem. A first type of noise occurs when $P_0$ and $P_1$ have overlapping support, meaning that the label is not a deterministic function of the feature vector. In this situation, even an optimal classifier makes mistakes. In this work, we consider a second type of noise, {\em label noise}, that can occur {\em in addition to} the first type of noise. With label noise, some of the labels of the training examples are corrupted. We focus in particular on random label noise, as opposed to feature-dependent or adversarial label noise. To model label noise, we represent the training data via contamination models: \begin{align} X_0^1, \ldots, X_0^m & \stackrel{iid}{\sim} \tilde{P}_0 : = (1-\pi_0)P_0 + \pi_0 P_1, \label{eqn:contam0} \\ X_1^1, \ldots, X_1^n & \stackrel{iid}{\sim} \tilde{P}_1 : = (1-\pi_1)P_1 + \pi_1 P_0. \label{eqn:contam1} \end{align} According to these mixture representations, each ``apparent" class-conditional distribution is in fact a contaminated version of the true class-conditional distribution, where the contamination comes from the other class. Thus, $\tilde{P}_0$ governs the training data with apparent class label $0$. A proportion $1 - \pi_0$ of these examples have $0$ as their true label, while the remaining $\pi_0$ have a true label of $1$. Similar remarks apply to $\tilde{P}_1$. The noise is asymmetric in that $\pi_0$ need not equal $\pi_1$. We emphasize that $\pi_0$ and $\pi_1$ are unknown. The distributions $P_0$ and $P_1$ are also unknown, and we do not wish to impose models for them. In particular, the supports of $P_0$ and $P_1$ may overlap, so that the classes are not separable. Previous work on classification with random label noise, reviewed below, has not considered the problem in this generality. Our contribution is to introduce general sufficient conditions on the elements $P_0, P_1, \pi_0, \pi_1$ of the contamination models for the existence of a consistent discrimination rule; these conditions are the following: \begin{itemize} \item (Total noise level) $\pi_0 + \pi_1 < 1$, \item (Mutual irreducibility) It is not possible to write $P_0$ as a nontrivial mixture of $P_1$ and some other distribution, and {\em vice versa}. \end{itemize} We present a consistent discrimination rule that leverages consistent estimates of the noise proportions. These proportions are recovered in turn via mixture proportion estimation, which is the problem of estimating the proportion of one distribution present in another, given random samples from both distributions. To shed some light on these conditions, we remark that in the absence of any assumption, the solution $(P_0,P_1,\pi_0,\pi_1)$ to \eqref{eqn:contam0}-\eqref{eqn:contam1}, when the contaminated distributions $\tilde{P}_0,\tilde{P}_1$ are given, is non-unique. 
In particular, if the condition on the total noise level were not imposed, then for any solution, swapping the roles of classes 0 and 1 would yield another solution (with complementary contamination probabilities), while leaving the apparent labels unchanged. Furthermore, we describe in detail (at the population level) the geometry of the set of all possible solutions $(P_0,P_1,\pi_0,\pi_1)$ to \eqref{eqn:contam0}-\eqref{eqn:contam1}. We argue that for any pair $\tilde{P}_0 \neq \tilde{P}_1$, there always exists a {\em unique} solution satisfying the above two conditions. Moreover, this solution uniquely corresponds to the maximum possible total label noise level $(\pi_1+\pi_0)$ compatible with the observed contaminated distributions, and also to the maximum possible total variation separation $\norm{P_1-P_0}_{TV}$ under the condition $\pi_1 + \pi_0 <1$. In this sense, $P_0$ and $P_1$ satisfying the second condition are {\em maximally denoised} versions of the contaminated distributions. Under these conditions, we therefore establish universally consistent learning of (i) a classifier that compensates for everything that could be construed as label noise, and (ii) the corresponding contamination proportions. In particular, we emphasize that the proposed conditions do not put any restrictions on the possible apparent label distributions $\tilde{P}_0,\tilde{P}_1$, so that our consistency result is distribution-free. An alternative way to view the contamination model \eqref{eqn:contam0}-\eqref{eqn:contam1} is to interpret it as a {\em source separation} problem. In the usual source separation setting, the {\em realizations} from the different sources are linearly mixed, whereas in the present model, it is the {\em source probability distributions} that are mixed (we do not observe a superposition of signals, but a signal coming from one source or the other). As a common point with the source separation setting, it is necessary to postulate additional constraints on the sources in order to resolve non-uniqueness of the possible solutions. In Independent Component Analysis, for instance, sources are assumed to be independent. Our assumption of mutual irreducibility between the sources plays a conceptually comparable role here. Similarly, the assumption on the total noise level resolves the ambiguity that the sources would otherwise be identifiable only up to permutation. \subsection{Problem Statement and Notation} We consider the problem of designing a discrimination rule, in the presence of label noise, that is consistent with respect to a given performance measure. To state the problem precisely, we define the following terms. A {\em classifier} is a measurable function $f: {\mathcal X} \to \{0,1\}$. A {\em performance measure} $R(f)$ assigns to every classifier a nonnegative real number, and depends on the true distributions, $P_0$ and $P_1$. The optimal performance measure is denoted $R^* = \inf R(f)$, where the infimum is over all classifiers. A {\em discrimination rule} is a function $\widehat{f}_{m,n}:{\mathcal X}^m \times {\mathcal X}^n \to ({\mathcal X} \to \{0,1\})$ mapping training data to classifiers. A discrimination rule is {\em consistent} iff $R(\widehat{f}_{m,n}) \to R^*$ in probability as $\min\{m,n\} \to \infty$. We focus on the minmax criterion, for which $R(f) = \max\{R_0(f), R_1(f) \}$, where \begin{align*} R_0(f) &:= P_0(f(X)=1) \\ R_1(f) &:= P_1(f(X)=0) \end{align*} are the Type I and Type II errors. The optimal performance $R^*$ is called the {\em minmax} error. This choice of performance measure is primarily for concreteness; we expect no difficulty in extending our analysis to other performance measures, both frequentist and Bayesian, that can be defined in terms of $R_0$ and $R_1$, such as Neyman-Pearson or expected misclassification cost. This is because our approach is grounded on a technique to estimate $R_0(f)$ and $R_1(f)$. We also introduce the contaminated Type I and II errors: \begin{align} \tilde{R}_0(f) &:= \tilde{P}_0(f(X)=1) \nonumber \\ &= (1-\pi_0) R_0(f) + \pi_0 (1 - R_1(f)) \label{eqn:r0t} \\ \tilde{R}_1(f) &:= \tilde{P}_1(f(X)=0) \nonumber \\ &= (1-\pi_1) R_1(f) + \pi_1 (1 - R_0(f)) \label{eqn:r1t}. \end{align}
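As an empirical sanity check, the following Python sketch simulates the contamination model \eqref{eqn:contam0} and verifies \eqref{eqn:r0t} for a fixed classifier; the Gaussian class-conditional distributions, the noise proportions, and the threshold classifier are illustrative assumptions only.

\begin{verbatim}
# Simulate the contamination model (eqn:contam0) with Gaussian P0, P1
# and check the contaminated Type I error formula (eqn:r0t) for the
# threshold classifier f(x) = 1{x > 0}. All choices are illustrative.
import numpy as np
rng = np.random.default_rng(0)

pi0, pi1, n = 0.2, 0.3, 200000      # noise proportions, sample size
sample_P0 = lambda m: rng.normal(-1.0, 1.0, m)   # class 0
sample_P1 = lambda m: rng.normal(+1.0, 1.0, m)   # class 1

# Apparent class-0 sample: the true label is flipped w.p. pi0.
flip = rng.random(n) < pi0
X0t = np.where(flip, sample_P1(n), sample_P0(n))

f = lambda x: (x > 0).astype(int)    # fixed threshold classifier
R0 = (f(sample_P0(n)) == 1).mean()   # true Type I error (Monte Carlo)
R1 = (f(sample_P1(n)) == 0).mean()   # true Type II error
R0t_emp = (f(X0t) == 1).mean()       # contaminated Type I error
R0t_thy = (1 - pi0)*R0 + pi0*(1 - R1)   # prediction of (eqn:r0t)
print(R0t_emp, R0t_thy)              # agree up to Monte Carlo error
\end{verbatim}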
\subsection{Motivating Application} \label{sec:motiv} This work is motivated by a nuclear particle classification problem that is critical for nuclear nonproliferation, nuclear safeguards, etc. An organic scintillation detector is a device commonly used to detect high-energy neutrons. When a particle interacts with the detector, the energy deposited by the particle is converted to a pulse-shaped voltage waveform, which is then digitally sampled to obtain a feature vector $X \in \mathbb{R}^d$, where $d$ is the number of digital samples. The energy distribution of detected neutrons is characteristic of the nuclear source material, and these energy distributions can be inferred from the heights of the observed pulses. However, these detectors are also sensitive to gamma rays, which are frequently emitted by the same fission events that produce neutrons, and which are also strongly present in background radiation. Therefore, to render organic scintillation detectors useful for characterization of nuclear materials, it is necessary to classify between neutron and gamma-ray pulses, a problem referred to as pulse shape discrimination (PSD) \citep{adams78,ambers11}. Unfortunately, even in controlled laboratory settings, it is very difficult to obtain pure samples of neutron and gamma-ray pulses. As previously mentioned, the fission events that produce neutrons also yield gamma rays, and gamma rays also arrive from background radiation. Although pure gamma-ray sources do exist, when collecting measurements from such sources, neutrons from the background cannot be completely eliminated. If we view gamma rays as class 0, then by using a strong and relatively pure gamma-ray source, $\pi_0$ will be small but nonzero. On the other hand, the proportion of gamma rays emitted during fission is intrinsic to the source material, and cannot be changed. Thus $\pi_1$ could be in the neighborhood of one-half. With additional time-of-flight information, this proportion can be reduced, but it is still non-negligible \citep{ambers11}. Thus, PSD is naturally described by the proposed label noise model. \subsection{Related Work} \label{sec:related} Classification in the presence of label noise has drawn the attention of numerous researchers. One common approach is to assume that corrupted labels are more likely to be associated with outlying data points. This has inspired methods to clean, correct, or reweight the training data \citep{brodley99, rebbapragada07}, as well as the use of robust (usually nonconvex) losses \citep{mason00, schuurmans06, vasconcelos08, ding10, denchev12}. The above approaches are not necessarily based on a random label noise model, but rather assume that noisy labels are more common near the decision boundary. Generative models have also been applied in the context of random label noise.
These impose parametric models on the data-generating distributions, and include the label noise as part of the model. The parameters are then estimated using an EM algorithm \citep{bouveyron09}. The method of \cite{lawrence01} employs kernels in this approach, allowing for the modeling of more flexible distributions. Negative results for convex risk minimization in the presence of label noise have been established by \citet{long10} and \citet{manwani11}. These works demonstrate a lack of noise tolerance for boosting and empirical risk minimization based on convex losses, respectively, and suggest that any approach based on convex risk minimization will require modification of the loss, such that the risk minimizer is the optimal classifier with respect to the uncontaminated distributions. Along these lines, \citet{stempfel09} recently developed a support vector machine with a modified hinge loss. Proper modification of the loss, however, requires knowledge of the noise proportions. Since these proportions are typically not known {\em a priori}, our consistent estimators of these proportions could make approaches based on convex risk minimization more broadly applicable. Classification with random label noise has also been studied in the PAC literature. Most PAC formulations assume that (i) $P_0$ and $P_1$ have non-overlapping support (i.e., there is a deterministic ``target concept" that provides the true labels), (ii) the label noise is symmetric (i.e., independent of the true class label), and (iii) the performance measure is the probability of error \citep{angluin88, kearns93, aslam96, cesabianchi97, bshouty98, kalai03}. Under these conditions, it typically suffices to train on the contaminated data; only the sample complexity changes. The case of asymmetric label noise was addressed by \citet{blum98} under (i), as the basis of co-training. Some new directions and a thorough review of this body of work were recently presented in \cite{jabbari10}. As we discuss in the next section, new challenges emerge when (i), (ii), and (iii) are not assumed. To our knowledge, previous work under the asymmetric noise model has not addressed a minimal set of conditions for either consistent classification or for consistent estimation of the label noise proportions. Classification with label noise is related to several other machine learning problems. It is the basis of co-training \citep{blum98}. When $\pi_1 = 0$, we have ``one-sided" label noise, and the problem reduces to learning from positive and unlabeled examples (LPUE), also known as semi-supervised novelty detection (SSND); see \citet{blanchard10} for a review of this literature. In particular, \citet{blanchard10} develop theory for ``mixture proportion estimation" that we leverage in our analysis. A basic version of multiple instance learning can be reduced to classification with one-sided label noise \citep[see][]{sabato12}. Finally, below we establish a connection between classification with label noise and class probability estimation. \subsection{Outline} The remainder of the paper is outlined as follows. Section \ref{sec:challenge} discusses the challenges posed by label noise for classifier design. Section \ref{sec:alternate} presents an alternate representation of the contamination models that reduces the problem to that of mixture proportion estimation, which is discussed in Section \ref{sec:mixture}, along with distributional assumptions and maximal denoising. 
In Section \ref{sec:estError} we introduce estimates of the Type I and Type II errors, and show that, under the proposed conditions, they satisfy a uniform law of large numbers. In Section \ref{sec:minmax} we focus on the minmax criterion and present a consistent minmax classifier. Section \ref{sec:addmpe} provides additional discussion of mixture proportion estimation, and Section \ref{sec:cpe} makes a connection between our work and the problem of class probability estimation. Proofs of results are found either in the body of the paper, or in an appendix. \section{The Challenge of Label Noise} \label{sec:challenge} In this section, we address the challenges posed by label noise. We focus on the population setting ($m, n = \infty$) and compare classifier design based on the contaminated distributions, $\tilde{P}_0$ and $\tilde{P}_1$, versus the true ones, $P_0$ and $P_1$. We introduce the following condition on the total amount of label noise. \begin{description} \item[(A)] $\pi_0 + \pi_1 < 1$. \end{description} This condition states, in a certain sense, that a majority of the labels are correct on average. It even allows one of the proportions to be very close to one if the other proportion is small enough. This condition was previously adopted by \cite{blum98}. In this section, we assume that $P_0$ and $P_1$ are absolutely continuous with respect to Lebesgue measure. Let $p_0$ and $p_1$ denote the corresponding densities. Thus \begin{align*} \tilde{p}_0(x) &:= (1-\pi_0) p_0(x) + \pi_0 p_1(x), \\ \tilde{p}_1(x) &:= (1-\pi_1) p_1(x) + \pi_1 p_0(x), \end{align*} are the respective densities of $\tilde{P}_0$ and $\tilde{P}_1$. \begin{prop} \label{prop:p1} Assume {\bf (A)} holds. For all $\gamma \ge 0$, and every $x$ such that $p_0(x) > 0$ and $\tilde{p}_0(x) > 0$, \begin{equation*} \frac{p_1(x)}{p_0(x)} > \gamma \iff \frac{\tilde{p}_1(x)}{\tilde{p}_0(x)} > \lambda, \end{equation*} where \begin{equation} \label{eqn:gamlam} \lambda = \frac{\pi_1 + \gamma (1-\pi_1)}{1 - \pi_0 + \gamma \pi_0}. \end{equation} \end{prop} The proof involves a sequence of simple algebraic steps to transform one likelihood ratio into another, and the use of {\bf (A)} to ensure that the direction of the inequality is preserved. Regardless of the performance measure chosen (probability of error, Neyman-Pearson, etc.), the optimal classifier takes the form of a likelihood ratio test (LRT) based on the true densities. According to the proposition, every true LRT is identical to a contaminated LRT with a different threshold. As the threshold of one LRT sweeps over its range, so too does the threshold of the other LRT. Equivalently, both LRTs generate the same receiver operating characteristic (ROC). However, if we design a classifier with respect to the contaminated Type I and II errors, we will not obtain a classifier that is optimal with respect to the true Type I and II errors, except in very special circumstances. To make this point concrete, we now consider three specific performance measures. {\bf Probability of error.} When the feature vector $X$ and label $Y$ are jointly distributed, the probability of misclassification is minimized by an LRT, where the threshold $\gamma$ is given by the ratio of {\em a priori} class probabilities. If $\gamma = 1$, then the corresponding threshold for the contaminated LRT is also 1, regardless of $\pi_0$ and $\pi_1$, which follows directly from \eqref{eqn:gamlam}. Furthermore, assuming $\pi_0, \pi_1 > 0$, simple algebra shows that $\lambda = \gamma$ only if $\gamma = 1$. Thus, if the two classes are not equally probable {\em a priori}, setting the correct $\lambda$ for the contaminated LRT is not possible, since $\pi_0$ and $\pi_1$ are unknown.
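The threshold correspondence \eqref{eqn:gamlam} is easy to verify numerically, as in the following sketch; the Gaussian densities and noise levels are illustrative assumptions.

\begin{verbatim}
# Check the threshold map (eqn:gamlam): the true LRT at threshold
# gamma and the contaminated LRT at threshold lambda make identical
# decisions. Gaussian densities and noise levels are illustrative.
import numpy as np
from scipy.stats import norm

pi0, pi1 = 0.2, 0.3                 # satisfies (A)
p0 = norm(-1, 1).pdf
p1 = norm(+1, 1).pdf
p0t = lambda x: (1 - pi0)*p0(x) + pi0*p1(x)   # contaminated densities
p1t = lambda x: (1 - pi1)*p1(x) + pi1*p0(x)

x = np.linspace(-5, 5, 1001)
for gamma in [0.5, 1.0, 2.0]:
    lam = (pi1 + gamma*(1 - pi1))/(1 - pi0 + gamma*pi0)
    agree = np.all((p1(x)/p0(x) > gamma) == (p1t(x)/p0t(x) > lam))
    print("gamma=%.1f  lambda=%.3f  decisions agree: %s"
          % (gamma, lam, agree))
# lambda equals 1 exactly when gamma = 1, and differs otherwise.
\end{verbatim}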
{\bf Neyman-Pearson.} As noted above, the true and contaminated LRTs have the same ROC. If a point on this ROC is chosen such that $\tilde{R}_0(f) = \alpha$, it will generally not be the case that $R_0(f) = \alpha$. This follows because $\tilde{R}_0(f) = (1-\pi_0) R_0(f) + \pi_0 (1 - R_1(f))$. Simple algebra shows that $R_0(f) = \tilde{R}_0(f)$ iff $\pi_0 = 0$ or $R_0(f) + R_1(f) = 1$. The latter condition is not satisfied by an optimal classifier unless $P_0 = P_1$, since it corresponds to random guessing. The former case, $\pi_0 = 0$, means the negative class has no contamination, and is equivalent (after swapping class labels) to learning from positive and unlabeled examples. {\bf Minmax.} The minmax classifier corresponds to the point on the ROC of the true and contaminated LRTs where $R_0(f) = R_1(f)$. Indeed, if $R_0(f) \ne R_1(f)$, then $\max \{R_0(f), R_1(f)\}$ can be decreased by moving along the ROC such that the larger of $R_0(f), R_1(f)$ is decreased. Thus, designing a classifier with respect to the contaminated distributions yields a point on the optimal ROC where $\tilde{R}_0(f) = \tilde{R}_1(f)$. Using equations \eqref{eqn:r0t} and \eqref{eqn:r1t}, simple algebra reveals that $\tilde{R}_0(f) = \tilde{R}_1(f)$ and $R_0(f) = R_1(f)$ for the same $f$ iff $\pi_0 = \pi_1$ or $R_0(f) = R_1(f) = \frac12$. The first condition is not satisfied for asymmetric label noise, and the latter condition is not true for an optimal classifier unless $P_0 = P_1$. In summary, a classifier that is optimal with respect to the contaminated Type I and II errors is not optimal with respect to the true Type I and II errors, except in special cases. Based on the above discussion, in the setting of asymmetric, random label noise, it is essential to have accurate estimates of the true Type I and Type II errors. These estimates, in turn, facilitate the design of discrimination rules with respect to any criterion. For concreteness, in later sections we examine the minmax criterion in detail. However, our approach readily extends to other performance measures that are based on the false positive and negative rates. \section{Alternate Mixture Representation} \label{sec:alternate} We introduce an alternative mixture representation that facilitates our subsequent analysis. The following lemma reformulates the problem. \begin{lemma} \label{le:le1} If $P_0 \ne P_1$ and {\bf (A)} holds, then $\tilde{P}_1 \neq \tilde{P}_0$, and there exist unique $0 \le \tilde{\pi}_0, \tilde{\pi}_1 < 1$ such that \begin{align} \tilde{P}_0 & = (1-\tilde{\pi}_0) P_0 + \tilde{\pi}_0 \tilde{P}_1 \label{eqn:ssnd0} \\ \tilde{P}_1 & = (1-\tilde{\pi}_1) P_1 + \tilde{\pi}_1 \tilde{P}_0. \label{eqn:ssnd1} \end{align} In particular $\tilde{\pi}_0 = \frac{\pi_0}{1-\pi_1} < 1$ and $\tilde{\pi}_1 = \frac{\pi_1}{1-\pi_0} < 1$. \end{lemma} \begin{proof} To see that $\tilde{P}_1 \neq \tilde{P}_0$, assume by contraposition that equality holds. Plugging in \eqref{eqn:contam0}-\eqref{eqn:contam1}, we obtain \[ (1-\pi_1 - \pi_0)P_1 = (1-\pi_1 - \pi_0)P_0, \] which, since $P_0 \neq P_1$, would imply $\pi_1 + \pi_0 = 1$ and contradict {\bf (A)}. We turn to identity \eqref{eqn:ssnd0}.
Matching distributions, the identity holds iff \begin{align*} P_1 (\pi_0 - \tilde{\pi}_0 (1-\pi_1)) & = P_0 (1 - \tilde{\pi}_0 + \pi_1\tilde{\pi}_0 - (1 - \pi_0)) \\ & = P_0 (\pi_0 - \tilde{\pi}_0 (1-\pi_1)). \end{align*} Since $P_0 \ne P_1$, the unique solution is $\tilde{\pi}_0 = \frac{\pi_0}{1-\pi_1}$. From {\bf (A)} it follows that $\tilde{\pi}_0 < 1$. Similar reasoning applies to the second identity. \end{proof} This lemma motivates estimates of the true Type I and Type II errors. For any classifier $f$, we may express the contaminated Type I and Type II errors as \begin{eqnarray} \tilde{R}_0(f) &=& \tilde{P}_0(f(X)=1) \nonumber \\ &=& (1-\tilde{\pi}_0)R_0(f) + \tilde{\pi}_0(1-\tilde{R}_1(f)) \label{eq:noisyz} \\ \tilde{R}_1(f) &=& \tilde{P}_1(f(X)=0) \nonumber \\ &=& (1-\tilde{\pi}_1)R_1(f) + \tilde{\pi}_1(1-\tilde{R}_0(f)), \label{eq:noisyo} \ \end{eqnarray} where Equations (\ref{eq:noisyz}) and (\ref{eq:noisyo}) follow from Lemma~\ref{le:le1}. By solving for $R_0(f)$ and $R_1(f)$ in (\ref{eq:noisyz}) and (\ref{eq:noisyo}), we find \begin{eqnarray} R_0(f) = \frac{\tilde{R}_0(f) - \tilde{\pi}_0(1-\tilde{R}_1(f))}{1-\tilde{\pi}_0} &=& 1-\tilde{R}_1(f) - \frac{1-\tilde{R}_0(f) - \tilde{R}_1(f)}{1-\tilde{\pi}_0} \label{eq:noisyz2} \\ R_1(f) = \frac{\tilde{R}_1(f) - \tilde{\pi}_1(1-\tilde{R}_0(f))}{1-\tilde{\pi}_1} &=& 1-\tilde{R}_0(f) - \frac{1-\tilde{R}_1(f) - \tilde{R}_0(f)}{1-\tilde{\pi}_1}. \label{eq:noisyo2} \ \end{eqnarray} We can estimate $\tilde{R}_0(f)$ and $\tilde{R}_1(f)$ from the training data. Therefore, if we can estimate $\tilde{\pi}_0$ and $\tilde{\pi}_1$, then we can estimate $R_0(f)$ and $R_1(f)$, and thereby design a classifier. In the next section we address the estimation of $\tilde{\pi}_0$ and $\tilde{\pi}_1$. Note that it is not necessary to estimate $\pi_0$ and $\pi_1$, although that would be possible in light of Lemma~\ref{le:le1}. We conclude this section with a converse to Lemma~\ref{le:le1}: \begin{lemma} \label{le:le1conv} Assume that \eqref{eqn:ssnd0}-\eqref{eqn:ssnd1} hold and $\tilde{P}_1\neq\tilde{P}_0$. Then $P_1 \neq P_0$ and there exist unique $\pi_1,\pi_0 \in [0,1)$ (namely $\pi_0 = \frac{\tilde{\pi}_0(1-\tilde{\pi}_1)}{1-\tilde{\pi}_1\tilde{\pi}_0}$ and $\pi_1 = \frac{\tilde{\pi}_1(1-\tilde{\pi}_0)}{1-\tilde{\pi}_1\tilde{\pi}_0} $) so that \eqref{eqn:contam0}-\eqref{eqn:contam1} hold; furthermore, {\bf (A)} is satisfied. \end{lemma} \begin{proof} Assume \eqref{eqn:ssnd0}-\eqref{eqn:ssnd1} hold. Since we assume $\tilde{P}_1\neq \tilde{P}_0$, it holds that $\tilde{\pi}_1,\tilde{\pi}_0<1$. To see that $P_0 \neq P_1$, assume by contraposition that equality holds. Plugging in \eqref{eqn:ssnd0}-\eqref{eqn:ssnd1} and after straightforward manipulation, we obtain equivalently \[ \frac{1-\tilde{\pi}_1 \tilde{\pi}_0}{(1-\tilde{\pi}_1)(1-\tilde{\pi}_0)} \tilde{P}_1 = \frac{1-\tilde{\pi}_1 \tilde{\pi}_0}{(1-\tilde{\pi}_1)(1-\tilde{\pi}_0)} \tilde{P}_0, \] which would contradict the assumption $\tilde{P}_1 \neq \tilde{P}_0$. Next, in order for identity~\eqref{eqn:contam0} to hold, by matching distributions in a similar way as in the proof of Lemma~\ref{le:le1}, we arrive at the equivalent relation $(\tilde{\pi}_0(1-\pi_1)-\pi_0)\tilde{P}_0= (\tilde{\pi}_0(1-\pi_1)-\pi_0)\tilde{P}_1$. Since $\tilde{P}_1\neq\tilde{P}_0$, the unique solution is $\pi_0 = \tilde{\pi}_0(1-\pi_1)$. Similarly, for~\eqref{eqn:contam1} to hold the unique solution is $\pi_1 = \tilde{\pi}_1(1-\pi_0)$. From these we derive the announced expressions for $\pi_0$ and $\pi_1$. It is then easy to check that $\pi_0+\pi_1-1=-\frac{(1-\tilde{\pi}_1)(1-\tilde{\pi}_0)}{1-\tilde{\pi}_1\tilde{\pi}_0}<0$, so that {\bf (A)} holds. \end{proof}
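These algebraic identities are easily verified numerically, as in the following sketch (all proportions and error values are arbitrary illustrative numbers):

\begin{verbatim}
# Numerical check of the correspondence in Lemmas le1/le1conv and of
# the denoising identities (eq:noisyz2)-(eq:noisyo2). All numbers are
# illustrative.
pi0, pi1 = 0.2, 0.3                     # satisfies (A)
tpi0, tpi1 = pi0/(1 - pi1), pi1/(1 - pi0)   # Lemma le1
# Lemma le1conv inverts the map:
assert abs(tpi0*(1 - tpi1)/(1 - tpi0*tpi1) - pi0) < 1e-12
assert abs(tpi1*(1 - tpi0)/(1 - tpi0*tpi1) - pi1) < 1e-12

R0, R1 = 0.12, 0.25                     # arbitrary true error rates
R0t = (1 - pi0)*R0 + pi0*(1 - R1)       # contaminated errors
R1t = (1 - pi1)*R1 + pi1*(1 - R0)
R0_rec = 1 - R1t - (1 - R0t - R1t)/(1 - tpi0)   # (eq:noisyz2)
R1_rec = 1 - R0t - (1 - R1t - R0t)/(1 - tpi1)   # (eq:noisyo2)
assert abs(R0_rec - R0) < 1e-12 and abs(R1_rec - R1) < 1e-12
print("recovered true errors:", R0_rec, R1_rec)
\end{verbatim}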
Together, Lemmas~\ref{le:le1} and~\ref{le:le1conv} imply that for known, distinct uncontaminated distributions $P_0\neq P_1$, there is an explicit one-to-one correspondence between the contamination proportions $(\pi_1,\pi_0)$ of the initial contamination models \eqref{eqn:contam0}-\eqref{eqn:contam1} under constraint {\bf (A)}, and the proportions $(\tilde{\pi}_1,\tilde{\pi}_0)$ in the representation \eqref{eqn:ssnd0}-\eqref{eqn:ssnd1} (with the only constraint $0\leq \tilde{\pi}_1,\tilde{\pi}_0 <1$). The alternate representations \eqref{eqn:ssnd0}-\eqref{eqn:ssnd1} are {\em decoupled} in the sense that \eqref{eqn:ssnd0} does not involve $P_1$, while \eqref{eqn:ssnd1} does not involve $P_0$. This allows us to estimate $\tilde{\pi}_0$ and $\tilde{\pi}_1$ separately, by reducing to the problem of ``mixture proportion estimation." It further motivates the mutual irreducibility condition on $(P_0,P_1)$ that, together with {\bf (A)}, ensures that $\tilde{\pi}_0, \tilde{\pi}_1$ are identifiable. The decoupling perspective also allows us to address the following question: Given the contaminated distributions $\tilde{P}_1,\tilde{P}_0$, while $(P_0,P_1)$ are unknown, what are the solutions $(\pi_0,\pi_1,P_0,P_1)$ satisfying model \eqref{eqn:contam0}-\eqref{eqn:contam1}? Obviously, $(0,0,\tilde{P}_0,\tilde{P}_1)$ is a trivial solution; we will argue that mutual irreducibility ensures that the solution is unique and non-trivial, and furthermore that the resulting $P_0, P_1$ correspond to maximally denoised versions of $\tilde{P}_1,\tilde{P}_0$. These issues are developed in the next section. \section{Mixture Proportion Estimation and Mutual Irreducibility} \label{sec:mixture} Let $F$, $G$, and $H$ be distributions on $({\mathcal X}, \mathfrak{S})$ such that \begin{equation} F = (1-\nu) G + \nu H, \nonumber \end{equation} where $0 \le \nu \le 1$. Mixture proportion estimation is the following problem: given iid training samples $Z_F^m \in {\mathcal X}^m$ and $Z_H^n \in {\mathcal X}^n$ of sizes $m$ and $n$ from $F$ and $H$ respectively, and no information about $G$, estimate $\nu$. This problem was previously addressed by \cite{blanchard10}, and here we relate the necessary definitions and results from that work. Without additional assumptions, $\nu$ is not an identifiable parameter, as noted by Blanchard et al. In particular, if $F = (1-\nu) G +\nu H$\, holds, then any alternate decomposition of the form $F = (1-\nu+\delta) G' + (\nu-\delta) H $\,, with $G' = (1-\nu+\delta)^{-1}((1-\nu) G + \delta H)$\,, and $\delta \in [0,\nu)$\,, is also valid. Because we have no direct knowledge of $G$\,, we cannot decide which representation is the correct one. Therefore, to make the problem well-defined, we will consider estimation of the largest valid $\nu$. The following definition will be useful. \begin{defn} Let $G$\,, $H$ be probability distributions. We say that $G$ is {\em irreducible} with respect to $H$ if there exists no decomposition of the form $G = \gamma H + (1-\gamma) F' $, where $F'$ is some probability distribution and $0< \gamma \leq 1$\,. We say that $G$ and $H$ are {\em mutually irreducible} if G is irreducible with respect to H and vice versa. \end{defn} The following was established by Blanchard et al. \begin{prop} \label{prop:canondecmp} Let $F$\,, $H$ be probability distributions.
If $F \neq H$, there is a unique $\nu^*\in[0,1)$ and $G$ such that the decomposition $F = (1-\nu^* ) G+ \nu^* H$ holds, and such that $G$ is irreducible with respect to $H$\,. If we additionally define $\nu^*=1$ when $F = H$, then in all cases \begin{equation} \label{eq:nustar} \nu^* = \max\{\alpha \in[0,1]: \exists \, G' \text{ probability distribution: } F = (1-\alpha)G' + \alpha H \}\,. \nonumber \end{equation} \end{prop} By this result, the following is well-defined. \begin{defn} For any two probability distributions $F$, $H$, define $$ \nu^*(F,H) := \max\{\alpha \in[0,1]: \exists \, G' \text{ probability distribution: } F = (1-\alpha)G' + \alpha H \}\,. $$ \end{defn} Clearly, $G$ is irreducible with respect to $H$ if and only if $\nu^*(G, H) = 0$. Additionally, we show in Section \ref{sec:addmpe} that for any two distributions $F$ and $H$, $\nu^*(F,H) = \inf_{A \in \mathfrak{S}} F(A)/H(A)$. Similarly, when $F$ and $H$ have densities $f$ and $h$, $\nu^*(F,H) = \mathop{\mathrm{ess \ inf}}_{x \in \mathop{\mathrm{supp}}(H)} f(x)/h(x)$. These identities make it possible to check irreducibility in different scenarios. For example, $\nu^*(G,H)=0$ whenever the support of $G$ does not contain the support of $H$. Even if the supports are equal, two distributions can be mutually irreducible, as in the case of two Gaussians with distinct means and equal variances. See Section \ref{sec:addmpe} for additional discussion of mutual irreducibility. To consolidate the above notions, we state the following corollary. \begin{cor} \label{cor:irrd} If $F = (1-\gamma) G + \gamma H$, and $G$ is irreducible with respect to $H$, then $\gamma = \nu^*(F,H)$. \end{cor} Blanchard et al. also studied an estimator $\widehat{\nu} = \widehat{\nu}(Z_F^m,Z_H^n)$ of $\nu^*(F,H)$. They show that $\widehat{\nu}$ is strongly universally consistent, i.e., that for any $F$ and $H$, $\widehat{\nu} \to \nu^*(F,H)$ almost surely. The particular form of the estimator is not important here; only its consistency is relevant for our purposes. See Section \ref{sec:addmpe} for some intuition for this estimation problem. Lemma~\ref{le:le1} allows us to estimate $\tilde{\pi}_0$ and $\tilde{\pi}_1$ using $\widehat{\nu}$. Recalling the result of Lemma~\ref{le:le1}, the distributions $\tilde{P}_0$ and $\tilde{P}_1$ can be written \begin{align} \tilde{P}_0 & = (1-\tilde{\pi}_0) P_0 + \tilde{\pi}_0 \tilde{P}_1 \nonumber \\ \tilde{P}_1 & = (1-\tilde{\pi}_1) P_1 + \tilde{\pi}_1 \tilde{P}_0. \nonumber \ \end{align} By Corollary~\ref{cor:irrd}, we can estimate $\tilde{\pi}_0$ and $\tilde{\pi}_1$ provided the following condition holds: \begin{description} \item[{\bf (B)}] $P_0$ is irreducible with respect to $\tilde{P}_1$ and $P_1$ is irreducible with respect to $\tilde{P}_0$. \end{description} To ensure this condition, we now introduce the following identifiability assumption: \begin{description} \item[{\bf (C)}] $P_0$ and $P_1$ are mutually irreducible. \end{description} Note that it follows from assumption {\bf (C)} \, that $P_0 \ne P_1$. We now establish that {\bf (C)}\, and {\bf (B)}\, are essentially equivalent. \begin{lemma} \label{le:le2} $P_0$ is irreducible with respect to $\tilde{P}_1$ if and only if $P_0$ is irreducible with respect to $P_1$ and $\pi_1<1$. The same statement holds when exchanging the roles of the two classes. In particular, under assumption {\bf (A)}, {\bf (C)} \, is equivalent to {\bf (B)} \,. \end{lemma} \begin{proof} This will be a proof by contraposition. 
Assume first that $P_0$ is not irreducible with respect to $\tilde{P}_1$. Then there exists a probability distribution $Q'$ and $0<\gamma\le 1$ such that \begin{eqnarray} & P_0 = \gamma \tilde{P}_1 + (1-\gamma) Q'. \nonumber \end{eqnarray} Now, plugging in \eqref{eqn:contam1} for $\tilde{P}_1$ yields \begin{eqnarray} & P_0 = \gamma ((1-\pi_1)P_1 + \pi_1 P_0)+(1-\gamma)Q'. \nonumber \end{eqnarray} Solving for $P_0$ produces \begin{eqnarray} & P_0 = (1-\beta)Q' + \beta P_1, \nonumber \end{eqnarray} where $\beta = \gamma (\frac{1- \pi_1}{1-\gamma \pi_1})$. If $\pi_1<1$, then $1-\gamma \pi_1 > 0$ and $\gamma(1 - \pi_1) > 0$. Since $0 < \gamma \le 1$, we deduce $0 < \beta \le 1$, so that $P_0$ is not irreducible with respect to $P_1$. Conversely, assume that $P_0$ is not irreducible with respect to $P_1$, i.e., there exists a decomposition $P_0 = \gamma P_1 + (1-\gamma)Q'$ with $\gamma > 0$. Then the decomposition $P_0 = \beta \tilde{P}_1 + (1-\beta)Q'$ holds with $\beta=\frac{\gamma}{\gamma+(1-\pi_1)(1-\gamma)} \in (0,1]$, so that $P_0$ is not irreducible with respect to $\tilde{P}_1$. Finally, in the case $\pi_1=1$, we have $\tilde{P}_1=P_0$, in which case, trivially, $P_0$ is not irreducible with respect to $\tilde{P}_1$ either. \end{proof} To summarize, if {\bf (A)} and {\bf (C)}\, hold, then we can consistently estimate $\tilde{\pi}_0$ and $\tilde{\pi}_1$, and therefore can also consistently estimate $R_0(f)$ and $R_1(f)$ via Eqns. \eqref{eq:noisyz2}-\eqref{eq:noisyo2}. These ideas are developed in the next section. To conclude this section, we present a result that rounds out the discussion of the initial and modified contamination models and mutual irreducibility. In particular, we describe all possible solutions $(\pi_0,\pi_1,P_0,P_1)$ to our model equations~\eqref{eqn:contam0}-\eqref{eqn:contam1} when $\tilde{P}_0,\tilde{P}_1$ are given and arbitrary, and give an equivalent characterization of the unique mutually irreducible solution. It can be seen as an analogue of Proposition \ref{prop:canondecmp} for the label noise contamination models. \begin{thm} \label{thm:cplt} Let $\tilde{P}_1\neq\tilde{P}_0$ be two given distinct probability distributions. Denote by $\Lambda$ the feasible set of quadruples $(\pi_0,\pi_1,P_0,P_1)$ such that {\bf (A)} and equations \eqref{eqn:contam0}-\eqref{eqn:contam1} are satisfied. \begin{enumerate} \item There is a unique quadruple $(\pi_0^*,\pi_1^*,P_0^*,P_1^*) \in \Lambda$ such that {\bf (C)}\, holds. \item Denoting $\tilde{\pi}_0^* := \nu^*(\tilde{P}_0,\tilde{P}_1)<1$ and $\tilde{\pi}_1^* := \nu^*(\tilde{P}_1,\tilde{P}_0)<1$, it holds that \begin{align} \label{eqn:expl} \pi_0^* & = \frac{\tilde{\pi}_0^*(1-\tilde{\pi}_1^*)}{1-\tilde{\pi}_1^*\tilde{\pi}_0^*}, & \pi_1^* = \frac{\tilde{\pi}_1^*(1-\tilde{\pi}_0^*)}{1-\tilde{\pi}_1^*\tilde{\pi}_0^*}\,. \end{align} \item The feasible region $R$ for the proportions $(\pi_0,\pi_1)$ (that is, the projection of $\Lambda$ onto its first two coordinates, which is also one-to-one) is the closed quadrilateral defined by the intersection of the positive quadrant of $\mathbb{R}^2$ with the half-planes given by \begin{align} \label{eqn:feas} \pi_0 + \pi_1 \tilde{\pi}_0^* & \leq \tilde{\pi}_0^*, & \pi_1 + \pi_0 \tilde{\pi}_1^* & \leq \tilde{\pi}_1^*\,. \end{align} \item The mutually irreducible solution $(\pi_0^*,\pi_1^*,P_0^*,P_1^*)$ is also equivalently characterized as: \begin{itemize} \item the unique maximizer of $(\pi_0+\pi_1)$ over $\Lambda$; \item the unique extremal point of $\Lambda$ where both of the constraints in \eqref{eqn:feas} are active; \item the unique maximizer over $\Lambda$ of the total variation distance $\norm{P_0-P_1}_{TV}$. \end{itemize} \end{enumerate} \end{thm} The proof of the theorem relies on the explicit one-to-one correspondence established in Lemmas~\ref{le:le1} and~\ref{le:le1conv} between the solutions of the original decomposition \eqref{eqn:contam0}-\eqref{eqn:contam1} and its decoupled reformulation \eqref{eqn:ssnd0}-\eqref{eqn:ssnd1}. The result of Proposition~\ref{prop:canondecmp} is applied to the decoupled formulation, then pulled back, via the correspondence, into the original representation. The last statement concerning the total variation norm is based on the relation \[ (P_1-P_0) = (1 - \pi_0- \pi_1)^{-1} (\tilde{P}_1 - \tilde{P}_0), \] obtained by subtracting \eqref{eqn:contam0} from \eqref{eqn:contam1}. Therefore, the maximum feasible value of $\norm{P_1 - P_0}_{TV}$ corresponds to the maximum of $(\pi_0+\pi_1)$, i.e., to the unique mutually irreducible solution. The geometrical interpretation of this theorem is visualized in Figure~\ref{fig:geo}. In particular, point 1 of the theorem shows that conditions {\bf (A)} and {\bf (C)}\, do not restrict the class of possible observable contaminated distributions $(\tilde{P}_1,\tilde{P}_0)$; rather, they ensure in all cases the identifiability of the mixture model. Point 4 indicates that the unique solution satisfying the mutual irreducibility condition {\bf (C)}\, can be characterized as maximizing the possible total label noise level $(\pi_0+\pi_1)$, or, equivalently, the total variation separation of the source probabilities $P_0,P_1$. In this sense, the mutually irreducible solution can also be interpreted as {\em maximal label denoising} or {\em maximal source separation} of the observed contaminated distributions. \begin{figure} \begin{center} \scalebox{0.33}{\input{feasible.pdf_t}} \end{center} \caption{\label{fig:geo} Geometry of the feasible region $\Lambda$ for the proportions $(\pi_0,\pi_1)$ solving the contamination model \eqref{eqn:contam0}-\eqref{eqn:contam1}, when the contaminated distributions $(\tilde{P}_0,\tilde{P}_1)$ are observed and the true distributions $(P_0,P_1)$ are unknown. Each feasible $(\pi_0,\pi_1)$ corresponds to a single associated solution $(P_0,P_1)$. The extremal point $(\pi_0^*,\pi_1^*)$ is the unique point corresponding to a mutually irreducible solution $(P_0^*,P_1^*)$. The dashed line indicates the maximal level line $(\pi_0+\pi_1)=c$ intersecting $\Lambda$.} \end{figure} \section{Estimating Type I and Type II Errors} \label{sec:estError} We denote the training data by $Z_0^m = (X_0^1,...,X_0^m) \in {\mathcal X}^m$ and $Z_1^n = (X_1^1,...,X_1^n) \in {\mathcal X}^n$. Given a classifier $f$ and iid samples $Z_0^m$ and $Z_1^n$, we define the following estimates of the contaminated Type I and Type II errors: $$ \widehat{\tilde{R}}_0(f, Z_0^{m}) = \frac{1}{m} \sum_{i=1}^m \mathbf{1}_{\{f(X_0^i) \ne 0\}}, \ \ \ \ \ \widehat{\tilde{R}}_1(f, Z_1^{n}) = \frac{1}{n} \sum_{i=1}^n \mathbf{1}_{\{f(X_1^i) \ne 1\}}.
$$ Following the theory developed in Section \ref{sec:mixture}, define the estimates of $\tilde{\pi}_0$ and $\tilde{\pi}_1$ as \begin{eqnarray*} \widehat{\tilde{\pi}}_0(Z_0^m,Z_1^n) &=& \widehat{\nu}(Z_0^m,Z_1^n),\\ \widehat{\tilde{\pi}}_1(Z_0^m,Z_1^n) &=& \widehat{\nu}(Z_1^n,Z_0^m), \end{eqnarray*} where $\widehat{\nu}$ is the estimator of \citet{blanchard10}. Plugging these estimates into Equations (\ref{eq:noisyz2}) and (\ref{eq:noisyo2}), we define the following estimates for the Type I and Type II errors: \begin{eqnarray} \widehat{R}_0(f, Z_0^m,Z_1^n) &=& 1-\widehat{\tilde{R}}_1(f, Z_1^{n})- \frac{1-\widehat{\tilde{R}}_0(f, Z_0^{m}) - \widehat{\tilde{R}}_1(f, Z_1^{n})}{1-\widehat{\tilde{\pi}}_0(Z_0^m,Z_1^n)} \label{eq:star} \\ \widehat{R}_1(f, Z_0^m,Z_1^n) &=& 1-\widehat{\tilde{R}}_0(f, Z_0^{m})- \frac{1-\widehat{\tilde{R}}_1(f, Z_1^{n}) - \widehat{\tilde{R}}_0(f, Z_0^{m})}{1-\widehat{\tilde{\pi}}_1(Z_0^m,Z_1^n)}. \nonumber \ \end{eqnarray} For brevity, we will sometimes write $\widehat{R}_i(f)$. The following theorem shows that the estimators $\widehat{R}_i(f)$ converge uniformly in probability to $R_i(f)$. \begin{thm} \label{thm:errorest} Let $\{\mathcal{F}_k\}_{k=1}^\infty$ denote a family of sets of classifiers, with $\mathcal{F}_k$ having finite VC-dimension $V_{k}$. Let $k(m,n)$ take values in $\mathbb{N}$ such that \begin{eqnarray} \frac{V_{k(m,n)} \log(\min(m,n))}{\min(m,n)} \to 0 \nonumber \ \end{eqnarray} as $\min(m,n) \to \infty$. If assumptions {\bf (A)} and {\bf (C)} \, hold, then, as $\min(m,n) \to \infty$, \begin{eqnarray} \sup_{f \in \mathcal{F}_{k(m,n)}} |\widehat{R}_i(f, Z_0^m,Z_1^n) - R_i(f)| \to 0 \nonumber \end{eqnarray} in probability for $i=0, 1$. \end{thm} The proof consists of showing that $\widehat{\tilde{R}}_0(f,Z_0^m)$ and $\widehat{\tilde{R}}_1(f,Z_1^n)$ converge uniformly to $\tilde{R}_0(f)$ and $\tilde{R}_1(f)$ (by the VC inequality), that $\widehat{\tilde{\pi}}_i \to \tilde{\pi}_i$ in probability, $i=0,1$ (by the result of Blanchard et al.), together with a continuity argument. In the next section, we use the estimators $\widehat{R}_0$ and $\widehat{R}_1$ to develop a consistent minmax classifier. A similar development should be possible for other criteria that depend on the Type I and II errors. \section{Minmax Consistency} \label{sec:minmax} Define the max error of a classifier $f$ as \begin{eqnarray} R(f) &:=& \max\{R_0(f), R_1(f)\}. \label{eq:maxdef} \ \end{eqnarray} Let ${\mathcal F}$ denote an arbitrary set of classifiers. We define the minmax error over ${\mathcal F}$ as \begin{eqnarray} R(\mathcal{F}) &:=& \inf_{f \in \mathcal{F}} R(f). \nonumber \ \end{eqnarray} Let ${\mathcal F}_0$ denote the set of all classifiers. We will denote the minmax error over ${\mathcal F}_0$ as \begin{eqnarray*} R^* &:=&\inf_{f \in {\mathcal F}_0} R(f) = R({\mathcal F}_0). \nonumber \ \end{eqnarray*} Define the estimates of $R(f)$ and $R({\mathcal F})$ as \begin{eqnarray} \widehat{R}(f) &:=& \max\{\widehat{R}_0(f), \widehat{R}_1(f)\}, \nonumber \\ \widehat{R}({\mathcal F}) &:=& \inf_{f \in {\mathcal F}} \widehat{R}(f). \nonumber \ \end{eqnarray} Now let $\tau_k$ denote a sequence of positive numbers such that $\tau_k \to 0$ as $k \to \infty$. Define $\widehat{f}_k$ to be any classifier \begin{eqnarray} \widehat{f}_k &\in& \{f \in {\mathcal F}_k : \widehat{R}(f) \le \widehat{R}({\mathcal F}_k) + \tau_k\}. \label{eq:def} \ \end{eqnarray} This construction allows us to avoid assuming the existence of an empirical error minimizer.
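To make the overall discrimination rule concrete, here is a minimal end-to-end sketch over a one-dimensional family of threshold classifiers. The Gaussian data, the noise levels, and, in particular, the use of oracle values in place of the mixture proportion estimates $\widehat{\tilde{\pi}}_i$ of \citet{blanchard10} are simplifying assumptions for illustration.

\begin{verbatim}
# End-to-end sketch of the discrimination rule over a family of
# threshold classifiers: estimate contaminated errors empirically,
# denoise via (eq:star), and minimize the estimated max error.
# Gaussian data and, in particular, oracle values in place of the
# estimates hat-tilde-pi are simplifying illustrative assumptions.
import numpy as np
rng = np.random.default_rng(1)

pi0, pi1, m, n = 0.2, 0.3, 50000, 50000
tpi0_hat, tpi1_hat = pi0/(1 - pi1), pi1/(1 - pi0)  # oracle stand-ins
P0 = lambda s: rng.normal(-1, 1, s)
P1 = lambda s: rng.normal(+1, 1, s)
Z0 = np.where(rng.random(m) < pi0, P1(m), P0(m))   # contaminated
Z1 = np.where(rng.random(n) < pi1, P0(n), P1(n))   # samples

best = None
for thr in np.linspace(-3, 3, 301):   # F_k: threshold classifiers
    R0t = (Z0 > thr).mean()           # empirical contaminated errors
    R1t = (Z1 <= thr).mean()
    R0h = 1 - R1t - (1 - R0t - R1t)/(1 - tpi0_hat)   # (eq:star)
    R1h = 1 - R0t - (1 - R1t - R0t)/(1 - tpi1_hat)
    Rh = max(R0h, R1h)
    if best is None or Rh < best[0]:
        best = (Rh, thr)
print("estimated minmax error %.3f at threshold %.2f" % best)
# For these symmetric Gaussians the minmax threshold is near 0.
\end{verbatim}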
Let $\{\mathcal{F}_k\}_{k=1}^\infty$ denote a family of sets of classifiers. The following universal approximation property is known to be satisfied for various families of VC classes, such as histograms, decision trees, neural networks, and polynomial classifiers. \begin{description} \item[(D)] For all distributions $Q$ and measurable functions $\tilde{f}:{\mathcal X} \to \{0,1\}$, \begin{eqnarray} \lim_{k \to \infty} \inf_{f \in {\mathcal F}_k} Q(f(X) \ne \tilde{f}(X)) = 0. \nonumber \end{eqnarray} \end{description} Theorem \ref{thm:errorest} gives us control over the estimation error. Condition {\bf (D)} provides control of the approximation error. \begin{lemma} \label{le:le4} Let $\{\mathcal{F}_k\}_{k=1}^\infty$ denote a sequence of classifier sets. If assumption {\bf (D)} holds, then \begin{eqnarray} \lim_{k \to \infty} \inf_{f \in {\mathcal F}_k} R(f) = R^*. \nonumber \end{eqnarray} \end{lemma} We can now state the consistency result. This result is comparable in form to a classical consistency result in the standard classification setup; see Theorem 18.1 of \cite{devroye96}, where a condition similar to {\bf (D)}, or more precisely to Lemma~\ref{le:le4}, is discussed. \begin{thm} \label{thm:consist} Let $\{\mathcal{F}_k\}_{k=1}^\infty$ denote a family of sets of classifiers, with $\mathcal{F}_k$ having finite VC-dimension $V_{k}$. Let $k(m,n)$ take values in $\mathbb{N}$ such that $k(m,n) \to \infty$ as $\min(m,n) \to \infty$. If \begin{eqnarray} \frac{V_{k(m,n)} \log(\min(m,n))}{\min(m,n)} \to 0 \nonumber \ \end{eqnarray} as $\min(m,n) \to \infty$ and assumptions {\bf (A)}, {\bf (C)}, and {\bf (D)} \, hold, then $R(\widehat{f}_{k(m,n)}) \to R^*$ in probability as $\min(m,n) \to \infty$. \end{thm} If conditions {\bf (A)} or {\bf (C)}\, fail to hold, our discrimination rule is still consistent with respect to the maximally denoised versions of $\tilde{P}_0$ and $\tilde{P}_1$, which always exist and are unique by Theorem \ref{thm:cplt}. In this sense, our analysis is distribution-free and the consistency is universal. The proof of Theorem \ref{thm:consist} proceeds by a decomposition into estimation and approximation errors (denoting $k=k(m,n)$ for brevity), \begin{eqnarray*} R(\widehat{f}_k) - R^* = R(\widehat{f}_k) - R({\mathcal F}_k) + R({\mathcal F}_k) - R^*. \ \end{eqnarray*} The approximation error goes to zero by Lemma \ref{le:le4}. The estimation error is bounded as follows. For the sake of argument, assume $R({\mathcal F}_k)$ is realized by $f_k^* \in {\mathcal F}_k$. Then $$ R(\widehat{f}_k) - R({\mathcal F}_k) = R(\widehat{f}_k) - R(f_k^*) \le \widehat{R}(\widehat{f}_k) - \widehat{R}(f_k^*) + \epsilon \le 2\epsilon, $$ where the first inequality holds for any $\epsilon > 0$, with probability going to one, by Theorem \ref{thm:errorest}. The second inequality holds by definition of $\widehat{f}_k$, for $k$ sufficiently large. See the appendix for details. \section{Additional Perspectives on Mixture Proportion Estimation} \label{sec:addmpe} In this section we provide some simple results that characterize $\nu^*(F,H)$. The proof of the following result is embedded in the proof of Proposition 5 of \citet{blanchard10} (recalled as Proposition \ref{prop:canondecmp} of the current paper), but we reproduce it here for convenience. \begin{lemma} \label{lem:nulrt} For any distributions $F, H$ on a measure space $({\mathcal X}, \mathfrak{S})$, $$ \nu^*(F,H) = \inf_{A \in \mathfrak{S}} \frac{F(A)}{H(A)}.
$$ If $F$ and $H$ are absolutely continuous with respect to Lebesgue measure, with densities $f$ and $h$, then \begin{equation} \label{eqn:nulrt} \nu^*(F,H) = \mathop{\mathrm{ess \ inf}}_{x \in \mathop{\mathrm{supp}}(H)} \frac{f(x)}{h(x)}. \end{equation} \end{lemma} \begin{proof} We will prove the result for continuous distributions; the general case is entirely analogous. Let $$ \gamma^* = \mathop{\mathrm{ess \ inf}}_{x \in \mathop{\mathrm{supp}}(H)} \frac{f(x)}{h(x)}. $$ We need to show (i) $\exists g$ such that $f = (1-\gamma^*)g + \gamma^* h$, and (ii) if $\gamma > \gamma^*$, then no such $g$ exists. To see (i), take $g = (f - \gamma^* h)/(1-\gamma^*)$, which clearly integrates to one, and is nonnegative by definition of $\gamma^*$. To see (ii), suppose that for some $\gamma > \gamma^*$, there exists a probability density $g$ with $f = (1-\gamma)g + \gamma h$. Then for all $x$ such that $h(x) > 0$, $$ \frac{f(x)}{h(x)} = \gamma + (1-\gamma) \frac{g(x)}{h(x)} \ge \gamma > \gamma^*, $$ which contradicts the definition of $\gamma^*$. \end{proof} Lemma \ref{lem:nulrt} makes it easy to check {\bf (C)}\, for various densities. Indeed, two densities are mutually irreducible iff the (essential) infimum and supremum of their ratio are $0$ and $\infty$, respectively. Figure \ref{fig:mutual} shows three examples where ${\mathcal X} = \mathbb{R}$. In the first example, $P_0$ and $P_1$ are such that the support of one is not contained in the support of the other, and therefore {\bf (C)}\, is satisfied. In the second example, $P_0$ and $P_1$ are Gaussian distributions with equal variances and unequal means. By plugging in the formulas for the Gaussian densities, it is easy to verify that {\bf (C)}\, is again satisfied. In the third example, $P_0$ and $P_1$ are again Gaussian densities with unequal means, but this time with unequal variances. In this case, it is again not hard to show that $\nu^*(P_0,P_1) = 0$, but $\nu^*(P_1,P_0) > 0$, where $P_1$ has the larger variance. Thus, {\bf (C)}\, is not satisfied in this case. We do note, however, that $\nu^*(P_1,P_0)$ tends to zero very fast as the means move apart. \begin{figure} \centering \includegraphics[trim=0 175 0 0, clip, width=\textwidth]{mutual.png} \caption{Three one-dimensional examples that illustrate assumption {\bf (C)}. In each example (row), $P_0$ is on the left (solid line) and $P_1$ on the right (dotted line). In the first two examples, {\bf (C)}\, is satisfied, but in the third example it is not. See text for details. \label{fig:mutual}} \end{figure} For the following result, let $F$ and $H$ be two continuous distributions with densities $f$ and $h$. Lemma \ref{lem:nulrt} allows us to characterize $\nu^*(F,H)$ in terms of the ROC of the LRT. \begin{prop} Assume that the ROC of the likelihood ratio tests $x \mapsto \ind{f(x)/h(x) > \gamma}$ is left-differentiable at $(1,1)$. Then $\nu^*(F,H)$ is the slope (left-derivative) of the ROC at $(1,1)$. \end{prop} \begin{proof} The slope of the ROC of an LRT with threshold $\gamma$ is equal to $\gamma$ wherever the slope is well defined \citep{birdsall54,scharf91}. The right end-point of the ROC corresponds to $\gamma^* = \mathop{\mathrm{ess \ inf}}_{x \in \mathop{\mathrm{supp}}(H)} \frac{f(x)}{h(x)}$. That is, for all $\gamma > \gamma^*$, the Type I error of the LRT is strictly less than 1, whereas it equals 1 at $\gamma^*$. \end{proof}
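These examples are easy to reproduce numerically; the following sketch evaluates the density ratio of \eqref{eqn:nulrt} on a finite grid as a crude proxy for the essential infimum (the means and variances are illustrative):

\begin{verbatim}
# Grid evaluation of nu*(F,H) = ess inf f/h (eqn:nulrt) for the
# Gaussian examples above; the grid is a crude proxy for the
# essential infimum, and all means/variances are illustrative.
import numpy as np
from scipy.stats import norm

x = np.linspace(-20, 20, 400001)
nu = lambda F, H: float(np.min(F.pdf(x)/H.pdf(x)))

P0, P1 = norm(-1, 1), norm(1, 1)    # equal variances
print(nu(P0, P1), nu(P1, P0))       # both ~ 0: mutually irreducible
Q0, Q1 = norm(-1, 1), norm(1, 2)    # unequal variances
print(nu(Q0, Q1), nu(Q1, Q0))       # ~ 0 and > 0: (C) fails
\end{verbatim}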
\end{proof} This result provides intuition for the estimator of $\nu^*(F,H)$ studied by \citet{blanchard10}, which can be understood as estimating the slope of the ROC at its right endpoint. See Figure \ref{fig:roc}. This is a more direct method of estimation compared to the ``plug-in'' estimate of $\nu^*(F,H)$ that proceeds by estimating the densities $f$ and $h$, plugging these into the expression in Lemma \ref{lem:nulrt}, and minimizing. \begin{figure} \centering \includegraphics[width=\textwidth]{roc.png} \caption{The receiver operating characteristic of the likelihood ratio test $x \mapsto \ind{f(x)/h(x) > \gamma}$. The curve traces the points $(H(\{x \, | \, f(x)/h(x) > \gamma\}),F(\{x \, | \, f(x)/h(x) > \gamma\}))$ as the threshold $\gamma$ varies. The upper right corner corresponds to $\gamma = \nu^*(F,H)$. The slope of the dashed line, which is tangent to the ROC at the upper right corner, is equal to $\nu^*(F,H)$. \label{fig:roc}} \end{figure} We conclude this section by remarking that $1-\nu^*(F,H)$ is an example of an information divergence, like the Kullback-Leibler divergence. In particular, $1-\nu^*(F,H)$ is always nonnegative, and it equals zero if and only if $F=H$, by Proposition \ref{prop:canondecmp}. Furthermore, Lemma \ref{lem:nulrt} states that this divergence can be expressed in terms of the likelihood ratio, like KL and other information divergences. On the other hand, for other information divergences the likelihood ratio appears in an integral, whereas here we have an infimum. This information divergence has been studied previously for discrete distributions in the analysis of Markov chains \citep{aldous87}, where it is called the ``separation distance.'' In general, $\nu^*(F, H) \ne \nu^*(H, F)$, so this is not actually a metric on distributions. In the next section, we leverage Lemma \ref{lem:nulrt} to connect mutual irreducibility to class probability estimation. \section{Mutual Irreducibility and Class Probability Estimation} \label{sec:cpe} In this section, we relate mutual irreducibility of $P_0$ and $P_1$ to the problem of class probability estimation. We assume that $P_0$ and $P_1$ are continuous distributions with densities $p_0(x)$ and $p_1(x)$. We further assume that the feature vector $X$ and label $Y$ are jointly distributed with joint distribution $Q$, and that $q := Q(Y=1) \in (0,1)$. The posterior probability that $Y=1$ is denoted $$ \eta(x) := Q(Y=1 \, | \, X=x ). $$ The problem of estimating $\eta$ from data is known as class probability estimation \citep{buja05,reid10}. The most well-known approach to class probability estimation is logistic regression, which posits the model $$ \widehat{\eta}(x) = \frac1{1 + \exp\{-(w^T x + b)\}}, $$ where $w$ and $x$ have the same dimension, and $b \in \mathbb{R}$. The parameters $w$ and $b$ are fit to the data by maximum likelihood. More generally, estimates for $\eta$ commonly have the form $$ \widehat{\eta}(x) = \psi^{-1}(h(x)) $$ where $\psi: [0,1] \mapsto \mathbb{R}$ is a {\em link} function, and $h$ is a decision function of some sort. Now define $$ \eta_{\min} := \mathop{\mathrm{ess \ inf}}_{x \in {\mathcal X}} \ \eta(x) \ \ \ \ \ \mbox{ and } \ \ \ \ \ \eta_{\max} := \mathop{\mathrm{ess \ sup}}_{x \in {\mathcal X}} \ \eta(x). $$ The following result connects the posterior class probability to mutual irreducibility.
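Before stating it formally, a small numerical illustration may be helpful. For the unequal-variance Gaussian pair of Figure \ref{fig:mutual}, with $P_0 = N(0,1)$, $P_1 = N(2,4)$ and $q = 1/2$, the essential infima and the extreme posterior probabilities can be approximated on a fine grid (the grid, the interval, and the use of SciPy are incidental choices in this sketch) and checked against the formulas of the proposition below:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

# P0 = N(0,1), P1 = N(2,4) (P1 has the larger variance), q = 1/2
p0, p1 = norm(0, 1).pdf, norm(2, 2).pdf
q = 0.5
x = np.linspace(-30, 30, 400001)  # grid standing in for ess inf / ess sup

eta = q * p1(x) / (q * p1(x) + (1 - q) * p0(x))
nu_01 = np.min(p0(x) / p1(x))     # nu*(P0,P1) ~ 0
nu_10 = np.min(p1(x) / p0(x))     # nu*(P1,P0) = (1/2)exp(-2/3) > 0

print(eta.max(), 1 / (1 + (1 - q) / q * nu_01))      # both ~ 1
print(eta.min(), 1 - 1 / (1 + q / (1 - q) * nu_10))  # both ~ 0.204
\end{verbatim}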
\begin{prop} \label{prop:cpe} With the notation defined above, \begin{equation} \label{eqn:emax} \eta_{\max} = \frac{1}{1+\frac{1-q}{q} \nu^*(P_0,P_1)} \end{equation} and \begin{equation} \label{eqn:emin} \eta_{\min} = 1 - \frac{1}{1+\frac{q}{1-q} \nu^*(P_1,P_0)}. \end{equation} Therefore, $P_0$ and $P_1$ are mutually irreducible if and only if $\eta_{\min} = 0$ and $\eta_{\max} = 1$. \end{prop} \begin{proof} By Bayes' rule, it is true that almost everywhere, \begin{align*} \eta(x) &= \frac{q p_1(x)}{q p_1(x) + (1-q) p_0(x)} \\ &= \frac1{1 + \frac{1-q}{q} \frac{p_0(x)}{p_1(x)}}. \end{align*} Taking the essential supremum and applying Lemma \ref{lem:nulrt} to the essential infimum of $p_0/p_1$ yields Equation \eqref{eqn:emax}. Similarly, we have (almost everywhere) \begin{align*} \eta(x) &= 1 - \frac{(1-q) p_0(x)}{(1-q) p_0(x) + q p_1(x)} \\ &= 1 - \frac1{1 + \frac{q}{1-q} \frac{p_1(x)}{p_0(x)}}. \end{align*} Now \eqref{eqn:emin} follows from Lemma \ref{lem:nulrt} applied to the essential infimum of $p_1/p_0$. The final statement follows from \eqref{eqn:emax} and \eqref{eqn:emin} and the definition of mutual irreducibility. \end{proof} Thus, estimates of $\nu^*(P_0,P_1)$ and $\nu^*(P_1,P_0)$ could be used to inform choices about the design of the link function and the model class of decision functions. Proposition \ref{prop:cpe} also suggests another possible approach to mixture proportion estimation. Suppose $\widehat{\eta}$ is an estimator for $\eta$ that is consistent with respect to the supremum norm, and let $\widehat{q}$ be the empirical estimate of $q$ based on a random sample from $Q$. Inverting Equation \eqref{eqn:emax}, $$ \widehat{\nu}_{0,1} := \left(\frac1{\sup_{x \in {\mathcal X}} \widehat{\eta}(x)}- 1\right) \frac{\widehat{q}}{1-\widehat{q}}, $$ is a consistent estimate of $\nu^*(P_0,P_1)$. Similar remarks apply to $\nu^*(P_1,P_0)$. Although this suggests that class probability estimation solves mixture proportion estimation in the binary classification context, we note that sup-norm consistency will require distributional assumptions, and therefore the distribution-free estimator of Blanchard et al. is a more general solution. \section{Conclusion} \label{sec:conclusion} We have argued that consistent classification with label noise is possible if a majority of the labels are correct on average, and the class-conditional distributions $P_0$ and $P_1$ are mutually irreducible. Under these conditions, we leverage results of \cite{blanchard10} on mixture proportion estimation to design consistent estimators of the false positive and false negative probabilities. These estimators are applied to establish a consistent minmax classifier, and it seems clear that other performance measures could be analyzed similarly. Unlike previous theoretical work on this problem, we allow the supports of $P_0$ and $P_1$ to overlap or even coincide, the noise to be asymmetric, and the performance measure to differ from the probability of error. We also argued that requiring mutual irreducibility can be equivalently seen as aiming at maximal denoising of the contaminated distributions, or maximal separation of the unknown sources $P_0,P_1$ for given contaminated distributions. Thus, our discrimination rule is universally consistent in the sense that its performance tends to the optimal performance corresponding to the maximally denoised $P_1, P_0$, regardless of $\tilde{P}_0, \tilde{P}_1$. \section*{Acknowledgements} C. Scott was supported in part by NSF Grants 0953135, 1047871, and 1217880. G. Blanchard was supported in part by the European Community's 7th Framework Programme under the E.U.
grant agreement 247022 (MASH Project).
\section{Introduction} \label{sec:introduction} Anisotropic engineering materials exhibit directionality in their mechanical characteristics even when subjected to very large strains. Such materials appear in a wide range of applications, in composites and crystals as well as in bio-mechanical systems. Although some advances have been made toward characterizing simple cases of anisotropic material behavior through phenomenological models respecting the applicable mathematical theories, the field is far from complete. In general, most phenomenological studies lack detailed analyses of the mathematical properties of the proposed models. Even in \cite{Scho-Neff:03}, where an analysis of general convexity conditions for transversely isotropic materials is extensively presented, the relationship between the numerous proposed functions and their physical counterparts is still to be fully developed. The material symmetries of an oriented continuum impose definite restrictions on the form of constitutive relations. The procedure used for the construction of constitutive models must from the very beginning ensure that the equations are written in a manner which reflects the material symmetries. Furthermore, the final goal of the procedure is the development of a mathematical framework that satisfies conditions guaranteeing the existence of solutions for models that lack the standard regularity properties assuring existence and uniqueness. Indeed, uniqueness should not be required, because it precludes the description of some important physical effects, for example buckling (in this respect see \cite{Ball:77}). The procedure presented in this work follows the approach laid out in \cite{Scho-Neff:03}. The fundamental aim from a mathematical perspective is to guarantee the existence of solutions. Existence of minimizers of some variational principles in finite elasticity is based on the concept of quasiconvexity (introduced by Morrey in \cite{Morrey:52}), which ensures that the functional to be minimized is weakly lower semi-continuous. Unfortunately, quasiconvexity gives rise to an integral inequality, which is extremely difficult to handle due to its global character. Therefore we turn to the more practical concept of polyconvexity (\cite{Ball:77}), which can be verified locally. The increased complexity of observed mechanical behavior of anisotropic materials requires invariant formulations of anisotropic constitutive laws. The theory of tensor function representations constitutes a rational procedure for consistent mathematical modeling of complex anisotropic material behavior. A particularly strong push in that direction is the work \cite{Weiss:96} by Weiss, who introduced an exponential function in terms of the mixed invariants. As extensively presented in \cite{Scho-Neff:03}, the complex mechanical behavior of elastic materials with an oriented internal structure at large strains can be described with tensor-valued functions of several tensor variables: the deformation gradient and a few additional structural tensors. The strategy is then to construct constitutive models with an invariant form of the strain energy function. The general forms of tensor-valued functions have been derived in the form of representation theorems for tensor functions (\cite{Weyl:46}). The type and minimal number of the scalar variables entering the constitutive equations are also known.
The interested reader should consult \cite{Spencer:71,Boehler:79,Boehler:87,Betten:87,Smith&Rivlin:57,Smith&Rivlin:58} for details. In this paper we are concerned with the problem of determining the general form of scalar-valued polynomial expressions, and for that reason we make use of the concept of an integrity basis. So far in the literature, only the simplest form of material anisotropy, represented by transversely isotropic materials with a single preferred principal direction, has been extensively developed along these lines. In this work we develop a procedure for the construction of polyconvex free energy functions for cubic crystal systems. Cubic crystal systems present three orthogonal principal directions, which considerably complicates the mathematical machinery involved. The main difference from previous works comes from the need to use a fourth order structural tensor to characterize the material symmetry group. The need for the fourth order tensor arises from our desire to use a single structural tensor. We will make use of results obtained by Zheng (\cite{Zheng:93}) on the single structural tensors characterizing the crystal classes. To summarize, this work presents a large deformation mathematical model for anisotropic materials with cubic symmetry. The paper is organized as follows. In Section \ref{sec:cont_mechanics} we present the basic notation and review some kinematic relations at finite strains to be used in the sequel. After that we focus on the presentation of the mathematical framework for hyperelastic materials which guarantees a priori some meaningful physical conditions, in particular material frame indifference and the material symmetry conditions; it will be shown that these two conditions require the introduction of the concept of structural tensors. Section \ref{sec:free_energy} is concerned with the application of the concepts of hyperelasticity and structural tensors to the particular case of a material formed by cubic crystals. After characterizing the material symmetries associated with cubic crystal anisotropy by means of the appropriate structural tensor, we present a procedure to build up free energy functions for cubic crystal materials that fulfill the appropriate mathematical requirements, more specifically the polyconvexity condition. The proposed functions have the invariants of the deformation gradient and the structural tensor as arguments. This approach requires the concept of a polynomial integrity basis, which is also presented. The representation for the stresses and the tangent matrix is given in detail. A model fulfilling all the requirements mentioned above is finally proposed in Section \ref{sec:model}. Two conditions are added to fully determine the problem and relate it to the physical data. These conditions are the stress-free reference configuration and the linearized behavior near the natural state ${\bf {C}}={\bf {1}}$; these are dealt with in Section \ref{sec:conditions}. We consider the behavior of the proposed model in one dimension in the next section, where its physically desirable stress-strain response can be fully appreciated. A short summary of the variational and finite element formulation is given in Section \ref{sec:variational}. Section \ref{sec:numerical} presents numerical results obtained from simulation examples using the proposed model. Finally, in Section \ref{sec:conclusion} the main conclusions of the present work are summarized.
A few appendices have been added at the end to collect some of the derivations. \section{Foundations of continuum mechanics} \label{sec:cont_mechanics} In the following we consider the class of hyperelastic materials, for which we postulate the existence of a free energy function. The resulting constitutive equations must fulfill some requirements that naturally arise from physical considerations of response invariance of the material under arbitrary coordinate system transformations. It will be shown that requiring the constitutive functions of anisotropic solids to satisfy material frame indifference forces these functions to be isotropic tensor functions. The material symmetry condition therefore cannot be satisfied simultaneously with material frame indifference. In order to resolve this issue we will make use of the concept of structural tensors. Structural tensors increase the number of arguments of the energy functions, enabling the model to account separately for the material symmetries. \subsection{Notation and kinematics} In this section we briefly present the notation and main results corresponding to kinematics in the standard theory of continuum mechanics. The theory presented here is based on a material formulation. The motion of a continuum body can be seen as a family of configurations ordered by the time parameter. Thus, for every $t\in[0,T]\subset\Re^{+}$, the mapping $\phi_{t}:B\rightarrow S\subset\Re^{3}$ is a deformation which transforms the reference configuration $B$ into the configuration $S$ at time $t$. Then ${\bf {x}}=\phi_{t}({\bf {X}})=\phi(\mathbf{X},t)$ identifies the position $\mathbf{x}$ of point ${\bf {X}}$ at time $t$. We will follow a Lagrangian description of the motion, which implies that the material coordinates of a point, $\{ X_{A}\}$, are taken as independent variables. This is usually called the \emph{material description} of the motion. The deformation gradient ${\bf {F}}$ is defined as \[ {\bf {F}}\equiv\nabla\phi_{t}({\bf {X}}) \] with the Jacobian $J \equiv \mathrm{det}(\mathbf{F})>0$ required to be positive in order to prevent material interpenetration. Let $\dot{{\bf {F}}}$ denote the material time derivative of the deformation gradient. It is identical to the material velocity gradient, i.e. \[ \dot{{\bf {F}}}=\frac{\partial{\bf {F}}}{\partial t}=DV_{t} . \] The deformation gradient ${\bf {F}}$ can be used to form the right Cauchy-Green tensor, which corresponds to the chosen strain measure, i.e. \begin{equation} {\bf {C}}={\bf {F}^{T}}{\bf {F}}. \label{eq:c-def} \end{equation} In general spaces, all inner products appearing in these derivations should be properly constructed taking into account the corresponding space metric, defined as a symmetric second order covariant tensor and denoted by ${\bf {G}}$ for the reference configuration and by ${\bf {g}}$ for the deformed configuration. In this paper we will restrict ourselves to the case of Euclidean space with Cartesian coordinates, in which the metric tensors become the identity, and therefore they will not explicitly appear in the calculations.
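As a concrete numerical illustration of these kinematic quantities, consider a simple shear (a minimal sketch; the shear magnitude is an arbitrary choice):
\begin{verbatim}
import numpy as np

# Deformation gradient for simple shear, x1 = X1 + gamma*X2 (illustrative)
gamma = 0.3
F = np.array([[1.0, gamma, 0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])

J = np.linalg.det(F)      # Jacobian; J > 0 rules out interpenetration
C = F.T @ F               # right Cauchy-Green tensor, Eq. (c-def)
assert J > 0 and np.allclose(C, C.T)
print(J)                  # J = 1: simple shear preserves volume
\end{verbatim}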
\subsection{Hyperelasticity and invariance conditions} As mentioned in the introduction, we will focus our study on hyperelastic materials. They are a class of elastic materials for which one postulates the existence of a stored free energy function $\psi({\bf {X}},{\bf {F}},\cdot)$. The energy function $\psi$ depends on the point in the reference configuration ${\bf {X}}$, the deformation gradient ${\bf {F}}$, and an additional tensor, which characterizes the material anisotropy. We will restrict ourselves to perfectly elastic materials, which means that the internal dissipation $D_{int}$ is zero for every admissible process (see \cite{MARSDEN:83}). Following a standard argument, the constitutive equations relating the stresses to the energy function are obtained by evaluation of the Clausius-Planck inequality \begin{equation} D_{int}={\bf {P}}:\dot{{\bf {F}}}-\dot{\psi}=({\bf {P}}-\partial_{{\bf {F}}}\psi):\dot{{\bf {F}}}\geq0\Rightarrow{\bf {P}}=\partial_{{\bf {F}}}\psi \label{eq:dissipa} \end{equation} where thermal effects have been neglected and $\mathbf{P}$ is the first Piola-Kirchhoff stress tensor. The principle of material frame indifference requires the invariance of the constitutive equation under rigid body motions superimposed onto the current configuration, i.e., under the mapping ${\bf {x}}\mapsto{\bf {Q}}{\bf {x}}$ the condition $\psi({\bf {F}})=\psi({\bf {Q}}{\bf {F}})$ holds for every ${\bf {Q}}$ in the special orthogonal group $SO(3)$. As shown in \cite{Truesdell:65}, the requirement that the constitutive equations fulfill the principle of material objectivity yields the functional dependence $\psi=\psi(\mathbf{C})=\psi(\mathbf{C}(\mathbf{F}))$, i.e., every dependency on the gradient ${\bf {F}}$ can be properly substituted by a dependency on ${\bf {C}}$. Now, considering the relation between the first and second Piola-Kirchhoff stress tensors, together with the dependency of the energy function on $\mathbf{C}$ and expression (\ref{eq:dissipa}), we can obtain the relation between the stresses measured by the second Piola-Kirchhoff stress tensor and the energy function. Thus, we have \begin{equation} {\bf {S}}={\bf {F}}^{-1}{\bf {P}}={\bf {F}}^{-1}\partial_{{\bf {F}}}\psi={\bf {F}}^{-1} (\partial_{\mathbf{C}} \psi\, \partial_{\mathbf{F}} \mathbf{C}) \label{eq:s-intermedi} \end{equation} and considering the symmetry of ${\bf {C}}$, we deduce that $\partial_{\mathbf{C}} \psi\, \partial_{{\bf {F}}}{\bf {C}}=2{\bf {F}}\, \partial_{\mathbf{C}} \psi$, which carried into the expression for ${\bf {S}}$ above gives \begin{equation} {\bf {S}}=2\partial_{{\bf {C}}}\psi . \label{eq:s-derivpsi} \end{equation} The anisotropy of a material can be characterized by the material symmetry group $G_{M}$, defined with respect to a local reference configuration. The elements of $G_{M}$ are those transformations ${\bf {Q}}$ that give an invariant material response, i.e., superimposed rotations and reflections on the reference configuration which do not influence the behavior of the anisotropic material, thus \begin{equation} \left\{ \begin{array}{l} \psi(\mathbf{Q}^{T}\mathbf{C}\mathbf{Q})=\psi(\mathbf{C})\quad\forall{\bf {Q}}\in G_{M} \\ \mathbf{P}(\mathbf{Q}^{T}\mathbf{C}\mathbf{Q})=\mathbf{P}(\mathbf{C})\,\mathbf{Q}\quad\forall{\bf {Q}}\in G_{M} . \end{array}\right.\label{eq:fi-P-invari} \end{equation} The conditions (\ref{eq:fi-P-invari}) establish that the function $\psi$ and the tensor ${\bf {P}}$ are $G_{M}$-invariant. In general we have $G_{M}\subset SO(3)$, so the material symmetry group corresponds to a subgroup of the whole special orthogonal group $SO(3)$; only for an isotropic material do the two groups coincide.
This last fact gives rise to two conflicting requirements: on the one hand, the functions should be invariant only under transformations belonging to $G_{M}$, reflecting the material anisotropy; on the other hand, their formulation should be transformation independent, so that the representation is coordinate-free, i.e., the ${\bf {Q}}$'s in (\ref{eq:fi-P-invari}) should belong to $SO(3)$. To summarize, the coordinate-free representation requires the use of an isotropic function, but at the same time that leads to a loss of the information concerning the material anisotropy. It has to be emphasized, however, that so far we have been considering only constitutive functions dependent on one argument, the tensor ${\bf {C}}$, and this points to one possible approach to meet both requirements. It will be shown that both requirements can be satisfied simultaneously by extending the tensorial argument list of the energy functions, thus obtaining an isotropic function embodying the anisotropy information. This approach is put into practice by means of \emph{structural tensors}. \subsection{Isotropic tensor functions for anisotropic material response. Structural tensors} As shown in the previous section, the constitutive equations can be deduced from a free energy function, but we faced the problem of characterizing anisotropic materials with a dependency on ${\bf {C}}$ only. We saw that it was not possible to capture both the anisotropy and the invariance under arbitrary spatial rotations and reflections in that manner. The idea behind the structural tensors is to have an isotropic tensor function, i.e., one which is invariant under any rotation in space, but at the same time to retain the symmetry information characterizing the anisotropy of the material. Both conditions of rotation invariance and anisotropy can be properly fulfilled by adding more tensors as additional arguments in the free energy function. The \emph{structural tensors} are useful in obtaining irreducible and coordinate-free representations for anisotropic tensor functions because they characterize the spatial symmetry group. The concept of structural tensors was introduced by Boehler in \cite{Boehler:79}. The characterization of the symmetry group is meant in the following sense: the tensors ${\bf {\xi}},...,{\bf {\zeta}}$ are said to be the structural tensors of the spatial symmetry group $G_M$ if and only if \[ \left. \begin{array}{l} Q_{ij}\ldots Q_{kl}\xi_{j\ldots l}=\xi_{i\ldots k}\\ Q_{ij}\ldots Q_{kl}\zeta_{j\ldots l}=\zeta_{i\ldots k}\end{array}\right\} \Longleftrightarrow {\bf {Q}}\in G_{M}. \] Basically, the effect of the structural tensors can be captured in the difference between the following two statements \begin{itemize} \item The relation $\psi(\mathbf{C},\boldsymbol{\xi})=\psi(\mathbf{Q}^{T}\mathbf{C}\mathbf{Q},\mathbf{Q}\star\boldsymbol{\xi})$ for the free energy function, and consequently $\mathbf{Q}^{T}\mathbf{S}(\mathbf{C},\boldsymbol{\xi})\mathbf{Q}=\mathbf{S}(\mathbf{Q}^{T}\mathbf{C}\mathbf{Q},\mathbf{Q}\star\boldsymbol{\xi})$ for the corresponding stress tensor, holds for all ${\bf {Q}}\in SO(3)$, which means that the function is an \emph{isotropic} scalar-valued tensor function.
\item The relation $\psi(\mathbf{C},\boldsymbol{\xi})=\psi(\mathbf{Q}^{T}\mathbf{C}\mathbf{Q},\boldsymbol{\xi})$ for the free energy function, and correspondingly $\mathbf{Q}^{T}\mathbf{S}(\mathbf{C},\boldsymbol{\xi})\mathbf{Q}=\mathbf{S}(\mathbf{Q}^{T}\mathbf{C}\mathbf{Q},\boldsymbol{\xi})$ for the stress tensor, holds for all ${\bf {Q}}\in G_{M}$, which means that the function is an \emph{anisotropic} scalar-valued tensor function. \end{itemize} On a more intuitive level, the difference between these two statements is as follows: in the first one the deformation and the ``body structure'' are both rotated with $\mathbf{Q}$, while in the second statement only the deformation is rotated, because the structural tensor $\boldsymbol{\xi}$ already has the appropriate symmetry under rotations $\mathbf{Q} \in G_M$. \section{Free energy function for cubic materials} \label{sec:free_energy} As we showed in the previous section, the final aim in the proposal of a model is to construct energy functions invariant under the appropriate symmetry groups reflecting the underlying material anisotropy. A direct way to do that is by means of functions dependent on the invariants of the right Cauchy-Green tensor and structural tensors, which ensures that the functions to be constructed are also invariant under the proper symmetry group, retaining in this manner the material symmetries of the body of interest. In particular we are interested in proposing polynomial type energy functions. In order to keep the complexity to a minimum we will make use of the minimal set of independent invariants of the deformation and structural tensors. This minimal set is called a polynomial basis and further details about it can be found in \cite{Boehler:87}. In the subsections to come we first present the structural tensor for the particular case of a material with cubic symmetry and follow up with a description of the polynomial basis of invariants constructed from this structural tensor and the right Cauchy-Green tensor $\mathbf{C}$. \subsection{Structural tensor for cubic anisotropy} To determine the structural tensors corresponding to crystals with cubic structure we follow Zheng (\cite{Zheng:93}), where the structural tensors for all the different crystal classes are developed. Zheng's paper is based on properties of Kronecker products of orthogonal transformations, which allow the development of a simple method to determine the structural tensors with respect to any given symmetry group. As highlighted in \cite{Zheng:93}, there may exist many possible sets of structural tensors for a given symmetry group, so one goal set by the author has been to find the simplest irreducible representations; in particular, it is shown that each of the anisotropic symmetry groups can be characterized by a single structural tensor, and that is the result we will make use of. The crystal class corresponding to the hexoctahedral cubic system is characterized by the following generators of its finite symmetry group \[ \mathbf{Q}_1(\frac{\pi}{2}),\mathbf{Q}_2(\frac{\pi}{2}),\mathbf{Q}_3(\frac{\pi}{2}), -\mathbf{1} \] where ${\bf {Q}}_{i}(\theta)$ denotes the rotation about the axis ${\bf {e}}_{i}$ of a positively oriented orthonormal triad of vectors, through an angle $\theta$, and ${\bf {1}}$ is the second order identity tensor.
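These four generators suffice to produce the entire 48-element hexoctahedral point group. As a quick computational check (a sketch for illustration only), the following fragment generates the group by closure and verifies that the fourth-order tensor $\sum_i \mathbf{e}_i\otimes\mathbf{e}_i\otimes\mathbf{e}_i\otimes\mathbf{e}_i$, identified as the structural tensor in Eq. (\ref{eq:structural-tensor}) below, is invariant under every element:
\begin{verbatim}
import numpy as np
from itertools import product

def quarter_turn(axis):
    # Rotation by pi/2 about e_axis (right-handed)
    Q = np.zeros((3, 3), dtype=int)
    i, j, k = axis, (axis + 1) % 3, (axis + 2) % 3
    Q[i, i] = 1; Q[j, k] = -1; Q[k, j] = 1
    return Q

gens = [quarter_turn(a) for a in range(3)] + [-np.eye(3, dtype=int)]

group = {tuple(np.eye(3, dtype=int).ravel())}
added = True
while added:                          # closure under multiplication
    added = False
    for g, h in product(list(group), gens):
        q = tuple((np.array(g).reshape(3, 3) @ h).ravel())
        if q not in group:
            group.add(q); added = True
print(len(group))                     # 48

M = np.zeros((3, 3, 3, 3))
for i in range(3):
    M[i, i, i, i] = 1.0               # M = sum_i e_i x e_i x e_i x e_i
for g in group:
    Q = np.array(g, dtype=float).reshape(3, 3)
    assert np.allclose(np.einsum('ia,jb,kc,ld,abcd->ijkl', Q, Q, Q, Q, M), M)
\end{verbatim}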
Considering these group generators and applying properties of Kronecker products, the structural tensor for this particular cubic system is determined to be (\cite{Zheng:93}) \begin{equation} \mathbf{M} = \mathbf{e}^{(4)}_1 + \mathbf{e}^{(4)}_2 + \mathbf{e}^{(4)}_3 \label{eq:structural-tensor} \end{equation} where the notation $\mathbf{e}^{(4)}_i=\mathbf{e}_i \otimes \mathbf{e}_i \otimes \mathbf{e}_i \otimes \mathbf{e}_i$ has been introduced. The novelty of the approach suggested in this work lies in the use of this fourth order structural tensor $\mathbf{M}$ in building up the energy function. All anisotropic structural tensors investigated in the literature so far have been of second order. \subsection{Polynomial Basis} An irreducible polynomial basis consists of a collection of members, none of which can be expressed as a polynomial function of the others, i.e., they are independent scalars, and any other polynomial invariant of the same tensors can be written as a polynomial function of the basis members. Hilbert's theorem guarantees that a finite integrity basis exists for any finite basis of tensors (\cite{Weyl:46}). Taking \cite{Scho-Neff:03} as a reference, we shall present an analogous procedure for the construction of specific constitutive equations based on functions whose arguments are the (joint) invariants of the right Cauchy-Green tensor $\mathbf{C}$ and the structural tensor $\mathbf{M}$. Next we present the integrity basis invariants which will be the arguments of the constitutive functions to be proposed. The integrity basis consists of the traces of products of powers of the argument tensors. They can be divided into two main groups: the \emph{principal invariants}, which involve invariants of the deformation tensor alone or the structural tensor alone, and the so-called \emph{mixed invariants}, which are joint invariants of both tensors. In the following we present separately the different kinds of invariants that can be formed from the right Cauchy-Green tensor $\mathbf{C}$ and the structural tensor $\mathbf{M}$; a numerical evaluation of these quantities is sketched after the list. \begin{itemize} \item Invariants of the right Cauchy-Green tensor alone. The principal invariants of the second order tensor ${\bf {C}}$, denoted by $I_{k}=I_{k}({\bf {C}}),\,\, k=1,2,3$, are defined as the coefficients of the characteristic polynomial $f(\lambda)=\mathrm{det}\left[\lambda{\bf {1}}-{\bf {C}}\right]$ (see Appendix \ref{sec:char_polynomial} for details). The explicit expressions for the principal invariants of the second order tensor $\mathbf{C}$ are \[ \left\{ \begin{array}{l} I_{1}\equiv \mathrm{tr}\left(\mathbf{C}\right)\\ I_{2}\equiv \mathrm{tr}\left(\mathrm{cof}\mathbf{C}\right)\\ I_{3}\equiv \mathrm{det}\left(\mathbf{C}\right)\end{array}\right.\] which can be expressed in terms of the \emph{basic invariants} $J_{i},\,\, i=1,2,3$, defined as the traces of powers of ${\bf {C}}$: \[ \left\{ \begin{array}{l} J_{1}\equiv \mathrm{tr}\left({\bf {C}}\right)={\bf {1}}:{\bf {C}}\\ J_{2}\equiv \mathrm{tr}\left({\bf {C}^{2}}\right)={\bf {1}}:{\bf {C}}^{2}\\ J_{3}\equiv \mathrm{tr}\left({\bf {C}^{3}}\right)=\mathbf{1}:{\bf {C}}^{3}\end{array}\right.\] \item Mixed invariants of the right Cauchy-Green and the structural tensors. In the case of several tensor variables we use the term mixed invariant, even though the term simultaneous invariant can also be found in the literature (\cite{Truesdell:65}). We will follow Betten (\cite{Betten:81}) to determine the scalar invariants of the tensors ${\bf {C}}$ and ${\bf {M}}$.
To construct a set of mixed invariants of the second-order tensor ${\bf {C}}$ and the fourth-order structural tensor ${\bf {M}}$ we consider a theorem presented in \cite{Betten:81} and its generalization to fourth-order tensors using the Cayley-Hamilton theorem, which implies that the powers ${\bf {M}}^{n}$ and higher can be expressed in terms of $\mathbf{1},\mathbf{M},\mathbf{M}^{2},\ldots,\mathbf{M}^{n-1}$, where $n$ represents the vector space dimension of ${\bf {C}}$ (for a symmetric second-order tensor $n=6$). The additional mixed invariants are \begin{equation} \left\{ \begin{array}{l} I_{4}^{k}\equiv{\bf {C}}:{\bf {M}}^{k}:{\bf {C}}\\ I_{5}^{k}\equiv{\bf {C}}:{\bf {M}}^{k}:{\bf {C}}^{2}\\ I_{6}^{k}\equiv{\bf {C}}^{2}:{\bf {M}}^{k}:{\bf {C}}^{2}, \end{array}\right.\label{eq:invar-js}\end{equation} where $k=1,2,3,4,5$. As shown in \cite{Betten:81}, the proof of the theorem relies on the assumption that $\mathbf{M}$ satisfies the symmetry conditions \begin{equation} M_{IJKL}=M_{JIKL}=M_{IJLK}=M_{KLIJ}.\label{eq:M-symmetries} \end{equation} Clearly these conditions are fulfilled by the structural tensor $\mathbf{M} = \mathbf{e}^{(4)}_1+ \mathbf{e}^{(4)}_2 + \mathbf{e}^{(4)}_3$. In addition to the invariants (\ref{eq:invar-js}), \cite{Betten:81} proved that the following expressions are also invariant: \begin{equation} \left\{ \begin{array}{l} \bar{I}^k_M \equiv \mathbf{1}: \mathbf{M}^{k}:\mathbf{C} \\ \bar{\bar{I}}^k_M \equiv \mathbf{1}: \mathbf{M}^k: \mathbf{C}^2 . \end{array}\right. \label{eq:bar-inv} \end{equation} We will skip writing the superscript $k$ in (\ref{eq:invar-js}) and (\ref{eq:bar-inv}) when $k=1$. Additionally we will also need the invariant \begin{displaymath} \hat{I}_M \equiv \mathbf{1}:\mathbf{M}:\mathrm{adj}(\mathbf{C}). \end{displaymath} \item Invariants of the fourth-order structural tensor alone. The only remaining basic invariant of the single tensor ${\bf {M}}$, formed as $\mathrm{tr}({\bf {M}})$, is constant, and therefore it is not useful for the construction of strain energy functions. \end{itemize}
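As announced above, the following sketch evaluates the basis for $k=1$ for an arbitrary sample deformation and checks two elementary consistency relations: the Cayley-Hamilton identity $\mathrm{adj}(\mathbf{C})=\mathbf{C}^{2}-I_{1}\mathbf{C}+I_{2}\mathbf{1}$ and the fact that $\bar{I}_M=I_1$ for this particular $\mathbf{M}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
F = np.eye(3) + 0.2 * rng.random((3, 3))
C = F.T @ F                          # sample right Cauchy-Green tensor

M = np.zeros((3, 3, 3, 3))
for i in range(3):
    M[i, i, i, i] = 1.0              # M = e_1^(4) + e_2^(4) + e_3^(4)

def ddot(A, B):                      # double contraction A : M : B
    return np.einsum('ij,ijkl,kl->', A, M, B)

I1, I3 = np.trace(C), np.linalg.det(C)
adjC = I3 * np.linalg.inv(C)         # adj(C) = det(C) C^{-1}
I2 = np.trace(adjC)
I4, I5, I6 = ddot(C, C), ddot(C, C @ C), ddot(C @ C, C @ C)
Ibar = ddot(np.eye(3), C)            # 1 : M : C

assert np.allclose(adjC, C @ C - I1 * C + I2 * np.eye(3))  # Cayley-Hamilton
assert np.isclose(Ibar, I1)          # holds for this specific M
print(I1, I2, I3, I4, I5, I6)
\end{verbatim}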
\subsection{Polyconvexity condition\label{sub:Polyconvexity-condition}} In this section we briefly describe sufficient (but not necessary) conditions on the free energy function which guarantee the existence of minimizers of some variational principles at finite strains. As already mentioned, polyconvexity is the property of interest to us. Local existence and uniqueness theorems in nonlinear elastostatics and elastodynamics are based on strong ellipticity. The ellipticity condition states that the elastic free energy $\psi({\bf {F}})$ leads to an elliptic system if and only if the Legendre-Hadamard condition \[ \forall{\bf {F}},\forall\xi,\eta\in\Re^{3}:\quad D_{{\bf {F}}}^{2}\psi({\bf {F}})(\xi\otimes\eta,\xi\otimes\eta)\geq0\] holds. The early global existence theory for elastostatics was based on convexity of the free energy function. However, that condition, as shown in \cite{Ball:77}, is unreasonable from a physical point of view. Using the notion of quasiconvexity due to Morrey (\cite{Morrey:52}), Ball (\cite{Ball:77}) proved global existence theorems for elastostatics. In particular, it was proven that quasiconvexity implies the existence of minimizers of some variational principles in finite elasticity. The quasiconvexity condition reads \[ \forall{\bf {F}},\forall\omega\in C_{0}^{\infty}(B)\qquad\psi({\bf {F}})\left|B\right|={\displaystyle \int_{B}\psi({\bf {F}})}dV\leq{\displaystyle \int_{B}\psi({\bf {F}}+\nabla\omega)}dV . \] Unfortunately this integral inequality is complicated to handle. A concept of greater practical importance is that of polyconvexity in the sense of Ball (\cite{Ball:77}). Following Marsden (\cite{MARSDEN:83}), we say that the energy function $\psi$ is polyconvex if and only if there exists a function $\varphi$ with arguments ${\bf {F}}$, $\mathrm{adj}({\bf {F}})=\mathrm{det}(\mathbf{F})\mathbf{F}^{-1}$ and $\mathrm{det}({\bf {F}})$ such that \begin{equation} \psi({\bf {F}})=\varphi({\bf {F}},\mathrm{adj}({\bf {F}}),\mathrm{det}({\bf {F}})) \label{eq:polyconvexity} \end{equation} and $\varphi$ is a convex function. As an illustrative example consider $\psi({\bf {F}})=f(\mathrm{det}{\bf {F}})$ for a convex function $f$. This function is not convex as a function of ${\bf {F}}$ (because the range of definition of ${\bf {F}}$ is not convex); however, it fulfills the polyconvexity condition, since polyconvexity requires taking $\mathrm{det}({\bf {F}})$ as the independent variable and, by hypothesis, the function $f$ is convex in that variable. The polyconvexity condition has an additive nature, i.e., if the functions $\psi_{i},\, i=1,2,3$ are all convex in their respective arguments then the function $\psi({\bf {F}})=\psi_{1}({\bf {F}})+\psi_{2}(\mathrm{adj}({\bf {F}}))+\psi_{3}(\mathrm{det}({\bf {F}}))$ is polyconvex. This property turns out to be very useful when proposing models because it permits the construction of energy functions out of simpler ones. Due to the material frame indifference condition, the dependency on ${\bf {F}}$ of the energy function can be completely replaced by a dependency on ${\bf {C}}$, but the polyconvexity condition does not translate to functions of $\mathbf{C}$ in a simple manner. Finally, we summarize the implication chain relating all the previous concepts \[ \textrm{convexity}\Rightarrow\textrm{polyconvexity}\Rightarrow\textrm{quasiconvexity}\Rightarrow\textrm{ellipticity} \] None of the opposite implications is true, as counter-examples have been found for all of them (\cite{MARSDEN:83}). \subsection{Isotropic free energy terms} For completeness, here we present two statements about the polyconvexity of some simple isotropic functions. The interested reader should consult \cite{Scho-Neff:03} for further details on the polyconvexity of various isotropic functions. \begin{description} \item [Statement.]The polynomial function \begin{equation} {\bf {F}}\mapsto\gamma(\mathrm{tr}(\mathbf{F}^T \mathbf{F}))^{k}=\gamma I_{1}^{k}\,\,\,\,\,\,\,\textrm{with $k\geq 1$ and $\gamma>0$} \label{eq:first-iso} \end{equation} is polyconvex. \item [Proof.] The function $(\mathrm{tr}(\mathbf{F}^T \mathbf{F}))^k=\left \Vert \mathbf{F} \right \Vert ^{2k}$ can be considered to be a function of $\mathbf{F}$ only and therefore, referring to the results in the previous section, it is enough to prove convexity relative to the argument $\mathbf{F}$.
As described in \cite{Scho-Neff:03}, one possible approach to show convexity is to check the positivity of the second G\^ateaux derivative: \[ <D\left(\mathrm{tr}(\mathbf{F}^{T}\mathbf{F})\right)^{k},{\bf {H}}>={\displaystyle \frac{d}{d\epsilon}}\left.\left(\mathrm{tr}\left[\left({\bf {F}}+\epsilon{\bf {H}}\right)^{T}\left({\bf {F}}+\epsilon{\bf {H}}\right)\right]\right)^{k}\right|_{\epsilon=0}=2k\left\Vert {\bf {F}}\right\Vert ^{2k-2}<{\bf {F}},{\bf {H}}>, \] and from here the second derivative yields \[ \begin{array}{l} <D^{2}\left(\mathrm{tr}(\mathbf{F}^{T}\mathbf{F})\right)^{k},\left({\bf {H}},{\bf {H}}\right)>={\displaystyle \frac{d}{d\epsilon}}\left.<D\left(\mathrm{tr}\left[({\bf {F}}+\epsilon{\bf {H}})^{T}({\bf {F}}+\epsilon{\bf {H}})\right]\right)^{k},{\bf {H}}>\right|_{\epsilon=0}\\ \\\qquad=2k\left(\left\Vert {\bf {F}}\right\Vert ^{2k-2}<{\bf {H}},{\bf {H}}>+(2k-2)\left\Vert {\bf {F}}\right\Vert ^{2k-4}<{\bf {F}},{\bf {H}}>^{2}\right)\ge0 .\end{array} \] The desired result is established by noting that the constant $\gamma$ is positive and does not modify the signs of the derivatives.$\Box$ \end{description} \begin{description} \item [Statement.]The functions \[ {\bf {F}}\mapsto\gamma(\mathrm{det}(\mathbf{F}^T \mathbf{F}))^{k}=\gamma I_{3}^{k},\,\,\,\,\,\,\,\textrm{with $k\ge 1$, $\gamma>0$, and}\] \[ \mathbf{F}\mapsto -\beta \log(\det(\mathbf{F}^T \mathbf{F})) = -\beta \log(I_3),\,\,\,\,\,\,\, \textrm{with $\beta>0$,}\] are polyconvex. \item [Proof.] After noting that $I_3=(\det(\mathbf{F}))^2$, it is sufficient to show convexity relative to $\mathrm{det}(\mathbf{F})$. The convexity is established by checking the non-negativity of the second derivatives of $x^{2k}$ and $-2\log(x)$, which is a trivial exercise. $\Box$ \end{description} \subsection{Anisotropic free energy terms} Next we analyze the polyconvexity of some terms dependent on the structural tensor $\mathbf{M}$. \begin{description} \item [Statement.]The polynomial function \[ {\bf {F}}\mapsto\gamma\left(\mathrm{tr}\left(\mathbf{F}^T \mathbf{F} \mathbf{M} \mathbf{F}^T \mathbf{F} \right)\right)^{k}=\gamma I_{4}^{k}\,\,\,\,\,\,\,\textrm{with $k\geq1$ and $\gamma>0$}\] is polyconvex. \item [Proof.] Mimicking the approach used for $I_1^k$, we proceed by showing that $I_4^k$ is a convex function of $\mathbf{F}$.
Given that \[ \left(\mathrm{tr}\left(\mathbf{F}^T \mathbf{F} \mathbf{M} \mathbf{F}^T \mathbf{F} \right)\right)^{k}=\left((\mathbf{F}^T \mathbf{F} ):\mathbf{M}:(\mathbf{F}^T \mathbf{F})\right)^{k}\] we can obtain the first and second G\^ateaux derivatives of $I_4^k$: \begin{displaymath} \begin{array}{l} <D\left((\mathbf{F}^T \mathbf{F}):\mathbf{M}:(\mathbf{F}^T \mathbf{F} )\right)^{k},{\bf {H}}>=\\ ={\displaystyle \frac{d}{d\epsilon}}\left.\left[\left({\bf {F}+{\bf \epsilon{H}}}\right)^{T}\left({\bf {F}+{\bf \epsilon{H}}}\right):{\bf {M}}:\left({\bf {F}+{\bf \epsilon{H}}}\right)^{T}\left({\bf {F}+{\bf \epsilon{H}}}\right)\right]^{k}\right|_{\epsilon=0}= \\ \begin{array}{l} =2k\left((\mathbf{F}^T \mathbf{F} ):\mathbf{M}:(\mathbf{F}^T \mathbf{F} )\right)^{k-1}\left(\mathbf{F}^T \mathbf{H}:{\bf {M}}:\mathbf{F}^{T}\mathbf{F}+ \mathbf{H}^T \mathbf{F} :{\bf {M}}:\mathbf{F}^{T} \mathbf{F}\right) \end{array}\end{array} \end{displaymath} \begin{equation} \begin{array}{l} <D^{2}\left((\mathbf{F}^T \mathbf{F}):\mathbf{M}:(\mathbf{F}^T \mathbf{F} )\right)^{k},\left({H},{H}\right)>={\displaystyle \frac{d}{d\epsilon}}\left.<D\left((\mathbf{F}^T \mathbf{F}):\mathbf{M}:(\mathbf{F}^T \mathbf{F})\right)^{k},{\bf {H}}>\right|_{\epsilon=0}\\ \begin{array}{l} =4k(k-1)\left((\mathbf{F}^T \mathbf{F}):\mathbf{M}:(\mathbf{F}^T \mathbf{F} )\right)^{k-2}\left[\mathbf{F}^T \mathbf{H}:{\bf {M}}:\mathbf{F}^{T}\mathbf{F}+ \mathbf{H}^T \mathbf{F} :{\bf {M}}:\mathbf{F}^{T}\mathbf{F}\right]^2\\ \begin{array}{l} +2k\left((\mathbf{F}^T \mathbf{F} ):\mathbf{M}:(\mathbf{F}^T \mathbf{F})\right)^{k-1}\times [2\mathbf{H}^T\mathbf{H}:\mathbf{M}:\mathbf{F}^T\mathbf{F} +\mathbf{F}^T\mathbf{H}:\mathbf{M}:\mathbf{H}^T\mathbf{F}+\\ + \mathbf{F}^T\mathbf{H}:\mathbf{M}:\mathbf{F}^T\mathbf{H} +\mathbf{H}^T\mathbf{F}:\mathbf{M}:\mathbf{H}^T\mathbf{F}+\mathbf{H}^T\mathbf{F}:\mathbf{M}:\mathbf{F}^T\mathbf{H}]=\\ \end{array}\end{array}\\ \begin{array}{l} \begin{array}{l} \begin{array}{l} =16k(k-1)\left((\mathbf{F}^T \mathbf{F}):\mathbf{M}:(\mathbf{F}^T \mathbf{F} )\right)^{k-2}\left[\mathbf{F}^T \mathbf{H}:{\bf {M}}:\mathbf{F}^{T}\mathbf{F}\right]^{2}+ \end{array}\\ \begin{array}{l} +4k\left((\mathbf{F}^T \mathbf{F}):\mathbf{M}:(\mathbf{F}^T \mathbf{F} )\right)^{k-1}\left[\mathbf{H}^T\mathbf{H}:{\bf {M}}:\mathbf{F}^{T}\mathbf{F}+2 \mathbf{H}^T \mathbf{F} :{\bf {M}}:\mathbf{H}^T \mathbf{F}\right] , \end{array}\end{array} \end{array}\end{array} \label{secondder_i4} \end{equation} where the last equality used the symmetry properties of the structural tensor $\mathbf{M}$, more specifically relations (\ref{eq:A1}), (\ref{A2}) and (\ref{eq:A3}). To complete the proof we separately analyze each term of the second derivative and show its non-negativity. The non-negativity of $I_4$ follows from: \begin{eqnarray*} I_4 & = & (\mathbf{F}^T \mathbf{F}):\mathbf{M}:(\mathbf{F}^T \mathbf{F}) = \sum_{i=1}^3 (\mathbf{F}^T \mathbf{F}):\mathbf{e}_i^2 \otimes \mathbf{e}_i^2 : (\mathbf{F}^T \mathbf{F}) = \\ & = & \sum_{i=1}^3 [\mathbf{F}^T \mathbf{F} : \mathbf{e}_i \otimes \mathbf{e}_i]^2 \ge 0, \end{eqnarray*} where property (\ref{eq:ABBC}) has been used. 
In a similar manner, making use of both (\ref{eq:ABBC}) and (\ref{eq:eeATA}), we show that the other terms participating in the expression for the second derivative are also non-negative: \begin{eqnarray*} \mathbf{H}^T\mathbf{H}:\mathbf{M}:\mathbf{F}^T\mathbf{F} & = & \sum_{i=1}^3 (\mathbf{H}^T\mathbf{H} : \mathbf{e}_i^2)(\mathbf{e}_i^2: \mathbf{F}^T \mathbf{F} ) = \sum_{i=1}^3 (\mathbf{He}_i)^2 (\mathbf{Fe}_i)^2 \ge 0 ,\\ \mathbf{H}^T\mathbf{F}:\mathbf{M}:\mathbf{H}^T\mathbf{F} & = & \sum_{i=1}^3 (\mathbf{H}^T \mathbf{F} : \mathbf{e}_i \otimes \mathbf{e}_i)^2 \ge 0 . \end{eqnarray*} The non-negativity of all terms in the expression for the second derivative, together with the positivity of $\gamma$, implies the desired result. $\Box$ \end{description} Next, analogously to \cite{Scho-Neff:03}, we prove the polyconvexity of $\frac{I_4}{I_3^{2/3}}$, which is the isochoric term corresponding to $I_4$. \begin{description} \item [Statement.]The function \[ \mathbf{F} \mapsto \gamma \frac{\mathrm{tr}\left(\mathbf{F}^T \mathbf{FMF}^T\mathbf{F} \right)}{\mathrm{det}(\mathbf{F})^{4/3}} = \gamma \frac{I_{4}}{I_{3}^{2/3}}\;\;\;\mathrm{with}\;\; \gamma > 0 \] is polyconvex. \item [Proof.] Let $\phi(x,y)=\frac{x^4}{y^{4/3}}$ and \begin{displaymath} \psi_{\eta}({\mathbf F},\zeta)=\phi(\left\Vert{\mathbf F \eta}\right\Vert,\zeta)=\frac{\left\Vert{\mathbf F \eta}\right\Vert^4}{\zeta^{4/3}}, \end{displaymath} where ${\mathbf{\eta}}$ is an arbitrary vector. We will establish that $\psi_\eta$ is a convex function when considered as a function of both arguments simultaneously. Condition (\ref{eq:neff-lemma1}) is satisfied for $p=4$ and $\alpha=4/3$, so the function $\phi(x,y)$ is convex. The convexity of $\psi_{\eta}({\mathbf F},\zeta)$ follows from the following sequence of inequalities: \begin{displaymath} \psi_{\eta}(\lambda{\mathbf F_1}+(1-\lambda){\mathbf F_2},\lambda \zeta_1+(1-\lambda)\zeta_2)=\frac{\left\Vert\lambda{\mathbf F_1\eta}+(1-\lambda){\mathbf F_2\eta}\right\Vert^4}{(\lambda \zeta_1+(1-\lambda)\zeta_2)^{4/3}}\le \end{displaymath} \begin{displaymath} \frac{(\lambda\left\Vert{\mathbf F_1\eta}\right\Vert+(1-\lambda)\left\Vert{\mathbf F_2\eta}\right\Vert)^4}{(\lambda \zeta_1+(1-\lambda)\zeta_2)^{4/3}}=\phi(\lambda\left\Vert{\mathbf F_1\eta}\right\Vert+(1-\lambda)\left\Vert{\mathbf F_2\eta}\right\Vert,\lambda \zeta_1+(1-\lambda)\zeta_2)\le \end{displaymath} \begin{displaymath} \lambda\phi(\left\Vert{\mathbf F_1 \eta}\right\Vert,\zeta_1)+(1-\lambda)\phi(\left\Vert{\mathbf F_2 \eta}\right\Vert,\zeta_2)=\lambda\psi_{\eta}({\mathbf F_1},\zeta_1)+(1-\lambda)\psi_{\eta}({\mathbf F_2},\zeta_2), \end{displaymath} where the triangle inequality, the monotonicity of $\phi$ in its first argument, and the convexity of $\phi(x,y)$ have been used. The required result can be obtained directly from: \begin{displaymath} \frac{I_4}{I_3^{2/3}}=\frac{{\mathbf C}:({\mathbf e_1}^4+{\mathbf e_2}^4+{\mathbf e_3}^4):{\mathbf C}}{({\mathrm{ det}}({\mathbf F}))^{4/3}}=\frac{\left\Vert{\mathbf{F e_1}}\right\Vert^4+\left\Vert{ \mathbf{F e_2}}\right\Vert^4+ \left\Vert{\mathbf{F e_3}}\right\Vert^4}{({\mathrm{det}}({\mathbf F}))^{4/3}} \end{displaymath} \begin{displaymath} = \psi_{\mathbf{e}_1}({\mathbf F},{\mathrm{det}}({\mathbf F}))+ \psi_{\mathbf{e}_2}({\mathbf F},{\mathrm{det}}({\mathbf F}))+ \psi_{\mathbf{e}_3}({\mathbf F},{\mathrm{det}}({\mathbf F})). \end{displaymath} The positive coefficient $\gamma$ does not influence the conclusion. The same proof can be applied to $\gamma\frac{I_4}{I_3^{\alpha/2}}$ as long as $0\le \alpha \le 3$.
$\Box$ \end{description} As pointed out in \cite{Scho-Neff:03}, the apparent symmetry between $\mathbf{F}$ and $\mathrm{adj}(\mathbf{F})$ in the definition (\ref{eq:polyconvexity}) suggests that new polyconvex functions can be obtained by switching the deformation gradient tensor $\mathbf{F}$ with its adjoint tensor $\mathrm{adj}(\mathbf{F})$. The reader must be warned that replacing $\mathbf{C}$ with $\mathrm{adj}(\mathbf{C})$ is not equivalent to replacing $\mathbf{F}$ with $\mathrm{adj}(\mathbf{F})$, because \[ \mathrm{adj}(\mathbf{C})=\mathrm{det}(\mathbf{C})\mathbf{C}^{-1}= \mathrm{det}(\mathbf{F})^2 (\mathbf{F}^T\mathbf{F})^{-1} = \mathrm{det}(\mathbf{F})^2 \mathbf{F}^{-1}\mathbf{F}^{-T} = \mathrm{adj}(\mathbf{F})\, \mathrm{adj}(\mathbf{F})^T. \] The difference comes from the position of the transpose symbol: in $\mathbf{C}=\mathbf{F}^T \mathbf{F}$ the first factor is transposed, while in $\mathrm{adj} (\mathbf{C}) = \mathrm{adj}(\mathbf{F}) \mathrm{adj} (\mathbf{F})^T$ it is the second one; but the proofs already presented in this section are independent of the position of the transpose symbol, and we will exploit this to verify the following. \begin{description} \item[Statement.] Let $K_3=\mathrm{adj}(\mathbf{C}) : \mathbf{M} : \mathrm{adj}(\mathbf{C})$. Then the functions \begin{eqnarray*} \mathbf{F} & \mapsto & \gamma \left( \mathrm{adj}(\mathbf{F}^T \mathbf{F}) :\mathbf{M} : \mathrm{adj}(\mathbf{F}^T \mathbf{F}) \right)^k = \gamma K_3^k\;\;\; \mathrm{with}\;\; k\ge 1 \;\; \mathrm{and} \;\; \gamma > 0,\\ \mathbf{F} & \mapsto & \gamma \frac{ \mathrm{adj}(\mathbf{F}^T \mathbf{F}) :\mathbf{M} : \mathrm{adj}(\mathbf{F}^T \mathbf{F}) }{I_3^{4/3}} = \gamma \frac{K_3}{I_3^{4/3}} \;\;\; \mathrm{with} \;\; \gamma > 0 \end{eqnarray*} are polyconvex. \item[Proof.] If $K_3$ is a scalar invariant of $\mathbf{C}$ and $\mathbf{M}$, then the two functions are polyconvex because the proofs of the previous two statements are independent of the position of the transpose symbol; they will not be repeated here (for the second function, $\alpha=8/3$ lies in the range of values which preserve the validity of the previous statement). The function $\frac{K_3}{I_3^{4/3}}$ is the proper isochoric variant of $K_3$. We only need to prove that $K_3$ is a scalar invariant of $\mathbf{C}$ and $\mathbf{M}$. Along the way we will prove that $K_1=\mathrm{adj}( \mathbf{C}) :\mathbf{M} : \mathbf{C}$, $K_2=\mathrm{adj}( \mathbf{C}) :\mathbf{M} : \mathbf{C}^2$ and $\hat{I}_M=\mathbf{1} : \mathbf{M} : \mathrm{adj} (\mathbf{C} )$ are also scalar invariants. If the characteristic polynomial (\ref{eq:cayley-hamilton}) is multiplied, in the sense of the double scalar product, by $\mathbf{C}^{-1}\mathbf{M} : \mathbf{C}$, the following expression for $K_1$ is obtained: \begin{displaymath} K_1= I_5-I_1 I_4 +I_2 \bar{I}_M. \end{displaymath} Since $K_1$ can be expressed as a combination of scalar invariants of $\mathbf{M}$ and $\mathbf{C}$, it is a scalar invariant itself. Proceeding in the same manner for $K_2$, one obtains after multiplication by $\mathbf{C}^{-1}\mathbf{M} : \mathbf{C}^2$ \begin{displaymath} K_2= I_6-I_1 I_5 +I_2 \bar{\bar{I}}_M. \end{displaymath} For $K_3$ the multiplication is by $\mathbf{C}^{-1}\mathbf{M} : \mathrm{adj} (\mathbf{C})$ and the result is \begin{displaymath} K_3= K_2-I_1 K_1 +I_2 \hat{I}_M. \end{displaymath} Given that $K_1$ and $K_2$ are scalar invariants, for $K_3$ to be a scalar invariant one needs to show that $\hat{I}_M$ is a scalar invariant.
This can be accomplished by multiplying by $\mathbf{C}^{-1}\mathbf{M} : \mathbf{1}$, with the result \begin{displaymath} \hat{I}_M= \bar{\bar{I}}_M-I_1 \bar{I}_M +I_2\,(\mathbf{1}:\mathbf{M}:\mathbf{1}). \end{displaymath} In fact, for our specific tensor $\mathbf{M} =\mathbf{e}_1^4 +\mathbf{e}_2^4 +\mathbf{e}_3^4$ one has $\mathbf{1}:\mathbf{M}:\mathbf{1}=3$, $\bar{I}_M=I_1$ and $\bar{\bar{I}}_M=I_1^2-2I_2$, so the invariant $\hat{I}_M$ is equal to the invariant $I_2$. $\Box$ \end{description} \section{Model for polyconvex free energy function with cubic anisotropy} \label{sec:model} Having presented proofs of polyconvexity for some strain energy functions, we proceed to propose a global model based on these functions for the case of materials with cubic anisotropy. The mathematical soundness of the model is guaranteed in advance by means of the polyconvexity of the proposed strain energy function. Consequently, this allows the application of the theorems concerning the existence of minimizing sequences discussed in Section \ref{sub:Polyconvexity-condition}. The proposed model is a linear combination of the energy functions proved above to be polyconvex. The general form of the free energy function reads \begin{equation} \psi=\underbrace{\alpha (-\log(I_3))}_{\psi_1}+\underbrace{\beta I_{3}}_{\psi_2}+\underbrace{\gamma I_{4}}_{\psi_3} +\underbrace{\delta I_1}_{\psi_4} \label{Model} \end{equation} where the last three terms have been selected to be as simple as possible, even though polyconvexity was shown for more general cases with exponents larger than 1. For completeness we briefly present the relationship between the stresses and the free energy function, based on expression (\ref{eq:s-derivpsi}). Making use of the additively decoupled form of the polyconvex function (\ref{Model}), we expand the stress tensor in the following general manner: \begin{equation} {\bf {S}}=2\frac{\partial\psi}{\partial{\bf {C}}}=2{\displaystyle \sum_{i=1}^{4}\sum_{j=1}^6}\frac{\partial\psi_{i}}{\partial I_{j}}\frac{\partial I_{j}}{\partial{\bf {C}}}\label{eq:stress-divi}\end{equation} From (\ref{eq:stress-divi}), following a procedure analogous to (\ref{eq:s-intermedi}), we can deduce the formal expression for the tangent matrix: \begin{displaymath} {\bf {\mathbb{C}}} = 2\frac{\partial{\bf {S}}}{\partial{\bf {C}}}=4\frac{\partial^{2}\psi}{\partial{\bf {C}}^{2}}= \end{displaymath} \begin{equation} = 4{\displaystyle \sum_{i=1}^{4}\sum_{j=1}^6}\left[\frac{\partial \psi_{i}}{\partial I_{j}}\frac{\partial^2 I_{j}}{\partial{\bf {C}}^2}+\sum_{k=1}^6 \frac{\partial^2\psi_{i}}{\partial I_{j} \partial I_k}\frac{\partial I_{j}}{\partial{\bf {C}}}\frac{\partial I_{k}}{\partial{\bf {C}}}\right]\label{eq:tangent-divi}\end{equation} Expressions for each individual term corresponding to the first and second derivatives of the invariants can be found in Appendix \ref{sub:Invariants-derivatives}.
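As a minimal computational sketch of Eq. (\ref{eq:stress-divi}) (the coefficient values below are placeholders, not material data), the stress can be assembled from the derivatives $\partial I_1/\partial\mathbf{C}=\mathbf{1}$, $\partial(-\log I_3)/\partial\mathbf{C}=-\mathbf{C}^{-1}$, $\partial I_3/\partial\mathbf{C}=I_3\,\mathbf{C}^{-1}$ and $\partial I_4/\partial\mathbf{C}=2\,\mathbf{M}:\mathbf{C}$:
\begin{verbatim}
import numpy as np

# S = 2 d(psi)/dC for psi = alpha*(-log I3) + beta*I3 + gamma*I4 + delta*I1
def second_pk(C, M, alpha, beta, gamma, delta):
    I3 = np.linalg.det(C)
    Cinv = np.linalg.inv(C)
    MC = np.einsum('ijkl,kl->ij', M, C)   # (M:C)_ij, so dI4/dC = 2 M:C
    return 2.0 * (-alpha * Cinv + beta * I3 * Cinv
                  + 2.0 * gamma * MC + delta * np.eye(3))

M = np.zeros((3, 3, 3, 3))
for i in range(3):
    M[i, i, i, i] = 1.0

# With -alpha + beta + 2*gamma + delta = 0 the reference state C = 1 is
# stress free, anticipating the condition derived in the next section
a, b, g = 1.0, 0.4, 0.1
d = a - b - 2.0 * g
assert np.allclose(second_pk(np.eye(3), M, a, b, g, d), 0.0)
\end{verbatim}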
\section{Additional Conditions} \label{sec:conditions} In order to make a connection between the model and the physical data, some conditions will be imposed on the model. These conditions will help determine uniquely the values of the arbitrary constants accompanying the individual functions appearing in the proposed model. The conditions discussed here refer to the comparison between the model response and the values of the parameters characterizing the physical behavior of a material in a natural state. The natural state chosen in this model corresponds to the unstressed, undeformed configuration, i.e. ${\bf {C}}={\bf {1}}$. With the help of (\ref{eq:stress-divi}) and (\ref{eq:tangent-divi}) the following two conditions can be formulated: \begin{itemize} \item \textbf{Stress free reference configuration.} This condition states that the stresses must be zero when the deformation gradient becomes the identity. Physically, this is equivalent to having no residual stresses when the material is totally unloaded. Mathematically, the stress free reference configuration means ${\bf {S}}({\bf {1}})={\bf {0}}$, or upon substitution of the numerical values into (\ref{eq:stress-divi}), \begin{equation} -\alpha+\beta+2\gamma+\delta=0 \label{eq:zerostress} . \end{equation} \item \textbf{Tangent matrix at the reference configuration.} The operation to be performed is the identification of the tangent matrix (\ref{eq:tangent-divi}), particularized at the origin ${\bf {C}}={\bf {1}}$, with the physical values corresponding to the classical elastic moduli matrix of a cubic material. There is an implicit assumption that the values of the elastic constants remain fixed for different values of the tensor $\mathbf{C}$; while not completely realistic, this assumption can be considered a well-posed first approximation in the development of the model. Thus, we have the identification \[ {\mathbb{C}}_{0}\equiv\left.2\partial_{{\bf {C}}}{\bf {S}}\right|_{{\bf {C}}={\bf {1}}}=\left.4\partial_{{\bf {C}}}^{2}\psi\right|_{{\bf {C}}={\bf {1}}}\] where ${\mathbb{C}}_{0}$ represents the tangent matrix at the origin. Substitution of the numerical values for the derivatives gives the following three equations: \begin{eqnarray} \alpha + 2\gamma & = & \mathcal{C}_{11}\\ \beta & = & \mathcal{C}_{12}\\ \alpha-\beta & = & 2\mathcal{C}_{44}, \label{eq:c44} \end{eqnarray} where $\mathcal{C}_{11}$, $\mathcal{C}_{12}$ and $\mathcal{C}_{44}$ are the standard elasticity constants in Voigt notation. \end{itemize} The solution of the system of equations (\ref{eq:zerostress}-\ref{eq:c44}) is: \begin{eqnarray*} \alpha & = & \mathcal{C}_{12}+2\mathcal{C}_{44}\\ \beta & = & \mathcal{C}_{12} \\ \gamma & = & \frac{1}{2}(\mathcal{C}_{11}-\mathcal{C}_{12}-2\mathcal{C}_{44})\\ \delta & = & -\mathcal{C}_{11}+\mathcal{C}_{12} + 4\mathcal{C}_{44}. \end{eqnarray*} The nonnegativity of the elastic constants implies the nonnegativity of $\alpha$ and $\beta$. For $\gamma$ and $\delta$ to be nonnegative the following condition must be satisfied: \begin{equation} \frac{1}{2}\le \frac{2 \mathcal{C}_{44}}{\mathcal{C}_{11}-\mathcal{C}_{12}}\le 1. \label{eq:condition} \end{equation} Conversely, if the inequalities (\ref{eq:condition}) are satisfied, then $\alpha$, $\beta$, $\gamma$ and $\delta$ are all non-negative. The ratio $A=\frac{2 \mathcal{C}_{44}} {\mathcal{C}_{11}-\mathcal{C}_{12}}$ is called the anisotropy ratio (see \cite{Hirth:82}), and its emergence in condition (\ref{eq:condition}) suggests that the relevant measures of anisotropy are an integral part of the model itself. The data available in Hirth (\cite{Hirth:82}) show that only five transition metals, adjacent to each other in the periodic table, satisfy both inequalities: Cr, Mo, W, V, Nb. All five have body-centered cubic crystal structure. Some compound solids with cubic structure, such as AgBr and NaCl, also satisfy the inequalities (\ref{eq:condition}).
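A short numerical sketch of this fitting procedure follows; the elastic constants used are round illustrative numbers of the order reported for molybdenum, not authoritative data:
\begin{verbatim}
# Fit the model coefficients to cubic elastic constants (GPa, illustrative)
C11, C12, C44 = 450.0, 173.0, 125.0

A = 2.0 * C44 / (C11 - C12)      # anisotropy ratio; must lie in [1/2, 1]
assert 0.5 <= A <= 1.0

alpha = C12 + 2.0 * C44
beta  = C12
gamma = 0.5 * (C11 - C12 - 2.0 * C44)
delta = -C11 + C12 + 4.0 * C44
assert abs(-alpha + beta + 2.0 * gamma + delta) < 1e-9  # stress-free reference
print(alpha, beta, gamma, delta)  # 423.0 173.0 13.5 223.0, all nonnegative
\end{verbatim}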
The polyconvexity properties of $I_5$ and $I_6$ are currently unknown to the authors, but the addition of terms proportional to $I_5$ or $I_6$ in the strain energy function will not modify the condition (\ref{eq:condition}) and consequently will not enlarge the set of materials to which the model can be applied. \section{Study in 1D} \label{sec:1dstudy} As an initial test for the model, two simple deformation gradient tensors have been applied as inputs: \begin{displaymath} \mathbf{F}=\left[\begin{array}{ccc} \lambda & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right]\;\;\;\mathrm{and}\;\;\; \mathbf{F}=\left[\begin{array}{ccc} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right] . \end{displaymath} Figure \ref{fig:modelcmp} displays the Cauchy stresses $\sigma$ as functions of the stretch $\lambda$. Similarly to linear elasticity, the model predicts that the stresses $\sigma_{11}$ and $\sigma_{22}=\sigma_{33}$ are increasing functions of $\lambda$. The apparent agreement between the proposed model and linear elasticity in the small strain regime is not surprising, given that the model parameters are fitted to the linear elasticity coefficients at zero strain. The physically desirable behavior of the stresses going to $+\infty$ ($-\infty$) as the stretch $\lambda$ goes to $+\infty$ ($0$) is also present. The behavior of the model in simple shear is shown in Figure \ref{fig:modelcmp2}. As in the previous figure, the stress curves predicted by the model are tangent to the stress curves of linear elasticity. The most notable difference in this case is that the model predicts a nonzero stress $\sigma_{11}$, while according to linear elasticity $\sigma_{11}=0$. \begin{figure} \includegraphics[width=2.75in]{./figures/sig11} \includegraphics[width=2.75in]{./figures/sig22_33} \caption{Model response to simple stretch} \label{fig:modelcmp} \end{figure} \begin{figure} \includegraphics[width=2.75in]{./figures/she11} \includegraphics[width=2.75in]{./figures/she12} \caption{Model response to simple shear} \label{fig:modelcmp2} \end{figure} \section{Variational formulation and finite element discretization} \label{sec:variational} Consider a body $\mathcal{B}$ with boundary $\partial \mathcal{B}=\mathcal{A}_1 \cup \mathcal{A}_2$. The boundary $\mathcal{A}_1$ consists of all surface points where displacements are applied and the boundary $\mathcal{A}_2$ of all surface points where tractions are applied ($\mathcal{A}_1 \cap \mathcal{A}_2 = \emptyset$). The boundary value problem can be formulated as (following \cite{Ball:77a}): \begin{eqnarray} \mathrm{Div}(\mathbf{P}) & = & 0\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mathrm{in}\; \mathcal{B}, \label{eq:varequilibrium}\\ \mathbf{P} & = & \mathbf{P}(\nabla \mathbf{u}, \mathbf{x})\;\;\;\mathrm{in}\; \mathcal{B}, \label{eq:varconstitutive}\\ \mathbf{u} & = & \mathbf{\bar{u}}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mathrm{on}\; \mathcal{A}_1, \label{eq:varimposeddisp}\\ \mathbf{P} \mathbf{n} & = & \bar{\mathbf{t}}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mathrm{on}\; \mathcal{A}_2,\label{eq:varimposedtrac} \end{eqnarray} where (\ref{eq:varequilibrium}) expresses the equilibrium condition of the solid in the absence of body forces, (\ref{eq:varconstitutive}) is the constitutive law for the solid, (\ref{eq:varimposeddisp}) expresses the boundary condition on the section of the boundary $\mathcal{A}_1$ on which displacements are imposed, and (\ref{eq:varimposedtrac}) refers to the section of the boundary $\mathcal{A}_2$ on which the tractions are imposed.
For hyperelastic materials possessing a strain energy function $\psi$, the first Piola-Kirchhoff stress tensor is given by $\mathbf{P}=\frac{\partial \psi}{\partial \mathbf{F}}$ where $\mathbf{F}=\nabla \mathbf{u}$. It can be shown (\cite{Ball:77a}) that the solution $\mathbf{u}$ of the problem posed by (\ref{eq:varequilibrium}-\ref{eq:varimposedtrac}) in the case of a polyconvex strain energy function $\psi$ is the minimizer of the functional \begin{equation} J(\mathbf{u})=\int_{\mathcal{B}} \psi(\mathbf{x}, \mathbf{u}, \nabla \mathbf{u}) dV - \int_{\mathcal{A}_2} \mathbf{u} \cdot \bar{\mathbf{t}} dS, \label{eq:functional} \end{equation} which can be used to formulate the finite element discretization. The spatial discretization is accomplished by representing the body $\mathcal{B}$ as a union of disjoint elements, $\mathcal{B}= \bigcup_{e=1}^{N_{elem}} \Omega_e$. Even though many different elements can be used for the discretization, for simplicity we will assume that the discretization has been achieved by the use of second-order tetrahedral elements. The unknowns to be solved for are the nodal displacements $\mathbf{u}_n$. For each element, the displacement is expressed as a sum of the nodal shape functions, $\mathbf{u}^{(e)}=\sum_{a=1}^{10} N_a^{(e)} \mathbf{u}_a^{(e)}$. The global displacement approximation becomes $\mathbf{u}_h=\sum_{e=1}^{N_{elem}} \sum_{a=1}^{10} N_a^{(e)} \mathbf{u}_a^{(e)}$. Substitution of $\mathbf{u}_h$ into (\ref{eq:functional}) leads to a discrete functional $J_h(\mathbf{u}_h)$. The minimization procedure for the discrete functional gives rise to a system of equations which can be solved for the nodal displacements. Further details about the finite element procedure can be found in \cite{Radovitzky:99}. \section{Numerical examples} \label{sec:numerical} In this section we consider two basic examples which illustrate the agreement of the model with linear theory at small strains and the departure from it at large deformations. \subsection{2D Example -- Plate with Hole} To verify the consistency of our model with linear theory and to check its convergence, we considered the problem of uniaxial stress applied to a plate with an initially circular hole. In addition to the well-known analytical solution for an isotropic linear elastic material, this problem has an analytic solution for orthotropic linear elastic materials \cite{Green:68}. The stress concentration factor, calculated from this solution specialized to cubic anisotropy with one symmetry axis perpendicular to the plane of the plate and another symmetry axis aligned with the loading direction, is shown on the left side of Figure \ref{fig:plate-hole}. The numerical solution obtained from our model is in good agreement with this analytical solution when the applied stress is small compared to the elastic constants of the material. As the stress is increased to become comparable to the elastic constants, the nonlinearities become important and approximately the same reduction in the minimum and the maximum stress concentration is observed. The plot on the right of Figure \ref{fig:plate-hole} shows the dependence of the error on the mesh size. Within the linear regime the convergence is quadratic, as expected for Newton-Raphson solvers. The convergence is somewhat reduced in the nonlinear regime, but it is still within acceptable levels.
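The structure of this displacement-based minimization can be illustrated in a one-dimensional analogue, stripped of all geometric bookkeeping. The sketch below (Python/SciPy) minimizes a discrete functional $J_h$ for a clamped bar with an end traction; the scalar energy \texttt{psi} is a placeholder, chosen only so that the stress blows up as the stretch goes to $+\infty$ or $0$:

\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

n_el, t_bar = 20, 0.1                     # elements, end traction
x = np.linspace(0.0, 1.0, n_el + 1)       # nodes of the bar [0, 1]

def psi(stretch):                         # placeholder 1D strain energy
    return 0.5 * (stretch - 1.0) ** 2 + 0.5 * (1.0 / stretch - 1.0) ** 2

def J(u_free):
    u = np.concatenate(([0.0], u_free))   # clamp u(0) = 0
    h = np.diff(x)
    stretch = 1.0 + np.diff(u) / h        # per-element stretch
    return np.sum(psi(stretch) * h) - t_bar * u[-1]

res = minimize(J, np.zeros(n_el), method="BFGS")
u = np.concatenate(([0.0], res.x))
print("tip displacement:", u[-1])
\end{verbatim}

The three-dimensional computations reported here follow the same pattern, with the nodal shape functions and quadrature of the tetrahedral elements replacing the elementary differences used in this sketch.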
\begin{figure} \includegraphics[width=2.75in]{./figures/plotAnalytical} \includegraphics[width=2.75in]{./figures/convergence} \caption{Stress concentration and convergence plots for a plate with an initially circular hole} \label{fig:plate-hole} \end{figure} \subsection{3D Example -- Circular Bar} The problem which we consider here is the extension of a single crystal cylindrical bar. The bottom end of the bar is clamped and the side surface of the bar is traction free. The top end is displaced in the axial direction by a specified amount, but it is left free to move in the plane perpendicular to the original bar axis. Three different orientations of the bar axis relative to the crystal will be considered. In each case the bar axis will coincide with one of the following crystallographic directions: [100], [110] and [122]. Top and side views of the deformed bar for the three cases are shown in Figure \ref{fig:bar-response}. For better visualization, the applied displacement is equal to $100$\% of the bar length, but the behavior is similar at smaller stretches. In the first case, the cross-section of the bar remains approximately circular and the extension process is approximately symmetric about the bar axis. Similar behavior is observed if the bar axis is aligned with the [111] direction. \begin{figure} \includegraphics[width=1.75in]{figures/top100bw} \includegraphics[width=1.75in]{figures/top110bw} \includegraphics[width=1.75in]{figures/top122bw}\\ \includegraphics[width=1.75in]{figures/front100bw} \includegraphics[width=1.75in]{figures/front110bw} \includegraphics[width=1.75in]{figures/front122bw}\\ \begin{verbatim} [100] [110] [122]\end{verbatim} \caption{Response of single crystal bar to imposed displacement at different orientations of the crystal relative to the bar axis} \label{fig:bar-response} \end{figure} The anisotropic response is clearly observed in the second case. The contraction of the cross-section of the bar in the two directions is markedly different due to the anisotropy brought in by the term containing $I_4$. As could be expected, the contraction in the [001] direction is much less than in the [$1\bar{1}0$] direction. A possible interpretation of this effect for metallic lattices is that part of the contraction in the latter direction is accomplished by atomic bond rotation rather than pure extension/contraction of the bonds. The third case illustrates the development of transverse displacements when the crystal lattice lacks enough symmetries relative to the loading axis. The tilting effect is entirely due to the anisotropy. If the movement of the top end is restricted, significant transverse stresses will develop. \section{Conclusions} \label{sec:conclusion} A new model for materials with cubic anisotropy has been proposed in this paper. The model is based on an additively decoupled strain energy function which satisfies the polyconvexity condition and therefore guarantees the existence of minimizing sequences for the appropriate variational functionals. The polyconvexity of new strain energy terms capturing the anisotropy of cubically symmetric systems has been shown. A simple strain energy function capable of capturing the fundamental effect of the anisotropy ratio has been suggested and tested in numerical simulations, which reveal that the model possesses many of the relevant physically desirable properties.
The main difference between this model and the orthotropic models in the literature (for example, \cite{Scho-Neff:03}) is the use of a single fourth-order structural tensor. In spite of the difficulties coming from the higher order of the tensor, the model avoids a major complication which the orthotropic models face: enforcing the equality of the properties in the three mutually perpendicular symmetry directions. \section*{Acknowledgment} The support of ASC through grant $ASC????????$ is gratefully acknowledged.
\section{Introduction} Since the early days of NLP~\cite{winograd1971procedure}, conversational agents have been designed to interact with humans through language to solve diverse tasks, e.g., following remote instructions~\cite{thomason2015learning} or serving as booking assistants \cite{bordes2016learning,elasri2017frames}. In this goal-oriented dialogue setting, the conversational agents are often designed to compose their responses from predefined language utterances~\cite{lemon2007machine,williams2014dialog,young2013pomdp}. Although such approaches are efficient, they tend to narrow down the agent's language diversity. To remove this restriction, recent work has been exploring interactive word-based training. In this setting, the agents are generally trained through a two-stage process~\cite{wei2018airdialogue, de2017guesswhat, shah2018bootstrapping, li2016dialogue, das2017learning}: Firstly, the agent is pretrained on a human-labeled corpus through supervised learning to generate grammatically reasonable sentences. Secondly, the agent is finetuned to maximize the task-completion score by interacting with a user. Due to sample-complexity and reproducibility issues, the user is generally replaced by a game simulator that may evolve with the conversational agent. Unfortunately, this pairing may lead to the \emph{language drift} phenomenon, where the conversational agents gradually co-adapt and drift away from the pretrained natural language. The model thus becomes unfit to interact with humans~\cite{chattopadhyay2017evaluating, zhu2017interactive, lazaridou2020multi}. While domain-specific methods exist to counter language drift~\cite{lee2019countering,li2016deep}, a simple task-agnostic method consists of combining interactive and supervised training losses on a pretraining corpus~\cite{wei2018airdialogue, lazaridou2016multi}, which was later formalized as Supervised Selfplay (S2P)~\cite{lowe2020on}. Inspired by language evolution and cultural transmission~\cite{kirby2001spontaneous, kirby2014iterated}, recent work proposes Seeded Iterated Learning (SIL)~\cite{lu2020countering} as another task-agnostic method to counter language drift. SIL modifies the training dynamics by iteratively refining a pretrained student agent through imitation of interactive agents, as illustrated in Figure~\ref{fig:sil}. At each iteration, a teacher agent is created by duplicating the student agent and is then finetuned towards task completion. A new dataset is then generated by greedily sampling the teacher, and those samples are used to refine the student through supervised learning. The authors empirically show that this iterated learning procedure induces an inductive learning bias that successfully maintains the language grounding while improving task completion. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.9\linewidth]{sil} \vskip -0.6em \caption{SIL~\cite{lu2020countering}. A student agent is iteratively refined using newly generated data from a teacher agent. At each iteration, a teacher agent is created on top of the student before being finetuned by interaction, e.g., maximizing a task-completion score. The teacher generates a dataset with greedy sampling, and the student imitates those samples. The interaction step involves interaction with another language agent. } \label{fig:sil} \end{center} \vskip -1em \end{figure*} As a first contribution, we further examine the performance of these two methods in the setting of a translation game~\cite{lee2019countering}.
We show that S2P is unable to maintain a high grounding score and experiences a late-stage collapse, while SIL has a higher negative likelihood when evaluated on a human corpus. We propose to combine SIL with S2P by applying an S2P loss in the interactive stage of SIL. We show that the resulting \emph{Supervised Seeded Iterated Learning} (SSIL\xspace) algorithm manages to get the best of both algorithms in the translation game. Finally, we observe that the late-stage collapse of S2P is correlated with conflicting gradients, and we show that SSIL\xspace empirically reduces this gradient discrepancy. \section{Preventing Language Drift} We describe here our interactive training setup before introducing different approaches to prevent language drift. In this setting, we have a set of collaborative agents that interact through language to solve a task. To begin, we train the agents to generate natural language in a word-by-word fashion. Then we finetune the agents to optimize a task-completion score through interaction, i.e., learning to perform the task better. Our goal is to prevent language drift in this second stage. \subsection{Initializing the Conversational Agents} For a language agent $f$ parameterized by $\bm{\theta}$, a sequence of generated words $\bm{w}_{1:i} = [w_{j}]_{j=1}^i$, and an arbitrary context $\bm{c}$, the probability of the next word $w_{i+1}$ is $p(w_{i+1}|\bm{w}_{1:i}, \bm{c})=f_{\bm{\theta}}(\bm{w}_{1:i}, \bm{c})$. We pretrain the language model to generate meaningful sentences by minimizing the cross-entropy loss $\mathcal{L}^{\SU}_{pretrain}$, where the word sequences are sampled from a language corpus $D_{pretrain}$. Note that this language corpus may either be task-related or generic. Its role is to give our conversational agents a reasonable initialization. \subsection{Supervised Selfplay (S2P)} A common way to finetune the language agents while preventing language drift is to replay the pretraining data during the interaction stage. In S2P, the training loss encourages maximizing task completion while remaining close to the initial language distribution. Formally, \begin{equation} \mathcal{L}^{\mathrm{S2P}} = \mathcal{L}^{\INT} + \alpha \mathcal{L}^{\SU}_{pretrain} \end{equation} where $\mathcal{L}^{\INT}$ is a differentiable interactive loss maximizing task completion, e.g., reinforcement learning with policy gradients~\cite{sutton2000policy} or the Gumbel straight-through estimator (STE)~\cite{jang2017categorical}; $\mathcal{L}^{\SU}_{pretrain}$ is a cross-entropy loss over the pretraining samples; and $\alpha$ is a positive scalar which balances the two losses. \subsection{Seeded Iterated Learning (SIL)} Seeded Iterated Learning (SIL) iteratively refines a pretrained \emph{student} model by using data generated from newly trained \emph{teacher} agents~\cite{lu2020countering}. As illustrated in Figure~\ref{fig:sil}, the student agent is initialized with the pretrained model. At each iteration, a new teacher agent is generated by duplicating the student parameters. It is tuned to maximize the task-completion score by optimizing the interactive loss $\mathcal{L}^{\TEACHER} = \mathcal{L}^{\INT}$. In a second step, we sample from the teacher to generate new training data $D_{teacher}$, and we refine the student by minimizing the cross-entropy loss $\mathcal{L}^{\STUDENT} = \mathcal{L}^{\SU}_{teacher}$, where sequences of words are sampled from $D_{teacher}$.
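To make these training dynamics concrete, the following minimal sketch (PyTorch) implements the iterated loop; the linear ``agents'' and random batches are toy stand-ins for the actual seq2seq models and the translation game, and the $\alpha$-weighted supervised term on the teacher anticipates the combination introduced below:

\begin{verbatim}
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, vocab, alpha = 32, 50, 0.5
student = nn.Linear(dim, vocab)          # toy stand-in for the agent
ce = nn.CrossEntropyLoss()

def batch(n=16):                         # synthetic contexts and targets
    return torch.randn(n, dim), torch.randint(vocab, (n,))

for iteration in range(5):               # SIL iterations
    teacher = copy.deepcopy(student)     # 1. duplicate the student
    opt_t = torch.optim.Adam(teacher.parameters(), lr=1e-3)
    for _ in range(50):                  # 2. interactive finetuning
        x, y = batch()
        loss = ce(teacher(x), y)         # stand-in for L_INT
        xp, yp = batch()
        loss = loss + alpha * ce(teacher(xp), yp)  # + alpha*L_SU (SSIL)
        opt_t.zero_grad(); loss.backward(); opt_t.step()

    xg, _ = batch(256)                   # 3. greedy sampling -> D_teacher
    with torch.no_grad():
        yg = teacher(xg).argmax(dim=-1)

    opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(50):                  # 4. student imitates D_teacher
        opt_s.zero_grad()
        ce(student(xg), yg).backward()
        opt_s.step()
\end{verbatim}

Dropping the $\alpha$ term recovers plain SIL, while applying both losses to a single agent without the teacher/student split recovers S2P.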
This imitation learning stage can induce an information bottleneck, encouraging the student to learn a well-formatted language rather than drifted components. \subsection{SSIL\xspace: Combining SIL and S2P} S2P and SIL have two core differences: first, SIL never re-uses human pretraining data. As observed in Section~\ref{sec:weaknesses}, this design choice reduces the language modeling ability of SIL-trained agents, with a higher negative likelihood when evaluated on a human corpus. Second, S2P agents merge interactive and supervised losses, whereas SIL's student never experiences an interactive loss. As analyzed in Section~\ref{sec:gradient}, the S2P multi-task loss induces conflicting gradients, which may trigger language drift. In this paper, we propose to combine these two approaches and demonstrate that the combination effectively minimizes their respective weaknesses. To be specific, we apply the S2P loss to the SIL teacher agent, which entails $\mathcal{L}^{\TEACHER} = \mathcal{L}^{\INT} + \alpha \mathcal{L}^{\SU}_{pretrain}$. We call the resulting algorithm Supervised Seeded Iterated Learning\xspace~(SSIL\xspace). In SSIL\xspace, teachers can generate data that is close to the human distribution due to the S2P loss, while students are updated with a consistent supervised loss to avoid the potential weakness of multi-task optimization. In addition, SSIL\xspace still maintains the inductive learning bias of SIL. We list all these methods in Table~\ref{tab:sil_s2p} for easy comparison. We also experiment with another way of combining SIL and S2P: mixing the pretraining data with the teacher data during the imitation learning stage. We call this method \emph{MixData}. We show the results of this approach in Appendix~\ref{sec:mixdata}. We find that this approach is very sensitive to the mixing ratio of the two kinds of data, and the best configuration is still not as good as SSIL\xspace. \begin{table}[t] \centering { \small \begin{tabular}{ll} \toprule \emph{Finetuning Methods} & \emph{Training Losses} \\ \midrule Gumbel & $\mathcal{L}^{\INT} $ \\ \midrule S2P & $\mathcal{L}^{\INT} + \alpha \mathcal{L}^{\SU}_{pretrain}$ \\ \midrule SIL (teacher) & $\mathcal{L}^{\INT}$\\ SIL (student) & $\mathcal{L}^{\SU}_{teacher}$ \\ \midrule SSIL\xspace (teacher) & $\mathcal{L}^{\INT} + \alpha \mathcal{L}^{\SU}_{pretrain}$\\ SSIL\xspace (student) & $\mathcal{L}^{\SU}_{teacher}$ \\ \bottomrule \end{tabular} } \caption{Finetuning methods with their respective training objectives.} \label{tab:sil_s2p} \vskip -1em \end{table} \section{Experimental Setting} \label{sec:settings} \subsection{Translation Game} We replicate the translation game setting from~\cite{lee2019countering} as it was designed to study language drift. First, a \emph{sender} agent translates French to English (Fr-En), while a \emph{receiver} agent translates English to German (En-De). The sender and receiver are then trained together to translate French to German with English as a pivot language. For each French sentence, we sample English from the sender, send it to the receiver, and sample German from the receiver. The task score is defined as the BLEU score between the generated German translation and the ground truth (\emph{BLEU De})~\cite{papineni2002bleu}. The goal is to improve the task score without losing the language structure of the intermediate English language. \subsection{Training Details} The sender and the receiver are pretrained on the IWSLT dataset~\cite{cettolo2012wit3}, which contains $(\Fr, \En)$ and $(\En, \De)$ translation pairs.
We then use the Multi30k dataset~\citep{elliott2016multi30k} to build the finetuning dataset with $(\Fr, \De)$ pairs. As IWSLT is a generic translation dataset and Multi30k only contains visually grounded translated captions, we also call IWSLT task-agnostic and Multi30k task-related. We use the cross-entropy loss on German as the interactive training objective, which is differentiable w.r.t. the receiver. For the sender, we use the Gumbel-Softmax straight-through estimator to make the training objective also differentiable w.r.t. the sender, as in~\citet{lu2020countering}. Implementation details are in Appendix~\ref{sec:translation_game}. \subsection{Metrics for Grounding Scores} In practice, there are different kinds of language drift~\cite{lazaridou2020multi} (e.g., syntactic drift and semantic drift). We thus have multiple metrics to consider when evaluating language drift. We first compute the English BLEU score (\emph{BLEU En}), comparing the generated English translation with the ground-truth human translation. We include the negative log-likelihood (\emph{NLL}) of the generated En translation under a pretrained language model as a measure of syntactic correctness. In line with \cite{lu2020countering}, we also report results using another language metric: the negative log-likelihood of human translations (\emph{RealNLL}) given a finetuned Fr-En model. We feed the finetuned sender with human task data to estimate the model's log-likelihood. The lower this score, the more likely the model is to generate such human-like language. \section{Experiments} \begin{figure*}[th!] \begin{subfigure}{0.47\columnwidth} \includegraphics[width=\textwidth]{Core/BLEU_De.png} \caption{BLEU De (Task Score)} \end{subfigure} \hfill \begin{subfigure}{0.47\columnwidth} \includegraphics[width=\textwidth]{Core/BLEU_En.png} \caption{BLEU En} \end{subfigure} \hfill \begin{subfigure}{0.47\columnwidth} \includegraphics[width=\textwidth]{Core/NLL.png} \caption{NLL} \end{subfigure} \hfill \begin{subfigure}{0.47\columnwidth} \includegraphics[width=\textwidth]{Core/Real_NLL.png} \caption{RealNLL} \end{subfigure} \caption{Task and language metrics for Vanilla Gumbel, SIL, S2P, and SSIL\xspace in the translation game. We also show the results of mixing pretraining data into the teacher dataset (Section~\ref{sec:mixdata}). The plots are averaged over 5 seeds, with the shaded area showing the standard deviation. Although SIL and S2P both counter language drift, S2P suffers from a late collapse, and SIL has a high \emph{RealNLL}, suggesting that its output may not correlate well with human sentences. } \label{fig:problem} \vskip -0.5em \end{figure*} \begin{figure}[t] \begin{subfigure}{0.47\columnwidth} \includegraphics[width=\textwidth]{GradCosine0.5/BLEU_EN.png} \caption{BLEU En} \end{subfigure} \hfill \begin{subfigure}{0.47\columnwidth} \includegraphics[width=\textwidth]{GradCosine0.5/Grad_Cosine.png} \caption{Cosine Similarity} \end{subfigure} \caption{Cosine similarity between the gradients issued from $\mathcal{L}^{\INT}$ and $\mathcal{L}^{\SU}_{pretrain}$. The collapse of the BLEU En matches the negative cosine similarity. We here set $\alpha=0.5$, but similar values yield identical behavior, as shown in Figure~\ref{fig:grad_cosine_0_7} in the Appendix. } \label{fig:grad_cosine} \end{figure} \subsection{S2P and SIL Weaknesses} \label{sec:weaknesses} We report the task and grounding scores of vanilla Gumbel, S2P, SIL, and SSIL\xspace in Figure~\ref{fig:problem}.
The respective best hyper-parameters can be found in the appendix. As reported by~\citet{lu2020countering}, vanilla Gumbel successfully improves the task score \emph{BLEU De}, but the \emph{BLEU En} score, as well as the other grounding metrics, collapses, indicating language drift during the training. Both S2P and SIL manage to increase \emph{BLEU De} while maintaining a higher \emph{BLEU En} score, countering language drift. However, S2P has a sudden (and reproducible) late-stage collapse, failing to maintain the grounding score beyond 150k steps. On the other hand, SIL has a much higher RealNLL than S2P, suggesting that SIL has a worse ability to model human data. SSIL\xspace seems to get the best of both worlds. It has a task score \emph{BLEU De} similar to S2P and SIL, while it avoids the late-stage collapse. It ends up with the highest \emph{BLEU En}, and it improves the RealNLL over SIL, though still not as good as S2P. Also, it achieves an even better NLL, suggesting that its outputs are favored by the pretrained language model. \subsection{Mixing Teacher and Human Data} \label{sec:mixdata} We also explore whether injecting pretraining data into the teacher dataset may be a valid substitute for the S2P loss. We add a subset of the pretraining data to the teacher dataset before refining the student, and we report the results in Figures~\ref{fig:problem} and~\ref{fig:mixdata}. Unfortunately, such an approach was quite unstable, and it requires heavy hyper-parameter tuning to match SSIL\xspace scores. As explained in~\cite{kirby2001spontaneous}, iterated learning relies on inductive learning to remove language irregularities during the imitation step. Thus, mixing two language distributions may disrupt this imitation stage. \subsection{Why does S2P collapse?} \label{sec:gradient} We investigate the potential cause of the S2P late-stage collapse and how SSIL\xspace may resolve it. We first try to solve it by increasing the supervised loss weight $\alpha$. However, we find that a larger $\alpha$ only delays the eventual collapse while also decreasing the task score, as shown in Figure~\ref{fig:s2p} in Appendix~\ref{sec:s2p}. We further hypothesize that this late-stage collapse can be caused by the distribution mismatch between the pretraining data (IWSLT) and the task-related data (Multi30k), exemplified by their difference in word frequencies. A mismatch between the two losses could lead to conflicting gradients, which could, in turn, make training unstable. In Figure~\ref{fig:grad_cosine}, we display the cosine similarity of the sender gradients issued by the interactive and supervised losses, $\cos(\nabla_{sender} \mathcal{L}^{\INT}, \nabla_{sender}\mathcal{L}^{\SU}_{pretrain})$, for both S2P and SSIL\xspace with $\alpha=0.5$ during training. Early in S2P training, we observe that the two gradients remain orthogonal on average, with the cosine oscillating around zero. Then, at the same point where the S2P \emph{BLEU En} collapses, the cosine of the gradients starts trending negative, indicating that the gradients are pointing in opposite directions. However, SSIL\xspace does not have this trend, and the \emph{BLEU En} does not collapse. Although the exact mechanism of how conflicting gradients trigger the language drift is unclear, the current results favor our hypothesis and suggest that language drift could result from standard multi-task optimization issues \cite{yu2020gradient, parisotto2015actor, sener2018multi} for S2P-like methods.
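The diagnostic itself is straightforward to reproduce: flatten the parameter gradients of the two losses and take their cosine. A toy sketch (PyTorch; both losses are random stand-ins for $\mathcal{L}^{\INT}$ and $\mathcal{L}^{\SU}_{pretrain}$ on a shared model):

\begin{verbatim}
import torch
import torch.nn as nn

def flat_grad(loss, params):
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

torch.manual_seed(0)
model = nn.Linear(8, 4)                  # toy stand-in for the sender
params = list(model.parameters())
ce = nn.CrossEntropyLoss()

loss_int = ce(model(torch.randn(16, 8)), torch.randint(4, (16,)))
loss_sup = ce(model(torch.randn(16, 8)), torch.randint(4, (16,)))

g_int = flat_grad(loss_int, params)
g_sup = flat_grad(loss_sup, params)
cos = torch.nn.functional.cosine_similarity(g_int, g_sup, dim=0)
print(float(cos))                        # < 0 signals conflicting gradients
\end{verbatim}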
\paragraph{Conclusion} We investigate two general methods to counter language drift: S2P and SIL. S2P experiences a late-stage collapse of the grounding score, whereas SIL has a higher negative likelihood on a human corpus. We introduce SSIL\xspace to combine these two methods effectively. We further show the correlation between the S2P late-stage collapse and conflicting gradients. \paragraph{Acknowledgement} We thank Compute Canada (www.computecanada.ca) for providing the computational resources. We thank Miruna Pislar and Angeliki Lazaridou for their helpful discussions. { \small \bibliographystyle{acl_natbib}
\section{Introduction} The structure of loosely bound and unbound nuclei is strongly impacted by many-body correlations and the non-perturbative coupling to the external environment of scattering states and decay channels \cite{Oko03,Mic09}. This is particularly important in exotic nuclei where new phenomena, at the borderline of nuclear structure and nuclear reactions, are expected. Some of them, like the halos \cite{Rii00}, the segregation of time scales in the context of non-Hermitian Hamiltonians \cite{Kle85}, the alignment of near-threshold states with decay channels \cite{Ike68}, and the resonance crossings \cite{Zir83,Hei91}, appear in various {\em open} mesoscopic systems. Their universality is the consequence of the non-Hermitian nature of the eigenvalue problem in open quantum systems. Resonances are commonly found in quantum systems, independently of their interactions, building blocks and energy scales involved. Much interest is concentrated on resonance degeneracies, the so-called exceptional points (EPs) \cite{Zir83}. Their connection to avoided crossings and spectral properties of Hermitian systems \cite{Hei91a,Duk08}, as well as the associated geometric phases, has been discussed in simple models in considerable detail \cite{Hei98}. An interesting question is their manifestation in nuclear scattering experiments. Here, a much-studied case was the $2^+$ doublet in $^8$Be \cite{Mar65,Pau66,Bar66,Bro66,Hin78}. Based on this example, von Brentano \cite{Bre90} discussed the width attraction for mixed resonances, and Hern\'{a}ndez and Mondrag\'{o}n \cite{Her94} showed that a true crossing of resonances can be obtained by the variation of two parameters in the Jordan block of rank two. In this latter analysis, it was shown that the resonating part of the scattering matrix (S-matrix) for one open channel and two internal states is compatible with the two-level formula of the R-matrix theory used in the experimental analysis of excitation functions of elastic scattering $^4$He($\alpha,\alpha_0$)$^4$He \cite{Hin78} and, hence, that the $2^+$ doublet in $^8$Be may actually be close to a true resonance degeneracy. Properties of the atomic nucleus around the continuum threshold change rapidly with the nucleon number, the excitation energy and the coupling to the environment of scattering states. A consistent description of the interplay between scattering and resonant states requires an open system formulation of the nuclear shell model (see \cite{Oko03,Mic09,Zel06} for recent reviews). The real-energy continuum shell model \cite{Fes58,Fan61,SMEC} provides a suitable unified framework with the help of an effective non-Hermitian Hamiltonian. In this work, for the first time, we focus on a realistic model of an unbound atomic nucleus to see whether one or more EPs can appear in the low-energy continuum for sensible parameters of the open quantum system Hamiltonian. In particular, we discuss possible experimental signatures of the EPs and show the evolution of these signatures in the vicinity of the EP. Finally, using the example of spectroscopic factors, we demonstrate the entanglement of resonance wave functions close to the EP. \section{Formulation of the Continuum Shell Model} Let us briefly review the Shell Model Embedded in the Continuum (SMEC) \cite{SMEC}, which is a recent realization of the real-energy continuum shell model.
The total function space of an $A-$particle system consists of the set of square-integrable functions ${\cal Q}\equiv \{\psi_i^{A}\}$, used in the standard nuclear Shell Model (SM), and the set of embedding scattering states ${\cal P}\equiv \{\zeta_E^{c}\}$. These two sets are obtained by solving the Schr\"odinger equation, separately for discrete (SM) states (the closed quantum system) and for scattering states (the environment). Decay channels '$c$' are determined by the motion of an unbound particle in a state $l_j$ relative to the $A-1$ nucleus with all nucleons on bound single-particle (s.p.) orbits in the SM eigenstate $\psi_j^{A-1}$. Using these function sets, one defines the projection operators: \begin{eqnarray} {\hat Q}=\sum_{i=1}^N|\psi_i^A\rangle\langle\psi_i^A|~;~~~~ {\hat P}=\int_0^\infty dE|\zeta_E\rangle\langle\zeta_E| \nonumber \end{eqnarray} and the projected Hamiltonians: ${\hat Q}H{\hat Q}\equiv H_{QQ}$, ${\hat P}H{\hat P}\equiv H_{PP}$, ${\hat Q}H{\hat P}\equiv H_{QP}$, ${\hat P}H{\hat Q}\equiv H_{PQ}$. Assuming ${\cal Q}+{\cal P}={\cal I}$, one can determine the third set of functions $\{\omega_i^{(+)}\}$, which contains the continuation of any SM eigenfunction $\psi_i^A$ in ${\cal P}$, and then construct the complete solution in ${\cal Q}+{\cal P}$ \cite{Oko03}. Recently, this approach has been extended to describe two-proton radioactivity with the two-particle continuum \cite{Rot06}. Open quantum system solutions in ${\cal Q}$, which include couplings to the environment of scattering states and decay channels, are obtained by solving the eigenvalue problem for the energy-dependent effective Hamiltonian: \begin{eqnarray} {\cal H}_{QQ}(E)=H_{QQ}+H_{QP}G_P^{(+)}(E)H_{PQ} \ , \nonumber \end{eqnarray} where $H_{QQ}$ is the closed system Hamiltonian, $G_P^{(+)}(E)$ is the Green function for the motion of a single nucleon in the ${\cal P}$ subspace, and $E$ is the energy of this nucleon (the scattering energy). The index '+' in $G_P^{(+)}$ stands for the outgoing boundary condition in the scattering problem. ${\cal H}_{QQ}$ is non-Hermitian for unbound states and its eigenstates $|\Phi_\alpha\rangle$ are linear combinations of the SM eigenstates $|\psi_i\rangle$. The eigenstates of ${\cal H}_{QQ}$ are biorthogonal; the left $|\Phi_\alpha\rangle$ and right $|\Phi_{\bar \alpha}\rangle$ eigenstates have wave functions related by complex conjugation. The orthonormality condition in the biorthogonal basis reads: $\langle\Phi_{\bar \alpha}|\Phi_{\beta}\rangle = \delta_{\alpha,\beta}$. Similarly, the matrix element of an operator ${\hat O}$ is $O_{\alpha\beta}=\langle\Phi_{\bar \alpha}|{\hat O}|\Phi_{\beta}\rangle$. The scattering function $\Psi^c_E$ is a solution of the Schr\"{o}dinger equation in the total function space: \begin{eqnarray} \Psi^c_E=\zeta_E^c+\sum_{\alpha}a_{\alpha}{\tilde \Phi}_{\alpha} \ , \nonumber \end{eqnarray} where \begin{eqnarray} a_{\alpha}\equiv\langle\Phi_{\alpha}|H_{QP}|\zeta^c_E\rangle/(E-{\cal E}_{\alpha}) \ , \nonumber \end{eqnarray} and \begin{eqnarray} {\tilde \Phi}_{\alpha}\equiv(1+G_P^{(+)}H_{PQ})\Phi_{\alpha} \ . \nonumber \end{eqnarray} Inside the interaction region, the dominant contributions to $\Psi^c_E$ are given by the eigenfunctions $\Phi_{\alpha}$ of the effective non-Hermitian Hamiltonian \cite{Oko03}: \begin{eqnarray} \Psi^c_E\sim\sum_{\alpha}a_{\alpha}\Phi_{\alpha} \ . \nonumber \end{eqnarray} For bound states, the eigenvalues ${\cal E}_\alpha(E)$ of ${\cal H}_{QQ}(E)$ are real and ${\cal E}_{\alpha}(E)=E$.
For unbound states, physical resonances can be identified with the narrow poles of the S-matrix \cite{Sie39,Mic09}, or through the Breit-Wigner approach, which leads to a fixed-point condition \cite{Oko03,Zel06,Mad05}: \begin{eqnarray} E_{\alpha}={\rm Re}\left( {\cal E}_{\alpha}(E) \right)|_{E=E_{\alpha}} ~;~~ \mathit{\Gamma}_{\alpha}=-2\,{\rm Im}\left( {\cal E}_{\alpha}(E) \right)|_{E=E_{\alpha}} \label{eq1} \end{eqnarray} Here it is assumed that the origin of ${\rm Re}\left( {\cal E} \right)$ is fixed at the lowest particle emission threshold. An EP is a generic phenomenon in Hamiltonian systems. In our case, an EP can appear as a result of the continuum-coupling term $H_{QP}G_P^{(+)}(E)H_{PQ}$ for energies above the first particle emission threshold ($E>0$). The eigenvalue degeneracies are indicated by common roots of the two equations \cite{Zir83}: \begin{eqnarray} \frac{\partial^{(\nu)}}{\partial {\cal E}} {\rm det}\left[{\cal H}_{QQ}\left(E;V_0\right) -{\cal E}I\right] = 0~~~~~~~~~~\nu=0,1 \label{discr} \end{eqnarray} Single-root solutions of Eq. (\ref{discr}) correspond to EPs associated with decaying states. The maximal number of those roots is $M_{max}=n(n-1)$, where $n$ is the number of states of given angular momentum $J$ and parity $\pi$. In quantum integrable models with at least two parameter-dependent integrals of motion, one also finds double-root solutions, which correspond to a non-singular crossing of two levels with two different wave functions. Hence, the actual number of EPs in these systems is always smaller than $M_{max}$ \cite{Duk08}. The position of EPs in the spectrum of eigenvalues of ${\cal H}_{QQ}$ depends both on the chosen interaction and on the energy $E$ of the system. In general, eigenvalues of the energy-dependent effective Hamiltonian ${\cal H}_{QQ}(E)$ need not satisfy the fixed-point condition (\ref{eq1}) and hence need not correspond to poles of the S-matrix (resonances). In the following, we shall consider only the case where EPs are {\em identical} with double-poles of the S-matrix. \section{Exceptional points in the scattering continuum of $^{16}{\rm Ne}$} Let us investigate the properties of EPs using the example of $^{16}$Ne. The SM eigenstates in this nucleus correspond to a complicated mixture of configurations associated with the dynamics of the $^{16}$O core. Our goal is to see if EPs can possibly be found in the scattering continuum of an atomic nucleus at low excitation energies and for a physical strength of the continuum coupling. SMEC calculations are performed in the $p_{1/2}, d_{5/2}, s_{1/2}$ model space. For $H_{QQ}$ we take the ZBM Hamiltonian \cite{ZBM}, which correctly describes the configuration mixing around the $N=Z=8$ shell closure. The residual coupling $H_{QP}$ between ${\cal Q}$ and the embedding continuum ${\cal P}$ is generated by the contact force: $H_{QP}=H_{PQ}=V_0\delta(r_1-r_2)$. For each $J^{\pi}$, the SM states $|\psi_i(J^{\pi})\rangle$ of the closed quantum system are interconnected via the coupling to the common decay channels $[^{15}{\rm F}(K^{\pi})\otimes {\rm p}_{l_j}]_{E'}^{J^{\pi}}$ with $K^{\pi}=1/2^+, 5/2^+$, and $1/2^-$, which have thresholds at $E=0$ (the elastic channel), 0.67 MeV, and 2.26 MeV, respectively. In the ZBM model space, these are all possible one-proton (1p) decay channels in $^{16}$Ne. The size of the non-Hermitian correction to $H_{QQ}$ depends on two real parameters: the strength $V_0$ of the continuum coupling in $H_{QP}$ ($H_{PQ}$) and the system energy $E$.
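The mechanism behind conditions (\ref{discr}) can be previewed on a schematic two-level model (emphatically not the SMEC Hamiltonian): a decaying state of width $\mathit{\Gamma}$ coupled by a single real parameter $g$ to a bound state at the same energy $E_0$, with eigenvalues $E_0-\mathrm{i}\mathit{\Gamma}/4\pm\sqrt{g^2-\mathit{\Gamma}^2/16}$ that coalesce at the EP $g=\mathit{\Gamma}/4$. A minimal numerical sketch (Python):

\begin{verbatim}
import numpy as np

E0, Gamma = 2.0, 1.0                  # schematic units
for g in np.linspace(0.0, 0.5, 11):   # EP expected at g = Gamma/4 = 0.25
    H = np.array([[E0 - 0.5j * Gamma, g],
                  [g, E0]])
    E1, E2 = np.linalg.eigvals(H)
    print(f"g={g:4.2f}  E1={E1:.3f}  E2={E2:.3f}")
\end{verbatim}

For $g<\mathit{\Gamma}/4$ the two eigenvalues share the same energy but differ in width, while for $g>\mathit{\Gamma}/4$ they repel in energy and acquire equal widths; the two behaviors merge exactly at the EP, loosely mirroring the subcritical and overcritical regimes discussed below.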
The range of relevant $V_0$ values can be determined, for example, by fitting the decay widths of the lowest states in $^{15}$F. For the present Hamiltonian, the experimental decay widths of the ground state $1/2_1^+$ and the first excited state $5/2_1^+$ in $^{15}$F are reproduced using $V_0=-3500\pm 450$ MeV$\cdot$fm$^3$ and $V_0=-1100\pm 50$ MeV$\cdot$fm$^3$, respectively. The error bars in $V_0$ reflect the experimental uncertainties of those widths. The weak dependence of the 1p~decay widths on the sign of $V_0$ is generated by the channel-channel coupling and disappears in the single-channel case. \begin{figure}[hbt] \begin{center} {\includegraphics[width=8cm,angle=00]{Fig1.eps} \caption{The map of $J^{\pi}=1^-$ exceptional points in the continuum of $^{16}$Ne as found in SMEC. For more details, see the description in the text.}} \label{fig1} \end{center} \end{figure} Fig. 1 shows the energies $E$ and strengths $V_0$ which correspond to $J^{\pi}=1^-$ EPs in the scattering continuum of $^{16}$Ne. The decay channels $[^{15}{\rm F}(K^{\pi})\otimes {\rm p}_{l_j}]_{E'}^{1^-}$ with $K^{\pi}=1/2^+, 5/2^+$, and $1/2^-$ have been included with the proton partial waves: $p_{1/2}, p_{3/2}$ for $K^{\pi}=1/2^+$, $p_{3/2}, f_{5/2}, f_{7/2}$ for $K^{\pi}=5/2^+$, and $s_{1/2}, d_{3/2}$ for $K^{\pi}=1/2^-$. The number of $1^-$ SM states is 3 and, hence, the maximal number of $1^-$ EPs in SMEC could be 6. Indeed, all of them exist at $E<20$ MeV in the physical range of $V_0$ values (1100 MeV$\cdot$fm$^3<|V_0|<3500$ MeV$\cdot$fm$^3$). They have been found by scanning the energy dependence of all eigenvalues over a certain range of $V_0$, searching for all real-energy crossings or width crossings (avoided crossings). Once found, we have tuned $V_0$ to find out whether these crossings evolve into EPs at some combination of $V_0$ and $E$. One should stress that the passage through an EP always occurs if, e.g., the real-energy crossing moves towards $E=0$. Since such a crossing cannot move into the region $E<0$, it converts into an avoided crossing via the formation of an EP. The lowest EP in Fig. 1 is seen at $V_0^{(\rm cr)}=-1617.4$ MeV$\cdot$fm$^3$ and $E=2.33$ MeV. This EP corresponds to a degeneracy of the first two $1^-$ eigenvalues of ${\cal H}_{QQ}$ for $V_0<0$. \begin{figure}[hbt] \begin{center} \includegraphics[width=6cm,angle=00]{Fig2.eps} \caption{The upper plot exhibits the elastic scattering phase shifts $\delta_{p_{1/2}}$ (dashed-dotted line) and $\delta_{p_{3/2}}$ (dashed line) for the ${\rm p +} ^{15}{\rm F}$ reaction in $1^-$ partial waves around the EP (the double-pole of the S-matrix) with $J^{\pi}=1^-$. The lower plots show the real and imaginary parts of the $1_1^-$ (solid line) and $1_2^-$ (dotted line) eigenvalues of the effective Hamiltonian ${\cal H}_{QQ}(E)$ as a function of the scattering energy $E$. For other details, see the description in the text.} \label{fig2} \end{center} \end{figure} The energies $E_i$ and widths $\mathit{\Gamma}_i$ of the $1^-_1$ and $1^-_2$ eigenvalues are shown in Fig. 2 as functions of the scattering energy. For $E>2.33$ MeV, the widths of these two eigenvalues grow apart very fast. $E_1(E)$ (solid line) and $E_2(E)$ (dotted line) cross again for $E\simeq 3.2$ MeV. At this energy, $\mathit{\Gamma}_1$ and $\mathit{\Gamma}_2$ are different and, hence, the corresponding eigenfunctions are different as well. The upper part of Fig.
2 shows the phase shifts $\delta_{l_j}$ for ${\rm p +} ^{15}{\rm F}$ elastic scattering as a function of the proton energy for the $p_{1/2}$ (dashed-dotted line) and $p_{3/2}$ (dashed line) partial waves. In the partial wave $p_{1/2}$, the elastic scattering phase shift exhibits a jump by $2\pi$ at the EP with $J^{\pi}=1^-$. This unusual jump in the elastic scattering phase shift is an unmistakable and robust signal of a double-pole of the S-matrix (EP), which also persists in its neighborhood, as will be discussed below. \begin{figure}[hbt] \begin{center} \includegraphics[width=7.5cm,angle=00]{Fig3.eps} \caption{Elastic and inelastic cross-sections in the reaction $^{15}{\rm F(p,p')}$ as a function of the proton energy $E$ around the EP (the double-pole of the S-matrix) with $J^{\pi}=1^-$ for $1^-$ resonances only (dashed line) and for all resonances with $J\leq5$ (solid line). For more details, see the description in the text.} \label{fig3} \end{center} \end{figure} Fig. 3 shows the elastic and inelastic cross sections for $^{15}{\rm F(p,p')}$ in the vicinity of an EP. The solid line represents a sum of different partial contributions of both parities with $J \le 5$, whereas the dashed line shows the resonance part of the $1^-$ contribution to these cross sections. The cross sections are plotted as functions of the center-of-mass scattering energy for $V_0^{(\rm cr)}=-1617.4$ MeV$\cdot$fm$^3$. The elastic cross section at the EP shows a characteristic double-hump shape \cite{Mul95} with asymmetric tails in energy. The inelastic cross section in this case exhibits a single peak. Both inelastic channels $[^{15}{\rm F}(5/2^+)\otimes {\rm p}_{l_j}]_{E'}^{1^-}$ and $[^{15}{\rm F}(1/2^-)\otimes {\rm p}_{l_j}]_{E'}^{1^-}$ are open at the EP. A substantial background contribution to both cross sections comes from broad resonances, mainly $0^+$ and $2^+$. A sharp peak at $E\simeq 1.65$ MeV corresponds to an ordinary $2^-$ resonance. The above discussion of the double-poles of the S-matrix (EPs) and their manifestation in the many-body scattering continuum concerns $1^-$ states. The same analysis for the $J^{\pi}=0^+, 2^+$ states of $^{16}$Ne gives qualitatively similar results. \subsection{Behavior of scattering wave functions in the vicinity of the exceptional point} A true crossing of two resonant states is accidental and, hence, improbable in nuclear scattering experiments. \begin{figure}[hbt] \begin{center} \includegraphics[width=8cm,angle=00]{Fig4.eps} \caption{The elastic scattering phase shifts $\delta_{p_{1/2}}$ for the ${\rm p +} ^{15}{\rm F}$ reaction in $1^-$ partial waves around the EP (the double-pole of the S-matrix) with $J^{\pi}=1^-$ at $V_0^{(\rm cr)}=-1617.4$ MeV$\cdot$fm$^3$ (solid line). Different curves correspond to different strengths $V_0$ of the continuum coupling: $V_0$=-1800 MeV$\cdot$fm$^3$ (long-dashed line), -1700 MeV$\cdot$fm$^3$ (dashed-dotted line), -1500 MeV$\cdot$fm$^3$ (short-dashed line) and -1430 MeV$\cdot$fm$^3$ (dotted line).} \label{fig4} \end{center} \end{figure} In this section, we will investigate the behavior of scattering states in the vicinity of an EP (the double-pole of the S-matrix), as the observation of such a situation is more plausible. Fig.
4 exhibits the phase shifts $\delta_{l_j}$ for ${\rm p +} ^{15}{\rm F}$ elastic scattering as a function of the proton energy for various values of the strength $V_0$ ($V_0$=-1800 MeV$\cdot$fm$^3$ (long-dashed line), -1700 MeV$\cdot$fm$^3$ (dashed-dotted line), -1617.4 MeV$\cdot$fm$^3$ (solid line), -1500 MeV$\cdot$fm$^3$ (short-dashed line) and -1430 MeV$\cdot$fm$^3$ (dotted line)) of the residual coupling $H_{QP}=H_{PQ}=V_0\delta(r_1-r_2)$ between the ${\cal Q}$ and ${\cal P}$ subspaces. The characteristic change by $2\pi$ of the elastic phase shift is seen in a broad interval -1800 MeV$\cdot$fm$^3$ $\leq V_0 \leq$ -1500 MeV$\cdot$fm$^3$ of the continuum coupling strength. \begin{figure}[hbt] \begin{center} \includegraphics[width=6cm,angle=00]{Fig5.eps} \caption{The same as in Fig. \ref{fig2} but in the subcritical regime of coupling ($V_0=-1560$ MeV$\cdot$fm$^3$). For more details, see the caption of Fig. \ref{fig2} and the description in the text.} \label{fig5} \end{center} \end{figure} \begin{figure}[hbt] \begin{center} \includegraphics[width=6cm,angle=00]{Fig6.eps} \caption{The same as in Fig. \ref{fig2} but in the overcritical regime of coupling ($V_0=-1680$ MeV$\cdot$fm$^3$). For more details, see the caption of Fig. \ref{fig2} and the description in the text.} \label{fig6} \end{center} \end{figure} Figs. 5 and 6 show the energies $E_i$ and widths $\mathit{\Gamma}_i$ of the $1^-_1$ and $1^-_2$ eigenvalues as functions of the scattering energy for two values of $V_0$: -1560 MeV$\cdot$fm$^3$ (Fig. 5) and -1680 MeV$\cdot$fm$^3$ (Fig. 6). The case shown in Fig. 5 corresponds to a subcritical coupling where two resonances cross freely in energy and repel in width \cite{Phi00}. In this regime, the scattering energy $E$ corresponding to the closest approach of the $1^-$ eigenvalues in the complex plane ($E\simeq 2.47$ MeV) is higher than the scattering energy corresponding to the EP at the critical coupling $V_0^{(\rm cr)}$=-1617.4 MeV$\cdot$fm$^3$. Nevertheless, the elastic scattering phase shift shows the jump by $2\pi$ at the position of the EP and not at the point of the closest approach of the eigenvalues. Fig. 6 shows the situation corresponding to an overcritical coupling where two resonances exhibit level repulsion in energy and a free crossing of their widths \cite{Phi00}. In this case, the point of the closest approach of the $1^-$ eigenvalues in the complex plane is found at a scattering energy ($E=2.13$ MeV) which is lower than the corresponding energy for the EP. Again, the elastic scattering phase shift shows the jump by $2\pi$ at the position of the double-pole. From these two examples, one can see that the characteristic jump by $2\pi$ of the elastic scattering phase shift remains a robust signature of the EP in all close-to-critical regimes of the coupling to the continuum: the subcritical coupling ($|V_0|<|V_0^{(\rm cr)}|$), the critical coupling ($|V_0|=|V_0^{(\rm cr)}|$), and the overcritical coupling ($|V_0|>|V_0^{(\rm cr)}|$), where real and/or imaginary parts of the two eigenvalues coincide. \begin{figure}[hbt] \begin{center} \includegraphics[width=7.5cm,angle=00]{Fig7.eps} \caption{The same as in Fig. \ref{fig3} but in the subcritical regime of coupling ($V_0=-1560$ MeV$\cdot$fm$^3$). For more details, see the caption of Fig. \ref{fig2} and the description in the text.} \label{fig7} \end{center} \end{figure} \begin{figure}[hbt] \begin{center} \includegraphics[width=7.5cm,angle=00]{Fig8.eps} \caption{The same as in Fig.
\ref{fig3} but in the overcritical regime of coupling ($V_0=-1680$ MeV$\cdot$fm$^3$). For more details, see the caption of Fig. \ref{fig2} and the description in the text.} \label{fig8} \end{center} \end{figure} The next two figures show the elastic and inelastic cross sections for $^{15}{\rm F(p,p')}$ in the vicinity of the EP with $J^{\pi}=1^-$ in the subcritical (Fig. 7) and overcritical (Fig. 8) regimes of the continuum coupling. The curves shown by solid lines in Figs. 7 and 8 represent a sum of different partial contributions of both parities with $J \le 5$. The curves shown by dashed lines exhibit the resonance part of the $1^-$ contribution to these cross sections. The qualitative features of the cross sections for the subcritical ($V_0=-1560$ MeV$\cdot$fm$^3$) and overcritical ($V_0=-1680$ MeV$\cdot$fm$^3$) couplings remain the same as for the critical coupling (see Fig. 3). In both cases, one sees a double-hump shape in the elastic cross section and a single-hump shape in the inelastic cross section. One also observes a strong asymmetry in the widths and heights of the two peaks and a small shift of the position of the interference minimum between the two peaks with respect to the energy at which the EP is found for the critical coupling. \subsection{Entangled eigenstates of the effective Hamiltonian} The complex and biorthogonal eigenstates of the effective non-Hermitian Hamiltonian provide a convenient basis in which the resonant part of the scattering function can be expressed. These eigenstates are obtained by an orthogonal and, in general, non-unitary transformation of the SM eigenstates \cite{Oko03}, which is a consequence of their mixing via the coupling to common decay channels. The same coupling is responsible for the entanglement of the two eigenstates involved in the formation of an EP, as illustrated in Fig. 9 with the example of spectroscopic factors. \begin{figure}[hbt] \begin{center} \includegraphics[width=6cm,angle=00]{Fig9.eps} \caption{$p_{1/2}$-spectroscopic factor $\langle^{16}{\rm Ne}(1_n^-)|[^{15}{\rm F}(1/2_1^+)\otimes p(0p_{1/2})]^{1^-}\rangle$ for the $1_1^-$ and $1_2^-$ eigenvalues of the effective Hamiltonian around the double-pole of the S-matrix. For more details, see the discussion in the text.} \label{fig9} \end{center} \end{figure} Fig. 9 exhibits the real part of the spectroscopic factor Re($S^2$)=Re$\left(\langle^{16}{\rm Ne}(1_n^-)|[^{15}{\rm F}(1/2_1^+)\otimes p(0p_{1/2})]^{1^-}\rangle^2\right)$ in $^{16}$Ne in three regimes of the continuum coupling: (a) the subcritical regime ($V_0=-1560$ MeV$\cdot$fm$^3$), (b) the critical regime ($V_0^{(\rm cr)}=-1617.4$ MeV$\cdot$fm$^3$), and (c) the overcritical regime ($V_0=-1680$ MeV$\cdot$fm$^3$). The solid (short-dashed) lines show the spectroscopic factors for the $\Phi(1^-_1)(E)$ ($\Phi(1^-_2)(E)$) eigenfunctions of the effective Hamiltonian ${\cal H}_{QQ}(E)$ as a function of the scattering energy $E$. For the critical coupling (plot (b)), the spectroscopic factors for the $\Phi(1^-_1)$ and $\Phi(1^-_2)$ wavefunctions diverge at the EP (the double-pole of the S-matrix), but their sum (long-dashed line in Fig. 9) remains finite and constant over the whole region of scattering energies surrounding the EP. In that sense, the $\Phi(1^-_1)$ and $\Phi(1^-_2)$ resonance wavefunctions form an inseparable doublet of eigenfunctions with entangled spectroscopic factors.
This entanglement is a direct consequence of the energy dependence of the coefficients $b_{\alpha i}$: \begin{eqnarray} |\Phi_{\alpha}\rangle=\sum_i b_{\alpha i}(E) |\psi_i\rangle \ , \nonumber \end{eqnarray} in the decomposition of the ${\cal H}_{QQ}(E)$ eigenstates in the basis of SM eigenstates. One may notice that the energy dependence of Re($S^2$) in the vicinity of the double-pole for the $1^-_1$ and $1^-_2$ eigenstates is quite different in all three regimes of the continuum coupling. In particular, in the overcritical regime of coupling, an EP yields entangled states in a broad range of scattering energies. The strongest entanglement is found at the scattering energy which corresponds to the point of the closest approach of the eigenvalues in the complex plane, for all regimes of coupling. Obviously, the entanglement of resonance eigenfunctions in the vicinity of an EP is a generic phenomenon in open quantum systems, which is manifested in matrix elements and expectation values of any operator that does not commute with the Hamiltonian. \section{Conclusions} We have shown in SMEC studies of the one-nucleon continuum that EPs exist for realistic values of the continuum coupling strength. In the studied case of $^{16}$Ne, a few of those EPs appear at sufficiently low excitation energies to be seen in the excitation function as individual peaks associated with a jump by $2\pi$ of the elastic scattering phase shift. The occurrence of an EP also leaves characteristic imprints in its neighborhood, i.e., for avoided crossings of resonances. In all close-to-critical regimes of the continuum coupling, where real and/or imaginary parts of the two eigenvalues coincide, one finds qualitatively similar features of the elastic scattering phase shift and the elastic cross-section as found for the critical coupling around the EP (the double-pole of the S-matrix). This gives a real chance that EPs or their traces may actually be searched for experimentally in the atomic nucleus. The well-known case of the $2^+$ doublet in $^8$Be, where the resonance energies and widths are $16623\pm 3$ keV, $107\pm 0.5$ keV and $16925\pm 3$ keV, $74.4\pm 0.4$ keV, respectively \cite{Hin78}, nearly satisfies the resonance conditions in the close-to-critical regime of couplings. Various situations in this regime have been studied experimentally in a microwave cavity \cite{Phi00}. Avoided crossings of two resonances with the same quantum numbers provide valuable information about the configuration mixing in open quantum systems. As the formation of any EP in the scattering continuum depends on a subtle interplay between the internal Hamiltonian ($H_{QQ}$) and the coupling to the external environment of decay channels, its finding provides a stringent test of the effective nucleon-nucleon interaction and the configuration mixing in the open quantum system regime. Such tests are crucial for a quantitative description of atomic nuclei in the vicinity of drip lines. \vspace{0.2cm} We wish to thank W. Nazarewicz for stimulating discussions and suggestions.
\section{Introduction and main results} An $n$-dimensional Riemannian manifold $(M,g)$ is called a \textit{gradient shrinking Ricci soliton or shrinker} (see \cite{[Ham]}) if there exists a smooth function $f$ on $(M,g)$ such that the Ricci curvature $\text{Ric}$ and the Hessian of $f$ satisfy \[ {\mathrm {Ric}}+\mathrm{Hess}\,f=\lambda g \] for some constant $\lambda>0$. The function $f$ is often called a \textit{potential} of the shrinker. Upon scaling the metric $g$ by a constant, we may assume $\lambda=1/2$ so that \begin{align}\label{Eq1} {\mathrm {Ric}} +\mathrm{Hess}\,f=\frac 12g. \end{align} Furthermore, we can normalize $f$ so that, in addition to \eqref{Eq1}, we simultaneously have \begin{equation}\label{condition} \mathrm{S}+|\nabla f|^2-f=0, \end{equation} where $\mathrm{S}$ is the scalar curvature of $(M,g)$, and \begin{equation}\label{condmu} \int_M (4\pi)^{-\frac n2}e^{-f} dv=e^{\mu}, \end{equation} where $dv$ is the volume element with respect to the metric $g$, and $\mu=\mu(g,1)$ is the entropy functional of Perelman \cite{[Pe]}. By Lemma 2.5 in \cite{[LLW]}, we see that the term $e^{\mu}$ is almost equivalent to the volume of the geodesic ball $B(p,1)$ of radius $1$ centered at $p$. Here $p\in M$ is an infimum point of $f$, which can always be achieved for any complete shrinker; see \cite{[HaMu]}. For example, on the Gaussian shrinker, that is, flat $\mathbb{R}^n$ with $f(x)=|x|^2/4$, one checks directly that $\mathrm{Hess}\,f=\frac12 g$, $\mathrm{S}+|\nabla f|^2-f=0$ and $\int_{\mathbb{R}^n}(4\pi)^{-\frac n2}e^{-|x|^2/4}\,dx=1$, so that \eqref{Eq1}--\eqref{condmu} hold with $\mu=0$. Shrinkers play an important role in the Ricci flow as they correspond to self-similar solutions and usually arise as limit solutions of type I singularity models of the Ricci flow \cite{[EMT]}. They are regarded as a natural extension of Einstein manifolds with positive scalar curvature, and are related to the Bakry-\'Emery Ricci tensor \cite{[BE]}. Nowadays, the understanding of the geometry and topology of shrinkers is an important subject in the Ricci flow \cite{[Ham]}. For dimensions 2 and 3, the classification of shrinkers is complete. However, for dimensions equal to or greater than 4, the complete classification remains open; see \cite{[Cao1],[Cao2]} and references therein for nice surveys. It is an interesting phenomenon that many geometric and analytic properties of shrinkers are similar to those of manifolds with nonnegative Ricci curvature or Einstein manifolds with positive scalar curvature. Some interesting results are exhibited as follows. Wylie \cite{[Wy]} proved that any complete shrinker has a finite fundamental group (the compact case is due to Derdzi\'nski \cite{[De]}). Fang, Man and Zhang \cite{[FMZ]} showed that any non-compact shrinker with bounded scalar curvature has finite topological type. Cao and Zhou \cite{[CaZh]} confirmed that any non-compact shrinker has at most Euclidean volume growth. Munteanu and Wang \cite{[MuWa12]} proved that any non-compact shrinker has at least linear volume growth. Haslhofer and M\"uller \cite{[HaMu],[HaMu2]} proved a Cheeger-Gromov compactness theorem for shrinkers with a lower bound on their entropy and a local integral Riemann bound. Li, Li and Wang \cite{[LLW]} gave a structure theory for non-collapsed shrinkers, which was further developed by Huang, Li and Wang \cite{[HLW]}. For the $4$-dimensional case, Li and Wang \cite{[LiWa19]} proved that any nontrivial flat cone cannot be approximated by smooth shrinkers with bounded scalar curvature and a Harnack inequality under the pointed-Gromov-Hausdorff topology. Huang \cite{[Hua]} applied the strategy of Cheeger-Tian \cite{[CT]} for Einstein manifolds and proved an $\epsilon$-regularity theorem for $4$-dimensional shrinkers, confirming a conjecture of Cheeger-Tian \cite{[CT]}.
Recently, Li and Wang \cite{[LiWa]} obtained a sharp logarithmic Sobolev inequality, the Sobolev inequality, heat kernel estimates, the no-local-collapsing theorem, the pseudo-locality theorem, etc., on complete shrinkers, which can be further extended to other geometric inequalities, such as Nash inequalities, Faber-Krahn inequalities and Rozenblum-Cwikel-Lieb inequalities in \cite{[Wu]}. For more function theory on shrinkers, the interested reader is referred to \cite{[GZ],[MSW],[MuW14],[MuWa14e], [Wu15], [WW15],[WW16]} and references therein. On a manifold $M$, a set $E$ is called an \textit{end} with respect to a compact set $\Omega\subset M$ if it is an unbounded connected component of $M\backslash\Omega$. The number of ends with respect to $\Omega$, denoted by $N_\Omega(M)$, is the number of unbounded connected components of $M\backslash\Omega$. If $\Omega_1\subset\Omega_2$, then $N_{\Omega_1}(M)\le N_{\Omega_2}(M)$. Hence if $\Omega_i$ is a compact exhaustion of $M$, then $N_{\Omega_i}(M)$ is a nondecreasing sequence. If this sequence is bounded, then we say that $M$ has finitely many ends. In this case, the number of ends of $M$ is defined by \[ N(M)=\lim_{i\to\infty}N_{\Omega_i}(M). \] Obviously, the number of ends is independent of the compact exhaustion $\{\Omega_i\}$. Ends of manifolds are related to the geometry and topology of manifolds; the interested reader may refer to the book \cite{[PL]}. The Cheeger-Gromoll splitting theorem \cite{[CG]} indicates that any complete non-compact manifold with nonnegative Ricci curvature has at most two ends. Later, Cai \cite{[Cai]} and Li-Tam \cite{[LT]} independently proved that any manifold with nonnegative Ricci curvature outside a compact set has at most finitely many ends (see also Liu \cite{[Liu2]}); see \cite{[Wu16]} for an extension to smooth metric measure spaces. Cai's approach is purely geometrical, strongly depending on a local version of the Cheeger-Gromoll splitting theorem, while Li-Tam's proof is analytic in nature, taking full advantage of harmonic function theory. Liu's proof is also geometrical, not adapting the local splitting theorem but using various volume comparisons. At present, the interesting question of whether the Cheeger-Gromoll splitting theorem holds on any complete non-compact shrinker remains unresolved. As a next step in considering the number of ends, it is natural to ask \vspace{.1in} \noindent \textbf{Question}. \textit{Does any complete non-compact shrinker have finitely many ends?} \vspace{.1in} For the K\"ahler case, Munteanu and Wang \cite{[MuWa14e]} proved that any K\"ahler shrinker has only one end. For the Riemannian case, Munteanu, Schulze and Wang \cite{[MSW]} showed that the number of ends is finite when the scalar curvature satisfies a certain integral condition at infinity. Their proof depends on Li-Tam's analytic theory \cite{[LT]}. In this paper, we use a geometric covering argument and prove that \begin{theorem}\label{endest} The number of ends of an $n$-dimensional complete non-compact shrinker with scalar curvature \[ \mathrm{S}\ge \delta \] for some constant $\delta\ge 0$ has at most polynomial growth with degree $2(n-\delta)$. \end{theorem} \begin{remark} From \eqref{en1} in Section \ref{volcom}, we will see that $\mathrm{S}\ge\delta$ implies $\delta\le n/2$ on shrinkers.
From Remark \ref{levcon}, the point-wise assumption $\mathrm{S}\ge \delta$ can be replaced by a lower bound on the average of the scalar curvature over the level set $\{f<r\}:=\left\{x\in M|f(x)<r\right\}$ for any $r>0$, that is, \[ \frac{1}{\int_{\{f<r\}}dv}\int_{\{f<r\}}\mathrm{S}\,dv\ge\delta \] for any $r>0$. If the scalar curvature also has a uniform upper bound, then the degree $2(n-\delta)$ in the theorem can be reduced to $n-2\delta$; see Remark \ref{reN2}. \end{remark} The following condition introduced in \cite{[LT2]} will play an important role in this paper. \begin{definition} A Riemannian manifold $(M,g)$ satisfies the \textit{volume comparison condition} if there exists a constant $\eta>0$ such that for all $r\ge r_0$ for some $r_0>0$, and all $x\in\partial B(q,r)$, \[ \mathrm{Vol}(B(q,r))\le\eta\,\mathrm{Vol}\left(B(x,\frac{r}{16})\right), \] where $\mathrm{Vol}(B(q,r))$ is the volume of the geodesic ball $B(q,r)$ of radius $r$ centered at a fixed point $q\in M$. \end{definition} If the shrinker satisfies the volume comparison condition, we prove that \begin{theorem}\label{main1} Any complete non-compact shrinker satisfying the volume comparison condition must have finitely many ends. \end{theorem} Many special classes of shrinkers satisfy the volume comparison condition; a detailed discussion is given in Section \ref{sec4}. Here we summarize some results as follows: (I) If a manifold satisfies the volume doubling property, then it satisfies the volume comparison condition; see Proposition \ref{voldoub}. Recall that $(M,g)$ is said to satisfy the \textit{volume doubling property} if \[ \mathrm{Vol}(B(x,2r))\le D\,\mathrm{Vol}(B(x,r)) \] for any $x\in M$ and $r>0$, where $D$ is a fixed constant. Clearly, any manifold with nonnegative Ricci curvature satisfies the volume doubling property. (II) If the asymptotic scalar curvature ratio of a shrinker is finite, then the shrinker satisfies the volume comparison condition; see Proposition \ref{decc}. Given a point $q\in (M,g)$, the \textit{asymptotic scalar curvature ratio} ($\operatorname{ASCR}$) is defined by \[ \operatorname{ASCR}(g):=\underset{r(q,x)\to\infty}{\lim\sup}\,\mathrm{S}(x)\cdot r(q,x)^2, \] where $r(q,x)$ is the distance from $q$ to $x$. It is easy to see that $\operatorname{ASCR}(g)$ is independent of the base point $q$. Chow, Lu and Yang \cite{[ChLY]} proved that a non-compact non-flat shrinker has at most quadratic scalar curvature decay. Therefore, except for the flat shrinker, our assumption is in fact equivalent to $\operatorname{ASCR}(g)=c_0$ for some constant $c_0>0$, which holds at least for asymptotically conical shrinkers \cite{[KW]}. (III) Suppose a family of averaged scalar curvature integrals decays at least quadratically in the radius; precisely, suppose that for an infimum point $p\in M$ of $f$ there exists a constant $c_1>0$ such that \[ \frac{r^2}{\mathrm{Vol}\left(B(x,r)\right)}\int_{B(x,r)}\mathrm{S}\,dv\le c_1 \] for all $r>0$ and all $x\in\partial B(p,r)$. Then the shrinker satisfies the volume comparison condition; see Proposition \ref{intevc}. This class of averaged scalar curvature integrals can be regarded as energy functionals of the scalar curvature, and the condition is derived from the Li-Wang (logarithmic) Sobolev inequalities; see Lemma \ref{logeq2} or Lemma \ref{slogeq}. 
(IV) If a complete non-compact shrinker $(M,g,f)$ with an infimum point $p\in M$ of $f$ satisfies \[ \mathrm{Vol}\left(B(x,\frac{r}{16})\right)\ge c_2\,r^n \] for all $r>0$ and all $x\in\partial B(p,r)$, where $c_2$ is a positive constant, then the shrinker satisfies the volume comparison condition; see Corollary \ref{AVRc}. This condition can be regarded as a family of Euclidean volume growth conditions, which seems to be stronger than positive asymptotic volume ratio; see the end of Section \ref{sec4} for a detailed discussion. Besides, Li and Tam \cite{[LT2]} proved that if each end of a Riemannian manifold has asymptotically non-negative sectional curvature, then the manifold satisfies the volume comparison condition. Recall that $(M,g)$ has \textit{asymptotically non-negative sectional curvature} if there exists a point $q\in M$ and a continuous decreasing function $\tau:\mathbb{R}^{+}\to\mathbb{R}^{+}$ such that $\int^{+\infty}_0t\tau(t)\,dt<\infty$ and the sectional curvature $K(x)$ at any point $x\in M$ satisfies $K(x)\ge-\tau(r(q,x))$, where $r(q,x)$ is the distance from $q$ to $x$. Li and Tam \cite{[LT2]} also proved that if a Riemannian manifold with finite first Betti number has nonnegative Ricci curvature outside a compact set, then it satisfies the volume comparison condition. We refer the readers to \cite{[LT2]} for further related discussions. In contrast to Munteanu-Schulze-Wang's analytic argument, our proof of Theorem \ref{endest} is geometric and stems from Liu's approach \cite{[Liu2]}; however, we face a major obstacle due to the lack of volume comparisons at different points and radii. For manifolds with nonnegative Ricci curvature (outside a compact set), such properties come from classical relative volume comparisons. With these comparisons, Liu was able to get a ball covering property of manifolds with nonnegative Ricci curvature (outside a compact set) and hence proved finiteness of the number of ends. For shrinkers, however, we can only prove relative volume comparisons for geodesic balls centered at a base point; see Theorem \ref{relcompar} in Section \ref{volcom}. We do not know whether they hold for geodesic balls centered at other points. To overcome this difficulty, we extend the Cao-Zhou upper volume bound \cite{[CaZh]} (further developed by Munteanu-Wang \cite{[MuWa12]} and Zhang \cite{[Zh]}) to a more precise statement (see Lemma \ref{logeq1}), and we generalize the Li-Wang lower volume bound \cite{[LiWa]} (see Lemmas \ref{logeq2} and \ref{slogeq}). Applying these upper and lower volume estimates, we obtain a weak volume comparison condition; see Proposition \ref{vd} in Section \ref{sec3}. This proposition suffices to produce a weak ball covering property (see Theorem \ref{sdest} in Section \ref{sec3}) and finally leads to Theorem \ref{endest}. In particular, when the shrinker satisfies the volume comparison condition, we can prove Theorem \ref{main1} in a similar spirit. The rest of the paper is organized as follows. In Section \ref{volcom}, we will prove upper and lower relative volume comparisons of the shrinker for geodesic balls centered at a base point. We also give some upper and lower volume estimates. In Section \ref{sec3}, we will use the volume comparisons of Section \ref{volcom} to prove a weak ball covering property. Then we apply the weak ball covering property to prove Theorem \ref{endest}. In Section \ref{sec4}, when the shrinker satisfies the volume comparison condition, we will prove Theorem \ref{main1} by adapting the argument of Theorem \ref{endest}. 
Meanwhile, we will provide various sufficient conditions ensuring the volume comparison condition. In Section \ref{sec5}, we will apply the ball covering property of shrinkers to study the diameter growth of ends. Throughout this paper, we let $c(n)$ denote a constant depending only on the dimension $n$ of the shrinker $(M,g,f)$, whose value may change from line to line. \vspace{.1in} \textbf{Acknowledgements}. The author thanks Yu Li for his valuable suggestions and stimulating discussions, which improved some results in this paper. The author also thanks Guoqiang Wu for his helpful comments on an earlier version of this paper. Finally the author sincerely thanks Professor Ovidiu Munteanu for valuable comments and for pointing out a mistake in an earlier version of the paper. \section{Volume comparison}\label{volcom} In this section, we will discuss upper and lower relative volume comparisons of the shrinker for geodesic balls centered at a base point. We will also discuss upper and lower volume estimates of shrinkers. Recall that the potential $f$ of a shrinker is uniformly equivalent to the square of the distance function. Precisely, the following sharp estimate was originally established by Cao-Zhou \cite{[CaZh]} and later improved by Haslhofer-M\"uller \cite{[HaMu]}; see also Chow et al. \cite{[Chowetc]}. \begin{lemma}\label{potenesti} Let $(M,g, f)$ be an $n$-dimensional complete non-compact shrinker satisfying \eqref{Eq1} and \eqref{condition}. For any point $q\in M$, $f$ satisfies \[ \frac 14\left[\left(r(q,x)-2\sqrt{f(q)}-4n+\frac 43\right)_{+}\right]^2\le f(x)\le\frac 14\left(r(q,x)+2\sqrt{f(q)}\right)^2 \] for all $x\in M$, where $r(q,x)$ denotes the distance from $q$ to $x$. Moreover, there exists a point $p\in M$ where $f$ attains its infimum in $M$, and it satisfies $f(p)\le n/2$; in this case $f$ has the simpler estimate \[ \frac 14\left[\big(r(p,x)-5n\big)_{+}\right]^2\le f(x)\le\frac 14\left(r(p,x)+\sqrt{2n}\right)^2 \] for all $x\in M$. Here $a_+=\max\{a,0\}$ for $a\in \mathbb{R}$. \end{lemma} Chen \cite{[Chen]} proved that the scalar curvature of shrinkers has the lower bound \[ \mathrm{S}\ge 0. \] Pigola, Rimoldi and Setti \cite{[PiRS]} showed that the scalar curvature $\mathrm{S}$ is strictly positive, unless $(M,g,f)$ is the Gaussian shrinking Ricci soliton. By Lemma \ref{potenesti} and \eqref{condition}, the scalar curvature naturally has an upper bound \begin{equation}\label{scaup} \mathrm{S}(x)\le\frac 14\left(r(p,x)+\sqrt{2n}\right)^2 \end{equation} for all $x\in M$. This upper bound will be used in this paper. Recently, Li and Wang \cite{[LiWa]} applied the monotonicity of Perelman's functional along the Ricci flow and the invariance of Perelman's functional under diffeomorphism actions to obtain (logarithmic) Sobolev inequalities on complete shrinkers. \begin{lemma}\label{sobineq} Let $(M,g, f)$ be an $n$-dimensional shrinker satisfying \eqref{Eq1}, \eqref{condition} and \eqref{condmu}. Then for any $\varphi\in C^{\infty}_0(M)$ with $\int_M\varphi^2dv=1$ and any $\tau>0$, \begin{equation}\label{LSI} \mu+n+\frac n2\ln(4\pi)\le\tau\int_M\left(4|\nabla\varphi|^2+\mathrm{S}\varphi^2\right)dv-\int_M\varphi^2\ln \varphi^2dv-\frac n2\ln \tau. \end{equation} Moreover, for any $u\in C^{\infty}_0(M)$, \begin{equation}\label{sobo} \left(\int_Mu^{\frac{2n}{n-2}}dv\right)^{\frac{n-2}{n}}\le c(n)e^{-\frac{2\mu}{n}}\int_M\left(4|\nabla u|^2+\mathrm{S}u^2\right) dv. 
\end{equation} \end{lemma} The above inequalities are useful for understanding the geometry and topology of shrinkers; see the recent works \cite{[LiWa]}, \cite{[MSW]}, \cite{[Wu21]} and \cite{[Wu]}. In the following sections, we will apply them to study the volume growth of shrinkers. We now discuss some applications of the above lemmas. First, applying Lemma \ref{potenesti}, we can provide a relative volume comparison, centered at any base point, for large geodesic balls. A similar volume comparison was considered by Carrillo and Ni \cite{[CaNi]} under some extra assumptions. \begin{theorem}\label{relcompar} Let $(M,g,f)$ be a shrinker satisfying \eqref{Eq1}. For any point $q\in M$, \[ \frac{\mathrm{Vol}(B(q,R))}{\mathrm{Vol}(B(q,r))}\le 2\left(\frac{R+c}{r-c}\right)^n \] for all $R\ge r\ge 2\sqrt{n}+c$. In particular, for any $0<\alpha<1$, \[ \frac{\mathrm{Vol}(B(q,R))}{\mathrm{Vol}(B(q,\alpha R))}\le 2\left(1+\frac{2}{\alpha}\right)^n \] for all $R\ge2\alpha^{-1}(\sqrt{n}+c)$. Here $c:=2\sqrt{f(q)}+4n-4/3$. \end{theorem} \begin{proof}[Proof of Theorem \ref{relcompar}] The proof is essentially contained in the argument of Cao and Zhou \cite{[CaZh]}, and we include it for completeness. Define \[ \rho(x):=2\sqrt{f(x)}. \] By Lemma \ref{potenesti}, \[ r(q,x)-c\le\rho(x)\le r(q,x)+c, \] where $c=2\sqrt{f(q)}+4n-4/3$. Denote by \[ D(r):=\{x\in M|\rho(x)<r\}\quad \mathrm{and}\quad V(r):=\int_{D(r)}dv. \] We trace \eqref{Eq1} and get \[ \mathrm{S}+\Delta f=\frac n2. \] Integrating this equality and using some properties of shrinkers, Cao and Zhou \cite{[CaZh]} established the following interesting equality: \begin{equation}\label{VRrel} n V(r)-rV'(r)=2\int_{D(r)}\mathrm{S}\,dv-2\int_{\partial D(r)}\frac{\mathrm{S}}{|\nabla f|}dv. \end{equation} Letting \[ \chi(r):=\int_{D(r)}\mathrm{S}\,dv, \] the co-area formula gives $\chi'(r)=\int_{\partial D(r)}\frac{\mathrm{S}}{|\nabla \rho|}dv$; since $|\nabla\rho|=\frac{|\nabla f|}{\sqrt{f}}=\frac{2|\nabla f|}{\rho}$ and $\rho=r$ on $\partial D(r)$, we have $\int_{\partial D(r)}\frac{\mathrm{S}}{|\nabla f|}dv=\frac{2}{r}\chi'(r)$, and hence \eqref{VRrel} can be rewritten as \[ n V(r)-rV'(r)=2\chi(r)-\frac{4}{r}\chi'(r), \] that is, \[ (r^{-n}V(r))'=4r^{-n-2}\chi'(r)-2r^{-n-1}\chi(r). \] Integrating this from $r$ to $R$ yields \begin{equation*} \begin{aligned} R^{-n}V(R)-r^{-n}V(r)&=4R^{-n-2}\chi(R)-4r^{-n-2}\chi(r)\\ &\quad+2\int^R_rt^{-n-3}\chi(t)\left(2(n+2)-t^2\right)dt. \end{aligned} \end{equation*} For the last term of the above equality, since $\chi(t)$ is positive and increasing in $t$, and $2(n+2)-t^2\le 0$ for $t\ge\sqrt{2(n+2)}$, we have, for any $R\ge r\ge \sqrt{2(n+2)}$, \begin{equation*} \begin{aligned} 2\int^R_rt^{-n-3}\chi(t)\left(2(n+2)-t^2\right)dt&\le2\chi(r)\int^R_rt^{-n-3}\left(2(n+2)-t^2\right)dt\\ &=2\chi(r)\left(-2t^{-n-2}+\frac{t^{-n}}{n}\right){\bigg|}^R_r\\ &=-4R^{-n-2}\chi(r)+4r^{-n-2}\chi(r)+\frac2n \chi(r)(R^{-n}-r^{-n}). \end{aligned} \end{equation*} Hence, \[ R^{-n}V(R)-r^{-n}V(r)\le 4R^{-n-2}\left(\chi(R)-\chi(r)\right)+\frac2n \chi(r)(R^{-n}-r^{-n}) \] for $R\ge r\ge \sqrt{2(n+2)}$. Therefore, \begin{equation}\label{ineq1} V(R)\le (r^{-n}V(r))R^n+4R^{-2}\chi(R) \end{equation} for all $R\ge r\ge \sqrt{2(n+2)}$. On the other hand, for any $R\ge2\sqrt{n}$, we have \begin{equation}\label{ineq2} 4R^{-2}\chi(R)\le 2nR^{-2}V(R)\le \frac 12 V(R). \end{equation} Substituting \eqref{ineq2} into \eqref{ineq1} gives \[ \frac{V(R)}{V(r)}\le 2\left(\frac Rr\right)^n \] for any $R\ge r\ge 2\sqrt{n}(\ge \sqrt{2(n+2)})$. This implies \[ \frac{V(R+c)}{V(r-c)}\le 2\left(\frac{R+c}{r-c}\right)^n \] for $R\ge r\ge 2\sqrt{n}+c$, where $c:=2\sqrt{f(q)}+4n-4/3$. We also notice \[ \mathrm{Vol}(B(q,R))\le V(R+c) \quad\mathrm{and}\quad \mathrm{Vol}(B(q,r))\ge V(r-c) \] for any $R\ge0$ and $r\ge c$. 
Therefore, \[ \frac{\mathrm{Vol}(B(q,R))}{\mathrm{Vol}(B(q,r))}\le 2\left(\frac{R+c}{r-c}\right)^n \] for $R\ge r\ge 2\sqrt{n}+c$, which proves the first part of the theorem. In particular, we choose $r=\alpha R$, where $0<\alpha<1$, and the above estimate becomes \[ \frac{\mathrm{Vol}(B(q,R))}{\mathrm{Vol}(B(q,\alpha R))}\le 2\left(\frac{R+c}{\alpha R-c}\right)^n \] for $R\ge\alpha^{-1}(2\sqrt{n}+c)$. Furthermore, we let $\alpha R-c>\frac{\alpha}{2}R$, that is, $R\ge2\alpha^{-1}c$; then \[ \frac{\mathrm{Vol}(B(q,R))}{\mathrm{Vol}(B(q,\alpha R))}\le 2\left(1+\frac{2}{\alpha}\right)^n \] for $R\ge2\alpha^{-1}(\sqrt{n}+c)$. This finishes the second part of the theorem. \end{proof} Second, following the argument of \cite{[CaZh]}, we can apply Lemma \ref{potenesti} to give a reverse relative volume comparison. \begin{theorem}\label{relcompar2} Let $(M,g,f)$ be a shrinker with a base point $q\in M$ satisfying \eqref{Eq1}. If the scalar curvature satisfies $\mathrm{S}\le\sigma$ for some constant $0<\sigma<n/2$, then \[ \frac{\mathrm{Vol}(B(q,R))}{\mathrm{Vol}(B(q,r))}\ge\left(\frac{R-c}{r+c}\right)^{n-2\sigma} \] for all $R\ge r+2c$ and $r>0$, where $c:=2\sqrt{f(q)}+4n-4/3$. \end{theorem} \begin{proof}[Proof of Theorem \ref{relcompar2}] By \eqref{VRrel}, $\mathrm{S}\ge 0$ and our curvature assumption $\mathrm{S}\le\sigma$ (note that $\chi(t)\le\sigma V(t)$ and $\chi'(t)\ge 0$), we have \[ (n-2\sigma) V(t)\le tV'(t) \] for any $t\ge 0$. Integrating this inequality from $r$ to $R$, we get \[ \frac{V(R)}{V(r)}\ge\left(\frac{R}{r}\right)^{n-2\sigma} \] for any $R\ge r>0$. We also see that \[ \mathrm{Vol}(B(q,r))\le V(r+c) \quad\mathrm{and}\quad \mathrm{Vol}(B(q,R))\ge V(R-c) \] for any $r\ge0$ and $R\ge c$. Therefore, \[ \frac{\mathrm{Vol}(B(q,R))}{\mathrm{Vol}(B(q,r))}\ge\frac{V(R-c)}{V(r+c)}\ge\left(\frac{R-c}{r+c}\right)^{n-2\sigma} \] for any $R\ge r+2c$ and $r>0$. \end{proof} Next we will discuss some volume estimates of geodesic balls on shrinkers. The sharp upper volume estimate was first proved by Cao-Zhou (see Theorem 1.2 in \cite{[CaZh]}); later an explicit coefficient was given by Munteanu-Wang (see Theorem 1.4 in \cite{[MuW14]}) by using a delicate generalized Laplace comparison. Furthermore, Zhang \cite{[Zh]} proved a sharp quantitative upper volume bound for shrinkers with scalar curvature bounded from below; see also \cite{[Chowetc]}. In the following we will improve previous upper volume estimates when $r$ is not large. \begin{lemma}\label{logeq1} Let $(M,g, f)$ be an $n$-dimensional complete non-compact shrinker satisfying \eqref{Eq1}, \eqref{condition} and \eqref{condmu}. For any point $q\in M$ and for all $r\ge 0$, \[ \mathrm{Vol}(B(q,r))\le c(n)e^{f(q)}r^n. \] Moreover, if the scalar curvature $\mathrm{S}\ge\delta$ for some constant $\delta\ge 0$, then \[ \mathrm{Vol}(B(q,r))\le c(n)e^{f(q)}e^{-\frac{\delta}{r^2}}\,r^{n-2\delta} \] for all $r\ge 2\sqrt{n+2}+c$, where $c:=2\sqrt{f(q)}+4n-4/3$; in particular, if $p\in M$ is an infimum point of $f$, then \[ \mathrm{Vol}(B(p,r))\le c(n)e^{-\frac{\delta}{r^2}}\,r^{n-2\delta} \] for all $r\ge c(n)$. \end{lemma} \begin{proof}[Proof of Lemma \ref{logeq1}] The first estimate is Theorem 1.4 in \cite{[MuW14]}. So we only need to prove the second and third estimates. We remark that the second estimate with a rough coefficient has been proved by Zhang \cite{[Zh]} (see also \cite{[Chowetc]}). Here, we need to figure out the accurate coefficients, which play a key role in our applications. 
For convenience of our computation, we adopt the notations of \cite{[Zh]} (see also \cite{[Chowetc]}), which are slightly different from those in \cite{[CaZh]}. For any $t\in\mathbb{R}$, let \[ \{f<t\}:=\left\{x\in M|f(x)<t\right\} \] and define \[ \mathscr{V}(t):=\int_{\{f<t\}}dv\quad \mathrm{and} \quad \mathscr{R}(t):=\int_{\{f<t\}}\mathrm{S}dv. \] Notice that for any $q\in M$, $f(x)$ satisfies \[ \frac 14\left[(r(x,q)-c)_{+}\right]^2\le f(x)\le\frac 14\left(r(x,q)+c\right)^2, \] where $c:=2\sqrt{f(q)}+4n-4/3$. Therefore, if $r\ge c$, then \[ \left\{f<\frac 14(r-c)^2\right\}\subset B(q,r)\subset\left\{f<\frac 14(r+c)^2\right\} \] and hence \begin{equation}\label{twovolu} \mathscr{V}\left(\frac 14(r-c)^2\right)\le \mathrm{Vol}(B(q,r))\le\mathscr{V}\left(\frac 14(r+c)^2\right). \end{equation} In the present notation, \eqref{VRrel} can be rewritten as \begin{equation}\label{en1} 0\le\frac{n}{2}\mathscr{V}(t)-\mathscr{R}(t)=t\mathscr{V} ^{\prime}(t)-\mathscr{R}^{\prime}(t). \end{equation} For any $t>0$, let \[ \operatorname{P}(t):=\frac{\mathscr{V}(t) }{t^{\frac{n}{2}}}-\frac{\mathscr{R}(t)}{t^{\frac{n}{2}+1}} \quad\mathrm{and}\quad \operatorname{N}(t):=\frac{\mathscr{R}(t)}{t\mathscr{V}(t)}. \] Then \eqref{en1} implies \begin{equation}\label{relaPN} \begin{aligned} \operatorname{P}^{\prime}(t)&=-\left( 1-\frac{n+2}{2t}\right) \frac {\mathscr{R}(t)}{t^{\frac{n}{2}+1}}\\ &=-\frac{\left(1-\frac{n+2}{2t}\right) \operatorname{N}(t)} {1-\operatorname{N}(t)}\operatorname{P}(t). \end{aligned} \end{equation} This implies that $\operatorname{P}(t)$ is decreasing, and \begin{equation}\label{relaPV} \left(1-\frac{n}{2t}\right) \frac{\mathscr{V}(t)}{t^{\frac{n}{2}}} \le\operatorname{P}(t)\le\frac{\mathscr{V}(t)}{t^{\frac{n}{2}}} \end{equation} for $t\ge n/2+1$, where we used $\frac{\mathscr{R}(t)}{\mathscr{V}(t)}\le n/2$. Integrating \eqref{relaPN} gives \[ \operatorname{P}(t)=\operatorname{P}(n+2) e^{-\int_{n+2}^t\frac{\left( 1-\frac{n+2}{2\tau}\right) \operatorname{N}(\tau)} {1-\operatorname{N}(\tau)}d\tau} \] for all $t\ge n+2$. Since $\mathrm{S}\ge\delta$, we have $\operatorname{N}(\tau)\ge\delta/\tau$. Also, since $\frac{\operatorname{N}(\tau)}{1-\operatorname{N}(\tau)}$ is increasing in $\operatorname{N}(\tau)$, the above equality can be estimated by \begin{equation*} \begin{aligned} \operatorname{P}(t)&\le\operatorname{P}(n+2) e^{-\int_{n+2}^t\left(1-\frac{n+2}{2\tau}\right)\frac{\delta}{\tau-\delta}d\tau}\\ &\le\operatorname{P}(n+2) e^{-\int_{n+2}^t\left(1-\frac{n+2}{2\tau}\right)\frac{\delta}{\tau}d\tau}\\ &=\operatorname{P}(n+2) (n+2)^{\delta}e^{\frac{\delta}{2}}e^{-\frac{n+2}{2t}\delta}\,t^{-\delta} \end{aligned} \end{equation*} for all $t\ge n+2$. Combining this with \eqref{relaPV}, \begin{equation}\label{mathvup} \mathscr{V}(t)\le c(n)\operatorname{P}(n+2) e^{-\frac{n+2}{2t}\delta}\, t^{\frac n2-\delta} \end{equation} for all $t\ge n+2$, where we used $\delta< n/2$. By Lemma \ref{potenesti}, $B(q, 2\sqrt{t}-c)\subset \{f<t\}$, where $c:=2\sqrt{f(q)}+4n-4/3$; combining this with \eqref{twovolu}, it follows that \[ \mathrm{Vol}\left(B(q, 2\sqrt{t}-c)\right)\le\mathscr{V}(t) \] for $t\ge c^2/4$. Combining this with \eqref{mathvup} yields \[ \mathrm{Vol}\left(B(q, 2\sqrt{t}-c)\right)\le c(n)\operatorname{P}(n+2) e^{-\frac{n+2}{2t}\delta}\, t^{\frac n2-\delta} \] for all $t\ge n+2+c^2/4$, so that \[ \mathrm{Vol}\left(B(q, r)\right)\le c(n)\operatorname{P}(n+2) e^{-\frac{2(n+2)\delta}{(r+c)^2}}\left(\frac{r+c}{2}\right)^{n-2\delta} \] for all $r\ge2\sqrt{n+2}$. Note also that for $r\ge c$ we have $r+c\le 2r$, so $\left(\frac{r+c}{2}\right)^{n-2\delta}\le r^{n-2\delta}$ (recall $\delta\le n/2$) and $e^{-\frac{2(n+2)\delta}{(r+c)^2}}\le e^{-\frac{(n+2)\delta}{2r^2}}\le e^{-\frac{\delta}{r^2}}$; this will be used below. 
Noticing that \[ \operatorname{P}(n+2)\le\frac{\mathscr{V}(n+2)}{(n+2)^{\frac{n}{2}}} \quad\mathrm{and}\quad \mathscr{V}(n+2)\le \mathrm{Vol}\left(B(q,2\sqrt{n+2}+c)\right), \] we obtain \[ \mathrm{Vol}\left(B(q, r)\right)\le c(n)\mathrm{Vol}\left(B(q,2\sqrt{n+2}+c)\right) e^{-\frac{\delta}{r^2}}\,r^{n-2\delta} \] for all $r\ge2\sqrt{n+2}+c$. Therefore the second estimate follows by applying the first estimate of Lemma \ref{logeq1}: \begin{equation*} \begin{aligned} \mathrm{Vol}\left(B(q,2\sqrt{n+2}+c)\right)&\le c(n)e^{f(q)}(2\sqrt{n+2}+c)^n\\ &\le c(n)e^{f(q)}, \end{aligned} \end{equation*} where we used the fact that \[ e^{f(q)}(2\sqrt{n+2}+c)^n\le c(n)e^{f(q)}f(q)^{n/2}\le\widetilde{c}(n)e^{f(q)}. \] Finally, the third estimate of the lemma follows from the second estimate and the basic fact that $f(p)\le n/2$. \end{proof} \begin{remark}\label{levcon} The above argument shows that the point-wise condition on the scalar curvature in Lemma \ref{logeq1} can be replaced by a condition on the average of the scalar curvature over the level sets $\{f<r\}$, that is, \[ \frac{1}{\int_{\{f<r\}}dv}\int_{\{f<r\}}\mathrm{S}\,dv\ge\delta \] for any $r>0$. This is because we only used $\frac{\mathscr{R}(t)}{\mathscr{V}(t)}\ge\delta$ in the proof of Lemma \ref{logeq1}. \end{remark} For a lower volume estimate, a sharp version was proved by Munteanu-Wang (see Theorem 1.6 in \cite{[MuWa12]} or Theorem 1.4 in \cite{[MuW14]}). However, the coefficients of these estimates all depend on a base point, which causes trouble for our purposes. So in the following we shall adopt Li-Wang's local lower volume estimate at an arbitrary base point, which comes from the Sobolev inequality (see Theorem 23 in \cite{[LiWa]}). This estimate is more useful when $r$ is sufficiently large. \begin{lemma}\label{logeq2} Let $(M,g, f)$ be an $n$-dimensional complete non-compact shrinker satisfying \eqref{Eq1}, \eqref{condition} and \eqref{condmu}. For any point $q\in M$ and for any $r>0$, \[ \frac{\mathrm{Vol}(B(q,r))}{r^n} \left[1+\sup_{s\in[0,r]}\frac{s^2\int_{B(q,s)}\mathrm{S}\,dv}{\mathrm{Vol}(B(q,s))}\right]^{n/2} \ge c(n)e^{\mu}. \] In particular, if the scalar curvature $\mathrm{S}\le\Lambda$ for some constant $\Lambda\ge 0$ in $B(q,r)\subset M$, then \[ \frac{\mathrm{Vol}(B(q,r))}{r^n}(1+\Lambda r^2)^{n/2}\ge c(n)e^{\mu}. \] \end{lemma} \begin{proof}[Proof of Lemma \ref{logeq2}] The argument is essentially the same as the proof of Theorem 23 in \cite{[LiWa]}. For the reader's convenience, we provide a detailed proof. For a base point $q\in M$, we choose $r_0\in[0,r]$ such that \[ \inf_{s\in[0,r]}\frac{\mathrm{Vol}(B(q,s))}{s^n} \] is attained at $r_0$. Below we discuss the two cases $r_0=0$ and $r_0>0$ separately. Case one: $r_0=0$. We have \[ \mathrm{Vol}(B(q,r))\ge \omega_n r^n, \] where $\omega_n$ is the volume of the unit Euclidean $n$-ball. Now we claim that $\mu\le 0$. Indeed, for $\tau\to 0+$, we have that $(M^n,q,\tau^{-1}g)$ converges to the Euclidean space $(\mathbb{R}^n,0,g_E)$ smoothly in the Cheeger-Gromov sense. By Lemma 3.2 of \cite{[Liy]}, we know \[ \underset{\tau\to 0+}{\lim\sup}\,\mu(g,\tau) =\underset{\tau\to 0+}{\lim\sup}\,\mu(\tau^{-1}g,1) \le \mu(g_E,1)=0. \] Also, since $\mu(g,\tau)\ge \mu(g,1)=\mu$ for each $\tau\in(0,1)$ by Lemma 15 in \cite{[LiWa]}, the claim $\mu\le 0$ follows. Hence the estimate in Case one follows. Case two: $r_0>0$. Let $\phi:\mathbb{R}\to[0,1]$ be a smooth function such that $\phi(t)=1$ on $(-\infty,1/2]$, $\phi(t)=0$ on $[1,+\infty)$ and $|\phi'|\le 2$ on $[0,\infty)$. 
For any point $q\in M$, let \[ u(x):=\phi\left(\frac{r(q,x)}{r_0}\right). \] Clearly, $u$ is supported in $B(q, r_0)$ and it satisfies $|\nabla u|\le2r_0^{-1}$. We substitute the above special function $u$ into \eqref{sobo} of Lemma \ref{sobineq} and get \begin{equation*} \begin{aligned} \mathrm{Vol}\left(B(q,\frac{r_0}{2})\right)^{\frac{n-2}{n}} &\le c(n)e^{-\frac{2\mu}{n}}\int_{B(q,r_0)}\left(4|\nabla u|^2+\mathrm{S}u^2\right)dv\\ &\le c(n)e^{-\frac{2\mu}{n}}\frac{\mathrm{Vol}(B(q,r_0))}{r^2_0} \left[1+\frac{r^2_0\int_{B(q,r_0)}\mathrm{S}\,dv}{\mathrm{Vol}(B(q,r_0))}\right]. \end{aligned} \end{equation*} From the choice of $r_0$, we see that \[ \mathrm{Vol}\left(B(q,\frac{r_0}{2})\right)\ge \frac{\mathrm{Vol}(B(q,r_0))}{2^n}. \] Combining the above two inequalities yields \[ \frac{\mathrm{Vol}(B(q,r_0))}{r^n_0} \left[1+\frac{r^2_0\int_{B(q,r_0)}\mathrm{S}\,dv}{\mathrm{Vol}(B(q,r_0))}\right]^{n/2} \ge c(n)e^{\mu}. \] According to the definition of $r_0$, we have \[ \frac{\mathrm{Vol}(B(q,r))}{r^n}\ge \frac{\mathrm{Vol}(B(q,r_0))}{r^n_0}. \] Combining the above two inequalities gives the conclusion of Case two. \end{proof} At the end of this section, we give another version of the lower volume estimate by using the logarithmic Sobolev inequality \eqref{LSI}, which is sharper than Lemma \ref{logeq2} when $r$ is not sufficiently large. \begin{lemma}\label{slogeq} Let $(M,g, f)$ be an $n$-dimensional complete non-compact shrinker satisfying \eqref{Eq1}, \eqref{condition} and \eqref{condmu}. For any point $q\in M$, \begin{equation}\label{LSIequ} \mu+n+\frac n2\ln(4\pi)+16(1-2\cdot 5^n)\le 2\cdot 5^n r^2\frac{\int_{B(q,r)}\mathrm{S}\,dv}{\mathrm{Vol}(B(q,r))} +\ln \frac{\mathrm{Vol}(B(q,r))}{r^n} \end{equation} for any $r\ge 4(\sqrt{n}+c)$, where $c:=2\sqrt{f(q)}+4n-4/3$. \end{lemma} \begin{proof}[Proof of Lemma \ref{slogeq}] Let $\phi:[0,\infty)\to[0,1]$ be a smooth cut-off function supported in $[0,1]$ such that $\phi(t)=1$ on $[0,1/2]$ and $|\phi'|\le 2$ on $[0,\infty)$. For any $q\in M$ and any $r>0$, let \[ \varphi(x):=e^{-\theta/2}\phi\left(\frac{r(q,x)}{r}\right), \] where $\theta$ is a constant determined by the condition $\int_M\varphi^2dv=1$. Clearly, $\varphi$ is supported in $B(q,r)$ and it satisfies $|\nabla\varphi|\le 2r^{-1}\cdot e^{-\theta/2}$. Moreover, $\theta$ satisfies \[ \mathrm{Vol}\left(B(q,\frac r2)\right)\le e^{\theta}\int_M\varphi^2dv=e^{\theta} \] and \[ e^{\theta}=e^{\theta}\int_M\varphi^2dv =\int_M\phi^2\left(\frac{r(q,x)}{r}\right)dv\le\mathrm{Vol}(B(q,r)). \] Now we shall substitute the above cut-off function $\varphi$ into Lemma \ref{sobineq} to simplify the inequality \eqref{LSI}. First, by the definition of $\varphi$ and the lower bound of $e^{\theta}$, we have \begin{equation}\label{est1} \begin{aligned} 4\tau\int_M |\nabla\varphi|^2dv&=4\tau\int_{B(q,r)\backslash B(q,\frac r2)} |\nabla\varphi|^2dv\\ &\le\frac{16\tau}{r^2}\left[\mathrm{Vol}(B(q,r))-\mathrm{Vol}\left(B(q,\frac r2)\right)\right]e^{-\theta}\\ &\le\frac{16\tau}{r^2}\left[\frac{\mathrm{Vol}(B(q,r))}{\mathrm{Vol}\left(B(q,\frac r2)\right)}-1\right]\\ &\le 16(2\cdot 5^n-1)\frac{\tau}{r^2} \end{aligned} \end{equation} for all $r\ge 4(\sqrt{n}+c)$, where $c:=2\sqrt{f(q)}+4n-4/3$. In the last inequality, we used Theorem \ref{relcompar} in the following form: \[ \frac{\mathrm{Vol}(B(q,r))}{\mathrm{Vol}(B(q,\frac r2))}\le 2\cdot5^n \] for any $r\ge 4(\sqrt{n}+c)$. 
Second, by the definition of $\varphi$ and the lower bound of $e^{\theta}$, we have the estimate \begin{equation}\label{est2} \begin{aligned} \tau\int_M\mathrm{S}\varphi^2 dv&\le \tau e^{-\theta}\int_{B(q,r)}\mathrm{S}\,dv\\ &\le\frac{\tau}{\mathrm{Vol}\left(B(q,\frac r2)\right)}\int_{B(q,r)}\mathrm{S}\,dv\\ &\le2\cdot5^n\frac{\tau\int_{B(q,r)}\mathrm{S}\,dv}{\mathrm{Vol}(B(q,r))} \end{aligned} \end{equation} for all $r\ge 4(\sqrt{n}+c)$, where $c:=2\sqrt{f(q)}+4n-4/3$. Here we again used Theorem \ref{relcompar} in the last inequality. Third, we will apply Jensen's inequality to estimate the term $-\int_M\varphi^2\ln \varphi^2dv$. Since the smooth function $H(t):=-t\ln t$ is concave for $t>0$, applying Jensen's inequality over $B(q,r)$ with respect to the measure $dv$, \[ \frac{\int H(\varphi^2)dv}{\int dv}\leq H\left(\frac{\int \varphi^2 dv}{\int dv}\right), \] and using the definition of $H(t)$, we obtain \[ -\frac{\int_{B(q,r)}\varphi^2\ln\varphi^2dv}{\int_{B(q,r)}dv} \leq-\frac{\int_{B(q,r)}\varphi^2dv}{\int_{B(q,r)}dv}\ln\left(\frac{\int_{B(q,r)}\varphi^2dv}{\int_{B(q,r)}dv}\right). \] Since $\int_{B(q,r)}\varphi^2dv=1$, we further obtain the simple form \[ -\int_{B(q,r)}\varphi^2\ln\varphi^2dv\le\ln \mathrm{Vol}(B(q,r)). \] Therefore, \begin{equation}\label{est3} \begin{aligned} -\int_M\varphi^2\ln\varphi^2dv&=-\int_{B(q,r)}\varphi^2\ln\varphi^2dv\\ &\le\ln \mathrm{Vol}(B(q,r)). \end{aligned} \end{equation} Now we substitute \eqref{est1}, \eqref{est2} and \eqref{est3} into \eqref{LSI} and get \[ \mu+n+\frac n2\ln(4\pi)\le 16(2\cdot 5^n-1)\frac{\tau}{r^2} +2\cdot5^n\frac{\tau\int_{B(q,r)}\mathrm{S}\,dv}{\mathrm{Vol}(B(q,r))} +\ln \frac{\mathrm{Vol}(B(q,r))}{\tau^{\frac n2}} \] for any $\tau>0$ and any $r\ge 4(\sqrt{n}+c)$, where $c:=2\sqrt{f(q)}+4n-4/3$. Finally, we let $\tau=r^2$ and the result follows. \end{proof} \section{Ends on a general shrinker}\label{sec3} In this section, we will give a weak ball covering property, depending on the radius, for a general shrinker without any additional assumption. Then we will apply the weak ball covering property to prove Theorem \ref{endest}. With the help of Lemmas \ref{logeq1} and \ref{logeq2}, we first establish a weak volume comparison condition on shrinkers. \begin{proposition}\label{vd} Let $(M,g, f)$ be an $n$-dimensional complete non-compact shrinker with an infimum point $p\in M$ of $f$ satisfying \eqref{Eq1}, \eqref{condition} and \eqref{condmu}. If the scalar curvature satisfies $\mathrm{S}\ge\delta$ for some constant $\delta\ge 0$, then for any $r\ge c(n)$ and for any $x\in \overline{B(p,2r)}$, \[ \frac{\mathrm{Vol}(B(p,2r))}{\mathrm{Vol}\left(B(x,\tfrac{r}{8})\right)}\le c(n)e^{-\mu}r^{2(n-\delta)}. \] In addition, if the scalar curvature $\mathrm{S}\le \sigma$ for some constant $\sigma\ge \delta$ in $M$, then \[ \frac{\mathrm{Vol}(B(p,2r))}{\mathrm{Vol}\left(B(x,\tfrac{r}{8})\right)}\le c(n)e^{-\mu}\sigma^{n/2}r^{n-2\delta} \] for any point $x\in M$ and for any $r\ge c(n)$. \end{proposition} \begin{proof}[Proof of Proposition \ref{vd}] By Lemma \ref{logeq1}, we have \begin{equation}\label{upp} \mathrm{Vol}(B(p,2r))\le c(n)r^{n-2\delta} \end{equation} for any $r\ge c(n)$. On the other hand, the second estimate of Lemma \ref{logeq2} shows that \begin{equation}\label{low} \mathrm{Vol}(B(x,r))\ge c(n)e^{\mu}r^n(1+\Lambda r^2)^{-n/2} \end{equation} provided $\mathrm{S}\le\Lambda$ in $B(x,r)\subset M$. Now we want to find an upper bound for the scalar curvature $\mathrm{S}$ in $B(x,r)$. 
From \eqref{scaup}, we know \[ \mathrm{S}(y)\le\frac 14\left(r(y,p)+\sqrt{2n}\right)^2 \] for all $y\in B(x,r)$. Since $x\in\overline{B(p,2r)}$, by the triangle inequality, we further have \begin{equation*} \begin{aligned} \mathrm{S}(y)&\le\frac 14\left(r(y,x)+r(x,p)+\sqrt{2n}\right)^2\\ &\le\frac 14\left(r+2r+\sqrt{2n}\right)^2\\ &\le\frac 14(3+\sqrt{2})^2r^2 \end{aligned} \end{equation*} for all $y\in B(x,r)$ and all $r\ge\sqrt{n}$. Substituting this into \eqref{low} yields \[ \mathrm{Vol}(B(x,r))\ge c(n)e^{\mu}r^{-n} \] for all $r\ge\sqrt{n}$. Combining this with \eqref{upp} immediately yields the first estimate of the proposition. Next we will prove the second part of the proposition. Since we also assume that $\mathrm{S}\le \sigma$ for some constant $\sigma\ge\delta$ in $M$, substituting this into \eqref{low} we have \[ \mathrm{Vol}(B(x,r))\ge c(n)e^{\mu}\sigma^{-n/2} \] for any point $x\in M$ and any $r\ge 1$. Combining this with \eqref{upp} gives the second estimate. \end{proof} Inspired by Liu's argument \cite{[Liu2]}, we shall apply Proposition \ref{vd} to give a weak ball covering property for sufficiently large balls in a shrinker without any additional assumption. Our argument focuses on a sufficiently large fixed radius. \begin{theorem}\label{sdest} Let $(M,g,f)$ be a complete non-compact shrinker with an infimum point $p\in M$ of $f$ satisfying \eqref{Eq1}, \eqref{condition} and \eqref{condmu}. If the scalar curvature satisfies $\mathrm{S}\ge\delta$ for some constant $\delta\ge 0$, then for sufficiently large $r\ge c(n)$, there exists \[ N=c(n)e^{-\mu}r^{2(n-\delta)} \] such that we can find points $p_1,\ldots, p_k\in B(p,2r)\backslash\overline{B(p,r)}$, where $k=k(r)\le N$, with \[ \bigcup^k_{i=1}B\left(p_i,\frac{r}{4}\right)\supset B(p,2r)\backslash\overline{B(p,r)}. \] \end{theorem} \begin{proof}[Proof of Theorem \ref{sdest}] For a sufficiently large fixed $r\ge c(n)$, we let $k:=k(r)$ denote the maximum number of disjoint geodesic balls of radius $r/8$ with centers $p_1,\ldots, p_k$ in $B(p,2r)\backslash\overline{B(p,r)}$. Obviously, in this case, \[ \bigcup^k_{i=1}B\left(p_i,\frac{r}{4}\right)\supset B(p,2r)\backslash\overline{B(p,r)}. \] See Figure 1 for a detailed description. \begin{figure} \centering \includegraphics[scale=0.6]{covpic.pdf} \caption{ \footnotesize The annulus is covered by small balls} \label{Fig1} \end{figure} Since $p_i\in B(p,2r)\backslash\overline{B(p,r)}$, we may let $p_i\in\partial B(p,\beta_i r)$ for some $1<\beta_i<2$, where $i=1,\ldots,k$. By the first estimate of Proposition \ref{vd}, we have \begin{equation*} \begin{aligned} \mathrm{Vol}(B(p,\beta_i r))&\le\mathrm{Vol}(B(p,2r))\\ &\le c(n)e^{-\mu}r^{2(n-\delta)}\mathrm{Vol}\left(B(p_i,\frac{r}{8})\right) \end{aligned} \end{equation*} for $r\ge c(n)$. By Theorem \ref{relcompar}, we also have \[ \mathrm{Vol}(B(p,3r))\le2\left(1+\frac{6}{\beta_i}\right)^n\mathrm{Vol}\left(B(p,\beta_i r)\right) \] for $r\ge2(\sqrt{n}+c)$, where $c:=2\sqrt{n/2}+4n-4/3$ and $i=1,\ldots,k$. Combining the above two estimates, for each $i$, \[ \mathrm{Vol}(B(p,3r))\le c(n)e^{-\mu}r^{2(n-\delta)} \mathrm{Vol}\left(B(p_i,\frac{r}{8})\right) \] for $r\ge c(n)$, where we used $1<\beta_i<2$. Summing the above $k$ inequalities, we get \[ k(r)\mathrm{Vol}(B(p,3r))\le c(n)e^{-\mu}r^{2(n-\delta)} \sum^k_{i=1}\mathrm{Vol}\left(B(p_i,\frac{r}{8})\right) \] for $r\ge c(n)$. On the other hand, we easily see that \[ \sum^k_{i=1}\mathrm{Vol}\left(B(p_i,\frac{r}{8})\right)\le \mathrm{Vol}(B(p,3r)). 
\] Combining the above two estimates gives \[ k(r)\le c(n)e^{-\mu}r^{2(n-\delta)} \] for $r\ge c(n)$, which completes the proof. \end{proof} \begin{remark}\label{reN1} In Theorem \ref{sdest}, if the scalar curvature also satisfies $\mathrm{S}\le\sigma$ for some constant $\sigma\ge\delta$ in $M$, then for sufficiently large $r$ we can choose $N$ with degree $n-2\delta$ as follows: \[ N=c(n)e^{-\mu}\sigma^{n/2}r^{n-2\delta}. \] \end{remark} The above weak ball covering property immediately implies Theorem \ref{endest}. \begin{proof}[Proof of Theorem \ref{endest}] Let $(M,g,f)$ be an $n$-dimensional complete non-compact shrinker satisfying \eqref{Eq1}, \eqref{condition} and \eqref{condmu}. Since the number of ends of the shrinker is independent of the choice of the base point, we can choose an infimum point $p$ of $f$ as a base point in $M$. Given a sufficiently large fixed number $r$, let \[ N_1=c(n)e^{-\mu}r^{2(n-\delta)} \] as in Theorem \ref{sdest}. That is, we can find points $p_1,\ldots, p_k\in B(p,2r)\backslash\overline{B(p,r)}$, where $k=k(r)\le N_1$, with \[ \bigcup^k_{i=1}B\left(p_i,\frac{r}{4}\right)\supset B(p,2r)\backslash\overline{B(p,r)}. \] Next we will prove Theorem \ref{endest} by a contradiction argument. If Theorem \ref{endest} is not true, that is, if the number of ends grows faster than polynomially with degree $2(n-\delta)$, then for the above-mentioned sufficiently large $r$ there exist more than \[ \widetilde{N}_1=c(n)e^{-\mu}r^{2(n-\delta)+\epsilon} \] unbounded ends $E_j$ with respect to $\overline{B(p,r)}$, where $\epsilon>0$ is an arbitrarily small constant. It is obvious that geodesic balls of radius $r/4$ with centers in different components $E_j\cap B(p,2r)$ do not intersect. Thus we need at least $\widetilde{N}_1$ geodesic balls of radius $r/4$ to cover the sets $E_j\cap B(p,2r)\subset B(p,2r)\backslash\overline{B(p,r)}$, which contradicts Theorem \ref{sdest}. \end{proof} \begin{remark}\label{reN2} For Theorem \ref{endest}, if the scalar curvature satisfies $\mathrm{S}\le\sigma$ for some constant $\sigma\ge\delta$ in $M$, then we can apply Remark \ref{reN1} to the above argument and get the same conclusion with the degree $2(n-\delta)$ of polynomial growth reduced to $n-2\delta$. \end{remark} \section{Ends with volume comparison condition}\label{sec4} In this section we will discuss the finiteness of the number of ends when the shrinker satisfies the volume comparison condition. In this case we first give a ball covering property, which is similar to the case of manifolds with nonnegative Ricci curvature. \begin{theorem}\label{shrendest} Let $(M,g,f)$ be an $n$-dimensional complete non-compact shrinker with a base point $q\in M$ satisfying the volume comparison condition. There exists a constant \[ N=N(n,\eta) \] depending only on $n$ and $\eta$ such that for any $r\ge2(\sqrt{n}+c)+r_0$, where $c:=2\sqrt{f(q)}+4n-4/3$, we can find $p_1,\ldots, p_k\in B(q,2r)\backslash\overline{B(q,r)}$, $k\le N$, with \[ \bigcup^k_{i=1}B\left(p_i,\frac{r}{4}\right)\supset B(q,2r)\backslash\overline{B(q,r)}. \] \end{theorem} \begin{proof}[Proof of Theorem \ref{shrendest}] Let $k$ be the maximum number of disjoint geodesic balls of radius $r/8$ with centers $p_1,\ldots, p_k$ in $B(q,2r)\backslash\overline{B(q,r)}$. Here we choose $r$ sufficiently large such that $r\ge2(\sqrt{n}+c)+r_0$. Clearly, \[ \bigcup^k_{i=1}B\left(p_i,\frac{r}{4}\right)\supset B(q,2r)\backslash\overline{B(q,r)}. 
\] Since $p_i\in B(q,2r)\backslash\overline{B(q,r)}$, we may let $p_i\in\partial B(q,\beta_i r)$ for some constant $1<\beta_i<2$, where $i=1,\ldots,k$. By the volume comparison condition, we have \begin{equation*} \begin{aligned} \mathrm{Vol}(B(q,\beta_i r))&\le\eta \mathrm{Vol}\left(B(p_i,\frac{\beta_ir}{16})\right)\\ &\le\eta \mathrm{Vol}\left(B(p_i,\frac{r}{8})\right) \end{aligned} \end{equation*} for all $r\ge r_0$. By Theorem \ref{relcompar}, we see that \[ \mathrm{Vol}(B(q,3r))\le2\left(1+\frac{6}{\beta_i}\right)^n\mathrm{Vol}(B(q,\beta_i r)) \] for $r\ge2\beta_i^{-1}(\sqrt{n}+c)$, where $i=1,\ldots,k$. Combining the above two estimates, for each $i$, there exists a constant $C(n,\eta)$ depending only on $n$ and $\eta$ such that \[ \mathrm{Vol}(B(q,3r))\le C(n,\eta) \mathrm{Vol}\left(B(p_i,\frac{r}{8})\right) \] for $r\ge2(\sqrt{n}+c)+r_0$, where $c:=2\sqrt{f(q)}+4n-4/3$ and we used $1<\beta_i<2$, for $i=1,\ldots,k$. This implies \[ k\mathrm{Vol}(B(q,3r))\le C(n,\eta) \sum^k_{i=1}\mathrm{Vol}\left(B(p_i,\frac{r}{8})\right) \] for $r\ge2(\sqrt{n}+c)+r_0$. On the other hand, \[ \sum^k_{i=1}\mathrm{Vol}\left(B(p_i,\frac{r}{8})\right)\le \mathrm{Vol}(B(q,3r)). \] Combining the above two inequalities yields $k\le C(n,\eta)$ and the result follows. \end{proof} Similar to the discussion in Section \ref{sec3}, we can apply Theorem \ref{shrendest} to prove Theorem \ref{main1}. We include the argument for completeness. \begin{proof}[Proof of Theorem \ref{main1}] Under the assumptions of Theorem \ref{main1}, we let $N_2=N(n,\eta)$ be as in Theorem \ref{shrendest}. If Theorem \ref{main1} is not true, we can take $r$ large enough such that there exist more than $N_2$ unbounded ends $E_j$ with respect to $\overline{B(q,r)}$. The sets $E_j\cap B(q,2r)$ lie in $B(q,2r)\backslash\overline{B(q,r)}$, and geodesic balls of radius $r/4$ with centers in different components $E_j\cap B(q,2r)$ do not intersect. Hence we need more than $N_2$ geodesic balls of radius $r/4$ to cover the sets $E_j\cap B(q,2r)$, which contradicts Theorem \ref{shrendest}. \end{proof} In the rest of this section, we will discuss four sufficient conditions under which a shrinker satisfies the volume comparison condition. As is well known, if $(M,g)$ has nonnegative Ricci curvature everywhere, then it satisfies the volume comparison condition. Indeed, the volume doubling property alone already implies the volume comparison condition. \begin{proposition}\label{voldoub} Let $(M,g)$ be an $n$-dimensional complete manifold satisfying the volume doubling property. Then for all $0<r<R<\infty$ and all $x\in M$ and $y\in\overline{B(x,R)}$, \[ \frac{\mathrm{Vol}(B(x,R))}{\mathrm{Vol}(B(y,r))}\le D^2\left(\frac{R}{r}\right)^\kappa, \] where $\kappa=\log_2D$. In particular, $(M,g)$ satisfies the volume comparison condition. \end{proposition} \begin{proof}[Proof of Proposition \ref{voldoub}] Assume $(M,g)$ satisfies the volume doubling property, that is, \[ \mathrm{Vol}(B(x,2r))\le D\,\mathrm{Vol}(B(x,r)) \] for any $x\in M$ and $r>0$, where $D$ is a fixed constant. Let $m$ be a nonnegative integer such that $2^m<R/r\le 2^{m+1}$. Since \[ B(x,R)\subset B(y,2R)\subset B(y,2^{m+2}r) \] and thus \[ \mathrm{Vol}(B(x,R))\le\mathrm{Vol}(B(y,2^{m+2}r)), \] we have \begin{equation*} \begin{aligned} \mathrm{Vol}(B(x,R))&\le D^{m+2}\mathrm{Vol}(B(y,r))\\ &\le D^2\left(\frac{R}{r}\right)^{\kappa}\mathrm{Vol}(B(y,r)), \end{aligned} \end{equation*} where $\kappa=\log_2D$. This proves the first estimate. 
In particular, when $y\in \partial B(x,R)$, we let $r=R/16$ in the first estimate and immediately obtain the volume comparison condition. \end{proof} Second, we observe that a shrinker with at least quadratic decay of the scalar curvature enjoys a non-collapsing property and hence satisfies the volume comparison condition. \begin{proposition}\label{decc} Let $(M,g,f)$ be a complete non-compact shrinker with an infimum point $p\in M$ of $f$ satisfying \eqref{Eq1}, \eqref{condition} and \eqref{condmu}. If the scalar curvature satisfies \[ \mathrm{S}(x)\cdot r^2(p,x)\le c_0 \] for any $r(p,x)>0$, where $c_0>0$ is a constant and $r(p,x)$ is the distance function from $p$ to $x$, then the shrinker satisfies the volume comparison condition. In particular, any shrinker with finite asymptotic scalar curvature ratio satisfies the volume comparison condition. \end{proposition} \begin{proof}[Proof of Proposition \ref{decc}] For any $1/32\le\alpha\le 1/2$, any $r>0$ and any point $q\in \partial B(p,r)$, by the second estimate of Lemma \ref{logeq2}, we have \[ (\alpha r)^{-n}\mathrm{Vol}(B(q,\alpha r))\ge c(n)e^{\mu} \left[1+\frac{c_0}{(1-\alpha)^2r^2}\cdot(\alpha r)^2\right]^{-\frac n2}, \] where we used \[ \mathrm{S}\le\frac{c_0}{r^2(p,x)}\le\frac{c_0}{(1-\alpha)^2r^2} \quad\text{on } B(q,\alpha r). \] Namely, for any $r>0$ and any point $q\in \partial B(p,r)$, \begin{equation*} \begin{aligned} \mathrm{Vol}(B(q,\alpha r))&\ge c(n)e^{\mu}\left[1+\frac{c_0\alpha^2}{(1-\alpha)^2}\right]^{-\frac n2}\alpha^n\cdot r^n\\ &\ge c(n,c_0)e^{\mu}r^n \end{aligned} \end{equation*} for some constant $c(n,c_0)$ depending only on $n$ and $c_0$, where we used $1/32\le\alpha\le 1/2$. On the other hand, by Lemma \ref{logeq1}, \[ \mathrm{Vol}(B(p,r))\le c(n)r^n \] for any $r>0$. Thus, for any $r>0$ and any point $q\in \partial B(p,r)$, the lower and upper volume estimates give \[ \frac{\mathrm{Vol}(B(p,r))}{\mathrm{Vol}(B(q,\alpha r))}\le c(n,c_0)e^{-\mu}. \] Letting $\alpha=1/16$ shows that such a shrinker satisfies the volume comparison condition. \end{proof} The proof of Proposition \ref{decc} indicates that finite asymptotic scalar curvature ratio implies positive asymptotic volume ratio. Moreover, combining Proposition \ref{decc} and Theorem \ref{main1}, we easily get the following result due to Munteanu, Schulze and Wang \cite{[MSW]}. \begin{corollary}\label{cor} Any complete non-compact shrinker with finite asymptotic scalar curvature ratio must have finitely many ends. \end{corollary} Third, we see that if a family of averaged scalar curvature integrals decays at least quadratically in the radius, then the shrinker also satisfies the volume comparison condition. \begin{proposition}\label{intevc} Let $(M,g,f)$ be a complete non-compact shrinker with an infimum point $p\in M$ of $f$ satisfying \eqref{Eq1}, \eqref{condition} and \eqref{condmu}. If there exists a constant $c_1>0$ such that \begin{equation}\label{intsca} \frac{r^2}{\mathrm{Vol}\left(B(x,r)\right)}\int_{B(x,r)}\mathrm{S}\,dv\le c_1 \end{equation} for all $r>0$ and all $x\in\partial B(p,r)$, then the shrinker satisfies the volume comparison condition. \end{proposition} \begin{proof}[Proof of Proposition \ref{intevc}] For any $r>0$, we apply the first estimate of Lemma \ref{logeq2} with the base point taken to be $x\in\partial B(p,r)$ and the radius $r/16$, and get \[ \frac{\mathrm{Vol}\left(B(x,\frac{r}{16})\right)}{(\tfrac{r}{16})^n} \left[1+\sup_{s\in\left[0,\tfrac{r}{16}\right]}\frac{s^2\int_{B(x,s)}\mathrm{S}\,dv}{\mathrm{Vol}(B(x,s))}\right]^{n/2} \ge c(n)e^{\mu}. 
\] By the assumption \eqref{intsca}, the above inequality becomes \[ \mathrm{Vol}\left(B(x,\frac{r}{16})\right)\ge c(n,c_1)e^{\mu}r^n \] for all $r>0$ and all $x\in\partial B(p,r)$. Combining this with the upper volume growth bound $\mathrm{Vol}(B(p,r))\le c(n)r^n$ immediately yields \[ \frac{\mathrm{Vol}(B(p,r))}{\mathrm{Vol}\left(B(x,\frac{r}{16})\right)}\le c(n,c_1)e^{-\mu} \] for any $r>0$ and all $x\in \partial B(p,r)$. \end{proof} \begin{remark} Similar to the above argument, Proposition \ref{intevc} can also be proved using Lemma \ref{slogeq}. Moreover, when $n\ge 3$, the assumption \eqref{intsca} in Proposition \ref{intevc} can be replaced by the following bound on the maximal function of the scalar curvature introduced by Topping \cite{[To]}: \[ \sup_{s\in\left(0,\tfrac{r}{16}\right]}s^{-1} \left[\mathrm{Vol}(B(x,s))\right]^{-\frac{n-3}{2}}\left(\int_{B(x,s)} \mathrm{S}\,dv\right)^{\frac{n-1}{2}}\le\delta \] for all $r>0$ and all $x\in\partial B(p,r)$, where $\delta:=\min\{\omega_n,\,(4\pi)^{\frac n2}e^{\mu+n-2^n\cdot17}\}$ and $\omega_n$ is the volume of the unit Euclidean $n$-ball. This bound also enables us to get that \[ \mathrm{Vol}\left(B(x,\frac{r}{16})\right)> \delta\,r^n \] for all $r>0$ and all $x\in\partial B(p,r)$; the interested reader is referred to Theorem 3.1 of \cite{[Wu21]} for a detailed proof. \end{remark} Combining Proposition \ref{intevc} and Theorem \ref{main1} leads to \begin{corollary}\label{intcor} Any complete non-compact shrinker satisfying \eqref{intsca} must have finitely many ends. \end{corollary} In the proofs of Corollaries \ref{cor} and \ref{intcor}, we observe that both curvature assumptions imply a family of Euclidean volume growth estimates. These proofs indeed show that any shrinker with a family of Euclidean volume growth estimates must satisfy the volume comparison condition. \begin{corollary}\label{AVRc} If a complete non-compact shrinker $(M,g,f)$ with an infimum point $p\in M$ of $f$ satisfies \begin{equation}\label{famieq} \mathrm{Vol}\left(B(x,\frac{r}{16})\right)\ge c\,r^n \end{equation} for all $r\ge r_0$ for some $r_0>0$, and all $x\in\partial B(p,r)$, where $c$ is a positive constant independent of $x$ and $r$, then the shrinker satisfies the volume comparison condition and hence has finitely many ends. \end{corollary} At the end of this section, we give some comments on the relation between Corollary \ref{AVRc} and the asymptotic volume ratio on shrinkers. Recall that the \textit{asymptotic volume ratio} ($\operatorname{AVR}$) of a complete Riemannian manifold $(M,g)$ is defined by \[ \operatorname{AVR}(g):=\lim_{r\rightarrow\infty}\frac{\operatorname{Vol}B(q,r)}{\omega_nr^n} \] if the limit exists. Whenever $\operatorname{AVR}(g)$ exists, it is independent of the point $q$. If $(M,g)$ has nonnegative Ricci curvature, then the limit always exists by the Bishop-Gromov volume comparison. For any shrinker, Chow, Lu and Yang \cite{[CLY]} proved that $\operatorname{AVR}(g)$ always exists and is finite. The assumption \eqref{famieq} naturally implies positive asymptotic volume ratio; whether the converse holds is not clear to the author at present. Notice that Feldman, Ilmanen and Knopf \cite{[FIK]} described examples of complete non-compact K\"ahler shrinkers which have $\operatorname{AVR}(g)>0$ and whose Ricci curvature changes sign. We see that positive asymptotic volume ratio provides Euclidean volume growth based at a fixed point, which does not seem to yield the family of Euclidean volume growth estimates \eqref{famieq}. 
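As a simple sanity check, on the Gaussian shrinker both notions can be computed explicitly: $\operatorname{Vol}B(x,s)=\omega_ns^n$ for every $x$ and $s>0$, so
\[
\operatorname{AVR}(g_E)=1 \quad\text{and}\quad \mathrm{Vol}\left(B(x,\tfrac{r}{16})\right)=\frac{\omega_n}{16^n}\,r^n,
\]
that is, \eqref{famieq} holds with $c=\omega_n/16^n$; in this trivial case the two conditions agree. 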
On the other hand, Carrillo and Ni \cite{[CaNi]} proved that any shrinker with Ricci curvature $\mathrm{Ric}(g)\ge0$ must have $\operatorname{AVR}(g)=0$. Here we may reverse the implication and naively ask whether $\operatorname{AVR}(g)=0$ implies $\mathrm{Ric}(g)\ge0$. \section{Diameter growth of ends}\label{sec5} In this last section, we apply the ball covering property to study the diameter growth of ends of shrinkers. For the manifold case we refer to \cite{[AG]}, where Abresch and Gromoll proved that every end of a manifold with nonnegative Ricci curvature has at most linear diameter growth. Later, this result was generalized by Liu \cite{[Liu2]} to manifolds with nonnegative Ricci curvature outside a compact set. Let us first recall the definition of the diameter of ends on manifolds; see also \cite{[Liu2]}. \begin{definition} Let $q$ be a fixed point in a Riemannian manifold $(M,g)$. For any $r>0$, any connected component $\Sigma$ of the annulus \[ A_q(2r,\tfrac{3}{4}r):=B(q,2r)\backslash\overline{B(q,\tfrac{3}{4}r)}, \] and any two points $x,y\in \Sigma\cap\partial B(q,r)$, we let \[ d_r(x,y):=\inf\left\{\mathrm{length}(\gamma)\right\}, \] where the infimum is taken over all piecewise smooth curves $\gamma$ from $x$ to $y$ in $M\backslash\overline{B(q,r/2)}$. Then we set \[ \mathrm{diam}\left(\Sigma\cap\partial B(q,r)\right) :=\sup_{x,y\in\Sigma\cap\partial B(q,r)}d_r(x,y). \] Using the above notations, the \textit{diameter of ends} at $r$ from $q$ is defined by \[ \mathrm{diam}_q(r):=\sup_{\Sigma\subset A_q(2r,\tfrac{3}{4}r)} \mathrm{diam}\left(\Sigma\cap\partial B(q,r)\right). \] See Figure 2 for a simple description. \end{definition} \begin{figure} \centering \includegraphics[scale=0.5]{defpic.pdf} \caption{ \footnotesize Definition of the diameter of ends} \label{Fig2} \end{figure} With the above definition, we obtain the following diameter growth estimate for ends of a shrinker without any additional assumption. \begin{theorem}\label{diam} On any $n$-dimensional complete non-compact shrinker with scalar curvature \[ \mathrm{S}\ge \delta \] for some constant $\delta\ge 0$, the diameter of ends grows at most polynomially with degree $2(n-\delta)+1$. \end{theorem} \begin{proof}[Proof of Theorem \ref{diam}] Without loss of generality, we choose an infimum point $p\in M$ of $f$ as the base point. By Theorem \ref{sdest}, for a fixed sufficiently large $r$ and for any connected component $\Sigma$ of the annulus $A_p(2r,\tfrac{3}{4}r)$, we can find no more than \[ N:=c(n)e^{-\mu}r^{2(n-\delta)} \] geodesic balls $B_i:=B\left(p_i,\frac{r}{4}\right)$, where $p_i\in A_p(2r,\tfrac{3}{4}r)$ and $i\le N$, such that \[ \bigcup_{i}B\left(p_i,\frac{r}{4}\right)\supset \Sigma. \] For any two points $x$ and $y$ in $\Sigma\cap\partial B(p,r)$, since $\Sigma$ is connected, we can find a subsequence of the geodesic balls $\{B_i\}$: $B_{i_1},\ldots,B_{i_k}$, where $k\le N$, such that \[ x\in B_{i_1},\quad B_{i_j}\cap B_{i_{j+1}}\neq \emptyset\,\,(j=1 ,\ldots,k-1), \quad y\in B_{i_k}. \] Now we choose fixed points $z_j\in B_{i_j}\cap B_{i_{j+1}}$ and consecutively connect the points \[ x,p_{i_1},z_1,p_{i_2},z_2,p_{i_3},\ldots,p_{i_{k-1}},z_{k-1},p_{i_k},y \] by minimizing geodesics, which forms a piecewise smooth curve $\gamma$. Obviously, the curve $\gamma$ lies in $M\backslash\overline{B(p,r/2)}$ and its length satisfies \[ \mathrm{length}(\gamma)\le 2k\cdot\frac{r}{4} \le\frac{N}{2}r\le c(n)e^{-\mu}r^{2(n-\delta)+1}. \] This completes the proof. 
\end{proof} \begin{remark} If the scalar curvature of the shrinker is uniformly bounded, then by Remark \ref{reN1} the above argument shows that the degree $2(n-\delta)+1$ in Theorem \ref{diam} can be reduced to $n-2\delta+1$. \end{remark} If the shrinker satisfies the volume comparison condition, then by the same argument as above, Theorem \ref{shrendest} immediately implies \begin{theorem} On any complete non-compact shrinker satisfying the volume comparison condition, the diameter of ends grows at most linearly. \end{theorem} \bibliographystyle{amsplain}
\section{The \textsc{Sofos}\xspace System}\label{sec:solution} \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{figures/system.pdf} \caption{The \textsc{Sofos}\xspace system.} \label{fig:our_system} \end{figure} \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{figures/screenshot.pdf} \caption{The GUI of the \textsc{Sofos}\xspace system.} \label{fig:gui} \end{figure*} The \textsc{Sofos}\xspace system implements, adapts, and compares several cost models for view selection on RDF KGs. Given an initial analytical \emph{facet\xspace} of the graph to analyze, the system materializes a set of views based on a cost model and then measures the performance of the selected views in terms of storage cost and query response time. \textsc{Sofos}\xspace comprises two main modules: {\large\ding{172}} an \textit{offline module} for \textbf{selective view materialization} (Section~\ref{ssec:offline}), and {\large\ding{173}} an \textit{online module} for \textbf{query execution and performance comparison} (Section~\ref{ssec:online}). Figure~\ref{fig:our_system} shows its main components. \mpara{Background \& problem:} At its core, the \textsc{Sofos}\xspace system takes a knowledge graph $G$ and an analytical facet\xspace $F$, which describes the information that should be aggregated in different views, and materializes a set of $k$ views $\mathcal{V}_1, ..., \mathcal{V}_k$ based on $F$. Then, given any query $Q$ targeting $F$, the system either answers $Q$ by querying one of the $k$ materialized views, or accesses the graph $G$ if none of the views can be used to compute the required answer. In \textsc{Sofos}\xspace, a \emph{knowledge graph} $G$ is represented as a set of RDF triples $(s,p,o)\in(\mathcal{I}\cup\mathcal{B}){\times}(\mathcal{I}){\times}(\mathcal{I}\cup\mathcal{B}\cup\mathcal{L})$, where $\mathcal{I}$ is a set of entity identifiers, $\mathcal{B}$ is a set of ``blank'' nodes with no identifier, and $\mathcal{L}$ is a set of literals. A \emph{query} $Q$ on an RDF graph is a set of \emph{triple patterns}, that is, a set of triples in which some of the components $s,p$, or $o$ are variables from a set $\mathcal{X}$; queries are expressed in the SPARQL query language. An \emph{answer} to a query $Q$ is computed based on the matchings in $G$ of the triple patterns in the query and the values corresponding to instances of the variables in the query. We denote as $\res{Q}{G}$ the set of query answers on the knowledge graph $G$. Here, we focus on \emph{analytical queries} of the kind \texttt{SELECT} $\vec{X}$~$agg(u)$ \texttt{WHERE} $P$ \texttt{GROUP BY} $\vec{X}$, in which $\vec{X}{\subseteq}\mathcal{X}$ are grouping variables, i.e., a subset of the variables appearing in $P$, $u{\in}\mathcal{X}$ is the specific variable over which the aggregation is computed, and $agg$ is an aggregation expression in $\{ \textsc{SUM, AVG, COUNT, MAX, MIN} \}$. The \textsc{Sofos}\xspace system builds on analytical \emph{facets\xspace} that determine the triples of the graph that are the target of some queries and hence provide the conditions to construct a set of views. A \emph{facet\xspace} has the same form as an analytical query and is thus identified by the triple $F{=}{\langle}\vec{X},P,agg(u){\rangle}$. Finally, a \emph{view} from a facet\xspace $F$ is a query $\mathcal{V}{=}{\langle}\vec{X}^{\prime},P^{\prime},agg(u){\rangle}$, where $P^{\prime}$ is derived from $P$, and $\vec{X}^{\prime}{\subseteq}\vec{X}$ groups over just a subset of the variables in $\vec{X}$. 
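For illustration, consider a hypothetical facet\xspace over population records; the \texttt{ex:} vocabulary below is purely illustrative and not part of \textsc{Sofos}\xspace. With grouping variables $\vec{X}=\{\texttt{?country},\texttt{?year}\}$ and aggregation \textsc{SUM} over \texttt{?pop}, an analytical query targeting this facet\xspace (further specialized with a \texttt{FILTER}) could be written in SPARQL as follows:
\begin{verbatim}
PREFIX ex: <http://example.org/>
# Hypothetical analytical query: total population
# per country and year.
SELECT ?country ?year (SUM(?pop) AS ?total)
WHERE {
  ?city ex:locatedIn  ?country .   # patterns forming P
  ?obs  ex:aboutCity  ?city ;
        ex:year       ?year ;
        ex:population ?pop .
  FILTER (?year >= 2010)           # optional specialization
}
GROUP BY ?country ?year
\end{verbatim}
A view from this facet\xspace could instead group only on \texttt{?country}, i.e., $\vec{X}^{\prime}=\{\texttt{?country}\}$, pre-aggregating totals per country across all years. 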
Therefore, the facet\xspace $F$ induces a \emph{lattice} of views $\lattice{F}$, in which different subsets of variables are used for grouping and hence results are represented at different levels of granularity. Moreover, in {\textsc{Sofos}\xspace}, a materialized view is also an RDF graph that contains an encoding of only the answers to the query used to generate it. Analytical queries targeting a {facet\xspace} $F$ likewise contain a subset of $\vec{X}$ and $P$, but can be further specialized by introducing additional \texttt{FILTER} conditions. Given a query $Q$, \emph{view materialization} allows for answering the query by exploiting the contents of a precomputed view $\mathcal{V}_i$, avoiding in this way the need to query the underlying graph $G$. Materializing the entire lattice would allow the system to always select the best view $\mathcal{V}_i$ for any query. Nonetheless, materializing the entire lattice is impractical from the memory consumption standpoint. As such, \textsc{Sofos}\xspace explores different strategies that have been proposed in the past to select a subset $\mathcal{V}_1, ..., \mathcal{V}_k$ of views from the lattice. In the relational case, the system would always select the smallest possible view to answer $Q$, since there is a linear correlation between the number of tuples and the running time~\cite{olapcube}. This linear correlation does not trivially hold in the case of knowledge graphs, because a graph is not defined in terms of tuples. As such, we need a cost function $\C: \lattice{F} \rightarrow \mathbb{R}^+$ predicting the running time of any query $Q$ if the view $\mathcal{V}_i$ is materialized. In practice, to select the best set of views, we adopt a greedy approach~\cite{olapcube}. Given a set of selected views, the greedy approach exploits the estimated time from the cost function and compares the expected running time of a set of queries with and without including the candidate view $\mathcal{V}_i$ in the set of views. While, in the relational case, the cost is derived directly from the number of tuples in the view, \textsc{Sofos}\xspace proposes a comparison among different cost functions to select $k$ views from a facet $F$, and shows the advantages and shortcomings of each of them when tested against a specific set of queries. We opt for a budget representing the number of views $k$ to allow for a more straightforward comparison of memory and time consumption. However, note that this budget can be adapted to regulate the space consumption of the selected views as well, i.e., instead of selecting exactly $k$ views, select views until a certain memory budget is reached. \vspace{-10pt} \subsection{Selective view materialization}\label{ssec:offline} \textsc{Sofos}\xspace performs two offline operations: (a) \textbf{view selection}, which decides on the best views to materialize given the cost function $\C$, and (b) \textbf{view materialization}, which augments the graph with extra information to store aggregation values. \spara{View selection.} \textsc{Sofos}\xspace supports six cost models: (1) a random baseline, (2) a direct adaptation of tuple counting for relational data, two RDF-based cost models, namely (3) the number of aggregated values and (4) the number of nodes, (5) a learned cost model, and (6) a user-defined one. \begin{itemize}[leftmargin=*] \item \textbf{Random:} This cost function is constant, $\C(\mathcal{V}_i){=}1$ for each view $\mathcal{V}_i{\in}\lattice{F}$, i.e., the selection outputs a random $k$-size subset of $\lattice{F}$. 
\item \textbf{Number of triples:} This cost function is analogous to the number of tuples in relational databases. On a knowledge graph, this cost corresponds to the number of RDF triples in the corresponding graph $G_{\mathcal{V}_i}$, i.e., $\C(\mathcal{V}_i){=}|{G_{\mathcal{V}_i}}|$. \item \textbf{Number of aggregated values:} This corresponds to the number of results of the query representing the view, i.e., $\C(\mathcal{V}_i){=} |\res{\mathcal{V}_i}{G}|$. \item \textbf{Number of nodes:} This cost corresponds to the number of node values in the view $\mathcal{V}_i$, i.e., $\C(\mathcal{V}_i){=}|\mathcal{I}_i{\cup}\mathcal{B}_i{\cup}\mathcal{L}_i|$. \item \textbf{Learned cost:} For comparison, we adapt a cost estimate from a learned deep regression model $f{:} \lattice{F}{\rightarrow}\mathbb{R}$~\cite{ortiz2019empirical}. We encode a query into a vector representing the relationships, the attributes, and the type of aggregates in the query, along with statistics about the relationship and attribute frequencies. In the offline training phase, the model takes the encodings of either a given workload or randomly generated queries, together with their running times. In the online phase, the model receives the encoding of a query (i.e., view) $\mathcal{V}_i$ and outputs the estimated running time, such that $\C(\mathcal{V}_i) = f(\mathcal{V}_i)$. \item \textbf{User defined:} The user acts as a cost function, selecting $k$ views from the lattice. \end{itemize} \spara{View materialization.} View materialization in {\textsc{Sofos}\xspace} consists of generating a new graph for each view $\mathcal{V}_i$. Each graph contains a set of extra blank nodes to which the aggregated values for the different bindings of the template variables in $\vec{X}$ are attached. This materialization procedure is a generalization of the standard techniques adopted in MARVEL~\cite{olapcubeRDF}. The result of view materialization is hence an \emph{expanded RDF graph} $G^+$. \subsection{Query Performance Comparison}\label{ssec:online} After materializing a specific subset of views, the system runs a set of queries randomly generated from the {facet\xspace} $F$ against the expanded graph $G^+$ and measures the performance of each query. When answering a query, \textsc{Sofos}\xspace identifies the best view to adopt and translates the input query $Q$ into a query $Q'$ on the expanded RDF graph $G^{+}$ targeting the data of the selected view. In practice, the translation straightforwardly substitutes aggregate variables with the blank nodes representing the aggregation and reformulates triple patterns accordingly. Therefore, \textsc{Sofos}\xspace allows running any set of queries on the different sets of views materialized for each cost function. The user can then compare the relative performance of each view selection method and hence the appropriateness of the different cost models. \section{Related Works}\label{sec:related} KGs have gained traction in the last few years, due to the proliferation of Linked Open Data~\cite{Bonifati2019,wylot2018,seaborne2006sparql} and proprietary enterprise knowledge graphs~\cite{noy2019industry}. Increasingly, companies and researchers need to perform complex analytics on these data in the form of aggregate queries. In the following, we provide more details on existing methods for data cube analysis in the relational model and on the existing implementations for the case of graph data. We highlight how existing methods have tried to adapt techniques for relational data to the graph model.
In this demonstration, we present a system that showcases the limitations of these adaptations. \mpara{Data cube analysis.} In relational data, \emph{data cubes}~\cite{olapcube} conveniently represent aggregates over multiple data dimensions. That is, they model data as a set of observations, each carrying one or more measures, and a set of dimensions across which the measures of the observations can be aggregated (e.g., consider the population recorded for each city in each country, which can be aggregated across time, regions and continents, or across the languages spoken, in order to retrieve, for instance, the population per country speaking each language). Analyses over such data cubes are notoriously computationally expensive, since they involve the processing of large portions of the dataset. Therefore, a common approach is to employ \emph{materialized views}, so that queries can be executed over a smaller portion of pre-processed data, significantly reducing query time~\cite{olapcube,niemi2001constructing}. For instance, one can pre-aggregate population across countries, languages, and years, so that a query asking for the total number of people speaking German during 2020 can be computed by processing the pre-aggregated results instead of the whole data for each city. Yet, given a data cube with many different dimensions, there are multiple ways in which data could be aggregated (e.g., across cities and regions, or languages and years, and so on). Materializing views for all these combinations is expensive both in terms of processing time and of space occupation on disk. Therefore, \emph{view selection} techniques have been proposed for the case of relational databases~\cite{olapcube,niemi2001constructing}. These techniques estimate the benefit that materializing a specific view can provide. This benefit is estimated as a linear function of the size of the materialized view compared against the size of the data from which the view would be derived. For instance, a view aggregating daily records into yearly records provides an expected reduction factor of $\sim350$, and one would expect a proportional improvement in processing speed when using the view for querying instead of the daily data. For the case of RDF data, instead, state-of-the-art approaches simply set out to adapt solutions from the relational model to the graph model. Yet, the research on relational data cannot be directly applied to graphs, as the structure and the schema are not known a priori in such datasets. \mpara{OLAP approaches for RDF.} The MARVEL system~\cite{olapcubeRDF}, belonging to this line of work, implements view materialization for optimizing the answering of OLAP SPARQL queries~\cite{etcheverry2012qb4olap}. MARVEL employs a cost model, a view selection algorithm, and an algorithm for rewriting SPARQL queries using the available materialized views. Although the approach is the first to tackle the challenges of answering analytical queries on KGs through view materialization, the input data must adopt a data cube model (in particular, QB4OLAP~\cite{etcheverry2012qb4olap}), and the cost model simply considers the number of edges (triples) in each view. Other approaches have investigated the need for enabling complex aggregate queries in SPARQL~\cite{AnalyticsSPARQL,colazzo2014rdf}. In particular, the analytical schema model~\cite{colazzo2014rdf} enables different views on generic KGs.
Yet, this model does not tackle the problem of view materialization for RDF data; instead, it maps the data to a relational model and exploits traditional optimizations for relational queries. Finally, a distinct approach for RDF analytics~\cite{AnalyticsSPARQL} converts a complex aggregate query into a set of smaller, approximate queries. Yet, this approach has the sole goal of diminishing the load on the database answering the query, not of speeding up query processing. Therefore, to date, no solution has explored in detail the case of view materialization for KGs as a graph-centric problem. Instead, existing solutions simply resort to mapping the data to a relational model. {\textsc{Sofos}\xspace} aims at systematically analyzing view materialization, shedding light on existing methods to pave the road toward a native, graph-aware model for answering analytical queries on KGs. \section{Demonstration Scenario}\label{sec:scenario} The goal of the demonstration is to show, through experiments, the challenges involved in materialized view selection on knowledge graphs, exploring various alternative cost models. A screenshot of our system is shown in Figure~\ref{fig:gui}. The demonstration will start by guiding the participants through the different design choices in \textsc{Sofos}\xspace. We will then walk them through the following steps: \textbf{Configuration:} In this step, the three datasets used for our demonstration (i.e., the LUBM, the DBpedia, and the Semantic Web Dogfood datasets) will be presented along with the corresponding query facets\xspace for these datasets. Each query facet\xspace will be accompanied by a high-level description and a corresponding SPARQL query template, enabling the active exploration of the data available each time. For each dataset, we will propose a query workload composed of different parametrized queries for a given query template. \textbf{Exploration of the Full Lattice:} By selecting a specific combination of dataset and facet\xspace, the full materialized lattice will be presented to the users, explaining why such a large structure is required to precompute, at the various levels, the aggregations that the query template might ask for. By selecting a node (view) in the lattice, the user will be able to inspect the data that are stored for this specific node. \textbf{Exploring Cost Models:} Using the full lattice as input, the various view selection algorithms (and the accompanying cost models) will be explained to the participants and demonstrated in practice. In each case, the trade-off between query execution and storage amplification will be shown, enabling users to understand which cost model is better in each case. \textbf{User Selected Views:} Besides exploiting an already existing view selection algorithm, the users will be able to select individual nodes from the lattice to be materialized and see the impact of their choices on the query execution time. Each time, the space amplification and the query execution time will be contrasted, enabling users to explore the sweet spot where space amplification is minimized and query execution time is improved. \textbf{``Hands-on'' Challenge:} In this phase, conference participants will be challenged, given a specific query and budget, to optimally select the views to be materialized for optimizing query execution. The participant who makes the best selection will receive a small {\textsc{Sofos}\xspace}-related prize.
\section{Introduction}\label{sec:introduction} Companies of all types and sectors, such as Amazon, Google, Bosch, and Zalando, use the graph model to represent and store their enterprise knowledge bases~\cite{noy2019industry,Schmid2019UsingKG}. Moreover, large knowledge repositories are now available with a wide range of information in many different domains -- DBpedia and WikiData are two notable examples. Most of this knowledge is available as RDF datasets~\cite{RDF} through SPARQL endpoints~\cite{Bonifati2019}, organized as \emph{knowledge graphs} (KGs). In KGs like the one in Figure~\ref{fig:graph}, nodes represent entities and edges represent relationships and attributes. KGs allow storing a wide range of heterogeneous, factual, and statistical information that forms a valuable asset for businesses, organizations, and individuals. As more data is stored in KGs, there is an increasing need to answer more complex queries~\cite{AnalyticsSPARQL,noy2019industry}. However, research on SPARQL query processing mainly focuses on queries that identify nodes and edges satisfying some specific conditions (e.g., entities by name, friends of friends, or product categories)~\cite{watdiv,lubm,Bonifati2019}. \begin{example} Consider a KG like DBpedia or WikiData storing, for each country, the list of official languages and the number of people speaking each language in that country. This data can be used to answer analytical queries like ``in how many countries is French an official language?'' or ``what is the total French-speaking population on the American continent?''. \end{example} \noindent Given the growing importance of KGs as knowledge repositories, there is a need for effective \emph{analytical query} answering to extract relevant insights from the data~\cite{colazzo2014rdf,AnalyticsSPARQL,olapcubeRDF}. \begin{figure} \centering \includegraphics[width=.9\columnwidth]{figures/kgraph.pdf} \caption{An example Knowledge Graph.} \label{fig:graph} \end{figure} The study of analytical queries (i.e., OLAP) over relational systems has attracted substantial attention in the past decades~\cite{niemi2001constructing}, and recently, different methodologies have also been proposed in the context of KGs~\cite{colazzo2014rdf,gur2017geosemolap}. Nonetheless, obtaining answers to analytical queries is usually time-consuming and prohibitively expensive for most RDF data-management systems~\cite{AnalyticsSPARQL}. A technique to improve the performance of analytical queries is view materialization~\cite{olapcube}. View materialization precomputes and stores the results of analytical queries offline to serve new incoming queries faster. Nonetheless, this requires the system to select which views to materialize. In addition, the intricacies of the RDF model, e.g., complex schemas, entailment, and blank nodes, further complicate the direct adoption of techniques proposed for relational data. A recent work~\cite{olapcubeRDF} applies an approach designed for relational OLAP~\cite{olapcube} to RDF data. Yet, since existing approaches are adaptations of relational techniques, there is no understanding of their appropriateness for knowledge graphs. We shed light on the use of multiple alternative approaches over KGs by showcasing \textsc{Sofos}\xspace, a system that compares various cost models for view materialization. A cost model is the main building block for selecting the views to materialize, as it provides an estimate of the time for querying a database with and without the materialized views.
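To give an intuition of how a cost model drives the selection (detailed in Section~\ref{ssec:offline}), the following is a minimal sketch of the greedy loop in the style of~\cite{olapcube}; all names and signatures here are illustrative, not \textsc{Sofos}\xspace{}'s actual API:

\begin{verbatim}
# Greedy view selection sketch: repeatedly add the candidate view
# that most reduces the estimated running time of the workload.
def workload_time(queries, selected, cost, base_cost, answers):
    # Each query uses the cheapest usable materialized view,
    # falling back to the base graph G otherwise.
    total = 0.0
    for q in queries:
        usable = [cost(v) for v in selected if answers(v, q)]
        total += min(usable) if usable else base_cost
    return total

def greedy_select(lattice, queries, cost, base_cost, answers, k):
    selected = []
    for _ in range(k):
        def gain(v):
            return (workload_time(queries, selected, cost, base_cost, answers)
                    - workload_time(queries, selected + [v], cost, base_cost, answers))
        candidates = [v for v in lattice if v not in selected]
        best = max(candidates, key=gain, default=None)
        if best is None or gain(best) <= 0:
            break  # no remaining view improves the workload
        selected.append(best)
    return selected
\end{verbatim}

Plugging a different \texttt{cost} function into this loop (number of triples, number of aggregated values, number of nodes, learned, or user defined) changes which views are selected, which is precisely the comparison \textsc{Sofos}\xspace supports.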
\mpara{Contributions.} {\textsc{Sofos}\xspace} proposes, evaluates, and compares a variety of existing cost models for view selection, adapted to the RDF setting. It allows users to run a set of queries on the materialized views and inspect the performance of the query workload. The goal of this prototype is to identify the strengths and limitations of multiple cost estimation techniques for view selection on RDF data. In summary, {\textsc{Sofos}\xspace} (1) addresses the problem of providing fast query answering for analytical queries on KGs, (2) provides a generic solution that can be deployed on any RDF triple store with SPARQL query processing, and (3) highlights possible limitations of six alternative approaches. Given a KG, a facet\xspace over the KG, and a constraint on the number of views to materialize, \textsc{Sofos}\xspace generates a set of views to answer aggregate queries over the provided facet\xspace.
\section{Introduction}\label{sec:introduction} Coronavirus Disease 2019 (\textsc{covid}-19{}) is a respiratory disease caused by Severe Acute Respiratory Syndrome Coronavirus 2 (\textsc{sars}-\textsc{c}o\textsc{v}-2{}). The virus most likely originated in Wuhan, China in December 2019 \citep{Zhu2020} and has since spread globally. The pandemic continues up to the moment of writing and is characterised by sequential waves of \textsc{covid}-19{} cases and hospitalisations, warranting a series of preventive governmental policies. Fig. \ref{fig:timeline_2020-2021} provides a detailed overview of key events and policy changes for Belgium. \\ \begin{figure}[b] \centering \includegraphics[width=\textwidth]{timeline_2020.pdf}\\ \includegraphics[width=\textwidth]{timeline_2021.pdf} \caption{Seven-day moving average of daily new \textsc{covid}-19{} hospitalisations in Belgium during 2020 and 2021 (maroon line). Vertical dashed lines are used to indicate events or policy changes with a possible impact on social contact behaviour relevant to this work. A green background colour is used to indicate school vacations. The horizontal arrows over the 2020 graph indicate the period of the first and second \textit{hard} lockdown.} \label{fig:timeline_2020-2021} \end{figure} To better understand the spread of \textsc{sars}-\textsc{c}o\textsc{v}-2{} and inform policymakers, a nation-level compartmental metapopulation model for Belgium was developed \citep{Alleman2021}. Furthermore, likely future scenarios were bundled and discussed within an interuniversity mathematical modelling consortium named \textsc{restore}. The findings were reported in several policy reports with accompanying press releases \citep{RESTORE8}. The existing model \citep{Alleman2021} has proven to fit past trends well and to produce meaningful projections of future trends \citep{alleman_reportv1p1, alleman_reportv1p2}, and it has been under continuous development in response to the quickly expanding knowledge on \textsc{sars}-\textsc{c}o\textsc{v}-2{}. New knowledge includes, firstly, the influence of geography on viral spread, and, secondly, the effect of variants of concern (VOCs), seasonality, and vaccines on viral transmissibility and hospitalisation propensity. Consequently, the existing model was extended to incorporate these aspects.\\ Regarding the influence of geography, we have first shown in a parallel work that the viral spread was not spatially homogeneous but rather clustered, especially in the initial phase of the pandemic \citep{rollier2022, Sciensano2020}. Second, we have demonstrated a correlation between the mobility on the one hand, and the morphology and timing of local \textsc{covid}-19{}-related time series on the other hand \citep{rollier2022}. The same was clearly shown for France, Italy, and Spain \citep{Iacus2020a}. Third, a national model cannot take into account local differences in immunity, possibly leading to local herd immunity \citep{Barker2021, Aschwanden2021}. Correcting for this may affect the national infection rate in a way that a nationally homogeneous model may not be able to capture. These three reasons suggest that a Belgian epidemiological model may benefit from a spatially explicit setup, as was successfully done for e.g.~Spain \citep{Arenas2020}, Brazil \citep{Costa2020} and France \citep{Roques2020}.
Fourth, the inclusion of mobility and spatial heterogeneity into metapopulation models allows scientists to advise policymakers on the effect of localised measures, by predicting on a local level which areas face imminent danger, as well as which areas play a pivotal role in controlling the spread of the virus \citep{alleman_reportv1p1, alleman_reportv1p2}. This provides crucial and objective information for, e.g., the preparation of local hospitals and the introduction of national mobility-related measures.\\ When it comes to VOCs and vaccines, the evidence for their influence on \textsc{sars}-\textsc{c}o\textsc{v}-2{} dynamics is decisive as well, which motivates their inclusion in the model. Subsequent VOCs are associated with different transmissibilities and hospitalisation propensities \cite{Grint2021, Bager2021, VENETI2022}, and speculation on increased severity is an important factor in policy advice \cite{RESTORE7}. Vaccination has the explicit goal of reducing viral transmission and/or disease severity and has been shown to do so in both clinical trials \cite{doi:10.1056/NEJMoa2034577} and society-scale follow-up studies \cite{Tartof2021}. In addition, vaccine efficacies differ between VOCs \cite{Braeye2022a}. The direct or indirect effect of seasonal changes on the \textsc{sars}-\textsc{c}o\textsc{v}-2{} transmission rate is not supported by the same overwhelming amount of data, due to the limited time since the start of the pandemic. However, seasonality plays a crucial role in many viral diseases \cite{martinez2018}, and has proven necessary in recent \textsc{covid}-19{} modelling efforts \cite{Liu2021}. Considering VOCs, vaccines, and seasonality in the model requires the time-, age- and location-dependent rescaling of the model parameters governing transmissibility and hospitalisation propensity.\\ In this work, we first demonstrate that, after model development, the resulting simulations provide an adequate description of past \textsc{covid}-19{}-related time series on the level of the Belgian provinces. We then demonstrate how the model can be used to explore hypothetical future scenarios to inform policymakers on the effects of social and pharmaceutical policies. Finally, in a purely hypothetical setup, we study the effect of locally altering the mobility and social contacts on the spread of \textsc{sars}-\textsc{c}o\textsc{v}-2{}, which is only possible in a spatially explicit model. We find that (1) decreasing mobility as a means of slowing or stopping viral spread is not efficient, and (2) locally decreased social contact does not help to effectively contain a global viral outbreak.\\ It is important to stress that while the model is calibrated on Belgian \textsc{covid}-19{} data, the underlying framework is in no way unique to Belgium nor to \textsc{covid}-19{}. The mathematical setup of the model may therefore be applied to other countries and/or infectious diseases amongst humans as well. \section{Methods}\label{sec:model} The spatially explicit SEIQRD model presented here constitutes an extension of our national SEIQRD model for Belgium \citep{Alleman2021}. We first present that national model. We then discuss the addition of a spatial dimension, and the dynamic rescaling of model parameters to include the effects of VOCs, seasonality, and vaccines. Finally, we discuss how the model is calibrated and how the hypothetical scenarios shown in the results section were set up.
\subsection{SEIQRD model formulation} \label{subsec:age-stratified-model} A metapopulation model assumes that a population is well mixed and is distributed over a number of compartments that correspond to stages in the disease development. The flowchart depicting the various metapopulation compartments and their interactions in our \textsc{covid}-19{} model is shown in Fig. \ref{fig:flowchart_SEIQRD}. In our previous model \cite{Alleman2021}, the infectious compartment (I) in the original SEIRD formulation \citep{Kermack1927} is extended into six compartments to incorporate more expert knowledge on \textsc{sars}-\textsc{c}o\textsc{v}-2{}. In this way, the model accounts for pre-symptomatic and asymptomatic transmission of \textsc{sars}-\textsc{c}o\textsc{v}-2{} \citep{Ganyani2020,Wei2020,Gudbjartsson2020}, and for different \textsc{covid}-19{} severities, ranging from mild disease to hospitalisation. Our model distinguishes between regular hospital wards (cohort) and intensive care units (ICUs) and further accounts for a recovery stay in cohort after an ICU stay. Using data from \num{22 136} \textsc{covid}-19{} patients in Belgian hospitals, we previously computed the probabilities of needing intensive care, the mortalities in both hospital wards, and the residence time distributions in both wards \cite{Alleman2021}. Waning of antibodies (seroreversion) is included, enabling re-susceptibility after a prior infection. The model is age-stratified in 10 age classes, 0-12, 12-18, 18-25, 25-35, 35-45, 45-55, 55-65, 65-75, 75-85, and 85-120 years of age, to account for the fact that social contact and disease severity differ substantially between individuals of different ages. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{flowchart_SEIQRD_new2.png} \caption{Flowchart of the SEIQRD model. Here, $S$ stands for susceptible, and $E$ for exposed but not yet infectious. Infected subjects in the $I$ compartments are those that are considered to actively drive the pandemic, because they are either presymptomatic ($I_\text{presy}$) or asymptomatic ($I_\text{asy}$). Subjects in the $Q$ compartments are assumed to be quarantined due to heightened symptom awareness, whether they have mild symptoms ($Q_\text{mild}$), are hospitalised ($Q_\text{hosp}$), are accepted in the ICU ($Q_\text{ICU}$), or remain in a recovery stay in cohort coming from the ICU ($Q_\text{ICU,rec}$). After infection, subjects are either deceased ($D$) or recovered ($R$). Recovered subjects may again become susceptible. The model presented in this paper stratifies each of these compartments according to 10 age classes and 11 provinces.} \label{fig:flowchart_SEIQRD} \end{figure} \subsection{Spatially explicit model extension} \label{subsec:spatial-extension} The first extension is to split Belgium into a collection of 11 geographical units: 10 provinces and the arrondissement Brussels-Capital (NUTS2 level, Fig. \ref{fig:beta_classes_prov}, Table \ref{tab:class-NIS-name}). We will refer to the latter as the ``11th province'' for convenience. Each of these 11 provinces exhibits its own SEIQRD dynamics, and the provinces are interconnected based on the mobility of subjects between them. We will also distinguish between social contact behaviour in the home province versus the visited province, and differentiate between transmission coefficients in rural, urban and metropolitan provinces.
All age-related and spatial stratifications are denoted with subscripts ($i$ or $j$) and superscripts ($g$, $h$ or $k$), respectively. \paragraph{Interprovincial mobility} Central to the quantification of the interprovincial connectivity is the telecommunication dataset provided by Belgium's largest telecom operator, Proximus (Appendix \ref{app:proximus-mobility-data}). The use of this type of data as a proxy for mobility has been shown to be legitimate \citep{Palchykov2014}, and has been done in the particular context of \textsc{covid}-19{} in other analyses and modelling efforts \citep{agren2021, santamaria2020, kishore2020}. The geographical spread of subjects between $G$ regions is quantified in a $G\times G$ time-dependent mobility matrix $\bm{P}(t)$ with elements $P^{gh}(t)$. Element $P^{gh}(t)$ represents the estimated fraction of all the time available to all subjects in patch $g$ that is spent in patch $h$, on the day corresponding to time $t$. Fig. \ref{fig:diagram-spatial-model} depicts an abstract spatially explicit model with three spatial patches; for the actual model, $G=11$. As an example, two time series of $P^{gh}(t)$ for two different $(g,h)$ pairs are shown in Fig. \ref{fig:staytime_percentage_timeseries}. \\ \begin{figure} \centering \includegraphics[width=0.4\linewidth]{diagram-spatial-model.pdf} \caption{\small{Abstract representation of the interpretation of the inter-provincial mobility matrix $\bm{P}$ for only three provinces. In the model we consider 11 Belgian provinces, and the mobility matrix elements are time-dependent (Fig. \ref{fig:staytime_percentage_timeseries}).}} \label{fig:diagram-spatial-model} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{P-matrix_timeseries_21000-to-21000_21000-to-90000.png} \caption{Two of the $11^2$ time series $P^{gh}(t)$, here representing the daily percentage of time that all residents of Brussels spent in their home province (top), or in Luxembourg province (bottom). The hatched regions indicate periods for which an estimation was made, because no data was available.} \label{fig:staytime_percentage_timeseries} \end{figure} The mobility matrix is used to determine the \textit{effective} population sizes per model compartment in each province at time $t$ \cite{Arenas2020}. Mathematically, the effective population size is computed as follows: \begin{equation} X^g_{i,\text{eff}}(t) = \sum_{h=1}^G P^{hg}(t)X^h_i(t), \end{equation} where $X$ represents an arbitrary model compartment, or the total population $T$. This matrix multiplication effectively drives the geographical spread of \textsc{sars}-\textsc{c}o\textsc{v}-2{} in our model. \paragraph{Local social contact} The contact between age classes in our model drives the rate and spread of the infection, and is time-dependent \cite{Alleman2021}. Furthermore, the contact data has a double geographical stratification: we determine social contact per province, and depending on whether a subject is in their home province or visiting another province.\\ The size and time dependence of the social contact matrix result from multiplying four factors and summing over six locations, i.e.
\begin{equation} \widetilde{\bm{N}}_\text{c}^g(t) = \sum_{k \in \text{loc}} M(t)\Omega^{k} G^{k,g}(t) \bm{N}_\text{c}^{k}, \label{eq:time-dependent-contact-matrix} \end{equation} with elements $\widetilde{N}^g_{\text{c},ij}(t)$ representing the average daily number of contacts at time $t$ between subjects in age class $i$ and those in age class $j$, in province $g$. The sum is over locations that are associated with distinct average contact behaviour: home, school, work, transport, leisure, and other. The factor $\bm{N}_\text{c}^k$ is a social contact matrix taken from the \textsc{socrates} web tool \cite{Willem2020a} and based on a 2010-2011 social contact survey conducted in Flanders that was revisited recently in the context of \textsc{covid}-19{} \citep{Hoang2021}. The observational time series $G^{k,g}(t)$ are taken from the Google Community Mobility Reports (GCMRs) \cite{google_mobility} and constitute the primary tool for rescaling the pre-pandemic social contact survey results to values representative of pandemic conditions. This approach was preferred over using more recent Belgian social contact studies \citep{Coletti2020} because the GCMRs are available daily and at the provincial level. Both the $\Omega^k$ and $M(t)$ parameters range between 0 and 1, and their particular values (at time $t$) are calibrated. The $\Omega^k$ parameters have two physical interpretations. 1) They can be thought of as quantifying the degree to which a contact in place $k$ can contribute to \textsc{sars}-\textsc{c}o\textsc{v}-2{} spread. 2) Alternatively, they can be seen as the degree of correlation between the Google mobility indicator in location $k$ and reductions in the spread of \textsc{sars}-\textsc{c}o\textsc{v}-2{}. A low value of $\Omega^k$ suggests that a change in the Google mobility indicator has limited effect on viral transmission, i.e., that the Google indicator is a poor proxy for transmission-relevant contacts in location $k$. The mentality factors $M(t)$ were added to the social contact model because preliminary research indicated that public awareness triggers an apparent mentality change that reduces the number of social contacts even further than the GCMR data suggest. Two examples of resulting time series $\widetilde{N}_{\text{c},ij}^g(t)$ are shown in Fig. \ref{fig:resulting_Nc_21000_18-25-with-25-35_45-55-with-65-75}. All details are found in Appendix \ref{app:social_contact}.\\ \begin{figure} \centering \includegraphics[width=\linewidth]{resulting_Nc_21000_18-25-with-25-35_45-55-with-65-75.pdf} \caption{Example of time series $\widetilde{N}_{\text{c},ij}^g(t)$ for Brussels, which represent the local effective social contact between two age classes $i$ and $j$. These series result from the multiplication of the four factors shown in Eq. \eqref{eq:time-dependent-contact-matrix}. The solid maroon curve shows effective contact between 18-25 year-olds and 25-35 year-olds. The dashed olive curve shows the same information, but for 45-55 year-olds contacting 65-75 year-olds, clearly following a similar overall trend but involving fewer contacts.} \label{fig:resulting_Nc_21000_18-25-with-25-35_45-55-with-65-75} \end{figure} We additionally assume that, on average, one only has work-related contacts in visited provinces, whereas all types of contact are possible within the home province.
That is to say, we express the social contact of an average subject from province $g$ visiting province $h$ as \begin{equation} \bar{\bm{N}}_\text{c}^{gh}(t) = \delta^{gh}\widetilde{\bm{N}}_\text{c}^g(t) + (1-\delta^{gh})M^{\text{work},h}(t)\Omega^\text{work} G^{\text{work},h}(t) \bm{N}_\text{c}^\text{work}, \label{eq:time-dep_social-contact} \end{equation} where $\delta^{gh}$ is the Kronecker delta. \paragraph{Local population density dependence} We assume that the average population density affects the effective transmission coefficient (similar to \citep{Arenas2020}), because we observed transmissibility differences not explicable by differences in the degree of social contact between provinces. However, in order to avoid over-parametrisation of the model, we do not define a unique transmission coefficient for each of the eleven provinces. Instead, we use three different transmission coefficients based on the population density. Essentially, this turns $\beta$ into a vector with three degrees of freedom, \begin{equation} \beta \rightarrow \bm{\beta} \text{ with elements } \beta^g \in \{\beta^\text{R}, \beta^\text{U}, \beta^\text{M}\}, \label{eq:beta_spatially_stratified} \end{equation} depending on whether we consider province $g$ to be predominantly rural, urban, or metropolitan (see Table \ref{tab:class-NIS-name}). \subsection{Dynamical rescaling of model parameters to include VOCs, seasonality, and vaccines} \label{subsec:voc_and_vac} Including the effects of VOCs and seasonality simply implies dynamically altering the effective value of a number of model parameters during the simulation, regardless of age or home province. We implemented the effect of imperfect (``leaky'') vaccines in a more sophisticated fashion, by further stratifying the metapopulation model. See below and Appendix \ref{app:VOC_vacc} for details. \paragraph{Variants of concern} Beyond the wild-type \textsc{sars}-\textsc{c}o\textsc{v}-2{} variant, we consider four VOCs identified by the World Health Organization \citep{10.3389/fimmu.2022.825256}: Alpha, Beta, Gamma, and Delta. Due to their similar properties in our model \citep{Braeye2021}, we aggregate the first three VOCs, denoted as $\alpha$-$\beta$-$\gamma$. To model the emergence of these variants, national prevalence data were used \cite{Wenseleers2021} (see Fig. \ref{fig:VOC_prevalence}, top). At every time $t$, a weighted average infectiousness of the \textsc{sars}-\textsc{c}o\textsc{v}-2{} variants is computed using the variant fractions, which effectively turns the (geographically stratified) transmission coefficient into a time-dependent function, i.e. \begin{equation} \bm{\beta}(t) = \bm{\beta} \sum_n\alpha_n(t)K_{\text{inf},n}. \label{eq:beta-from-VOC} \end{equation} Here $\alpha_n(t)$ represents the fraction of variant $n$ present in Belgium at time $t$, and $K_{\text{inf},n}$ is the infectivity of variant $n$ relative to the wild type, which is determined during the calibration procedure (explained below). The variants were assumed to alter the serial interval and disease severity as well, which translates to dynamically changing the length of the average latent time ($\sigma$) and the hospitalisation propensity ($\bm{h}$) in a similar fashion (see Fig. \ref{fig:VOC_prevalence}, bottom). The latter rescaling parameters are derived from the literature \citep{Grint2021, Bager2021, VENETI2022, Hart2022} and listed in Table \ref{tab:VOC-dependent-variables}.
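As a minimal numerical illustration of Eq. \eqref{eq:beta-from-VOC}, the sketch below rescales a transmission coefficient by made-up prevalence fractions; the $K_{\text{inf},n}$ values anticipate the calibrated relative infectivities reported in the Results (roughly $1.57$ for $\alpha$-$\beta$-$\gamma$ and $1.79$ for Delta), and the base coefficient is the calibrated rural value $\beta^\text{R} \approx 0.040$:

\begin{verbatim}
# Sketch of the VOC-weighted rescaling of a transmission coefficient.
# K_inf: infectivity relative to the wild type (calibrated increases
# of ~57% for alpha-beta-gamma and ~79% for Delta, see Results).
K_inf = {"wild": 1.00, "abg": 1.57, "delta": 1.79}

def beta_voc(beta, fractions):
    """fractions: variant -> prevalence fraction at time t (sums to 1)."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return beta * sum(fractions[n] * K_inf[n] for n in fractions)

# E.g., a rural province mid-way through the alpha-beta-gamma takeover:
print(beta_voc(0.040, {"wild": 0.3, "abg": 0.7, "delta": 0.0}))  # ~0.056
\end{verbatim}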
\paragraph{Seasonality} Changes in climate have been recognised to play a role in the spread of many viral diseases amongst humans, notably influenza \citep{martinez2018}. Seasonal effects influence the effective viral transmissibility, either directly through measurable physical changes in e.g.~temperature, or indirectly through changes in social behaviour that we remain agnostic about. Seasonality is included in our model by scaling the transmission coefficient of \textsc{sars}-\textsc{c}o\textsc{v}-2{} with a cosine function \citep{Liu2021}. Its period is one year, and its amplitude is denoted by $A_s$, i.e. \begin{equation} \bar{\bm{\beta}}(t) = \bm{\beta}(t)\left[ 1 + A_s \cos\left(2\pi \frac{t}{365 \text{ days}}\right) \right]. \label{eq:seasonality} \end{equation} Here $t$ is expressed in days since January 1st, at which time we assume the $\bar{\bm{\beta}}(t)$ values are maximal. Its simplicity reflects the current lack of understanding of seasonality's actual effect on \textsc{sars}-\textsc{c}o\textsc{v}-2{}, mainly due to the lack of long-term data. The amplitude $A_s$ is determined during the calibration procedure. \paragraph{Vaccination} Vaccination of susceptible subjects against \textsc{covid}-19{} was shown to significantly decrease viral transmissibility and hospitalisation propensity, both in clinical trials \cite{doi:10.1056/NEJMoa2034577} and in society-scale follow-up studies \cite{Tartof2021}. However, the protection offered by vaccination is imperfect (``leaky'') and was shown to decrease over time, from here on referred to as \textit{waning} \cite{Braeye2022a}. Furthermore, the protection against severe \textsc{covid}-19{} is more long-lasting than the protection against \textsc{sars}-\textsc{c}o\textsc{v}-2{} transmission \citep{Tartof2021}. We consider three vaccination stages: the first dose (partial vaccination), the second dose (full vaccination), and the third dose (booster shot). Our model approaches vaccination in the same fashion as the age and spatial stratifications: every SEIQRD compartment $X$ is split into four subcompartments depending on the vaccination stage, as follows \begin{equation} X_i^g \rightarrow \bm{X}_i^g \text{ with elements } X_{i,v}^g \text{ for } v \in \{\text{none}, \text{first}, \text{full}, \text{booster}\}. \end{equation} Subjects belonging to a compartment $Y \in \{S, E, I_\text{presy}, I_\text{asy}, R\}$ are assumed to be eligible for vaccination. They are transferred to another vaccination status within the same compartment by dynamically updating the $Y^g_{i,v}$ value at time $t$, \begin{equation} Y^g_{i,v}(t) = Y^g_{i}(t) \phi^g_{i,v}(t), \label{eq:vaccination-update-metapopulation} \end{equation} where $\phi^g_{i,v}(t)$ represents the fraction of the population in age class $i$ and province $g$ in vaccination stage $v$ at time $t$ (see Figs. \ref{fig:vaccination_timeseries_NIS} and \ref{fig:vaccination_timeseries_age}). Here we have $\sum_v \phi^g_{i,v}(t) = 1$ such that $\sum_v Y_{i,v}^g(t) = Y_i^g(t)$. These data are publicly available for all Belgian provinces and per age class \cite{Sciensano2022}. Individuals not eligible for vaccination cannot change vaccination status $v$. \\ In every metapopulation, the vaccine offers protection through three mechanisms: 1) vaccines lower the susceptibility to \textsc{sars}-\textsc{c}o\textsc{v}-2{}, 2) vaccines lower the infectiousness of an individual infected with \textsc{sars}-\textsc{c}o\textsc{v}-2{}, and 3) vaccines lower the hospital admission propensity of \textsc{covid}-19{}.
Vaccine efficacies $\bm{E}_{v,n,\text{susc}}, \bm{E}_{v,n,\text{inf}}$ and $\bm{E}_{v,n,\text{hosp}}$ are available for every vaccine stage $v$ and VOC $n$ \cite{Braeye2022a} (see Table \ref{tab:vaccine_properties}), and in general also depend on age and province (see Appendix \ref{app:VOC_vacc}). This means that we further stratify the transmission coefficients \begin{equation} \bar{\beta}^g \rightarrow \bar{\bm{\beta}}^{g} \text{ with elements } \bar{\beta}^{gh}_{ij,vw} = \bar{\beta}^g \sum_n \alpha_n(t) (1 - E_{v,n,\text{susc},i}^g)(1 - E_{w,n,\text{inf},j}^h), \end{equation} where $n$ runs over the VOCs, and the hospitalisation propensities \begin{equation} \bar{h}_i \rightarrow \bar{\bm{h}}_{i} \text{ with elements } \bar{h}_{i,v}^g = \bar{h}_i \sum_n \alpha_n(t) (1 - E_{v,n,\text{hosp},i}^g). \end{equation} We do not explicitly distinguish between the different vaccines: all efficacies used were those of the BNT162b2 (Pfizer-BioNTech) vaccine, as over 72\% of all doses administered in Belgium were Pfizer's \citep{Sciensano2022}. The vaccine works neither immediately nor permanently. Vaccine onset is included by working with vaccination stage fraction time series $\bm{\phi}(t)$ that have been smoothed by an exponential moving average (Figs. \ref{fig:vaccination_timeseries_NIS} and \ref{fig:vaccination_timeseries_age}); this procedure imposes a de facto two-week delay, which we assume to correspond to the vaccine onset duration. Vaccine waning, on the other hand, is included after full vaccination only, by including a time dependence in the efficacies $E_{\text{full},n,\text{susc}}$, $E_{\text{full},n,\text{inf}}$ and $E_{\text{full},n,\text{hosp}}$, based on vaccination incidence data and the assumption that the vaccine efficacy exponentially approaches zero (see Appendix \ref{app:VOC_vacc} for details). \subsection{Governing equations} Incorporating the model extensions described in Sections \ref{subsec:age-stratified-model}-\ref{subsec:voc_and_vac}, we present the $10 \times 10 \times 11 \times 4 = 4400$ coupled ordinary differential equations (ODEs) that govern the model in Appendix \ref{app:model-equations-and-model-parameters}. The central formula, which determines the number of newly infected subjects resulting from contact with pre- and asymptomatic subjects, is \begin{equation} \dot{S}_{i,v}^g = - \sum\limits_{h=1}^G P^{gh} S_{i,v}^g \sum\limits_{w} \sum\limits_{j=1}^{N} \bar{\beta}^{gh}_{ij,vw} \bar{N}_{\text{c},ij}^{gh} \dfrac{(I_\text{presy})_{j,w,\text{eff}}^h + (I_\text{asy})_{j,w,\text{eff}}^h}{T_{j,w,\text{eff}}^h} + \zeta R^g_{i,v}. \label{eq:central_ODE} \end{equation} Here \textit{all} variables except $\zeta$, which quantifies the average seroreversion rate, are time-dependent. The explicit time dependence is, however, omitted for readability. The system of ODEs is solved numerically using an explicit Runge-Kutta method of order 3(2), and the result is what we will refer to as a ``simulation''. In Appendix \ref{app:model-equations-and-model-parameters}, an overview of all model assumptions and parameters, as well as their chosen values, is given. \subsection{Model calibration} \paragraph{Calibrated parameters} The 11 model parameters $\beta^\text{R}$, $\beta^\text{U}$, $\beta^\text{M}$, $\Omega^{\text{home}}$, $\Omega^{\text{school}}$, $\Omega^{\text{work}}$, $\Omega^{\text{rest}}$, $M_\text{cal}$, $K_{\text{inf},\alpha \beta \gamma}$, $K_{\text{inf},\delta}$, and $A_s$ are considered to be a priori unknown and must be calibrated using the available data.
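Before detailing the calibration data, we note that the order-3(2) explicit Runge-Kutta scheme mentioned above is available, for instance, as SciPy's \texttt{RK23} solver. The sketch below illustrates how such a simulation might be run on a drastically reduced, single-province, single-age-class, unvaccinated SEIR-type slice of the model; all parameter values are illustrative placeholders, not the calibrated Belgian values:

\begin{verbatim}
from scipy.integrate import solve_ivp

# Toy SEIR reduction of the full 4400-equation system; rates are
# illustrative placeholders, not the calibrated values.
beta, a, gamma, zeta = 0.05, 1/4.0, 1/7.0, 1/365.0
Nc, T = 12.0, 11.5e6       # average daily contacts, population size

def rhs(t, y):
    S, E, I, R = y
    new_inf = beta * Nc * S * I / T      # cf. Eq. (eq:central_ODE)
    return [-new_inf + zeta * R,         # dS/dt, with seroreversion
            new_inf - a * E,             # dE/dt
            a * E - gamma * I,           # dI/dt
            gamma * I - zeta * R]        # dR/dt

y0 = [T - 100.0, 100.0, 0.0, 0.0]        # 100 exposed index patients
sol = solve_ivp(rhs, (0, 365), y0, method="RK23", dense_output=True)
\end{verbatim}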
The simulated daily number of hospitalisations is matched to the eleven time series of daily new hospitalisations in each province, starting on March 15th, 2020 and ending on October 1st, 2021. Further, assuming that on average half of the recovered subjects are again susceptible after one year (associated with the seroreversion rate parameter $\zeta$), the simulated numbers of recovered individuals are matched to five serological measurements from Herzog et al., 2020 \cite{Herzog2020} and eight serological measurements from Sciensano \cite{Sciensano2020}, spanning the period from March 30th, 2020 until July 7th, 2020. \paragraph{Statistical model} A quadratic relationship between the observed mean and variance of the daily hospitalisations time series was observed, indicating that a negative binomial model is best suited to describe the relationship between the model outcome and the observed data \cite{Chan2021}. We therefore iteratively optimise the following loglikelihood function, given here up to an additive constant that does not depend on the simulated time series, \begin{multline} \log \mathcal{L}(\bm{\widetilde{x}} \vert \bm{x}) = \sum_{g=1}^G\sum_{t=1}^n \left( \log\left[\frac{\Gamma(x^g_t + 1/\alpha^{g})}{\Gamma(1/\alpha^{g})}\right] + \right.\\ \left.\frac{1}{\alpha^{g}}\log\left[ \frac{1/\alpha^{g}}{1/\alpha^{g} + \widetilde{x}^g_t} \right] + x^g_t\log\left[ \frac{\widetilde{x}^g_t}{1/\alpha^{g} + \widetilde{x}^g_t} \right]\right). \label{eq:calibration_loglikelihood} \end{multline} Here the outer sum is over all $G=11$ provinces. The inner sum is over all $n$ observed data points at times $t$. $\bm{\widetilde{x}}$ represents the simulated time series, and $\bm{x}$ the equivalent observed time series. The overdispersion parameter $\alpha^g$ quantifies the presumed error on the data of province $g$ (see Table \ref{tab:overdispersions}), and $\Gamma$ is the gamma function. Maximising the loglikelihood in Eq. \eqref{eq:calibration_loglikelihood} is computationally demanding, and the likelihood surface has multiple local maxima. We thus need an efficient way to scan through the eleven-dimensional parameter space. A good technique to initially, broadly identify the region where the global maximum is situated is Particle Swarm Optimisation (PSO) \cite{kennedy1995}. Subsequently, once a region of interest has been identified, we use the maximum-likelihood estimates as initial values for the ensemble sampler for Markov Chain Monte Carlo (MCMC) \cite{goodman2010}. For all parameters, uniform prior distributions were used. More details are found in Appendix \ref{app:calibration}, and Section \ref{sec:results_model-calibration} contains the calibration results. \subsection{Scenario analyses} \subsubsection{Scenarios for policymakers} Next, we illustrate how our model can be used to simulate the combined impact of the emergence of new variants, an ongoing nation-wide vaccination campaign, and social relaxations. Such simulations can be used to provide policymakers with insights on the optimal timing of the release of social restrictions, and demonstrate the predictive capabilities of the model. We therefore calibrate our model up to March 1st, 2021, an interesting point in time because the $\alpha$-$\beta$-$\gamma$ VOCs had just become dominant, the Belgian vaccination campaign was picking up speed, and there was high pressure to relax social restrictions. Given the emergence of the $\alpha$-$\beta$-$\gamma$ VOCs and the ongoing vaccination campaign, we thus define four future scenarios for the release of social restrictions.
The baseline scenario (S0) assumes that the average social contact behaviour of February 2021 continues indefinitely. Scenarios S1 through S3 involve a gradual increase of social contact toward the behaviour of September 2020, starting on the first day of May (S1), April (S2) or March (S3) (see Fig. \ref{fig:four_projected_nc_1mar2021}). In these scenarios, we assume that the Delta variant does not emerge, and the observed numbers of administered vaccine doses are used. The number of vaccine doses that would be administered was of course not known on March 1st, 2021; in the policy advice given at that time, projections of the future administered doses were used. Results are included in Section \ref{sec:scenarios-for-policymakers}. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{four_projected_Nc_1mar2021.pdf} \caption{Age-weighted average of the observed social contact matrix elements $N_{\text{c},ij}(t)$ (grey), plotted alongside the social contact associated with scenarios S0 (solid maroon), S1 (dotted pink), S2 (blue dash-dotted), and S3 (olive dashed). } \label{fig:four_projected_nc_1mar2021} \end{figure} \subsubsection{Spatially explicit scenarios} A particular strength of our model is its explicitly spatial nature. To illustrate this, we first analyse the effect of limiting mobility to/from one particular province on the timing and severity of \textsc{covid}-19{}-related hospitalisations. Next, we assess the impact of limiting social contacts in one particular province. All simulations are started on January 1st 2020, after which we inspect the resulting hospitalisations in the next four months. Due to their large demographic differences and their relatively weak connectivity, we inspect results for Brussels and Luxembourg. \paragraph{Regulating local mobility} We define a parameter $\bm{p}$ whose elements $p^g \in [0, 1]$ linearly control the mobility to and from province $g$, compared to a static baseline mobility, \begin{equation} \bar{\bm{P}} = \text{avg}\left\{ \bm{P}(t) \right\} \qquad \text{for } t < \text{March 18th 2020}. \end{equation} The values $p^g$ are defined implicitly by \begin{equation} \widetilde{P}^{gh} =\bar{P}^{gh} p^g p^h + \delta^{gh}\sum_{f=1}^G \bar{P}^{gf}(1-p^f p^h), \end{equation} where $\delta^{gh}$ is the Kronecker delta. We assume social contact behaviour remains the same and is independent of whether a province is a subject's home province or a visited province. We run 25 simulations, one for every $p^g$ value, logarithmically spaced between 1 and $10^{-3}$. In our analysis we change the mobility to/from Brussels, and consider either the scenario where Brussels is \textit{shielded} from an outbreak in Luxembourg (Mob. S.), or where an outbreak in Brussels is \textit{contained} (Mob. C.). \paragraph{Regulating local social contact} We implicitly define a parameter $\bm{n}_\text{c}$ whose elements $n_\text{c}^g \in [0, 1]$ determine the local average social contact compared to the prepandemic baseline social contact $\bm{N}_\text{c}$: \begin{equation} \bar{N}_{\text{c},ij}^{gh}(t) = n_\text{c}^h(N_{\text{c},ij} - N_{\text{c},ij}^\text{home}) + N_{\text{c},ij}^\text{home}. \end{equation} Note that this quantity is independent of the province of origin $g$: we assume subjects follow the social rules of the province they \textit{visit}. We also again assume that (for $\bm{n}_\text{c} = \bm{1}$) no distinction is made between social contact in the home province and in the visited province, which is different from what is expressed in Eq.
\eqref{eq:time-dep_social-contact} and used in the non-hypothetical analyses. We again run 25 simulations, one for every $n_\text{c}^g$ value, now equidistantly spaced between 1 and 0, with all other parameters fixed (including mobility). Again altering the values for Brussels, we perform a similar analysis for shielding and containing an initial outbreak. We call these scenarios Soc. S. and Soc. C., respectively. \section{Results and discussion}\label{sec:results} \subsection{Model calibration} \label{sec:results_model-calibration} The nationally and regionally aggregated simulations between March 17th, 2020 and October 1st, 2021 are shown in Fig. \ref{fig:national-and-regional-complete-model-fit}. In Figs. \ref{fig:provincial-complete-model-fit-0} and \ref{fig:provincial-complete-model-fit-1}, the fit of the calibrated model to each of the eleven provincial time series is given. In Fig. \ref{fig:seroprevalence-spatial-fit}, the nationally aggregated fit to the seroprevalence data is given. Further, a corner plot showing the posterior distributions of all 11 calibrated parameters is shown in Fig. \ref{fig:full-calibration-corner-plot}, and all calibrated parameter values are listed in Table \ref{tab:calibration_parameters}. The time series of the normalised root-mean-square errors (RMSE) between the observed and simulated daily new hospitalisations of the spatially explicit model are given in Fig. \ref{fig:RMSE-fit-boxplot}, alongside those of the previously established national model \cite{Alleman2021}. \paragraph{Goodness-of-fit} In general, over the calibrated period (before the dashed line in Fig. \ref{fig:national-and-regional-complete-model-fit}), both the regional and national aggregates fit the observed number of daily hospitalisations well (Fig. \ref{fig:national-and-regional-complete-model-fit}). On the national level, the simulated number of daily hospitalisations at the peak of the second 2020 \textsc{covid}-19{} wave is slightly lower than the observed one. A possible explanation lies in the fact that on the 24th of September 2020, the federal government released all remaining social restrictions (Fig. \ref{fig:timeline_2020-2021}). This may have caused a sudden increase in the number of social contacts during the one-month period prior to the lockdown at the beginning of November 2020. This change in the degree of social contact is not observed in the GCMRs and is thus not captured by our social contact model. Survey-based contact studies under lockdown measures at the regional level could be used to explain the regional difference in the second 2020 \textsc{covid}-19{} peak height.\\ Beyond the calibrated range (after the dashed line in Fig. \ref{fig:national-and-regional-complete-model-fit}), during the Delta wave of October-December 2021, the forecasted number of new hospitalisations is higher than its observed counterpart on the national level. When looking at the regional breakdown of the forecast, the numbers of daily hospitalisations are slightly too low in Flanders, while they are much too high in Wallonia and Brussels. The large difference in model prediction between the regions is most likely due to the large differences in the regional vaccination degree. Flanders (91.4\% of 18+ by October 1st 2021) has a much higher vaccination coverage than Wallonia (79.8\%) and Brussels (66.5\%) (Fig. \ref{fig:vaccination_timeseries_NIS}).
Given that the regional differences in vaccine incidence and the subsequent waning of the vaccines are incorporated in our model, the numbers of observed hospitalisations in Wallonia and Brussels are far below those expected. Still, the model was qualitatively able to predict a Delta wave that would warrant social policy interventions.\\ Just as for the second 2020 \textsc{covid}-19{} wave discussed earlier, regional differences in social contact behaviour provide the most likely explanation for the regional mismatches between the simulated and observed numbers. In this case, there still were some social restrictions in Wallonia and Brussels, while on October 1st, 2021, the Flemish government had released all measures. Another possible explanation for the overestimation of the Delta wave in general lies in the fact that the seasonal change from summer to autumn, typically at the end of September in Western Europe, happens quite abruptly and with quite some variation in year-to-year timing. Meanwhile, the modelled seasonal wave is smooth, and a small mismatch in the timing of the seasonal change may result in large differences in model outcome due to the exponential rise of hospitalisations. Therefore, accounting for seasonality using temperature observations could yield even better results.\\ In addition to the spatially explicit model, we calibrated our national model, with the same implementation of VOCs, seasonality, and vaccines, to the nationally aggregated daily number of \textsc{covid}-19{} hospitalisations. The model fit to the data, as well as the normalised RMSE time series of the fit, are visualised in Fig. \ref{fig:RMSE-fit-boxplot}. Over the calibration period, we found a mean RMSE of 0.40 for the spatially explicit model and a mean RMSE of 0.38 for the national model. The difference in RMSE was found not to be statistically significant, indicating that both models describe the national hospitalisation data equally well (Mann-Whitney U test \citep{McKnight2010}, p-value: 0.32). The spatially explicit model is therefore also capable of producing the same results as its predecessor \cite{Alleman2021}, which serves as a sanity check. \paragraph{Calibrated parameter values} The values and 95\% quantiles of the calibrated parameters are listed in Table \ref{tab:calibration_parameters} and shown in more detail in Fig. \ref{fig:full-calibration-corner-plot}; some noteworthy values are discussed here. Even though the calibrated values of the transmission coefficients for rural and urban provinces are rather similar, the metropolitan $\beta^\text{M}$ value (0.053) is significantly larger than its rural $\beta^\text{R}$ and urban $\beta^\text{U}$ counterparts (0.040 and 0.041). This reflects that Brussels, with its \textit{much} larger average population density, accommodates an increased effective viral transmission. The calibrated effectivity parameters $\Omega^k$ have well-resolved and distinct values. The effectivities for school-related contacts (0.02) and leisure contacts (0.13) are low, while the effectivity for work-related contacts is high (0.69). This indicates that the workplace mobility from the GCMRs informs the simulated number of daily \textsc{covid}-19{} hospitalisations best. The low inferred effectivity of school contacts does not necessarily imply that schools did not play a role in the epidemic, as changes in work mobility are intertwined with the opening and closing of schools.
We found a calibrated mentality value of $M_\text{cal} \simeq 0.36$ during the time periods indicated in Fig. \ref{fig:mentality_timeseries}, corresponding to periods of high healthcare pressure. This implies that people's overall awareness of the danger of \textsc{sars}-\textsc{c}o\textsc{v}-2{} during those periods roughly translates to a 64 percent ($1 - M_\text{cal}$) \textit{additional} reduction in social contact, as compared to periods when \textsc{sars}-\textsc{c}o\textsc{v}-2{} poses no imminent threat.\\ Noteworthy is the significantly non-zero value of the seasonality amplitude: since $A_\text{s} \simeq 0.30$, the winter-to-summer ratio of transmissibility is $(1+A_\text{s})/(1-A_\text{s}) \approx 1.86$, i.e., \textsc{sars}-\textsc{c}o\textsc{v}-2{} is 86\% more transmissible during winter compared to summer time. The increases in \textsc{sars}-\textsc{c}o\textsc{v}-2{} infectivity $K_{\text{inf},{\alpha\beta\gamma}}$ and $K_{\text{inf},{\delta}}$ due to the VOCs are significant: we find, respectively, a 57\% and a 79\% increase as compared to the wild-type variant. The estimate for the Delta variant is on the low side of the values cited in the literature \citep{Kuzmina2021}. However, the reduced serial interval (see Tab. \ref{tab:VOC-dependent-variables}) and the lower vaccine efficacies (see Tab. \ref{tab:vaccine_properties}) also contribute to an even greater effective infectivity. \begin{figure} \centering \includegraphics[width=\linewidth]{national-and-regional-complete-model-fit.pdf} \caption{100 model realisations of the daily new hospitalisations between March 17th 2020 and January 1st 2022 (solid lines) with a negative binomial 95\% confidence region (transparent band). Black crosses indicate raw data from Sciensano \cite{Sciensano2020} that were used in the calibration procedure, while red crosses indicate data that were not used during the calibration procedure. From top to bottom: nationally aggregated daily number of hospitalisations, daily hospitalisations aggregated over all Flemish provinces, daily hospitalisations aggregated over all Walloon provinces, and daily hospitalisations in Brussels (see Table \ref{tab:class-NIS-name} and Fig. \ref{fig:beta_classes_prov}).} \label{fig:national-and-regional-complete-model-fit} \end{figure} \subsection{Scenario analyses} \subsubsection{Scenarios for policymakers} \label{sec:scenarios-for-policymakers} For these scenarios, we go back in time to March 1st, 2021 to study the combined impact of the emergence of the $\alpha$-$\beta$-$\gamma$ VOCs, an ongoing nation-wide vaccination campaign, and anticipated social relaxations. The nationally aggregated simulations are shown in the top panel of Fig. \ref{fig:four_scenarios}; the bottom panel illustrates the imposed social contact associated with each of the four scenarios. For the sake of brevity, we omit the regional results.\\ In line with expectations, more and earlier social contact translates into higher hospitalisation peaks. The projections in Fig. \ref{fig:four_scenarios} strongly recommend against the relaxation of social restrictions on March 1st (S3) and on April 1st (S2), \textit{even} if the measures are gradually relaxed over a two-month period. Doing so would result in hospitalisation peaks that far surpass those of the second 2020 \textsc{covid}-19{} wave, which would put the Belgian health care system on the brink of collapse. Relaxations starting on May 1st, 2021 (S1) contain the epidemic, likely due to the combined effect of vaccination and favourable seasonal changes during summer. The baseline scenario (S0), in which the February 2021 contact behaviour is maintained, results in a near extinction of the epidemic in Belgium.
In reality, measures were relaxed starting mid May 2021, corresponding to a situation roughly between S0 and S1.\\ It should be noted that translating the ``number of social contacts'' in a mathematical model into a concrete set of rules and regulations is not straightforward. However, scenario analyses like the one presented in this work have the potential to provide policymakers with high-level insights regarding the potential impact of their proposed policies. We stress that the output of one epidemiological model should be interpreted with care and, if possible, results from different models should be combined in an ensemble to increase the robustness of the predictions \cite{RESTORE7, RESTORE8}. Relying on such an ensemble gives more weight to the overall trends in the policy advice than to the quantitative model outcomes, on which policymakers and the press often focus disproportionately. Ideally, epidemiological models like the one in this paper are further coupled to health economic and macro-economic models to provide even more comprehensive policy advice. \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{four_projected_scenarios_1mar2021.pdf} \caption{\textit{Top}: Combined impact of the $\alpha$-$\beta$-$\gamma$ VOCs, the ongoing nation-wide vaccination campaign and social relaxations on the number of daily hospitalisations, starting on March 1st 2021. \textit{Bottom}: weighted average of the observed social contact matrix elements $N_{\text{c},ij}(t)$ (grey). The coloured curves illustrate the social contact in the four different social relaxation scenarios.} \label{fig:four_scenarios} \end{figure} \subsubsection{Spatially explicit scenarios} \label{sec:results_local-scenarios} \paragraph{Regulating local mobility} Fig. \ref{fig:mobility-reduction-to-21000}(a) shows the result of introducing 100 exposed subjects in every age class in the province of Luxembourg, and demonstrates the effect of progressively enforcing stricter mobility measures to and from Brussels from the start of the simulation (scenario Mob. S.). A similar result is shown in Fig. \ref{fig:mobility-reduction-to-21000}(b) for scenario Mob. C. In scenario Mob. S., the only effect on the local number of daily hospitalisations in Brussels is to \textit{delay} the onset (and peak). The timing of the hospitalisation peak depends logarithmically on the reduction of in- and outward mobility, at roughly 7 days per order of magnitude in $p^g$. In scenario Mob. C., a slight increase and advancement of the hospitalisation wave for decreasing mobility is forecast for Brussels, while a similar logarithmic dependence of the hospitalisation peak timing is observed in Luxembourg (roughly 9 days per order of magnitude in $p^g$). These qualitative relations are of course not unique to Brussels and Luxembourg, but apply to all pairs of provinces.\\ \begin{figure} \centering \begin{tabular}{@{}c@{}} \includegraphics[width=\linewidth]{mobility-reduction-to-21000_index-patients-in-80000.pdf} \\%[\abovecaptionskip]
\small{(a) Scenario Mob. S.} \end{tabular} \begin{tabular}{@{}c@{}} \includegraphics[width=\linewidth]{mobility-reduction-to-21000_index-patients-in-21000.pdf} \\%[\abovecaptionskip]
\small{(b) Scenario Mob.
C.} \end{tabular} \caption{Effect of decreasing mobility to/from Brussels on 25 simulated hospitalisation time series, either when (top) shielding Brussels from an outside epidemic, simulated as 100 index patients introduced in every age class in Luxembourg, or (bottom) containing an epidemic within Brussels, simulated as 100 index patients introduced in every age class in Brussels.} \label{fig:mobility-reduction-to-21000} \end{figure} During the first, very strict national lockdown, the corresponding value for $p^g$ was approximately 0.5, and certainly larger than $10^{-1}$ (see Fig. \ref{fig:staytime_percentage_timeseries}). At the same time, the impact on the local time series is only significant for very large mobility reductions, i.e. values $p^g < 10^{-1}$. Consequently, \textit{only} reducing mobility as a means for postponing the wave, while leaving all other social behaviour unchanged, is both very drastic and barely effective, and hence an undesirable mitigation policy. It should be noted that at very high levels of isolation (low $p^g$ values), the deterministic nature of the model results in an overprediction of the number of hospitalisations in the other province. At such levels of isolation, even a fractional person spillover can trigger an epidemic in the other province. A Markov-chain stochastic version of the model, whose chains can go extinct, would be more appropriate to study low-mobility cases. However, because such levels of isolation are not attainable in reality, the conclusions above still stand. \paragraph{Regulating local social contact} Fig. \ref{fig:contact-reduction-in-21000}(a) shows the result of introducing 100 exposed subjects in every age class in the province of Luxembourg, and demonstrates the effect of progressively enforcing stricter social restrictions in Brussels from the start of the simulation (scenario Soc. S.). A similar result is shown in Fig. \ref{fig:contact-reduction-in-21000}(b), but now the 100 exposed subjects are released in Brussels (scenario Soc. C.).\\ Here we see that reducing social contact in Brussels does not influence the epidemic in Luxembourg, while strongly delaying \textit{and} reducing the hospitalisation wave in Brussels. In the containment scenario, a similar effect is seen for Brussels, but now we observe an additional effect of \textit{delaying} the peak in Luxembourg. In both cases and for both effects (total number of hospitalisations and peak delay), the effect is now roughly \textit{linear} in $n_\text{c}^g$ rather than logarithmic, preventing the hospitalisation of approximately \num{7000} Brussels residents per 10\% reduction in $n_\text{c}^g$. This suggests that social contact reduction is a much more effective policy measure than mobility reduction. In reality, however, mobility and social contact will never be altered independently. \begin{figure} \centering \begin{tabular}{@{}c@{}} \includegraphics[width=\linewidth]{contact-reduction-in-21000_index-patients-in-80000.pdf} \\%[\abovecaptionskip]
\small{(a) Scenario Soc. S.} \end{tabular} \begin{tabular}{@{}c@{}} \includegraphics[width=\linewidth]{contact-reduction-in-21000_index-patients-in-21000.pdf} \\%[\abovecaptionskip]
\small{(b) Scenario Soc.
C.} \end{tabular} \caption{Effect of decreasing social contact in Brussels on 25 simulated hospitalisation time series, either to (top) shield Brussels from an outside epidemic, simulated as 100 index patients introduced in every age class in Luxembourg, or to (bottom) contain an epidemic within Brussels, simulated as 100 index patients introduced in every age class in Brussels.} \label{fig:contact-reduction-in-21000} \end{figure} \section{Conclusion}\label{sec:conclusion} Starting from our previously developed national model \cite{Alleman2021}, a spatially explicit variant was developed. The models were, over the past two years, extended to account for the emergence of VOCs, seasonality, and vaccines. These were critical model additions that were desired and required for the description, forecasting and understanding of the \textsc{covid}-19{} pandemic in Belgium. The spatially explicit and national models are equally capable of describing the hospitalisation data in the calibrated range. Beyond the calibrated range, the spatially explicit model was, at least qualitatively, able to forecast the emergence of a Delta hospitalisation wave in the autumn of 2021. The effective transmission was found to be significantly higher in the metropolitan Brussels-Capital Region, arguably due to the much larger population density. The seasonal effect was found to be strong: with an estimated amplitude $A_\text{s} \simeq 0.30$, transmissibility in winter is roughly 86\% higher than in summer. \\ We demonstrated that the model is deployable as a means to evaluate scenarios on the effects of non-pharmaceutical policy interventions, which can -- and have been -- applied to support the pandemic decision-making process. In addition, the model was used to study the effects of locally reducing mobility and of locally reducing social contact to shield or contain an epidemic. We found that reducing social contact translates quasi-linearly into a reduction of the total number of \textsc{covid}-19{} hospitalisations. Reducing mobility, on the other hand, only results in postponing a \textsc{covid}-19{} wave, and only does so for very high (and quasi-unattainable) levels of isolation. We conclude that the reduction of social contact is a much more effective approach to slow \textsc{sars}-\textsc{c}o\textsc{v}-2{} spread. Generally, the presented model's fidelity and applicability have been demonstrated. Generalising to a non-Belgian context or to other infectious diseases is straightforward.\\ \backmatter \clearpage \bmhead{Supplementary information} This paper contains additional information on the geography of Belgium (Appendix \ref{app:sciensano}), details with regard to the data used in this work (Appendices \ref{app:sciensano}, \ref{app:proximus-mobility-data} and \ref{app:social_contact}), more details on the implementation of the VOCs, seasonality, and vaccines (Appendix \ref{app:VOC_vacc}), an overview of the model equations, parameters and assumptions (Appendix \ref{app:model-equations-and-model-parameters}) and more details with regard to the model calibration (Appendix \ref{app:calibration}). \bmhead{Author contributions} \textbf{Michiel Rollier}: Conceptualisation, Methodology, Investigation, Data curation, Writing – original draft. \textbf{Tijs W. Alleman}: Conceptualisation, Software, Methodology, Investigation, Data curation, Writing – original draft. Both of the above authors have closely collaborated on the manuscript's contents and should be regarded as the primary authors of the text.
\textbf{Jenna Vergeynst}: Conceptualisation, Investigation, Project administration. \textbf{Jan M. Baetens}: Conceptualisation, Funding acquisition, Project administration, Writing – review \& editing. \bmhead{Acknowledgements} The resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation - Flanders (FWO) and the Flemish Government. We would like to thank Proximus for providing the telecommunication data free of charge. We would like to thank Lander De Visscher for his methodological help on the \textit{Markov-chain Monte Carlo} technique used in this work. This work was supported by the UGent Special Research Fund, Belgium, by the Research Foundation Flanders (FWO), Belgium, project numbers G0G2920 and 3G0G9820, and by VZW 100 km Dodentocht Kadee, Belgium, through the organisation of the 2020 100 km COVID-Challenge. \bmhead{Conflict of interest} None declared. \bmhead{Ethics approval} All used data conform to GDPR standards. \bmhead{Consent to participate} Not applicable. \bmhead{Consent for publication} All authors consent to publication in (to be determined), preceded by pre-print publication in an open-access archive. \bmhead{Availability of data and materials} All data used in this research are publicly available \citep{Sciensano2020,google_mobility, Willem2020a}. \bmhead{Code availability} The source code for the presented spatially explicit SEIQRD model is freely available on GitHub in the public repository UGentBiomath/COVID19-Model. Note, however, that running the code requires access to data that are not publicly available. \newpage \begin{appendices} \section{Data} \subsection{COVID-19 time series data} \label{app:sciensano} The model parameters $\beta^\text{R}$, $\beta^\text{U}$, $\beta^\text{M}$, $\Omega^{\text{schools}}$, $\Omega^{\text{work}}$, $\Omega^{\text{rest}}$, $\Omega^{\text{home}}$, $M_\text{cal}$, $K_{\text{inf},\alpha\beta\gamma}$, $K_{\text{inf},\delta}$ and $A_s$ are calibrated using the 11 provincial time series of daily new hospitalisations. The motivation to use these data is fourfold. First, as long as the total hospital capacity is not surpassed, which has not happened in Belgium, the number of hospitalisations is a more objective measure than the daily number of newly detected cases. After all, the latter is highly dependent on the available test capacity. Second, pressure on hospitals is the most relevant measure when informing policy decisions. From a public health perspective, one primarily wants to avoid excess pressure on hospitals, which results in the postponement of non-\textsc{covid}-19{} care and eventually the collapse of the health care system. Third, these time series are preferred over data for ICU admissions or deaths because, due to the low number of counts, those data are very noisy, especially at the provincial level. Fourth, the daily number of hospitalisations does not depend on hospital dynamics, such as residence times and distributions between wards.\\ The model calibration secondarily relies on seroprevalence data, indicating the rate at which antibodies wane and thus the rate at which humoral immunity is lost. The seroprevalence time series is the estimated percentage of the population with \textsc{sars}-\textsc{c}o\textsc{v}-2{} antibodies in the blood, reflecting how many subjects have recovered from \textsc{covid}-19{}. Demonstrating the model's ability to match the seroprevalence in the Belgian population is an important gauge for overall model fidelity.
In this way it is possible to demonstrate that the model captures the total number of asymptomatic infections. We assume that new VOCs and vaccines do not alter the seroreversion rate over the calibration period. \paragraph{Sciensano hospitalisation data} Sciensano, the national public health institute of Belgium \citep{Sciensano2020}, gathers and processes \textsc{covid}-19{}-related hospitalisation time series at the provincial level from all 104 Belgian hospitals. This data set is updated daily, exhaustive since March 15th 2020, and anonymous (aggregated over all ages). It contains the number of newly admitted lab-confirmed \textsc{covid}-19{} hospital patients in the last 24 hours, not referred from another hospital. This number excludes patients that were admitted to the hospital for other reasons but tested positive for \textsc{covid}-19{} in a screening context. Seven-day moving-average time series for daily new hospitalisations are shown per province in Fig. \ref{fig:all-H_in-series_prov}. Provinces are denoted according to their NIS code (Table \ref{tab:class-NIS-name}).\\ The used hospitalisation time series are exhaustive and of high quality, but two limitations should be noted. First, there is a \textit{weekend effect} in the raw time series. This is mainly due to fewer hospitals reporting data over the weekend and does not reflect viral dynamics; the effect is hence not captured by the model. Second, patients are recorded in the province in which they are hospitalised, not their province of residence. Thus, a patient residing in province $g$ but hospitalised in province $h$ is counted as a data point in province $h$. Since there is no way to circumvent this problem without considerable privacy issues, we must assume that at the level of provinces this effect is negligible. \paragraph{Seroprevalence data} We consider two independent nationally aggregated time series containing information on the extrapolated number of Belgians that have a significant amount of anti-\textsc{sars}-\textsc{c}o\textsc{v}-2{} antibodies in residual serum samples (i.e. seroprevalence) -- see Fig. \ref{fig:seroprevalence-data_timeline}. The first time series was gathered by Herzog et al. \citep{Herzog2020} between March 30th and October 17th 2020, and contains 7 data points from $\sim$3500 samples per collection period, spread over both sexes, all ages and all provinces (see Table 1 in \citep{Herzog2020}). Residual serum samples in this study originated from ambulatory patients (including people living in nursing homes) visiting their doctor (mainly general practitioners) for any reason, including primary care, routine check-ups or follow-up of pathology. The second time series was gathered by Sciensano \citep{Sciensano2020} between March 30th 2020 and July 20th 2021, and contains 29 data points from $\sim$1000 samples per collection period, again homogeneously spread throughout Belgium. The blood samples originate from Red Cross blood donors. Combining both data sets is therefore interesting, as the combination contains both subjects in need of medical attention and healthy subjects capable of donating blood. The larger time period over which the latter study was conducted implies that the data start to show the prevalence of anti-\textsc{sars}-\textsc{c}o\textsc{v}-2{} antibodies resulting from vaccination.
This, combined with natural immunity, causes the percentage of `immune' subjects to approach 100\% by the summer of 2021.\\ \subsection{Mobility time series data} \label{app:proximus-mobility-data} \paragraph{Origin and nature of the data} Proximus is Belgium's largest telecommunication company, with a market share of 30-40\% in terms of active SIM cards \citep{FOD_economie_proximus_market-share}. Based on the connection between a user's SIM card and the closest transmission tower, the approximate position of a SIM card is known at all times at which the device is operational. The amount of time that this device spends connected to a particular transmission tower is registered, on the condition that it has \textit{reconnected} to a transmission tower and stays connected to this tower for over 15 minutes. Reconnecting occurs either by switching on a disabled device, or by travelling around -- either within or outside a particular postal code. For any given Belgian province, the number of tracked SIM cards represents 25-50\% of the province's population. The extrapolation factor is calculated on a daily basis, based on the number of devices used by individuals living in a particular postal code, and the total registered population there.\\ No data are available for the times indicated by the hatched periods in Fig. \ref{fig:staytime_percentage_timeseries}, so we estimate $P^{gh}(t)$ values at these times based on particular periods in the available data. For business days (resp. weekends) before February 10th 2020, we take the average $P^{gh}(t)$ values over all business days (resp. weekends) between February 10th and March 1st 2020. For business days (resp. weekends) after August 31st 2021, we take the average over all business days (resp. weekends) between July 1st and August 31st 2021 (the summer holiday). \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{all-H_in-series_prov.pdf} \caption{Stacked area plot of all seven-day moving-averaged time series for daily new hospitalisations per province (denoted with NIS code, see Table \ref{tab:class-NIS-name}) \citep{Sciensano2020}. Daily data are available from March 15th 2020 onward.} \label{fig:all-H_in-series_prov} \end{figure} \begin{table}[h!] \centering \caption{All 10 provinces and the Brussels-Capital Region (the ``11th province'' for convenience). We denote the population density classification, the systematic name (NIS code), and the region each belongs to (Flanders, Brussels-Capital, Wallonia).
We also denote their registered population and the number of hospitals that report the daily number of new \textsc{covid}-19{} patients.} \begin{tabular}{p{1.6cm}p{0.8cm}p{2.3cm}rrr} \toprule \textbf{Type} & \textbf{NIS} & \textbf{Name} & \textbf{Region} & \textbf{Population} & \textbf{\# hospitals} \\ \midrule Metropolitan & 21000 & Brussels & B & \num{1218255} & 15 \\ \midrule Urban & 10000 & Antwerpen & F & \num{1869730} & 14 \\ & 20001 & Vlaams-Brabant & F & \num{1155843} & 6 \\ & 40000 & Oost-Vlaanderen & F & \num{1525255} & 14 \\ \midrule Rural & 20002 & Brabant Wallon & W & \num{406019} & 2 \\ & 30000 & West-Vlaanderen & F & \num{1200945} & 11 \\ & 50000 & Hainaut & W & \num{1346840} & 14 \\ & 60000 & Li\`ege & W & \num{1109800} & 12 \\ & 70000 & Limburg & F & \num{877370} & 7 \\ & 80000 & Luxembourg & W & \num{286752} & 3 \\ & 90000 & Namur & W & \num{495832} & 6 \\ \bottomrule \end{tabular} \label{tab:class-NIS-name} \end{table} \begin{figure}[h] \centering \includegraphics[width=0.60\linewidth]{provinces-with-NIS-regions-density.pdf} \caption{Map of the Belgian provinces indicated by their NIS code (Table \ref{tab:class-NIS-name}). The average population density is indicated by the colour scheme and determines whether we consider a province to be rural, urban or metropolitan, with threshold values of respectively 400 and 4000 inhabitants per km$^2$.} \label{fig:beta_classes_prov} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{seroprevalence-data_timeline.pdf} \caption{Timeline with seroprevalence data from randomly sampled subjects visiting the general practitioner (Herzog et al. \citep{Herzog2020}, maroon), or Red Cross blood donors (Sciensano \citep{Sciensano2020}, green). The data are space- and age-aggregated and expressed as a percentage of the total population. The band around the data shows the 95\% uncertainty interval. Note the symmetrical log scale on the y axis.} \label{fig:seroprevalence-data_timeline} \end{figure} \clearpage \section{Social contact model} \label{app:social_contact} \paragraph{Google Community Mobility Reports} Social contact is rescaled daily based on data publicly provided in the Google Community Mobility Reports (GCMR) \citep{google_mobility}. These data are available for (virtually) every day since February 15th 2020, and are expressed as fractions of ``activity'' compared to the median value over the 5-week period between January 3rd and February 6th, 2020. This activity is quantified as an anonymous, aggregated, GPS-informed visitation frequency to six location types (retail \& recreation, grocery, parks, transport, work, and residential). We call these unprocessed time series the GCMR indicators, or mathematically, $\bm{\mathcal{G}}(t)$ with elements $\mathcal{G}^{g,k'}(t)$ for every province $g$ and every activity type $k'$. The time series $\bm{G}^k(t)$ as used in Eq. \eqref{eq:time-dep_social-contact} are derived from $\bm{\mathcal{G}}(t)$ as follows: \begin{equation} \left\{ \begin{array}{rl} \bm{G}^\text{home}(t) &= 1,\\ \bm{G}^\text{school}(t) &= \bm{H}(t),\\ \bm{G}^\text{work}(t) &= \bm{\mathcal{G}}^\text{work}(t),\\ \bm{G}^\text{transport}(t) &= \bm{\mathcal{G}}^\text{transport}(t),\\ \bm{G}^\text{leisure}(t) &= \bm{\mathcal{G}}^\text{retail~\&~recreation}(t),\\ \bm{G}^\text{other}(t) &= \bm{\mathcal{G}}^\text{grocery}(t). \end{array} \right. \label{eq:gcm-to-alpha} \end{equation} Here $\bm{H}(t)$ with elements $H^g(t)$ is a function that is equal to 1 when schools are open, and 0 when schools are closed. A minimal code sketch of this mapping is given below.
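The following minimal Python sketch illustrates the mapping of Eq. \eqref{eq:gcm-to-alpha} for a single province; the GCMR column names and the \texttt{school\_open} indicator are illustrative assumptions rather than the exact names used in our code base.
\begin{verbatim}
# Minimal sketch of Eq. (gcm-to-alpha) for a single province.
# Column names and the `school_open` series are illustrative assumptions.
import pandas as pd

def gcmr_to_scalings(gcmr: pd.DataFrame,
                     school_open: pd.Series) -> pd.DataFrame:
    """gcmr: daily GCMR indicators as fractions of the pre-pandemic
    baseline (1.0 = baseline); school_open: 1 if schools open, else 0."""
    G = pd.DataFrame(index=gcmr.index)
    G["home"] = 1.0                              # home contacts: never rescaled
    G["school"] = school_open                    # binary H(t)
    G["work"] = gcmr["workplaces"]
    G["transport"] = gcmr["transit_stations"]
    G["leisure"] = gcmr["retail_and_recreation"]
    G["other"] = gcmr["grocery_and_pharmacy"]
    return G
\end{verbatim}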
All $G^k(t)$ are equal to 1 before February 15th 2020. The resulting time series are shown in Fig. \ref{fig:GCM_resulting_timeseries}. The GCMRs are not age-stratified and do not correct for potential under-representation of older individuals in the data collection. \paragraph{Scaling the social contact matrices with the GCMR indicators} The pandemic social behaviour of the Belgian population must be translated into a linear combination of pre-pandemic interaction matrices. These interaction matrices are available for different settings, namely at home, in schools, in workplaces, during leisure activities, on public transport and during \textit{other} activities \citep{Willem2020a}. Mathematically, we must find tangible coefficients such that the combination of pre-pandemic interaction matrices, i.e. \begin{equation} \bm{N_\text{c}}(t) \in \text{span}\left(\bm{N}_\text{c}^\text{home}, \bm{N}_\text{c}^\text{schools}, \bm{N}_\text{c}^\text{work}, \bm{N}_\text{c}^\text{transport}, \bm{N}_\text{c}^\text{leisure}, \bm{N}_\text{c}^\text{others}\right), \end{equation} where all linear-combination coefficients are time-dependent, is a good representation of macroscopic social behaviour during the pandemic. Ideally, pandemic contact matrices would be used, as these better represent mixing behaviour under lockdown measures. However, such matrices were not available at the start of the pandemic. Hence, our model was built upon pre-pandemic knowledge of social behaviour to make a prediction on pandemic social behaviour. First, the GCMR indicators for workplaces, transit stations, retail \& recreation and groceries \& pharmacies are used as proxies to scale the work, transport, leisure and \textit{other} social contact matrices. \paragraph{Effectivity parameters and mentality} Intuitively, the effectivity of a contact in a given location may not scale linearly with the observed mobility reductions. The net effectivity of the contacts under lockdown measures depends on a combination of the pre-pandemic physical proximity and duration of the contacts, the effectivity of preventive measures, and behavioural changes when lockdown measures are taken. As an example, the effects of alcohol gel and face masks might be significant in workplaces and in grocery stores, but not at home or during leisure activities. To account for the different effectivities of contacts in different places, we could introduce one additional parameter per contact matrix, denoted $\Omega^k$, bound between zero and one, and infer its distribution from the available hospitalisation data. However, estimating six effectivity parameters is unfeasible because of identifiability issues. We found that the effectivity parameters of public transport and other places could not be identified, most likely because few contacts are made in those places \citep{Mossong2008}. Consequently, the effectivity parameters of public transport, other places and leisure contacts were aggregated to reduce the number of effectivity parameters from six to four. Another interpretation of these effectivity parameters is the degree of correlation between changes of the GCMR indicator and the effective number of social contacts. Thus, an effectivity value of zero indicates the GCMR indicator has no effect on the model dynamics, while an effectivity value of one indicates that changes in the GCMR indicator carry over fully into the model dynamics.
Although the physical interpretation is attractive, the latter interpretation seems more scientifically defensible.\\ During model development, we observed that when strict social measures are taken, the number of effective social contacts becomes smaller than the number of contacts obtained after rescaling with the GCMR indicators and the effectivity parameters. Thus, one additional parameter was introduced to further downscale the number of social contacts when lockdown measures are taken. This parameter was introduced briefly in the main text and is explained further here. The so-called \textit{mentality} parameter $\bm{M}(t)$ was introduced over a two-week period in the social contact model each time lockdown measures were taken (2020-03-15 and 2020-10-19). Once the first lockdown measures were released (2020-05-01 and 2020-06-01), the mentality parameter was gradually eased out of the social contact model over a two-month period. During the model calibration procedure, the value of mentality was inferred as $M_\text{cal} = 0.278^{+0.006}_{-0.009}$. The introduction of the mentality parameter adds a degree of freedom to the model that can be re-estimated when the social context changes in the future. During August 2020, minor manual tweaks had to be made to the mentality in certain provinces in order to adequately fit the second 2020 \textsc{covid}-19{} wave.\\ After rescaling with the GCMR indicators and introducing the effectivity ($\Omega^k$) and mentality ($\bm{M}(t)$) parameters, the combination of pre-pandemic interaction matrices used to model pandemic social contact becomes \begin{equation} \begin{split} \bm{N_\text{c}}(t) &= \Omega^{\text{home}} \bm{N_\text{c}^\text{home}} + \bm{M}(t) \Big \{ \Omega^{\text{schools}} \bm{G^{\text{schools}}}(t) \bm{N_\text{c}^\text{schools}} \\ & + \Omega^\text{work} \bm{G^{\text{work}}}(t) \bm{N_\text{c}^\text{work}} + \Omega^{\text{rest}} \big[ \bm{G^{\text{transport}}}(t) \bm{N_\text{c}^\text{transport}}\\ & + \bm{G^{\text{leisure}}}(t) \bm{N_\text{c}^\text{leisure}} + \bm{G^{\text{other}}}(t) \bm{N_\text{c}^\text{other}} \big] \Big \}, \end{split} \end{equation} where the elements $M^g(t)$ of $\bm{M}(t)$ are almost always identical across provinces, but not necessarily so (see Fig. \ref{fig:mentality_timeseries}). \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{GCM_resulting_timeseries.pdf} \caption{Nationally averaged values of the GCMR indicators ($\bm{G}^k(t)$) used for rescaling the social contact matrices (Eq. \eqref{eq:time-dep_social-contact}). The contact matrices for home and school contacts are not scaled with their respective GCMR indicator (motivated in \citep{Alleman2021}). The contact matrices for the other social environments (workplaces, transport, leisure and \textit{other} places) are scaled with their appropriate GCMR indicator. The model uses such time series for every province.} \label{fig:GCM_resulting_timeseries} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{mentality_timeseries.pdf}\\ \includegraphics[width=\linewidth]{mentality_timeseries_summer2020.pdf} \caption{\textit{Top}: time-dependent mentality factor varying between the values of one and $M_\text{cal} = 0.278^{+0.006}_{-0.009}$. Here, a value of one indicates people behave as if awareness of \textsc{sars}-\textsc{c}o\textsc{v}-2{} is low. The value of 0.278 represents the additional contact multiplier observed during lockdowns due to raised awareness of \textsc{sars}-\textsc{c}o\textsc{v}-2{} and was obtained during model calibration.
The hatched area represents the period in August 2020 where mentality parameters had to be set manually for different provinces. Such ad-hoc changes to the model were required to prevent errors over the summer period from propagating into the second 2020 \textsc{covid}-19{} wave (Oct. 2020). \textit{Bottom}: ad-hoc provincial differences in mentality $M^g(t)$ (close-up of the hatched region). This is the only time at which some $M^g(t)$ differ between different $g$ values.} \label{fig:mentality_timeseries} \end{figure} \clearpage \section{Variants of concern, vaccination, and seasonality} \label{app:VOC_vacc} \subsection{Variants of concern} VOCs are assumed to have three effects on the model dynamics: 1) VOCs are associated with an increase of the transmission coefficients $\bm{\beta}$ compared to the wild-type variant, denoted $K_{\text{inf}}$. To this end, the infectivity parameters $\beta^\text{R}$, $\beta^\text{U}$ and $\beta^\text{M}$ are rescaled with the prevalence-weighted average infectivity increase at time $t$. 2) VOCs can alter the hospital admission propensity of infected individuals compared to the wild-type variant, denoted $K_{\text{hosp}}$. To this end, the hospital admission propensities ($\bm{h}$) are rescaled with the prevalence-weighted average hospital admission propensity gain at every time $t$. 3) Different VOC types are associated with different durations of the latent \textsc{covid}-19{} period $\sigma$. The relevant parameter values are listed in Table \ref{tab:VOC-dependent-variables} and graphically illustrated in Fig. \ref{fig:VOC_prevalence}. All values describe the ``bare'' effects of the VOCs regardless of vaccination -- it should be noted that as the pandemic progresses, it becomes harder and harder to disentangle this bare effect.\\ \begin{table}[!h] \centering \caption{VOC prevalence and VOC-dependent variables: infectivity increase of VOC type $n$ compared to the wild type ($K_{\text{inf},n}$), hospitalisation propensity increase, and duration of the latent period. The values of $K_{\text{inf},n}$ were found during model calibration. Values of $K_{\text{hosp},n}$ and $\sigma_n$ were extracted from \citep{Grint2021, Bager2021, VENETI2022, Hart2022}.} \begin{tabular}{ p{3cm} p{1.3cm} p{1.3cm} p{1.3cm}} \toprule \textbf{Parameter} & wild type & $\alpha$-$\beta$-$\gamma$ & $\delta$ \\ \midrule $K_{\text{inf},n}$ (-) & 1.00 & 1.57 & 1.79 \\ $K_{\text{hosp},n}$ (-) & 1.00 & 1.00 & 1.00 \\ $\sigma_n$ (days) & 4.5 & 4.5 & 3.8 \\ \bottomrule \end{tabular} \label{tab:VOC-dependent-variables} \end{table} The VOC prevalence data (national level) were obtained from \cite{Wenseleers2021}. The increases in infectivity of the $\alpha$-$\beta$-$\gamma$ and $\delta$ VOCs compared to the wild type were found during model calibration. The combination of the $\alpha$-$\beta$-$\gamma$ VOCs was estimated to be 57\% more infectious than the wild type, while the $\delta$ variant was estimated to be 79\% more infectious than the wild type. The combination of the $\alpha$-$\beta$-$\gamma$ VOCs almost certainly increased the hospital admission propensity. For instance, Grint et al. \citep{Grint2021} reported an average increase of 62\%. However, we found that applying such multipliers to the model's hospitalisation propensity did not yield satisfactory results. Hence, for the sake of simplicity, we assume no increase of the hospitalisation propensity.
The $\delta$ variant was shown to increase the hospital admission propensity for unvaccinated individuals by roughly 70\% \citep{Twohig2022, Bager2021}. On the other hand, a Norwegian study found no significant increase in hospital admission propensity \citep{VENETI2022}. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{VOC_prevalence.pdf} \caption{\textit{Top}: Prevalence of the wild-type variant, the $\alpha$-$\beta$-$\gamma$ variants (aggregated), and the $\delta$ variant in Belgium. Solid lines show a logistic fit in addition to raw data for $\alpha$-$\beta$-$\gamma$ (triangles) and $\delta$ (circles) \citep{Wenseleers2021, Sciensano2022}. \textit{Bottom}: Effect of the VOCs on resp. the rescaling of the transmission coefficient $\bm{\beta}(t)$, the rescaling of the hospitalisation propensity $\bm{h}(t)$, and the time-dependent values of the latent disease period $\sigma(t)$, according to Eq. \eqref{eq:beta-from-VOC} and the values in Table \ref{tab:VOC-dependent-variables}.} \label{fig:VOC_prevalence} \end{figure} \subsection{Seasonality} The introduction of seasonality rescales the infectivity parameters $\beta^\text{R}$, $\beta^\text{U}$, $\beta^\text{M}$. The effect of seasonality is incorporated in a cosine function with a period of one year (Eq. \eqref{eq:seasonality}, based on \citep{Liu2021}). Maximum infectivity is assumed on January 1st. The amplitude of the cosine was estimated at $A_\text{s} = 0.30$ during model calibration. Seasonality influences viral transmission in ways considered beyond the scope of this work, hence the simplicity of the seasonal relationship. \subsection{Vaccination} Figs. \ref{fig:vaccination_timeseries_NIS} and \ref{fig:vaccination_timeseries_age} show the percentages of the population, resp. per province and per age class, that have had their first vaccination dose (`partial' vaccination), their second vaccination dose (`full' vaccination), or their booster shot (`boosted'). These time series were taken from Ref. \citep{Sciensano2022} and have been smoothed by an exponential moving average. This procedure incidentally resulted in a two-week delay, which we adopted to represent the onset of vaccine-induced immunity. Table \ref{tab:vaccine_properties} shows the vaccine efficacies with which the \textsc{sars}-\textsc{c}o\textsc{v}-2{}-related susceptibility, infectiousness, and hospitalisation propensity are rescaled in the model. The efficacies $E_{\text{full},n,w}$ are used to calculate the dynamic vaccine waning after full vaccination. \paragraph{Vaccine efficacies} As previously mentioned, Tartof et al. \citep{Tartof2021} demonstrated that, for an individual fully vaccinated with the BNT162b2 (Pfizer) vaccine, protection against hospitalisation wanes at a lower rate than protection against symptoms (a proxy for susceptibility). Similar findings were reported by Braeye et al. \citep{Braeye2022b}. The efficacies $E_{v,n,\text{susc}}$ and $E_{v,n,\text{inf}}$ for the vaccination stages `full' and `boosted' under the $\alpha$-$\beta$-$\gamma$ and $\delta$ VOCs were derived from an updated version of Braeye et al. \citep{Braeye2022a} (informal communication). For the vaccination stage `full', the efficacies 150 days post-vaccination were extracted. From Ref. \citep{Braeye2022b}, the efficacies $E_{\text{full},\delta,\text{hosp}}$ were extracted both 25 and 225 days post-vaccination. It was assumed that the vaccines offer the same protection against hospitalisation under the $\alpha$-$\beta$-$\gamma$ VOCs.
It is assumed that all efficacies under the wild type are the same as under the $\alpha$-$\beta$-$\gamma$ variants. The efficacy reductions reported for the Pfizer vaccine in \citep{Braeye2022a} are assumed to apply to all vaccines and all ages. This simplifying assumption is motivated by the fact that over 72\% of the vaccines administered by the end of the period considered during the model calibration (2021-10-01) were Pfizer's. We further assume that partial vaccination offers half the protection a full vaccination offers. The vaccine efficacies used in this study are summarised in Table \ref{tab:vaccine_properties}. \begin{table}[h] \centering \caption{Efficacies of the vaccines in lowering the susceptibility to \textsc{sars}-\textsc{c}o\textsc{v}-2{}, lowering the infectiousness of \textsc{sars}-\textsc{c}o\textsc{v}-2{}, and lowering the hospitalisation propensity. Partial vaccination is assumed to result in half the efficacy of a full vaccination. The first vaccines were not administered during wild-type VOC dominance, and booster shots were not administered during $\alpha$-$\beta$-$\gamma$ VOC dominance, so these data are omitted as irrelevant. Protection against hospitalisation is retrieved for the $\delta$ VOC from Ref. \citep{Braeye2022b} but assumed to be the same for the $\alpha$-$\beta$-$\gamma$ VOCs. All $E_{\text{none},n}$ are 0.} \begin{tabular}{>{\raggedright\arraybackslash}p{1.8cm} m{1.4cm} m{1.4cm} m{1.4cm} m{1.4cm}} \toprule & $E_{\text{partial},n}$ & $E_{\text{full},n,0}$ & $E_{\text{full},n,w}$ & $E_{\text{booster},n,0}$ \\ \midrule \multicolumn{5}{l}{\textit{Susceptibility}} \\ \quad $\alpha$-$\beta$-$\gamma$ & 0.44 & 0.87 & 0.64 & NA \\ \quad $\delta$ & 0.40 & 0.79 & 0.54 & 0.80 \\ \midrule \multicolumn{5}{l}{\textit{Infectiousness}} \\ \quad $\alpha$-$\beta$-$\gamma$ & 0.31 & 0.62 & 0.43 & NA \\ \quad $\delta$ & 0.19 & 0.38 & 0.25 & 0.34 \\ \midrule \multicolumn{5}{l}{\textit{Hospitalisation}} \\ \quad $\alpha$-$\beta$-$\gamma$ & 0.47 & 0.93 & 0.81 & NA\\ \quad $\delta$ & 0.47 & 0.93 & 0.81 & 0.93 \\ \bottomrule \end{tabular} \label{tab:vaccine_properties} \end{table} \paragraph{Exponential vaccine waning} Vaccine waning is incorporated by dynamically altering the average vaccine efficacies after full vaccination. Waning after partial or boosted vaccination was not included because the data were not readily available at the time of writing. Mathematically, we rely on 1) the past vaccine incidence, available per province, age group and vaccination stage \citep{Sciensano2022}, 2) the vaccine efficacies for every protective mechanism and every VOC, both 25 and 175 days after vaccination \citep{Braeye2022a}, and 3) the assumption that waning occurs exponentially, asymptotically approaching a null efficacy for large $t$. The latter assumption is expressed by \begin{equation} \widetilde{E}_{\text{full},n,\text{susc}}(t) = E_{\text{full},n,0,\text{susc}} \exp\left(-t/\tau\right), \end{equation} where \begin{equation} \tau = \frac{150\text{ d}}{\ln\left(\dfrac{E_{\text{full},n,0,\text{susc}}}{E_{\text{full},n,w,\text{susc}}}\right)} > 0, \end{equation} and similarly for $\tilde{E}_{\text{full},n,\text{inf}}(t)$ and $\tilde{E}_{\text{full},n,\text{hosp}}(t)$ (see Fig. \ref{fig:effect_of_waning_delayed}).
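As a numerical illustration of the waning relations above, consider the susceptibility efficacy under the $\delta$ VOC from Table \ref{tab:vaccine_properties} ($E_{\text{full},\delta,0,\text{susc}} = 0.79$, $E_{\text{full},\delta,w,\text{susc}} = 0.54$); the following minimal Python sketch -- an illustration, not our production code -- evaluates $\tau$ and the waned efficacy.
\begin{verbatim}
# Sketch of exponential vaccine waning (susceptibility, delta VOC).
# E_0 and E_w are the efficacies ~25 and ~175 days post-vaccination.
import numpy as np

E_0, E_w = 0.79, 0.54
tau = 150.0 / np.log(E_0 / E_w)   # decay time in days, here ~394 d

def waned_efficacy(t_days):
    """Efficacy t_days after (full) vaccination, decaying towards zero."""
    return E_0 * np.exp(-np.asarray(t_days, dtype=float) / tau)

print(waned_efficacy([0, 150, 365]))   # -> approx. [0.79  0.54  0.313]
\end{verbatim}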
These values are used in a weighted sum to find the current average vaccine efficacy in province $g$ and age class $i$, as follows: \begin{equation} \bm{E}_{\text{full},n,\text{susc}}(t) = \frac{1}{\int_{-\infty}^t\tilde{\bm{\phi}}_{v}(t')dt'}\int\limits_{-\infty}^t\tilde{\bm{\phi}}_v(t')\tilde{E}_{\text{full},n,\text{susc}}(t-t')dt', \label{eq:effective_rescaling_param} \end{equation} where $$\displaystyle\int\limits_{t}^{t+\epsilon}\widetilde{\phi}^g_{v,i}(t')dt'$$ is the fraction of the total population in province $g$ and age class $i$ \textit{entering} vaccination state $v$ between times $t$ and $t+\epsilon$, i.e.\ the vaccination incidence over $[t, t+\epsilon]$. So, $\widetilde{\phi}_{v,i}^g(t)$ is the incidence rather than the cumulative data. Further note that this does not equal $\phi_{v,i}^g(t)$, because the latter also takes into account subjects \textit{leaving} the vaccination stage $v$ due to a new vaccination. Consequently, we may write \begin{equation} \tilde{\bm{\phi}}_v(t) = \max\left(0, \frac{d\bm{\phi}_v(t)}{dt}\right). \end{equation} Hence, when a large number of newly vaccinated individuals enters the metapopulation, the average $\bm{E}_{\text{full},n}(t)$ increases. Also note that the efficacy $\bm{E}_{\text{full},n,\text{susc}}$ now has a spatial and an age dimension. This implies that the transmission coefficient becomes \begin{equation} \bar{\beta}^g_{vw} \rightarrow \bar{\bm{\beta}}^{g}_{vw} \text{ with elements } \bar{\beta}^{gh}_{ij,vw} = \bar{\beta}^g \sum_n \alpha_n(t)(1-E_{v,n,\text{susc},i}^{g})(1-E_{w,n,\text{inf},j}^{h}), \end{equation} where the indices $g$, $i$ and $v$ indicate the susceptible person, and the indices $h$, $j$ and $w$ indicate the infectious person. The vaccine efficacies with waning are computed once beforehand. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{effect_of_waning_delayed.pdf} \caption{Evolution of the vaccine efficacies associated with infectiousness $E_\text{inf}(t)$, susceptibility $E_\text{susc}(t)$, and hospitalisation propensity $E_\text{hosp}(t)$ under the $\alpha$-$\beta$-$\gamma$ VOCs, over a two-year period. The observations extracted from the literature (see Table \ref{tab:vaccine_properties}) were used to inform the half-life of the exponential decay function.} \label{fig:effect_of_waning_delayed} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{vaccination_timeseries_NIS.pdf} \caption{Vaccination time series in terms of the fraction (\%) of the total population in the province (indicated by NIS code, see Table \ref{tab:class-NIS-name}), aggregated over all ages. From top to bottom: first dose only (all vaccine types except Janssen), full dose only (second dose and Janssen vaccine), booster shot.} \label{fig:vaccination_timeseries_NIS} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{vaccination_timeseries_age.pdf} \caption{Vaccination time series in terms of the fraction (\%) of the total number of individuals in the age group, aggregated over all provinces.
From top to bottom: first dose only (all vaccine types except Janssen), full dose only (second dose and Janssen vaccine), booster shot.} \label{fig:vaccination_timeseries_age} \end{figure} \clearpage \newpage \section{Model equations, parameters, and assumptions} \label{app:model-equations-and-model-parameters} \subsection{Model equations and parameters} The model is governed by a first-order, ordinary coupled differential equation per model compartment, per age class $i$, per province $g$, and per vaccination state $v$:
\begin{align*}
\dot{S}_{i,v}^g &= - \sum\limits_{h=1}^G P^{gh} S_{i,v}^g \sum\limits_{w} \sum\limits_{j=1}^{N} \bar{\beta}^{gh}_{ij,vw} \bar{N}_{\text{c},ij}^{gh} \dfrac{(I_\text{presy})_{j,w,\text{eff}}^h + (I_\text{asy})_{j,w,\text{eff}}^h}{T_{j,w,\text{eff}}^g} + \zeta R^g_{i,v}, \\
\dot{E}_{i,v}^g &= \sum\limits_{h=1}^G P^{gh} S_{i,v}^g \sum\limits_{w} \sum\limits_{j=1}^{N} \bar{\beta}^{gh}_{ij,vw} \bar{N}_{\text{c},ij}^{gh} \dfrac{(I_\text{presy})_{j,w,\text{eff}}^h + (I_\text{asy})_{j,w,\text{eff}}^h}{T_{j,w,\text{eff}}^g} - \frac{1}{\sigma} E_{i,v}^g, \\
(\dot{I}_\text{presy})_{i,v}^g &= \frac{1}{\sigma} E_{i,v}^g - \frac{1}{\omega} (I_\text{presy})_{i,v}^g,\\
(\dot{I}_\text{asy})_{i,v}^g &= \frac{a_i}{\omega} (I_\text{presy})_{i,v}^g - \frac{1}{d_{a}} (I_\text{asy})_{i,v}^g,\\
(\dot{Q}_\text{mild})_{i,v}^g &= \frac{1-a_i}{\omega} (I_\text{presy})_{i,v}^g - \left( \frac{1-\bar{h}_{i,v}^g}{d_m} + \frac{\bar{h}_{i,v}^g}{d_\text{hospital}} \right) (Q_\text{mild})_{i,v}^g, \\
(\dot{Q}_\text{hosp})_{i,v}^g &= \frac{\bar{h}_{i,v}^g c_i}{d_\text{hospital}} (Q_\text{mild})_{i,v}^g - \frac{1-m_{C,i}}{d_{C,R,i}} (Q_\text{hosp})_{i,v}^g - \frac{m_{C,i}}{d_{C,D,i}} (Q_\text{hosp})_{i,v}^g,\\
(\dot{Q}_\text{ICU})_{i,v}^g &= \frac{\bar{h}_{i,v}^g(1-c_i)}{d_\text{hospital}}(Q_\text{mild})_{i,v}^g - \frac{1-m_{\text{ICU},i}}{d_{\text{ICU},R,i} - d_{\text{ICU},\text{rec},i}}(Q_\text{ICU})_{i,v}^g \\
& \qquad - \frac{m_{\text{ICU},i}}{d_{\text{ICU},D,i}} (Q_\text{ICU})_{i,v}^g,\\
(\dot{Q}_\text{ICU,rec})_{i,v}^g &= \frac{1-m_{\text{ICU},i}}{d_{\text{ICU},R,i} - d_{\text{ICU},\text{rec},i}} (Q_\text{ICU})_{i,v}^g - \frac{1}{d_\text{ICU,rec}} (Q_\text{ICU,rec})_{i,v}^g,\\
\dot{D}_{i,v}^g &= \frac{m_{\text{ICU},i}}{d_{\text{ICU},D,i}} (Q_\text{ICU})_{i,v}^g + \frac{m_{C,i}}{d_{C,D,i}} (Q_\text{hosp})_{i,v}^g,\\
\dot{R}_{i,v}^g &= \frac{1}{d_a} (I_\text{asy})_{i,v}^g + \frac{1-\bar{h}_{i,v}^g}{d_m} (Q_\text{mild})_{i,v}^g + \frac{1-m_{C,i}}{d_{C,R,i}} (Q_\text{hosp})_{i,v}^g \\
& \qquad + \frac{1}{d_{\text{ICU,rec}}} (Q_\text{ICU,rec})_{i,v}^g - \zeta R^g_{i,v},
\label{eq:all_ODEs_spatial}
\end{align*}
which results in a system of $10 \times 10 \times 11 \times 4 = 4400$ coupled differential equations (10 compartments, 10 age classes, 11 provinces and 4 vaccination states). All variables representing a model compartment are time-dependent. The social contact matrix $\bm{\bar{N}}_\text{c}(t)$, the mobility matrix $\bm{P}(t)$, and (therefore) the effective populations $\bm{I}_\text{presy,eff}(t)$, $\bm{I}_\text{asy,eff}(t)$ and $\bm{T}_\text{eff}(t)$ are time-dependent as well (see Subsection \ref{subsec:spatial-extension} and Appendices \ref{app:social_contact} and \ref{app:proximus-mobility-data}). Additionally, the introduction of VOCs, seasonality and vaccination makes the parameters $\bar{\bm{\beta}}(t)$, $\bar{\bm{h}}(t)$, and $\sigma(t)$ time-dependent as well (see Subsection \ref{subsec:voc_and_vac} and Appendix \ref{app:VOC_vacc}). For simplicity, this explicit time dependence and its associated parameters are not shown in the equations above; all information can be found in the relevant appendices.
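To make the structure of these equations concrete, the following deliberately reduced Python sketch integrates the infection front of a single stratum (one province, one age class, one vaccination state) with the hospital branch and all spatial, age and vaccination coupling dropped; the parameter values are illustrative and this is not the model code itself.
\begin{verbatim}
# Deliberately reduced sketch of the compartmental core for ONE stratum
# (one province, one age class, one vaccination state). The hospital
# branch is dropped, so mildly infected subjects simply recover.
import numpy as np
from scipy.integrate import solve_ivp

beta, Nc = 0.041, 11.0          # transmission coefficient, contacts/day
sigma, omega = 4.5, 0.7         # latent and presymptomatic periods (d)
a, d_a, d_m = 0.57, 7.0, 7.0    # asymptomatic fraction, durations (d)
zeta = np.log(2) / 365.0        # seroreversion rate (1/d)
T = 11.5e6                      # total population

def rhs(t, y):
    S, E, Ip, Ia, Qm, R = y
    force = beta * Nc * (Ip + Ia) / T          # force of infection
    return [-force * S + zeta * R,
            force * S - E / sigma,
            E / sigma - Ip / omega,
            a * Ip / omega - Ia / d_a,
            (1 - a) * Ip / omega - Qm / d_m,
            Ia / d_a + Qm / d_m - zeta * R]

sol = solve_ivp(rhs, (0.0, 180.0), [T - 100.0, 100.0, 0, 0, 0, 0])
\end{verbatim}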
The meaning and value of the parameters are listed in Table \ref{tab:SEIQRD_params} and the associated Tables \ref{tab:ageDistributionAsymptomatic}, \ref{tab:results_hospital_age} and \ref{tab:results_hospital_days}.\\ \begin{table}[!h] \centering \caption{Fraction of asymptomatic subjects $a_i$ (based on \citep{Wu2020b}), and hospitalisation propensity $h_i$ for symptomatic infections per age class (inferred, see \citep{Alleman2021}). The hospitalisation propensity $\bm{h}$ is dynamically and spatially rescaled in the model to account for the combined effects of VOCs and vaccination. The baseline values without VOCs or vaccines are shown here.} \begin{tabular}{ p{3cm} p{1.5cm} p{1.5cm} } \toprule \textbf{Age class $i$ (years)} & $a_i$ (\%) & $h_i$ (\%)\\ \midrule $[0,12[$ & 98.3 & 1.5 \\ $[12,18[$ & 97.8 & 2.0 \\ $[18,25[$ & 90.4 & 2.7 \\ $[25,35[$ & 78.5 & 3.0 \\ $[35,45[$ & 64.2 & 3.0 \\ $[45,55[$ & 48.9 & 4.5 \\ $[55,65[$ & 26.8 & 10.3 \\ $[65,75[$ & 10.3 & 24.3 \\ $[75,85[$ & 4.4 & 55.7 \\ $[85,\infty[$ & 1.2 & 80.0 \\ \midrule \textbf{Population average} & 57.0 & 11.2 \\ \bottomrule \end{tabular} \label{tab:ageDistributionAsymptomatic} \end{table} \begin{table}[!h] \centering \caption{Average fraction $c_i$ of hospitalised subjects admitted in a cohort ward (as opposed to an ICU), average mortality in cohort wards ($m_{\text{C},i}$) and average mortality in ICU ($m_{\text{ICU},i}$) per age class. These estimates were obtained by analysing a dataset of \num{22 136} patients in all 133 Belgian hospitals (see \cite{Alleman2021} for details).} \begin{tabular}{ p{3cm} p{1.5cm} p{1.5cm} p{1.7cm} } \toprule \textbf{Age class $i$ (years)} & $c_i$ \textbf{(\%)} & $m_{\text{C},i}$ \textbf{(\%)} & $m_{\text{ICU},i}$ \textbf{(\%)} \\ \midrule $[0,12[$ & 97.4 & 0.0 & 0.0 \\ $[12,18[$ & 88.8 & 0.0 & 9.0 \\ $[18,25[$ & 90.3 & 0.4 & 17.4 \\ $[25,35[$ & 91.5 & 1.0 & 11.8 \\ $[35,45[$ & 87.1 & 1.5 & 16.0 \\ $[45,55[$ & 83.0 & 2.7 & 19.3 \\ $[55,65[$ & 78.3 & 5.1 & 35.4 \\ $[65,75[$ & 76.3 & 11.4 & 51.6 \\ $[75,85[$ & 83.6 & 26.4 & 70.0 \\ $[85,\infty[$ & 95.3 & 42.3 & 78.6 \\ \midrule \textbf{Population average} & 83.8 & 16.6 & 46.4 \\ \bottomrule \end{tabular} \label{tab:results_hospital_age} \end{table} \begin{landscape} \thispagestyle{empty} \begin{table} \centering \caption{\small{Parameters used for calculating the dynamics between the various SEIQRD compartments shown in Fig. \ref{fig:flowchart_SEIQRD}. Note that all symbols in boldface are non-scalar (vector or matrix), and the values of their elements are provided in separate tables.}} \begin{tabular}{p{1.5cm}p{9cm}lp{2cm}} \textbf{Symbol} & \textbf{Parameter} & \textbf{Transition} & \textbf{Value} \\ \toprule $\bm{a}$ & Fraction of infected subjects remaining asymptomatic & $I_\text{presy} \rightarrow I_\text{asy}$ & Table \ref{tab:ageDistributionAsymptomatic}, \citep{Wu2020b} \\ $\bm{h}(t)$ & Fraction of mildly symptomatic subjects requiring hospitalisation. Time-dependent due to VOCs and vaccination.
& $Q_\text{mild} \rightarrow Q_\text{hosp} \text{ or } Q_\text{ICU}$ & Table \ref{tab:ageDistributionAsymptomatic}, \textit{inferred}, see \citep{Alleman2021}\\ $\bm{c}$ & Fraction of hospitalisations admitted in regular cohort hospital ward & $Q_\text{mild} \rightarrow Q_\text{hosp}$ & Table \ref{tab:results_hospital_age}, \citep{Alleman2021}\\ $\bm{m}_C$ & Mortality of patients in a cohort hospital ward & $Q_\text{hosp} \rightarrow D$ & Table \ref{tab:results_hospital_age}, \citep{Alleman2021} \\ $\bm{m}_\text{ICU}$ & Mortality of patients in an IC unit & $Q_\text{ICU} \rightarrow D$ & Table \ref{tab:results_hospital_age}, \citep{Alleman2021} \\ \midrule $\bm{d}_{C,R}$ & Length-of-stay in hospital cohort ward (outcome: recovered) & $Q_\text{hosp} \rightarrow R$ & Table \ref{tab:results_hospital_days}, \citep{Alleman2021} \\ $\bm{d}_{C,D}$ & Length-of-stay in hospital cohort ward (outcome: deceased) & $Q_\text{hosp} \rightarrow D$ & Table \ref{tab:results_hospital_days}, \citep{Alleman2021} \\ $\bm{d}_{\text{ICU},R}$ & Length-of-stay in an IC unit (outcome: recovered) & $Q_\text{ICU} \rightarrow Q_{\text{ICU, rec}}$ & Table \ref{tab:results_hospital_days}, \citep{Alleman2021} \\ $\bm{d}_{\text{ICU},D}$ & Length-of-stay in an IC unit (outcome: deceased) & $Q_\text{ICU} \rightarrow D$ & Table \ref{tab:results_hospital_days}, \citep{Alleman2021} \\ $\bm{d}_{\text{ICU},\text{rec}}$ & Average recovery stay in a cohort ward after ICU & $Q_{\text{ICU,rec}} \rightarrow R$ & Table \ref{tab:results_hospital_days}, \citep{Alleman2021} \\ \midrule $d_a$ & Average duration of asymptomatic infection & $I_\text{asy} \rightarrow R$ & 7.0 d, \textit{assumed}\\ $d_m$ & Average duration of mild infection before recovery & $Q_\text{mild} \rightarrow R$ & 7.0 d, \textit{assumed} \\ $d_\text{hospital}$ & Average duration between symptom onset and hospitalisation & $Q_\text{mild} \rightarrow Q_\text{hosp} \text{ or } Q_\text{ICU}$ & 6.4 d, \citep{Alleman2021} \\ $\sigma(t)$ & Average duration of latent period. Time-dependent due to VOC prevalence. & $E \rightarrow I_\text{presy}$ & Table \ref{tab:VOC-dependent-variables}, \cite{Hart2022} \\ $\omega$ & Average duration of presymptomatic infectious period & $I_\text{presy} \rightarrow I_\text{asy} \text{ or } Q_\text{mild}$ & 0.7 d, \citep{Wei2020, He2020} \\ $\zeta$ & Average seroreversion rate & $R \rightarrow S$ & $\ln(2)/365~\text{d}^{-1}$, \textit{assumed} \\ \midrule $\bm{\beta}(t)$ & Probability of infection upon contact with an infectious individual (if the infectee is 100\% susceptible), elements $\beta^g$ and three degrees of freedom $\beta^\text{R}, \beta^\text{U}, \beta^\text{M}$. Time-dependent due to seasonality, VOC prevalence, and vaccination. & $S \rightarrow E$ & \textit{inferred} \\ \bottomrule \end{tabular} \label{tab:SEIQRD_params} \end{table} \end{landscape} \begin{table}[!h] \centering \caption{Hospital length-of-stay in a cohort ward ($C$) or intensive care unit (ICU) in case of recovery or death. NA denotes that no deaths were recorded in that particular age class.
These estimates were obtained by analysing a dataset of \num{22136} patients in all 133 Belgian hospitals (see \cite{Alleman2021} for details).} \begin{tabular}{ p{1.6cm} p{1.2cm} p{1.2cm} p{1.2cm} p{1.2cm} p{1.2cm}} \toprule \textbf{Age class $i$ (years)} & $d_{C,R,i}$ (days) & $d_{C,D,i}$ (days) & $d_{\text{ICU},R,i}$ (days) & $d_{\text{ICU},D,i}$ (days) & $d_{\text{ICU},\text{rec},i}$ (days)\\ \midrule $[0,12[$ & 3.5 & NA & 5.9 & NA & 3.0 \\ $[12,18[$ & 6.8 & NA & 3.2 & 16.0 & 4.0\\ $[18,25[$ & 5.7 & 2.0 & 5.3 & 3.0 & 4.0 \\ $[25,35[$ & 4.8 & 8.1 & 9.3 & 12.6 & 4.5 \\ $[35,45[$ & 5.9 & 6.0 & 10.9 & 16.3 & 5.0\\ $[45,55[$ & 6.9 & 8.8 & 11.4 & 20.6 & 6.0 \\ $[55,65[$ & 8.5 & 8.7 & 12.7 & 17.3 & 6.0 \\ $[65,75[$ & 11.2 & 13.2 & 13.8 & 16.3 & 8.0 \\ $[75,85[$ & 15.2 & 12.1 & 11.9 & 13.6 & 11.0 \\ $[85,\infty[$ & 18.9 & 11.8 & 5.0 & 9.1 & 10.0\\ \midrule \textbf{Population average} & 10.8 & 11.8 & 12.0 & 15.2 & 5.6\\ \bottomrule \end{tabular} \label{tab:results_hospital_days} \end{table} \subsection{Model assumptions and simplifications} Here, we list the main assumptions and simplifications underlying our model. While we consider that these do not alter the paper's conclusions, we choose to explicitly mention them below as good scientific practice.\\ \begin{enumerate} \item Cross-border mobility is not included in this model; the mobility matrix $\bm{P}$ is not age-stratified, and the elements $P^{gh}(t)$ were estimated whenever no data were available at time $t$ (see Appendix \ref{app:proximus-mobility-data}).\\ \item We assume that, on average, one only has work-related contacts in visited provinces, whereas all other types of contact are possible within the home province.\\ \item The GCMR indicators, which are used to inform the degree of social interaction in the model, are not age-stratified. The GCMR indicators thus present a more coarse-grained alternative to social-epidemiological contact studies under lockdown measures.\\ \item The average vaccine efficacies and information on vaccine waning used in the model were those of the Pfizer vaccine. The model does not explicitly distinguish between the different vaccines.\\ \item We aggregate the $\alpha$, $\beta$ and $\gamma$ VOCs because the effects of their epidemiological properties are comparable in our model, and the aggregation decreases the overall complexity.\\ \item Our models do not include age-specific increases in transmissibility and disease severity for the VOCs. The emergence of the variants was implemented at the national level; thus, the geographic spread of the $\alpha$-$\beta$-$\gamma$ and $\delta$ variants was not included in the simulations.\\ \item Implementing seasonality using a cosine function is a high-level mathematical abstraction of several factors such as, but not limited to, the effects of humidity and temperature on viral survival in the environment.\\ \item In order for the negative binomial loglikelihood function to apply to all $G \times n$ data points, the data points should strictly speaking be independent of each other, which they are not.\\ \item The model does not explicitly account for testing and tracing. These effects are implicitly accounted for in the calibrated parameters, however.\\ \item The model is based on ordinary differential equations (ODEs) and is thus deterministic in nature.
This implies that epidemiological chain extinction is not possible and thus, at low \textsc{sars}-\textsc{c}o\textsc{v}-2{} prevalences, the model may overpredict the number of observed daily hospitalisations.\\ \item Raw vaccination data are only communicated for minors (0-17 years), without a distinction between the 0-12 and 12-17 age groups. In our current implementation, all vaccinations are distributed between 0-12 and 12-17 year olds based on demographics. \\ \item A number of assumptions are made when implementing vaccination into the model. In \eqref{eq:vaccination-update-metapopulation} it is assumed that vaccinations are given homogeneously to subjects in all vaccine-eligible compartments, while e.g. in reality people that had only recently recovered were not immediately invited for vaccination. It is assumed that vaccinated people have the same number of contacts and the same mobility patterns as non-vaccinated people, and that they on average come into contact with the same fraction of vaccinated and non-vaccinated people as the global average. \end{enumerate} \newpage \section{Model calibration}\label{app:calibration} Eleven model parameters are considered to be a priori unknown and must be calibrated using the available data. Here we elaborate on the calibration procedure and the resulting parameter values and uncertainties. \subsection{Statistical model} Given a time series $\bm{x}^g$ for every province $g \in \{1, ..., G\}$ with $n$ observations $x_t^g$ for $t \in \{1, ..., n\}$ corresponding to times $\{t_1, ..., t_n\}$, any choice of model parameters $\bm{\theta}$ combined with an initial condition (IC) will produce a continuous time series $\tilde{x}^g(t)$ for every province $g$. This time series may be sampled to produce a set of model-based values $\{\tilde{x}^g(t_1), ..., \tilde{x}^g(t_n)\}$ that we will denote as $\{\tilde{x}^g_1, ..., \tilde{x}^g_n\}$. The aim is to find the model setup for which it is most likely that the $\bm{x}^g$ are observations of the modelled time series $\bm{\tilde{x}}^g$, given a particular error model.\\ We have estimated the variance in all provincial time series as a function of their rolling exponential mean. Next, the most appropriate statistical model was chosen by fitting the mean-variance relationships of several candidate models -- the Gaussian model ($\sigma^2 = c $), the Poisson model ($\sigma^2 = \mu$), the quasi-Poisson model ($\sigma^2 = \alpha \mu$) and the negative binomial model ($\sigma^2 = \mu + \alpha \mu^2$) -- and using the AIC to determine which model fits best. The negative binomial model best described the variance in the data in all but two provinces, in which the quasi-Poisson model had the lowest AIC score. However, for the sake of simplicity, it was assumed that the variance of all eleven provincial time series is described by the negative binomial model. In this way, we assume that a single observation $x^g_t$ is the result of a counting experiment with an additional unknown error for every province $g$, captured by the estimated overdispersion parameter $\alpha^g$ per province $g$ \cite{cameron1998, Chan2021} (see Table \ref{tab:overdispersions}), the values of which were obtained by fitting the negative binomial mean-variance relationship to the estimated mean-variance pairs. In general, the overdispersion in the data becomes larger as the population of a province decreases. A minimal sketch of this fitting procedure is given below.
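This sketch assumes that the daily hospitalisation counts of one province are available as a pandas Series; the simple rolling window (we used a rolling exponential mean in practice) and the function name are illustrative assumptions.
\begin{verbatim}
# Sketch: estimate the provincial overdispersion alpha^g by fitting the
# negative binomial mean-variance relation  sigma^2 = mu + alpha*mu^2
# to rolling-window estimates of the mean and variance of the series.
import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

def estimate_overdispersion(series: pd.Series, window: int = 14) -> float:
    mu = series.rolling(window, center=True).mean()
    var = series.rolling(window, center=True).var()
    ok = mu.notna() & var.notna() & (mu > 0)       # drop edge/degenerate windows
    popt, _ = curve_fit(lambda m, alpha: m + alpha * m**2,
                        mu[ok].values, var[ok].values,
                        p0=[0.03], bounds=(0.0, np.inf))
    return float(popt[0])
\end{verbatim}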
The associated negative binomial likelihood for every observation $t$ is \begin{equation} \mathcal{L}( \tilde{x}_t^g \vert x_t^g ) = \frac{\Gamma(x_t^g + 1/\alpha^g)}{x_t^g!\Gamma(1/\alpha^g)} \left( \frac{1/\alpha^g}{1/\alpha^g + \tilde{x}^g_t} \right)^{1/\alpha^g} \left( \frac{\tilde{x}^g_t}{1/\alpha^g + \tilde{x}^g_t} \right)^{x^g_t}, \end{equation} with $\Gamma$ the gamma function. The negative binomial distribution has mean value $\tilde{x}^g_t$ and variance $\tilde{x}^g_t(1 + \alpha^g\tilde{x}^g_t)$; it is maximised for $\tilde{x}^g_t = x^g_t$ and reduces to the Poisson likelihood for $\alpha^g \rightarrow 0$. When adding more observations over time and regions, the individual likelihood functions can be multiplied: \begin{equation*} \mathcal{L}( \bm{\tilde{x}} \vert \bm{x} ) = \prod_{g=1}^G\prod_{t=1}^n \mathcal{L}( \tilde{x}_t^g \vert x_t^g ). \end{equation*} Again, this value $\mathcal{L}( \bm{\tilde{x}} \vert \bm{x} )$ is maximised if $\forall g,t: \tilde{x}_t^g = x_t^g$, but this is generally not possible: the values $\tilde{x}_t^g$ must be samples of the simulated local time series $\tilde{x}^g(t)$ for particular $\bm{\theta}$ values. Since the logarithmic function is monotonically increasing, the maximum value of $\mathcal{L}(\bm{\tilde{x}} \vert \bm{x})$ occurs at the same location in parameter space as that of $\log \mathcal{L}(\bm{\tilde{x}} \vert \bm{x})$, so we may as well consider: \begin{multline*} \log \mathcal{L}(\bm{\tilde{x}} \vert \bm{x}) = \sum_{g=1}^G\sum_{t=1}^n \left( \log\left[\frac{\Gamma(x^g_t + 1/\alpha^g)}{\Gamma(1/\alpha^g)}\right] + 1/\alpha^g\log\left[ \frac{1/\alpha^g}{1/\alpha^g + \tilde{x}^g_t} \right] \right. \\ \left.+ x^g_t\log\left[ \frac{\tilde{x}^g_t}{1/\alpha^g + \tilde{x}^g_t} \right] - \log (x^g_t !)\right). \label{eq:calibration_loglikelihood_complete} \end{multline*} The result is the loglikelihood in Eq. \eqref{eq:calibration_loglikelihood}. The parameter choice $\bm{\theta} = \bm{\hat{\theta}}$ that maximises Eq. \eqref{eq:calibration_loglikelihood} for given values of $\alpha^g$ is considered the `best-fit' choice, and a large collection of parameter vectors sampled around it by the MCMC makes up the posterior. The posterior distributions resulting from the calibration MCMC also provide a quantitative measure of the calibrated values' uncertainty intervals \citep{emcee2013}, which together with the overdispersion values ($\alpha^g$) determine the uncertainty on the simulated time series. Note that large $\tilde{x}^g_t$ and $x^g_t$ values will contribute more to the total sum in Eq. \eqref{eq:calibration_loglikelihood} than small values, which means that the time series of large provinces will have a larger weight in the overall sum. This effect is further amplified by the fact that less densely populated provinces generally have noisier data and thus larger overdispersion factors $\alpha^g$. In our calibration procedure, we use three sources of data and thus optimise the weighted sum of three such loglikelihoods, \begin{equation*} \log \mathcal{L}(\bm{\tilde{x}}_{H_{\text{in}}} \vert \bm{x}_{H_{\text{in}}}) + \epsilon[\log \mathcal{L}(\bm{\tilde{x}}_R \vert \bm{x}_{R,\text{Herzog}}) + \log \mathcal{L}(\bm{\tilde{x}}_R \vert \bm{x}_{R,\text{Sciensano}})], \end{equation*} where the weighting factor $\epsilon$ is fixed at $10^{-4}$ and was found through trial and error. The time series $\bm{\tilde{x}}_{H_{\text{in}}}$ and $\bm{\tilde{x}}_R$ correspond to the simulated daily new hospitalisations and the total number of recovered subjects, respectively. The observed time series are $\bm{x}_{H_{\text{in}}}$, $\bm{x}_{R,\text{Herzog}}$ and $\bm{x}_{R,\text{Sciensano}}$: the observed daily new hospitalisations per province \citep{Sciensano2020}, national seroprevalence data from general practitioners by Herzog et al. \citep{Herzog2020}, and national seroprevalence data from the Red Cross by Sciensano \citep{Sciensano2020}, respectively (see Appendix \ref{app:sciensano}). A sketch of this loglikelihood is given below.
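For concreteness, the negative binomial loglikelihood of a single provincial time series can be written in a few lines of Python using \texttt{scipy.special.gammaln} (the log-gamma function); this is a hedged sketch, with names and array layout of our own choosing rather than the actual calibration code.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def negbin_loglikelihood(x_model, x_data, alpha):
    """Loglikelihood of observed counts x_data given the simulated
    series x_model, with overdispersion parameter alpha (> 0)."""
    x_model = np.asarray(x_model, dtype=float)
    x_data = np.asarray(x_data, dtype=float)
    r = 1.0 / alpha  # inverse overdispersion
    return np.sum(
        gammaln(x_data + r) - gammaln(r) - gammaln(x_data + 1.0)
        + r * np.log(r / (r + x_model))
        + x_data * np.log(x_model / (r + x_model))
    )

# Total objective: sum over provinces with province-specific alpha^g,
# plus the epsilon-weighted (epsilon = 1e-4) seroprevalence terms.
\end{verbatim}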
\begin{table}[h] \centering \caption{Values per province of the inferred overdispersion parameter of the negative binomial distribution associated with the time series of daily \textsc{covid}-19{} hospitalisations, used in the loglikelihood function \eqref{eq:calibration_loglikelihood}. The population-size-weighted average overdispersion coefficient of 0.034 was used for all simulations presented in this work.} \begin{tabular}{p{2.2cm}p{1.1cm}p{2.3cm}p{1.1cm}p{1.8cm}p{.5cm}} \toprule \textbf{Province} & $\alpha^g$ & \textbf{Province} & $\alpha^g$ & \textbf{Province} & $\alpha^g$ \\ \midrule Antwerpen & 0.031 & West-Vlaanderen & 0.041 & Limburg & 0.060 \\ Vlaams-Brabant & 0.035 & Oost-Vlaanderen & 0.027 & Luxembourg & 0.003 \\ Brabant Wallon & 0.059 & Hainaut & 0.029 & Namur & 0.007 \\ Brussels & 0.037 & Li\`ege & 0.039 & & \\ \bottomrule \end{tabular} \label{tab:overdispersions} \end{table} \subsection{Results of model calibration} Calibrated values of all a priori unknown model parameters, including their interpretation, are listed in Table \ref{tab:calibration_parameters}. The posterior distributions of the estimated parameters and their potential correlations are shown in Fig. \ref{fig:full-calibration-corner-plot}. Simulations of the daily number of new hospitalisations for every province are shown in Figs \ref{fig:provincial-complete-model-fit-0} and \ref{fig:provincial-complete-model-fit-1}. The negligible difference in goodness-of-fit between the spatially explicit and the national models is demonstrated in Fig. \ref{fig:RMSE-fit-boxplot}. \begin{table}[!h] \centering \caption{All calibrated parameters in the spatially explicit SEIQRD model, with their physical interpretation and the equation that shows their mathematical definition. The values and confidence intervals of these parameters are determined in the MCMC procedure constructed around the loglikelihood function given by Eq. \eqref{eq:calibration_loglikelihood}.} \begin{tabular}{p{1.2cm}>{\raggedright\arraybackslash}p{4.5cm}p{1.1cm}lp{1cm}} \toprule \textbf{Param.} & \textbf{Interpretation} & \textbf{Eq.} & \textbf{Value} & \textbf{Error}\\ \midrule $\beta^\text{R}$ & Transmission coefficient associated with rural provinces. & Eq. \eqref{eq:beta_spatially_stratified} & $0.040$ & ${}^{+0.002}_{-0.002}$ \\ $\beta^\text{U}$ & Transmission coefficient associated with urban provinces. & Eq. \eqref{eq:beta_spatially_stratified} & $0.041$ & ${}^{+0.002}_{-0.002}$ \\ $\beta^\text{M}$ & Transmission coefficient associated with metropolitan provinces. & Eq. \eqref{eq:beta_spatially_stratified} & $0.053$ & ${}^{+0.003}_{-0.003}$\\ $\Omega^\text{home}$ & Effectivity parameter in a home environment. & Eq. \eqref{eq:time-dependent-contact-matrix} & $0.16$ & ${}^{+0.01}_{-0.01}$ \\ $\Omega^\text{school}$ & Effectivity parameter in a school environment. & Eq. \eqref{eq:time-dependent-contact-matrix} & $0.02$ & ${}^{+0.01}_{-0.01}$ \\ $\Omega^\text{work}$ & Effectivity parameter in a work environment. & Eq.
\eqref{eq:time-dependent-contact-matrix} & $0.69$ & ${}^{+0.04}_{-0.05}$ \\ $\Omega^\text{rest}$ & Effectivity parameter in transport, leisure and other environments. & Eq. \eqref{eq:time-dependent-contact-matrix} & $0.13$ & ${}^{+0.02}_{-0.01}$ \\ $K_{\text{inf},\alpha\beta\gamma}$ & Increased infectivity of the $\alpha$-$\beta$-$\gamma$ VOCs compared to the wild type for non-vaccinated subjects. & Eq. \eqref{eq:beta-from-VOC} & $1.57$ & ${}^{+0.02}_{-0.02}$ \\ $K_{\text{inf},\delta}$ & Increased infectivity of the $\delta$ VOC compared to the wild type for non-vaccinated subjects. & Eq. \eqref{eq:beta-from-VOC} & $1.79$ & ${}^{+0.03}_{-0.04}$ \\ $A_s$ & Amplitude of changing transmission coefficient due to seasonality of \textsc{sars}-\textsc{c}o\textsc{v}-2{}. & Eq. \eqref{eq:seasonality} & $0.30$ & ${}^{+0.01}_{-0.01}$ \\ $M_\text{cal}$ & National mentality factor. & Eq. \eqref{eq:time-dependent-contact-matrix} & $0.36$ & ${}^{+0.01}_{-0.01}$\\ \bottomrule \end{tabular} \label{tab:calibration_parameters} \end{table} \begin{figure} \centering \includegraphics[width=\linewidth]{full-calibration-corner-plot.pdf} \caption{Corner plot showing the posterior distributions of all 11 free parameters. Created with the \texttt{corner} package \cite{corner2016}.} \label{fig:full-calibration-corner-plot} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{provincial-complete-model-fit-0.pdf} \caption{100 model realisations of the daily new hospitalisations between March 17th 2020 and January 1st 2022 (solid lines) with a negative binomial 95\% confidence region (transparent band). Black crosses signify raw data from Sciensano \cite{Sciensano2020} that were used in the calibration procedure, while red crosses signify data that were not used during calibration. From top to bottom: Antwerpen (10000), Vlaams Brabant (20001), Brabant Wallon (20002), Brussels (21000), West-Vlaanderen (30000) and Oost-Vlaanderen (40000) (see Table \ref{tab:class-NIS-name} and Fig. \ref{fig:beta_classes_prov}).} \label{fig:provincial-complete-model-fit-0} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{provincial-complete-model-fit-1.pdf} \caption{100 model realisations of the daily new hospitalisations between March 17th 2020 and January 1st 2022 (solid lines) with a negative binomial 95\% confidence region (transparent band). Black crosses signify raw data from Sciensano \cite{Sciensano2020} that were used in the calibration procedure, while red crosses signify data that were not used during calibration. From top to bottom: Hainaut (50000), Li\`ege (60000), Limburg (70000), Luxembourg (80000), Namur (90000) (see Table \ref{tab:class-NIS-name} and Fig. \ref{fig:beta_classes_prov}).} \label{fig:provincial-complete-model-fit-1} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{RMSE-fit-boxplot.pdf} \caption{(a) 100 realisations of the national model (see Ref. \citep{Alleman2021}) and (b) 100 realisations of the spatially explicit model (nationally aggregated) of the daily new hospitalisations between March 17th 2020 and January 1st 2022 (solid lines) with a negative binomial 95\% confidence region (transparent band). The accompanying normalised root mean square error (RMSE) of the model predictions is given in black on the right-hand axis. (c) Boxplot of the normalised RMSE values of the national and spatially explicit model.
The RMSE time series of both models have a similar morphology, and no statistically significant difference in RMSE values was found.} \label{fig:RMSE-fit-boxplot} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{seroprevalence-spatial-fit.pdf} \caption{(a) 100 realisations of the model-estimated fraction of seropositive individuals (solid lines) with a negative binomial 95\% confidence region (transparent band) versus the fraction of seropositive individuals as measured by Refs. \cite{Herzog2020} and \cite{Sciensano2020}.} \label{fig:seroprevalence-spatial-fit} \end{figure} \clearpage \end{appendices}
\section{Introduction} In real-world applications, it is often the case that the dataset for training a prediction model contains missing values. This phenomenon can happen for many reasons, e.g., human error, privacy concerns, and the difficulty of data collection. For example, in census data, some people might not be comfortable revealing sensitive information such as employment information~\citep{lillard1986we, eckert2020imputing}. In healthcare data, different patients may have taken different health examinations, which causes the dataset to have different available health features for each patient~\citep{wells2013strategies, hegde2019mice}. Technically, the missing data problem can be divided into three categories: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR) (see~\citet{rubin1976inference} and~\citet{van2018flexible} for more details). According to~\citet{jarrett2022hyperimpute}, missing value imputation approaches can be divided into two categories. The first category is the iterative approach. It is based on the idea of estimating the conditional distribution of one feature using all other available features. In each iteration, we train a conditional distribution estimator to predict the value of each feature, and we repeat the process until it converges, i.e., until the latest iteration no longer changes the prediction output significantly according to a pre-specified convergence criterion. This approach has been studied extensively~\citep{van2000multivariate,khan2020sice, stekhoven2012missforest, jarrett2022hyperimpute} and one of the most well-known methods is Multiple Imputation based on Chained Equations (MICE)~\citep{van2000multivariate}. The second category is the deep generative model approach. In this approach, we train a generative model to generate values for the missing parts based on the observed values. Previous methods in this category include Multiple Imputation using Denoising Autoencoders (MIDA)~\citep{gondara2018mida}, Handling Incomplete Heterogeneous Data using Variational Autoencoders (HIVAE)~\citep{nazabal2020handling}, Missing Data Importance-weighted Autoencoder (MIWAE)~\citep{mattei2019miwae}, and Generative Adversarial Imputation Nets (GAIN)~\citep{yoon2018gain}. Recently, the diffusion model has demonstrated its effectiveness over other generative models in various domains, e.g., computer vision~\citep{song2019generative, ho2020denoising,croitoru2022diffusion}, time-series data~\citep{tashiro2021csdi, rasul2021autoregressive}, chemistry~\citep{luo2021predicting, xu2022geodiff}, and natural language processing~\citep{li2022diffusion, yu2022latent}. However, to the best of our knowledge, a diffusion model has not yet been proposed for missing value imputation in tabular data. The goal of this paper is to develop a diffusion model approach for missing value imputation based on the recently developed diffusion model for missing value imputation in time-series data called CSDI~\citep{tashiro2021csdi}. CSDI was originally designed for time-series data and cannot support categorical variables, which are necessary for tabular data. To solve this problem, we propose a variant of CSDI called {CSDI\_T} for tabular data by making it support both categorical and numerical features.
Our experimental results show that {CSDI\_T} can be successfully trained to achieve competitive performance with existing methods from both the iterative and generative approaches. We also observe that the choice of categorical embedding method can affect performance. \section{Problem formulation} Let $\mathcal{X} =(\mathbb{R} \cup \{ \varnothing \})^d$ be an input space, where $\mathbb{R}$ denotes the real number space and ``$\varnothing$'' denotes a missing value. In missing value imputation, we are given a $d$-dimensional training dataset $\mathbf{X}_{\mathrm{tr}} = \{ \mathbf{x}_i \}_{i=1}^{n}$, where $n$ is the number of data points. Without loss of generality, a feature $j \in \{1, \ldots, d\} $ of $\mathbf{x}_i$ is denoted by $\mathbf{x}^j_i \in \mathbb{R} \cup \{ \varnothing \}$, i.e., a feature can be missing, a numerical variable, or a categorical variable. This paper focuses on an inductive setting where the goal is to find an imputation function $f \colon \mathcal{X} \to \mathbb{R}^d$ that transforms an input that may contain missing values into a vector of $d$ real values. A desirable $f$ should be able to replace the missing values with reasonable values. To evaluate the performance of $f$, we are given test input data $\mathbf{X}_{\mathrm{te}} = \{ \mathbf{x}_i \}_{i=1}^{n}$ and ground truths $\mathbf{Y}_{\mathrm{te}} = \{ y^j_i \in \mathbb{R} \colon \mathbf{x}^j_i = \varnothing \}$. For $\mathbf{x}^j_i$, we define $\widehat{\mathbf{x}}^j_i$ to be the imputed feature obtained from $f(\mathbf{x}_i)$ for a feature $j$. Let $M^j = \{ i: \mathbf{x}^j_i=\varnothing \}$ be the set of missing value indices and $N^j_\mathrm{miss} = |M^j|$ be the number of missing values for a feature $j$. To calculate the error of $f$, we use the root mean squared error (RMSE) if $j$ is numerical and the error rate (Err) if $j$ is categorical: \begin{align*} \mathrm{RMSE}(j) = \sqrt{\frac{\sum_{i \in M^j} (\widehat{\mathbf{x}}^j_i - y^j_i)^2 } {N^j_\mathrm{miss}}}, \quad \mathrm{Err}(j) = \frac{1}{N^j_\mathrm{miss}} \sum_{i \in M^j} \mathds{1}_{[\widehat{\mathbf{x}}^j_i \neq y^j_i]}, \end{align*} where $\mathds{1}_{[\cdot]}$ is an indicator function that returns $1$ if the condition holds and $0$ otherwise.
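For illustration, the two evaluation metrics can be computed as follows. This is a minimal Python sketch assuming the imputed values, ground truths, and missingness mask for one feature are available as arrays; all names are our own choices.
\begin{verbatim}
import numpy as np

def rmse_numerical(x_hat, y, miss_mask):
    """RMSE over the imputed entries of one numerical feature.
    miss_mask marks the entries that were missing in the input."""
    diff = (x_hat - y)[miss_mask]
    return np.sqrt(np.mean(diff ** 2))

def error_rate_categorical(x_hat, y, miss_mask):
    """Error rate over the imputed entries of one categorical feature."""
    return np.mean(x_hat[miss_mask] != y[miss_mask])
\end{verbatim}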
\section{CSDI\_T: Conditional Score-based Diffusion Models for Tabular data} In this section, we present our proposed diffusion model method for missing value imputation in tabular data by first describing CSDI~\citep{tashiro2021csdi} and then explaining how we modify it to handle tabular data. \subsection{Conditional Score-based Diffusion Model (CSDI)} A diffusion model contains two processes: the forward noising process, where we iteratively inject noise into the input data, and the reverse denoising process, where we iteratively denoise the data. In the standard training process of a diffusion model~\citep{song2019generative, ho2020denoising}, only the reverse process requires training while the forward process is always fixed. We omit the details of diffusion models for brevity (see \citet{song2019generative, ho2020denoising} for more information). Based on the idea of the diffusion model,~\citet{tashiro2021csdi} recently proposed a diffusion model called CSDI for missing value imputation in time-series data. The key idea of CSDI can be explained as follows. Instead of reconstructing the whole input $\mathbf{x}$ by straightforwardly using the diffusion model, i.e., an unconditional diffusion model (see Appendix C of~\citet{tashiro2021csdi}), CSDI separates the input $\mathbf{x}$ into two parts: the observed part (the conditional part) $\mathbf{x}^{co}$ and the unobserved part to predict (the target part) $\mathbf{x}^{ta}$. The goal of the diffusion model is to model the following distribution: \begin{equation*} p_{\theta}(\mathbf{x}^{ta}_{t-1}|\mathbf{x}^{ta}_{t}, \mathbf{x}^{co}_{0}) = \mathcal{N}(\mathbf{x}_{t-1}^{ta};\mathbf{\mu}_\theta(\mathbf{x}_{t}^{ta},t|\mathbf{x}^{co}_{0}), \sigma \mathbf{I}), \end{equation*} where $t \in \{1, \ldots, T\}$ denotes the iteration round of the process and $T$ is a hyperparameter. We only need to model $\mathbf{\mu}_\theta$, which focuses on predicting the values of the unobserved part. It is observed that the conditional diffusion model can achieve better performance than the unconditional one. In our study, we followed the formulation of~\citet{tashiro2021csdi} for our objective function. For the architecture, we slightly modified the architecture proposed in CSDI to appropriately handle tabular data. More specifically, we removed the temporal transformer layer of the original CSDI architecture, since our data do not contain temporal information, and we use a simple residual connection of a transformer encoder and a multi-layered perceptron. \subsection{Handling categorical variables} In the original CSDI, it is assumed that the input features contain only numerical variables, which is not the case for tabular data. In this section, we extend CSDI to support categorical variables by proposing three different techniques: (1) one-hot encoding, (2) analog bits encoding, and (3) feature tokenization. Figure~\ref{fig:encoding} illustrates how each encoding works. The categorical variable is marked as yellow and we assume that there are three different categories for this feature. Without loss of generality, for one-hot encoding, the representation can be $[1,0,0]$. For analog bits, we follow the encoding scheme proposed by~\citet{chen2022analog}. In our example, the categorical variable will take two columns and is represented in binary bits as $[1,1]$. To make the data more distinguishable, we further convert $0$ to $-1$ in one-hot and analog bits encoding. For the feature tokenizer (FT)~\citep{gorishniy2021revisiting}, we transform both numerical and categorical variables together into embeddings. In our example, the variables will have embedding vectors of the same length, i.e., $E_1, E_2, E_3 \in \mathbb{R}^e$. In sum, analog bits encoding takes fewer columns than one-hot encoding but makes the encoded vector more complex. The feature tokenizer lets all variables have embeddings of the same length. Then, we train the model with the processed input. After obtaining the raw output, different handling schemes require different recovery procedures. For one-hot encoding, we treat the index of the largest element as the model-inferred category. For analog bits encoding, we convert every output element to $1$ if it is larger than $0$ and to $-1$ otherwise. In the FT scheme, we need to recover both numerical and categorical variables back from the embeddings~\citep{gorishniy2021revisiting}. For numerical variables, we divide the diffusion model output by the corresponding embedding element-wise and use the average value as the final model output. For categorical variables, we calculate the Euclidean distance between {CSDI\_T} outputs and every categorical embedding and set the category of the closest embedding (i.e., $1$-nearest neighbor) as the final model output. A minimal sketch of the one-hot and analog bits schemes is given below.
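The following Python sketch illustrates the one-hot and analog bits encode/decode steps described above, with $0$ mapped to $-1$. It is illustrative only: the exact bit-to-category assignment is an assumption here, and the FT scheme is omitted since it requires the learned embedding matrix.
\begin{verbatim}
import numpy as np

def onehot_encode(category, n_categories):
    """One-hot with 0 mapped to -1, e.g. category 0 of 3 -> [1, -1, -1]."""
    v = -np.ones(n_categories)
    v[category] = 1.0
    return v

def onehot_decode(output):
    """Recover the category as the index of the largest output element."""
    return int(np.argmax(output))

def analog_bits_encode(category, n_bits):
    """Binary (analog bits) encoding with 0 mapped to -1."""
    bits = [(category >> i) & 1 for i in reversed(range(n_bits))]
    return np.array([1.0 if b else -1.0 for b in bits])

def analog_bits_decode(output):
    """Threshold at 0, then read the bits back as an integer category."""
    bits = (np.asarray(output) > 0).astype(int)
    return int("".join(map(str, bits)), 2)
\end{verbatim}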
\begin{figure} \centering \includegraphics[scale=0.5]{figure/encoding.drawio.png} \caption{Example of handling categorical variables in one-hot encoding, analog bits encoding, and embeddings. The categorical variable is marked as the yellow block. Two numerical variables are marked as blue and green blocks.} \label{fig:encoding} \vspace{-0.2in} \end{figure} \section{Experimental results} In this section, we report experiments on pure numerical datasets and mixed variable datasets to show the effectiveness of CSDI\_T. \textbf{Datasets:} We used seven datasets. Census Income Data Set (Census), Wine Quality (Wine), Concrete Compressive Strength (Concrete), Libras Movement (Libras) and Breast Cancer Wisconsin (Breast) were obtained from the UCI Machine Learning Repository~\citep{Dua:2019}. COVID-19\footnote{https://www.kaggle.com/datasets/tanmoyx/covid19-patient-precondition-dataset} and Diabetes\footnote{https://www.kaggle.com/datasets/alexteboul/diabetes-health-indicators-dataset} were obtained from Kaggle. Dataset information is detailed in Appendix~\ref{sec:app-dataset}. Note that the Diabetes and COVID-19 datasets only have binary categorical variables, and we preprocessed the numerical variables of all datasets by min-max normalization. \textbf{Comparison methods:} In our experiments, we compare our proposed method with a simple baseline that uses the training data's mean values for numerical variables and mode values for categorical variables (Mean / Mode). We used the MICE method with linear and logistic regression (MICE (linear)) and the MICE method based on random forests (MissForest). We also used GAIN as a representative method of the deep generative model approach. The code implementation for MICE (linear), MissForest, and GAIN was provided by the Hyperimpute framework~\citep{jarrett2022hyperimpute}\footnote{https://github.com/vanderschaarlab/hyperimpute}. For {CSDI\_T}, we built our code based on CSDI~\citep{tashiro2021csdi}\footnote{https://github.com/ermongroup/CSDI}. Hyperparameter information is detailed in Appendix~\ref{sec:app-hyperparams}. \textbf{Results:} First, we show our results on three mixed variable datasets (Diabetes, Census and COVID-19). Table~\ref{table:mixed_resutls} compares the different imputation methods and categorical variable handling schemes. Our proposed method ({CSDI\_T}) reached the lowest RMSE on the Diabetes and Census datasets. MissForest reached the lowest error rate on the Diabetes and Census datasets. The RMSE difference among the three categorical handling methods was not evident. However, {CSDI\_T} with FT obtained the lowest error rate on the Census dataset compared to the other two categorical handling methods, where the analog bits approach is superior to one-hot encoding. Second, we show our results on four pure numerical datasets in Table~\ref{table:numerical_resutls}. It can be observed that our proposed {CSDI\_T} has the best performance among the comparison methods on three out of four datasets. \begin{table} \centering \caption{RMSE and error rate performance of the comparison methods on three mixed variable datasets.
Note that one-hot and analog bits are equivalent for a dataset without multi-categorical variables.} \label{table:mixed_resutls} \scalebox{0.8}{ \begin{tabular}{lllllll} \hline & \multicolumn{2}{c}{Diabetes} & \multicolumn{2}{c}{COVID-19} & \multicolumn{2}{c}{Census} \\ \cline{2-7} & RMSE & Error rate & RMSE & Error rate & RMSE & Error rate \\ \hline Mean / Mode & 0.222 (0.003) & 0.260 (0.004) & 0.138 (0.002) & 0.144 (0.002) & 0.120 (0.003) & 0.424 (0.003) \\ MICE (linear) & 0.263 (0.002) & 0.270 (0.004) & 0.125 (0.003) & 0.300 (0.038) & 0.101 (0.002) & 0.530 (0.011) \\ MissForest & 0.216 (0.003) & \textbf{0.214 (0.001)} & \textbf{0.120 (0.002)} & 0.131 (0.002) & 0.112 (0.004) & \textbf{0.300 (0.014)} \\ GAIN & 0.202 (0.003) & 0.282 (0.005) & 0.127 (0.002) & 0.217 (0.011) & 0.123 (0.057) & 0.412 (0.012) \\ CSDI\_T / one-hot & \textbf{0.197 (0.001)} & 0.222 (0.005) & 0.122 (0.003) & 0.111 (0.012) & 0.099 (0.004) & 0.400 (0.033) \\ CSDI\_T / analog bits & \textbf{0.197 (0.001)} & 0.222 (0.005) & 0.122 (0.003) & 0.111 (0.012) & 0.103 (0.004) & 0.376 (0.013) \\ CSDI\_T / FT & 0.206 (0.002) & 0.224 (0.004) & 0.123 (0.002) & \textbf{0.107 (0.002)} & \textbf{0.098 (0.003)} & 0.345 (0.002) \\ \hline \end{tabular}} \end{table} \begin{table} \caption{RMSE performance of the comparison methods on four pure numerical datasets.} \label{table:numerical_resutls} \centering \begin{tabular}{lllll} \hline Methods & Wine & Concrete & Libras & Breast \\ \hline Mean & 0.076 (0.003) & 0.217 (0.007) & 0.099 (0.001) & 0.263 (0.009) \\ MICE (linear) & 0.065 (0.003) & 0.153 (0.006) & 0.034 (0.001) & 0.154 (0.011) \\ MissForest & \textbf{0.060 (0.002)} & 0.173 (0.005) & 0.024 (0.001) & 0.163 (0.014) \\ GAIN & 0.072 (0.004) & 0.203 (0.007) & 0.089 (0.006) & 0.165 (0.006) \\ CSDI\_T & 0.065 (0.004) & \textbf{0.131 (0.008)} & \textbf{0.011 (0.001)} & \textbf{0.153 (0.003)} \\ \hline \end{tabular} \vspace{-0.2in} \end{table} \textbf{Discussions:} Based on the results, {CSDI\_T} is observed to be effective in imputing numerical variables, where it obtained the best RMSE performance on 5 out of 7 datasets. Unlike previous generative models, the diffusion model performs decoding through a reverse process. {CSDI\_T} benefits from this iterative reverse process, which allows the neural network to gradually refine the target value. Moreover, our results suggest the effectiveness of FT in handling categorical variables. The superiority of FT is evident in the Census dataset (the only mixed-type dataset with multi-categorical variables). One possible reason is that FT treats all variables equally. That is, all variables have embedding vectors of the same length. This strategy avoids the problem of column imbalance. Column imbalance can happen in one-hot and analog bits encoding, where the more categories a categorical variable contains, the more columns it takes. \section{Conclusions and future work} We have proposed a diffusion model-based method for missing value imputation called {CSDI\_T}. We demonstrated that {CSDI\_T} can obtain competitive performance with other well-known imputation methods. In particular, {CSDI\_T} works well for numerical variable imputation. We also explored different schemes for handling categorical variables and found that the FT embedding gives evidently better performance than one-hot encoding and analog bits on the Census dataset.
Future work for {CSDI\_T} includes (1) an investigation of the inference time, (2) model architecture improvements, and (3) a theoretical analysis of the loss function. \section*{Acknowledgements} The authors would like to thank Shin-ichi Maeda, Kohei Hayashi, and Kenta Oono for helpful discussions during the PFN summer internship program 2022.
\section{Introduction} Cell polarisation is the result of a very large and intricate network of biochemical and biomechanical processes occurring in the cell, which cause a loss of internal symmetry in its protein distribution and in its shape \cite{Nelson2003}. The biological complexity spans and couples different scales, going from molecular interactions, to protein reactions and their spatial distributions, to superstructures such as filament networks, which support the whole cellular system and ultimately define the cell shape. In order to understand key mechanisms involved in cell polarisation, modellers have been trying to minimise the number of components and variables under consideration, often reducing their work to purely qualitative descriptions of the polarisation process. One of the simplest mathematical models of cell polarisation was originally proposed in \cite{Mori2008} and is known as the \textit{wave pinning model}. The model is based on the activation-inactivation switching of a representative protein from the GTPase family, and polarisation arises through the appearance of stable regions characterised by high concentrations of the active protein. Originally, the model, which consists of a system of two reaction-diffusion equations, was defined on a one-dimensional domain and later extended to two-dimensional domains \citep{Vanderlei2011}. In \citep{Cusseddu2019} we proposed an extension to a three-dimensional domain $\Omega\subset\mathbb{R}^3$, using a bulk-surface partial differential equation approach, where the active protein was confined to the cell membrane $\Gamma = \partial\Omega$ (a sharp-interface approximation of the cell cortex) and the inactive protein was free to move over the whole cell $\overline{\Omega}$. We refer to this model as the \textit{bulk-surface wave pinning (BSWP) model} and, for more details on the biological motivations, we refer the interested reader to \citep{Cusseddu2019} and references contained therein. In the BSWP model the functions $a (\mathbf{x},t):\Gamma\times[0,T]\to\mathbb{R}$ and $b (\mathbf{x},t):\overline\Omega\times[0,T]\to\mathbb{R}$ represent, respectively, the active and the inactive GTPase protein concentrations, whose evolution is described by the following coupled system of bulk-surface partial differential equations \begin{align} &\frac{\partial b}{\partial t}(\mathbf{x},t) = D_b \Delta b(\mathbf{x},t), & \mathbf{x}\in\Omega, \label{eq:bulk}\\ &-D_b \frac{\partial b}{\partial \mathbf{n}}(\mathbf{x},t) = f(a,b), & \mathbf{x}\in\Gamma, \label{eq:boundary_condition}\\ &\frac{\partial a}{\partial t}(\mathbf{x},t) = D_a \Delta_{\Gamma} a(\mathbf{x},t) + f(a,b), & \mathbf{x}\in\Gamma. \label{eq:surface} \end{align} Here, $\Delta$ is the classical Laplace operator, $\Delta_\Gamma$ is the Laplace-Beltrami operator, $\mathbf{n}$ is the outward unit normal vector to $\Omega$ on $\Gamma$, and $D_b>0$ and $D_a>0$ are the bulk and surface diffusion coefficients, respectively. The function $f$ is defined as \begin{equation} f(a,b) = \left(k_0 + \frac{\gamma a^2}{K^2+a^2}\right)b - \beta a\label{eq:f(a,b)} \end{equation} and indicates the flux, represented by the reaction between $a(\mathbf{x},t)$ and $b(\mathbf{x},t)$, which takes place at the boundary $\Gamma$ of $\Omega$. The constant parameters $k_0>0$ and $\beta>0$ represent, respectively, the basal activation and inactivation rates, while $\gamma>0$ weights a nonlinear term describing a positive feedback loop in activation, in the form of a Hill function. At saturation of $a$, this term tends to $\gamma b$, while $K$ represents the half-activation concentration of $a$. All the parameter values used throughout this text are reported in \ref{app:numerical}. A numerical sketch of the bistable structure of $f$ is given below.
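For given $b$, the reaction term $f(\cdot,b)$ can admit three zeros, which underlies the wave pinning mechanism discussed below. The following Python sketch locates these zeros numerically by detecting sign changes and bisecting; the parameter values are illustrative assumptions, not those of \ref{app:numerical}.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

# Illustrative parameter values (assumed for this sketch only)
k0, gamma, K, beta = 0.05, 1.0, 1.0, 1.0

def f(a, b):
    """Reaction term f(a, b) of the BSWP model."""
    return (k0 + gamma * a**2 / (K**2 + a**2)) * b - beta * a

def roots_of_f(b, a_max=10.0, n=2000):
    """All zeros a_1(b) < a_2(b) < ... of f(., b), via sign changes."""
    a = np.linspace(0.0, a_max, n)
    fa = f(a, b)
    changes = np.where(np.sign(fa[:-1]) != np.sign(fa[1:]))[0]
    return [brentq(f, a[i], a[i + 1], args=(b,)) for i in changes]

# Sweep b to locate the interval [b_1, b_2] where f has three zeros
for b in np.linspace(0.5, 3.0, 6):
    print(round(b, 2), roots_of_f(b))
\end{verbatim}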
In what follows, we will consider the above system as a non-dimensional re-scaled version of the BSWP model proposed in \citep{Cusseddu2019}. Three key properties of the BSWP model are: \begin{enumerate} \item Temporal conservation of total mass, meaning $\frac{d}{dt}\left(\int_\Omega b + \int_\Gamma a\right) = 0$; \item $f(a,b)$ admits three zeros $a_1(b)<a_2(b)<a_3(b)$ for $b$ ranging in a certain interval $[b_1, b_2]$; \item Difference in the diffusion coefficients, $D_a\ll D_b$, which relies on the fact that protein diffusion on the cell membrane occurs much more slowly than within the cytosol. \end{enumerate} The \textit{wave pinning model} by Mori et al. \citep{Mori2008} shares the same properties, but the components $a$ and $b$ diffuse and react on the same spatial domain $\Omega\subset\mathbb{R}^d$, with $d=1$ or 2, and are subject to zero-flux boundary conditions. In our setting, the surface reaction-diffusion equation does not have boundary conditions, since it is posed on a closed manifold, for which the boundary $\partial\Gamma$ is empty. Initial conditions for both $a$ and $b$ are prescribed in equations \eqref{eq:ic_a} and \eqref{eq:ic_b} below. For an analysis of the well-posedness of bulk-surface reaction-diffusion systems, we refer the interested reader to the work by Sharma and Morgan \citep{sharma2016global}. In \citep{Cusseddu2019} we described how the BSWP model might generate cell polarisation. For convenience, we briefly summarise it here. In a given parameter region, equation \eqref{eq:surface} is a bistable surface reaction-diffusion equation. This can cause initial perturbations in $a$ to evolve into propagating fronts. The speed of such propagation strictly depends on the bulk component $b$, which, due to conservation of total mass, gets overall depleted as $a$ expands on $\Gamma$. Eventually, $b$ reaches a critical value $b^*$, causing a pinning of the propagating fronts of $a$. As a consequence, the system approaches an {\it apparent} stationary state, in which $\Gamma$ has regions where $a$ takes approximately high concentration values (denoted here as $a_{high}$) and regions where $a$ takes approximately low concentration values (denoted here as $a_{low}$), with $a_{high}>a_{low}$ (see also the work by Brauns et al. on reaction-diffusion systems with conservation of total mass \citep{Brauns2020}). In general, we say that polarisation occurs when $\Gamma$ shows at least one region where $a\approx a_{high}$. We will refer to such a region as the \textit{polarisation patch}. Extending the original wave pinning model in \citep{Mori2008} from one to two and three dimensions results in more complex system dynamics. Jilkine \citep{JilkinePhD} and Vanderlei et al. \citep{Vanderlei2011} initially studied a two-dimensional version of the model, with $a$ and $b$ defined on the same domain $\Omega$. Despite sharing the initial propagation and pinning dynamics, in multi-dimensional domains the pinned front is subject to a slow motion across the domain. The same dynamics also characterise the bulk-surface wave pinning model \eqref{eq:bulk}-\eqref{eq:f(a,b)}, for which, in \citep{Cusseddu2019}, we showed the slow motion of a single polarisation patch towards areas of $\Gamma$ with higher curvature.
This shifting often occurs on such a long time scale that it is neglected in biological studies, or the phenomenon is simply not apparent enough to trigger biological interest. However, it surely triggers a mathematical curiosity to identify parameter regions and geometries $\Omega$ for which such behaviour might be of biological interest. A second characteristic of the wave pinning model, common to all of its different versions, is a sort of competition between different polarisation patches. Indeed, it has been shown that stationary solutions with multiple polarisation patches are stable neither in the original wave pinning model by Mori et al. \citep{Mori2008} nor in its two-dimensional extension presented in \citep{JilkinePhD, Chiou2018, Brauns2021-wavelength}. A first example of competition in the bulk-surface wave pinning model \eqref{eq:bulk}-\eqref{eq:f(a,b)} was shown in \citep{Cusseddu2019}. Interestingly, in a very recent work, Miller et al. \citep{Miller2022} studied a reduced version of the same model, showing that two patches may instead coexist on non-convex surfaces $\Gamma$. In this work, through numerical investigations, we try to advance the current understanding of these two characteristics. To the best of our knowledge, we analyse such dynamics for the first time, taking advantage of the bulk-surface finite element method for solving the system \eqref{eq:bulk}-\eqref{eq:f(a,b)} on different geometries \citep{Cusseddu2019, MADZVAMUSE20169}. Other numerical approaches to similar problems can be found in the literature. As an example we mention the bulk-surface Virtual Element Method, a promising numerical method recently applied by Frittelli et al. for solving the bulk-surface wave pinning model on a two-dimensional spatial domain \citep{Frittelli2021, frittelli2021virtual}. We organise this work as follows. In Section \ref{sec:model_reduction} we introduce a reduced version of the BSWP model and discuss curvature-driven polarisation. In the following Section \ref{sec:curvature-driven polarisation}, we investigate the influence of the bulk component $b$ on the long-time patterning of the solution $a$ on a simple domain, by comparing the model solution for different bulk diffusion coefficients $D_b$. In Section \ref{sec:competition}, we investigate multi-patch competition on both three-dimensional and two-dimensional geometries. In this last case, the surface naturally reduces to a closed curve. In Section \ref{sec:bulk induces polarisation}, we show an example in which bulk heterogeneity is the main driver of polarisation on the surface. Finally, we conclude our study by summarising our main findings in Section \ref{sec:conclusion}. \section{Bulk diffusion, model reduction and curvature-driven polarisation}\label{sec:model_reduction} The bulk component $b$ is the fuel for the propagation and pinning of the activated patch on the surface. However, given the large difference in diffusion coefficients between bulk and surface, $b$ is often considered to be spatially homogeneous. For instance, in the analysis of the BSWP model on a disk in \citep{Cusseddu2019}, we showed that the solution $b$ was, to a good approximation, spatially uniform. Diegmiller et al. \cite{Diegmiller2018} showed that reducing the bulk-surface model to a single reaction-diffusion equation on a sphere still provides an accurate description of the polarisation dynamics.
This reduction results from considering the limit $D_b \to \infty$. Given the conservation of total mass, $M(t) = \int_\Omega b + \int_\Gamma a = M_0$ for all $t>0$, and assuming $b$ to be spatially homogeneous at all times, we have $b = \frac{1}{|\Omega|}\left( M_0 - \int_\Gamma a \right)$, which depends only on time. Exploiting this assumption, one then obtains the following reduced surface reaction-diffusion equation \begin{align} \frac{\partial a}{\partial t} = D_a \Delta_\Gamma a +\frac{1}{|\Omega|}\left( M_0 - \int_\Gamma a ~\text{d}\sigma(\mathbf{x}) \right) \left( k_0 + \frac{\gamma a^2}{K^2 + a^2} \right) - \beta a, \; \mathbf{x}\in\Gamma. \label{eq:reducedmodel} \end{align} We couple the BSWP model \eqref{eq:bulk}-\eqref{eq:f(a,b)} and the above reduced model \eqref{eq:reducedmodel} with the following Gaussian initial condition for $a$ \begin{equation}\label{eq:ic_a} a(\mathbf{x},0) = a_{p,0}\exp{\left\{-\frac{(x-x_0)^2}{\sigma_{x,0}^2} - \frac{(y-y_0)^2}{\sigma_{y,0}^2} - \frac{(z-z_0)^2}{\sigma_{z,0}^2}\right\}} \end{equation} and, when solving the BSWP model \eqref{eq:bulk}-\eqref{eq:f(a,b)}, the initial condition for $b$ is prescribed as \begin{equation}\label{eq:ic_b} b(\mathbf{x},0)=\frac{1}{|\Omega|}\left( M_0 - \int_\Gamma a ~\text{d}\sigma(\mathbf{x}) \right). \end{equation} The initial condition \eqref{eq:ic_a} can be seen as a localised perturbation of the surface protein distribution. However, it is important to remark that this is just one of many possible choices. For instance, polarisation may arise even when the initial datum for both $a$ and $b$ is spatially uniform, due to the geometry of the domain \citep{Cusseddu2019}, or when $a$ is randomly perturbed across the whole surface $\Gamma$ \citep{Miller2022}. A one-dimensional illustration of the resulting propagation-and-pinning dynamics is sketched below.
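To make the mass-conservation mechanism of the reduced model \eqref{eq:reducedmodel} concrete, the following minimal Python sketch solves a one-dimensional periodic analogue with explicit finite differences, with a localised initial peak as in \eqref{eq:ic_a}. All parameter values, the surrogate bulk volume, and the run time are illustrative assumptions for this sketch (they may need tuning), not the calibrated values of \ref{app:numerical}.
\begin{verbatim}
import numpy as np

# Illustrative, non-calibrated parameters (assumptions for this sketch)
Da, k0, gamma, K, beta = 0.005, 0.05, 1.0, 1.0, 1.0
L, N, dt, T = 10.0, 400, 0.01, 200.0   # periodic interval of length L
vol = 10.0                             # stand-in for the bulk volume |Omega|

x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
a = 2.0 * np.exp(-(x - L / 2) ** 2)    # localised initial peak
M0 = 2.2 * vol + a.sum() * dx          # total mass, fixing b(0) = 2.2

for _ in range(int(T / dt)):
    b = (M0 - a.sum() * dx) / vol      # uniform bulk, by mass conservation
    lap = (np.roll(a, 1) - 2.0 * a + np.roll(a, -1)) / dx**2
    f = (k0 + gamma * a**2 / (K**2 + a**2)) * b - beta * a
    a += dt * (Da * lap + f)

# As the patch expands, b is depleted; once b reaches the pinning value,
# the fronts should stop and a ~ a_high coexists with a ~ a_low.
print("final b:", (M0 - a.sum() * dx) / vol)
print("a in [", a.min(), ",", a.max(), "]")
\end{verbatim}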
The initial peak in $a(\mathbf{x},t)$ may propagate by developing travelling fronts, extending into a \textit{mesa}, i.e. a high plateau region with $a\approx a_{high}$, whereas, in the rest of the surface $\Gamma$, $a\approx a_{low}$. On a flat surface, the normal speed of a travelling wave in excitable media is given by $v = c(b) - \kappa D_a$, with $c$ a non-decreasing function of $b$ and $\kappa$ the curvature of the patch interface \citep{tyson1988singular}. On a generic surface $\Gamma$, $\kappa$ is the geodesic curvature of the front line \citep{bialecki2020traveling}. Therefore, in this case, the geometry of the surface also plays an essential role in the propagation of $a(\mathbf{x},t)$. It was suggested that the motion of the polarised patch across the domain corresponds to a minimisation of the perimeter of the interface separating the states $a_{high}$ and $a_{low}$, under the constraint of constant polarised mass \citep{JilkinePhD}. Recently, Singh et al. \citep{Singh2021} compared the reduced model equation \eqref{eq:reducedmodel} with a problem of perimeter minimisation with constant polarisation patch area, obtaining, for certain initial conditions, a very good agreement between the two solutions. In the bulk-surface framework, the travelling wave is also influenced by the spatial distribution of the bulk component $b$. In general, given a large difference between $D_b$ and $D_a$, we expect dynamics similar to the reduced model \eqref{eq:reducedmodel}. For instance, on a capsule-shaped domain, when the polarisation patch develops at the center of the cylindrical side, it will slowly move towards one of the two spherical ends \citep{Cusseddu2019}. It remains unclear, however, if and how the bulk component might play a role in pushing the polarised patch towards one of the two spherical ends. In the following sections we will compare the two models (the BSWP model and its reduced version) on three different geometries. As we will see, in certain cases the reduced version \eqref{eq:reducedmodel} provides a very good approximation of the overall dynamics of \eqref{eq:bulk}-\eqref{eq:f(a,b)}. However, we will also show that, in other cases, limiting bulk diffusion can lead to substantially different dynamics. \section{Bulk diffusion and polarisation on an oblate spheroid} \label{sec:curvature-driven polarisation} In this section, we compare the solution of the reduced model \eqref{eq:reducedmodel} with the solution of the bulk-surface system \eqref{eq:bulk}-\eqref{eq:f(a,b)} for different values of its diffusion coefficient $D_b$ on a simple domain $\Omega\subset\mathbb{R}^3$. In order to reduce the complexity, we keep the geometry of $\Omega$ as simple as possible. In the simulations shown here, $\Omega$ is an ellipsoid, obtained by an 85\% rescaling of the unit ball $B_1(\mathbf{0})$ along the $y$-axis. In this way, the curvature of $\Gamma$ is maximal for $y=0$ and minimal at the top and bottom points $(0,\pm0.85,0)$. When the Gaussian function \eqref{eq:ic_a} is centered at either of these two points, our choice of the geometric domain is convenient for comparing different solutions of $a$ over a line, obtained by intersecting $\Gamma$ with a plane passing orthogonally through the peak of the initial condition \eqref{eq:ic_a}, see Figure \ref{fig:arclenghtsolutionA}. \begin{figure} \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=1.2\textwidth]{Figures/intersection_plane.png} \caption{}\label{fig:arclenghtsolutionA} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth]{Figures/arc-550.png} \caption{}\label{fig:arclenghtsolutionB} \end{subfigure} \caption{ The system \eqref{eq:bulk}-\eqref{eq:f(a,b)} with \eqref{eq:ic_a}-\eqref{eq:ic_b} is solved on an oblate spheroid, for different values of the bulk diffusion coefficient $D_b$. Thanks to the symmetry properties of $\Omega$ and of the initial condition for $a(\mathbf{x},t)$ given in \eqref{eq:ic_a}, at the initial stage the peak of $a$ expands from the top, symmetrically with respect to all longitudes, over the surface $\Gamma$. This allows us to plot the solution profiles, for different values of $D_b$, over a meridian, see panel (a). In (b) the profiles are compared at time $t=550$, where the variable $\varphi$ represents the meridian arc length. Therefore $\varphi=0$ indicates the north pole, $\varphi\approx 2.9$ the opposite pole. For $D_b=5$ and $D_b=50$ the profile of $a(\mathbf{x},t)$ is almost indistinguishable from the one obtained by the reduced model \eqref{eq:reducedmodel} (dashed line). It is sufficient to set $D_b = 0.5$ (100 times bigger than $D_a = 0.005$) to obtain a profile very similar to the one from the reduced model. Polarisation occurs also when $D_b = 0.1 = 20 D_a$. For a three-dimensional view of the solutions, see Figure \ref{fig:3D-different-D_bulk}.
Parameter values are reported in \ref{app:numerical}.} \label{fig:arclenghtsolution} \end{figure} \begin{figure} \includegraphics[width=1\textwidth]{Figures/revision/oblate/oblate.pdf} \caption{Each row reports the solution $a(\mathbf{x},t)$ of \eqref{eq:bulk}-\eqref{eq:f(a,b)} with \eqref{eq:ic_a}-\eqref{eq:ic_b} for the specified value of the bulk diffusion coefficient $D_b$, at four different times. When $D_b = +\infty$, $a(\mathbf{x},t)$ solves \eqref{eq:reducedmodel} with \eqref{eq:ic_a}. The initial conditions and the view point, as shown by the coordinate system on the left, are the same for all plots. The colour bar is automatically re-scaled between $\min_\Gamma a(\mathbf{x},t)$ and $\max_\Gamma a(\mathbf{x},t)$ for each plot. Parameter values are reported in \ref{app:numerical}. See also Figure \ref{fig:arclenghtsolution}. } \label{fig:3D-different-D_bulk} \end{figure} Figures \ref{fig:arclenghtsolution} and \ref{fig:3D-different-D_bulk} show the solution $a$ corresponding to the bulk-surface wave pinning model \eqref{eq:bulk}-\eqref{eq:f(a,b)} for different values of $D_b$, and comparisons with the solution of the reduced surface reaction-diffusion model \eqref{eq:reducedmodel} ($D_b=\infty$). Figure \ref{fig:arclenghtsolutionB} represents the comparison at an intermediate time, when the polarisation patch is still centered at $(x_0,y_0,z_0)$. Consistent with the work by Miller et al. \citep{Miller2022}, the polarisation patch shifts over $\Gamma$ towards the equator, where it stabilises. This behaviour is consistent across different bulk diffusion coefficients, as illustrated in Figure \ref{fig:3D-different-D_bulk}. As such, the reduced model \eqref{eq:reducedmodel} is a very good approximation of the BSWP model \eqref{eq:bulk}-\eqref{eq:f(a,b)} for a wide range of bulk diffusion coefficients $D_b$. Increasing $D_b$ enlarges the gap between the maximal and minimal values of $a$, by increasing the former and decreasing the latter (Figure \ref{fig:arclenghtsolutionB}). Intuitively, this result can be understood in light of the stability theory developed by Brauns et al. for two-component reaction-diffusion systems with conservation of total mass. We refer the interested reader to their work \citep{Brauns2020} for further details. \begin{figure} \begin{subfigure}[b]{0.475\textwidth} \includegraphics[width=1.\textwidth]{Figures/revision/oblate/_convergence_oblate.png} \caption{}\label{fig:L2norm-oblate-bulk-diffusion-A} \end{subfigure} \hfill \begin{subfigure}[b]{0.475\textwidth} \includegraphics[width=1.1\textwidth]{Figures/revision/oblate/_convergence_oblate_end.png} \caption{}\label{fig:L2norm-oblate-bulk-diffusion-B} \end{subfigure} \caption{ Plots of the discrete L$^2$-norm of the function $\Delta_t a(\mathbf{x},t):= a(\mathbf{x},t+\tau) - a(\mathbf{x},t)$, where $a(\mathbf{x},t)$ solves the BSWP model \eqref{eq:bulk}-\eqref{eq:f(a,b)} with \eqref{eq:ic_a}-\eqref{eq:ic_b}, for different values of the bulk diffusion coefficient $D_b$. When $D_b = + \infty$, $a(\mathbf{x},t)$ solves the reduced model \eqref{eq:reducedmodel} with \eqref{eq:ic_a}. The function $\Delta_t a(\mathbf{x},t)$ represents the difference between consecutive numerical solutions, whose L$^2$-norm noticeably drops in time as the system approaches the polarisation pattern. Panel (b) is the restriction of the plot in (a) to $t>100$, which highlights an increment of $||\Delta_t a(\mathbf{x},t)||_2$.
Such increment is associated with the transition of the polarised pattern across the surface $\Gamma$, see also Figure \ref{fig:3D-different-D_bulk}. The bell profiles in panel (b) would be unnoticeable on the scale of panel (a), as their values are very small. } \label{fig:L2norm-oblate-bulk-diffusion} \end{figure} While the dynamics are very similar for the different values of the bulk diffusion coefficient $D_b$, by varying this parameter we affect the shifting time of the surface polarisation patch towards the equator. In Figure \ref{fig:L2norm-oblate-bulk-diffusion}, the discrete L$^2$-norm of the difference between consecutive numerical solutions $a_h$, defined by \begin{equation} \label{eq:Delta_t(a)} ||\Delta_t a(\mathbf{x},t) ||_2^2 := \int_\Gamma \left(a_h(\mathbf{x},t+\tau) - a_h(\mathbf{x},t)\right)^2~\text{d}\sigma(\mathbf{x}), \end{equation} where $\tau$ is the temporal discretisation step, helps in analysing the stability of the system. In particular, it helps us understand how the surface component $a$ is subject to curvature-driven motion across the domain $\Gamma$. Initially the polarisation patch forms and gets pinned. This causes a drop in $||\Delta_t a ||_2$ and the system appears to have reached a steady state (Figure \ref{fig:L2norm-oblate-bulk-diffusion-A}). However, after a certain time, $||\Delta_t a ||_2$ starts growing again (Figure \ref{fig:L2norm-oblate-bulk-diffusion-B}), as the polarisation patch shifts from the top to a lateral side of $\Gamma$ (see last column in Figure \ref{fig:3D-different-D_bulk}). The bell shape of $||\Delta_t a ||_2$ (Figure \ref{fig:L2norm-oblate-bulk-diffusion-B}) indicates that the speed of this transition is not monotonic, and the decreasing side of the profile of $||\Delta_t a ||_2$ indicates system stabilisation. The width of the bell indicates the temporal extent of the transition, which occurs on a much longer time scale with respect to the initial propagation and pinning of the polarised patch. It must be noted that time in Figure \ref{fig:L2norm-oblate-bulk-diffusion} is reported on a log scale. It is also interesting to remark that the profiles in Figure \ref{fig:L2norm-oblate-bulk-diffusion-B} are not visible in Figure \ref{fig:L2norm-oblate-bulk-diffusion-A}, because the maximal speed of this transition is extremely small when compared to the initial propagation (see also the $y$-axis values in both figures). In the case presented here, the bulk diffusion coefficient $D_b$ seems to affect only the temporal scale, meaning that, when bulk diffusion is limited, the surface patch is subject to a slower curvature-dependent speed. However, we will now present another case in which the parameter $D_b$ appears to have a much stronger impact on the model dynamics. A minimal sketch of how the diagnostic \eqref{eq:Delta_t(a)} can be computed from the numerical solution is given below.
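The following Python sketch evaluates \eqref{eq:Delta_t(a)} from stored solution snapshots. It is illustrative only: it assumes nodal values on a fixed surface mesh and approximates the surface integral with a lumped-mass (nodal-weight) quadrature, which is our own simplifying assumption.
\begin{verbatim}
import numpy as np

def delta_t_norm(a_snapshots, node_weights):
    """Discrete L2-norm of consecutive-solution differences.

    a_snapshots : array (n_times, n_nodes), nodal values of a_h
    node_weights: array (n_nodes,), surface-area weights of the mesh
                  nodes (lumped-mass approximation of the integral)
    """
    diff = np.diff(a_snapshots, axis=0)   # a(t + tau) - a(t)
    return np.sqrt((diff**2 * node_weights).sum(axis=1))
\end{verbatim}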
\section{Competition between different polarisation patches}\label{sec:competition} In \citep{Cusseddu2019} we showed an example in which an initial condition for $a(\mathbf{x},t)$ with two separated peaks, subject to the BSWP model \eqref{eq:bulk}-\eqref{eq:f(a,b)}, evolved, in a first phase, into two polarisation patches. However, in the long run, one of the two eventually prevailed over the other. Competition between multiple polarisation patches in the solution $a(\mathbf{x},t)$ has been studied by Chiou et al. \cite{Chiou2018} also for the limit case $D_b=+\infty$ (i.e. equation \eqref{eq:reducedmodel}). They show how different patches compete on a flat surface, where, as discussed in Section \ref{sec:model_reduction}, the speed of the travelling wave is given by $v = c - \kappa D_a$, with $\kappa$ being the curvature of the patch interface. Therefore, in the case of two circular polarisation patches of radii, respectively, $R$ and $r$ with $R > r > \frac{D_a}{c(b_0)}$, they will both spread radially as long as $v_R = c(b) - \frac{D_a}{R}$ and $v_r = c(b) - \frac{D_a}{r}$ remain positive. However, while $a(\mathbf{x},t)$ propagates, $b(\mathbf{x},t)$ gets depleted and $c$ decreases. This continues until a certain point is reached such that $c(b) < \frac{D_a}{r}$. At this point, $v_R$ remains positive, but for the smaller patch the speed is reversed and it starts shrinking. The shrinking patch releases $b(\mathbf{x},t)$, which, in the case of $D_b=+\infty$, is immediately consumed by the other enlarging patch \citep{Chiou2018}. However, in the bulk-surface framework, the component $b(\mathbf{x},t)$ is released locally, not globally. Therefore, the bulk-surface wave pinning model might develop different competition dynamics from those of the reduced model \eqref{eq:reducedmodel}. In the case of a one-dimensional $\Gamma$, the situation is different, as this speed description does not hold. However, peak competition is not specific to multidimensional domains, as shown and analysed on one-dimensional domains in \citep{Chiou2018, Brauns2021-wavelength}. \begin{figure} \includegraphics[width=1\textwidth]{Figures/TBC-2D3D.pdf} \caption{The solution $a(\mathbf{x},t)$ corresponding to the reduced surface reaction-diffusion model \eqref{eq:reducedmodel} on a curve and on a surface, with initial condition \eqref{eq:TBC_ic}, where $x_0=x_1=z_0=z_1=0$ and $y_0=-y_1 = 0.85$. The colour bar is automatically re-scaled between the lowest and highest value of the solution in each plot. In the top row $M_0=0.85|\Omega|$, while in the bottom row $M_0=|\Omega|$. The remaining parameter values are reported in \ref{app:numerical}. } \label{fig:TBC-3D_and_2D} \end{figure} In Figure \ref{fig:TBC-3D_and_2D}, we show the solution of the surface reaction-diffusion equation \eqref{eq:reducedmodel} (the reduced model), given the initial condition \begin{equation}\label{eq:TBC_ic} a(\mathbf{x},0) = \sum_{i=0}^1 a_{p,i}\exp{\left\{-{\sigma_{x,i}^{-2}}{(x-x_i)^2} - {\sigma_{y,i}^{-2}}{(y-y_i)^2} - {\sigma_{z,i}^{-2}}{(z-z_i)^2}\right\}}, \end{equation} where $(x_0,y_0,z_0)$ and $(x_1,y_1,z_1)$ are the points with the smallest curvature, which we refer to as the top and bottom of $\Gamma$. When $\Gamma$ is a surface (bottom row of Figure \ref{fig:TBC-3D_and_2D}), the patch competition is immediately followed by the movement of the winner towards the central area of the domain. When $\Gamma$ is a curve (top row of Figure \ref{fig:TBC-3D_and_2D}), the solution stabilises immediately after patch competition. In this last case, the curve $\Gamma$ is an ellipse, obtained by an 85\% re-scaling along the $y$-axis of the circle of radius one, and can be seen as the intersection of the surface in the bottom row of the figure with the plane $z=0$. Hence, the initial condition for $a(\mathbf{x},t)$ is given by \eqref{eq:TBC_ic} with $z=z_0=z_1=0$.
For numerical reasons due to the three-dimensional meshing, despite the initial condition being symmetric along the $y$-axis, the mass of the peak centered at the top $(x_0,y_0,z_0)$ turns out to be slightly bigger than the mass of the other peak (approximately 0.30498 versus 0.30476). This creates a natural perturbation that triggers the competition. Brauns et al., in \citep{Brauns2021-wavelength}, provide arguments to show the ultimate instability of multiple peak solutions to two-component mass-conserving reaction-diffusion systems. In particular, they show how the difference in mass between two peaks will always grow, leading to the disappearance of one patch. Hence, following \citep{Brauns2021-wavelength}, we might expect the peak at the top to expand at the expense of the other one. The bottom row in Figure \ref{fig:TBC-3D_and_2D} shows that this is indeed the case. Would this be true also for the complete BSWP system \eqref{eq:bulk}-\eqref{eq:f(a,b)}? In order to address this question and to investigate the role that bulk diffusion plays in the competition between multiple polarisation patches, we consider another relatively simple domain $\Omega\subset\mathbb{R}^3$. Here $\Omega$ results from a small ``egg-type'' deformation of the unit sphere, obtained by a domain rescaling along the $y$-axis: the hemisphere $\Omega\cap \{y>0\}$ is stretched by $5\%$, while the hemisphere $\Omega\cap \{y<0\}$ is shrunk by $5\%$. As such, the point $(0, 1.05, 0)$ is the point with maximal curvature over $\Gamma$, while $(0, -0.95, 0)$ is the one with minimal curvature. We refer to these points as the top and bottom of $\Gamma$. In Figure \ref{fig:TBC} we show the solutions $a$ and $b$ of the BSWP model \eqref{eq:bulk}-\eqref{eq:f(a,b)} with initial conditions \eqref{eq:ic_b} and \eqref{eq:TBC_ic}. Again, the initial condition for $a$ is given by the sum of two Gaussian functions centered, respectively, at the top and bottom of $\Gamma$. In these simulations, the peak at the flattened bottom hemisphere has a slightly bigger mass than the top peak. However, in this case, competition might not depend only on this property. Indeed, as already discussed, the expansion of the initial peaks also depends strongly on the curvature of the domain. Following the work by Singh et al. \cite{Singh2021}, we might expect the model to evolve towards a single-patch solution that minimises its interface perimeter. Interestingly, the bulk diffusion coefficient $D_b$ turns out to be crucial in determining the outcome of the competition. In the reduced model \eqref{eq:reducedmodel} the bottom patch loses the competition and the polarisation occurs at the top, where the curvature is greater. The same result is achieved by the complete system \eqref{eq:bulk}-\eqref{eq:f(a,b)} when the bulk diffusion coefficient is large enough. It might be worth observing that, in these two similar cases, bulk diffusion slows down the competition. However, by further reducing the bulk diffusivity, the BSWP model \eqref{eq:bulk}-\eqref{eq:f(a,b)} results in the opposite outcome, with polarisation of the bottom hemisphere at the expense of the top one. This result is confirmed on three different mesh refinements; those results are attached as supplementary material. We have here shown the first crucial difference that might arise in the BSWP model when $D_b$ is not extremely large.
It is important to stress that, even with $D_b=0.5$, the bulk diffusion coefficient is still 100 times larger than the surface diffusion coefficient $D_a = 0.005$. The remaining parameter values are reported in \ref{app:numerical}. \begin{figure} \includegraphics[width = 1\textwidth]{Figures/revision/egg/egg_fenicsrefined/egg_all.pdf} \caption{ The solutions $a(\mathbf{x},t)$ and $b(\mathbf{x},t)$ of the BSWP model \eqref{eq:bulk}-\eqref{eq:f(a,b)} with initial conditions \eqref{eq:ic_b} and \eqref{eq:TBC_ic} for $D_b = 0.5$ and $D_b = 5$ (first and second rows), and the solution of \eqref{eq:reducedmodel} with initial condition \eqref{eq:TBC_ic} (indicated by $D_b = +\infty$, last row). Columns show the solutions at different times, i.e. $t=$ 10, 300, 450, 650. We show both the bulk and surface solutions by longitudinally sectioning the domain into two halves: the bulk component $b$ is reported on the left half (green-orange colour map), the surface component $a$ on the right half (blue-red colour map). For ease of comparison, the case $D_b = +\infty$ is represented in the same way as the previous ones, but $b$ is exactly $(M_0 - \int_\Gamma a)/|\Omega|$. While for $D_b = 0.5$ the bottom patch wins the competition, on increasing this value the outcome is the opposite. These results refer to the mesh shown in Figure \ref{fig:meshes}F. Here $D_a = 0.005$, while the remaining parameter values are reported in \ref{app:numerical}. A video of the simulations presented here is attached as supplementary material, together with the same simulations on a less refined mesh.} \label{fig:TBC} \end{figure} \section{Bulk heterogeneity can sustain cell polarisation}\label{sec:bulk induces polarisation} Until now, our discussion has focused on numerical investigations of the bulk-surface wave pinning model on a relatively simple convex domain. In this section we consider a more complex three-dimensional geometry, taking advantage of the bulk-surface finite element method, which is easily adaptable to different geometries (see \ref{app:numerical} and \citep{Cusseddu2019, CussedduPhD, MADZVAMUSE20169} for details). In a bulk-surface framework, local restrictions of the domain $\Omega$ slow down bulk diffusion, maintaining heterogeneity in the bulk component for a longer time. This might be fundamental for a lasting polarised pattern. Indeed, in a non-convex geometry we find that the impact of slower bulk diffusion can be crucial for polarisation. In Figure \ref{fig:nonconvex}, we show one example of transient polarisation. Here, neither the reduced surface reaction-diffusion model \eqref{eq:reducedmodel} nor the BSWP model \eqref{eq:bulk}-\eqref{eq:f(a,b)} is able to maintain a stable and strong polarisation of the surface $\Gamma$. However, in both cases we observe the presence of a polarisation patch, which, in the case of finite bulk diffusion, persists for a much longer time. For the reduced surface reaction-diffusion model \eqref{eq:reducedmodel}, polarisation is completely lost by $t=200$, while the BSWP model \eqref{eq:bulk}-\eqref{eq:f(a,b)} still maintains strong polarisation at $t=1200$ (the difference between the extreme values $a_{high}$ and $a_{low}$ reported on each colourbar gives an idea of the polarisation level in the two cases). This is due to the bulk patterning, shown in the third column of Figure \ref{fig:nonconvex}.
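A rough, order-of-magnitude comparison of timescales helps to explain this behaviour. The numbers below use only the parameter values stated above and a domain of roughly unit size, and are intended as a heuristic rather than an estimate derived from the model:
\begin{equation*}
\tau_b \sim \frac{L^2}{D_b} = \frac{1}{0.5} = 2, \qquad \tau_a \sim \frac{L^2}{D_a} = \frac{1}{0.005} = 200,
\end{equation*}
so the bulk homogenises roughly a hundred times faster than the surface component spreads, but not instantaneously. Through a narrow constriction the effective bulk equilibration is slowed further, so spatial gradients of $b(\mathbf{x},t)$ can survive on the timescale of the surface dynamics and locally sustain the patch.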
Both slow diffusion ($D_b=0.5$) and domain geometry highlight the importance of the bulk component $b(\mathbf{x},t)$ in establishing polarisation patterns. \\ A closer look at these dynamics is provided in Figure \ref{fig:zoom_on_cones}. Here we show the solution on the portion of the domain where the initial peak is prescribed. In this way the local values are more easily distinguishable, and we can focus on the bulk component $b$ in the proximity of the polarisation patch interface. Figure \ref{fig:zoom_on_cones} shows the solutions of the BSWP system \eqref{eq:bulk}-\eqref{eq:f(a,b)} for different values of the parameter $D_b$, as well as the solution of the reduced model \eqref{eq:reducedmodel} for two different values of the total mass $M_0$. We show that polarisation is also achievable by the reduced model, at the cost of increasing $M_0$. Therefore, reducing bulk diffusion might be a way of avoiding this cost, since the inhibited spatial redistribution of the bulk component keeps the values of $b$ in the vicinity of the polarisation interface within a range suitable for polarisation. \begin{figure} \includegraphics[width=1.0\textwidth]{Figures/revision/cones_both.pdf} \caption{Bulk polarisation induces surface polarisation: here the reduced surface reaction-diffusion model \eqref{eq:reducedmodel}-\eqref{eq:ic_a} does not generate a lasting polarisation (first column), in contrast to the BSWP model \eqref{eq:bulk}-\eqref{eq:f(a,b)} with \eqref{eq:ic_a}-\eqref{eq:ic_b}. Notice that, while the solution of the reduced model at time $t=200$ might appear polarised, the difference between its minimum and maximum is extremely small. The bulk-surface finite element solutions $a(\mathbf{x},t)$ and $b(\mathbf{x},t)$ are reported, respectively, in the second and third columns. The domain is elongated along the $y$-axis and the initial condition \eqref{eq:ic_a} for $a(\mathbf{x},t)$ is centered at the smallest $y$ value. In both cases we set $M_0=1.2|\Omega|$. The BSWP model is solved for $D_b=0.5$. For the remaining parameter values see \ref{app:numerical}.} \label{fig:nonconvex} \end{figure} \begin{figure} \includegraphics[width=1.0\textwidth]{Figures/revision/zoom_cones/cones_zoom_all.pdf} \caption{ A zoom on the simulations of Figure \ref{fig:nonconvex} helps in understanding the polarisation dynamics. Here we show the numerical solutions on the portion of the domain $\Omega$ where the surface component has its initial peak. The first three rows show the solutions of the BSWP system \eqref{eq:bulk}-\eqref{eq:f(a,b)} with \eqref{eq:ic_a}-\eqref{eq:ic_b} for bulk diffusion $D_b = 0.5$, $5$ and $50$, respectively. The last two rows show the solutions of the reduced model \eqref{eq:reducedmodel}-\eqref{eq:ic_a} with different total mass, the last one having total mass $M_0 = 1.4|\Omega|$ instead of $M_0 = 1.2|\Omega|$. Each panel shows both the surface and bulk solutions, separated by a horizontal red line cutting the domain section: the surface component $a$ is shown above the line (blue-red colourmap), while below the line we report the bulk component $b$, with a view also of its interior (green-orange colourmap). Each colourbar is automatically rescaled between the minimum and maximum solution values attained in this section of $\Omega$. Surface polarisation depends strictly on the bulk component in the vicinity of the polarisation interface.
By increasing the total mass from $M_0 =1.2|\Omega|$ to $M_0 =1.4|\Omega|$, polarisation also occurs in the reduced model. When $M_0 =1.2|\Omega|$, the BSWP model with a sufficiently small $D_b$ induces spatial inhomogeneities in the bulk component that are able to locally sustain surface polarisation. }\label{fig:zoom_on_cones} \end{figure} In general, it is worth mentioning that non-convex geometries might be of particular interest in the study of cell polarisation, as they might play a remarkable role in the dynamics. Spill et al. studied a more complex bulk-surface model for cell polarisation and showed the importance of cell geometry in achieving and maintaining polarisation \citep{Spill2016}. In particular, they showed that a non-convex cell geometry can be more favourable for polarisation than a convex one. Ramirez et al. investigated polarisation of dendritic spines, which are small protrusions present on neuronal cells \citep{ramirez2015dendritic}. In their work they showed how the geometry can, by itself, induce polarisation, even in the absence of depletion of the bulk component. Indeed, propagating fronts of solutions of single bistable reaction-diffusion equations may experience geometry-induced pinning when they reach an abrupt opening of the domain \citep{bialecki2020traveling, ramirez2015dendritic,berestycki2016front}, which is the case for the modelled spines. Moreover, Giese et al. studied the impact of obstacles in a two-dimensional version of the BSWP model \citep{Giese2015}, observing how these might influence the position of the polarisation patch. From a modelling perspective, this is fundamentally important, since in an environment such as that of the cell, crowding effects and sub-cellular structures might ultimately trigger surprising outcomes. \section{Conclusion}\label{sec:conclusion} In this manuscript we presented and compared novel bulk-surface finite element computations concerning the long-time dynamics of the bulk-surface wave pinning model \eqref{eq:bulk}-\eqref{eq:f(a,b)} and its natural asymptotic approximation, the surface reaction-diffusion model given in \eqref{eq:reducedmodel}. While in \citep{Cusseddu2019} we focused on presenting intermediate results on biologically relevant timescales, in this work we consider a rescaled version of the system on simpler domains, with the goal of understanding the role that the bulk component might play in the surface dynamics. In the bulk-surface system \eqref{eq:bulk}-\eqref{eq:f(a,b)}, the polarisation pattern over the surface $\Gamma$ depends strictly on the bulk component $b(\mathbf{x},t)$, and especially on its diffusion coefficient $D_b$. For values of $D_b$ that are too small, polarisation generally cannot occur through wave pinning dynamics. When $D_b$ is large enough, polarisation patterns can be generated, and these are subject to a slow transition of the polarised area across the surface $\Gamma$. This is an intrinsic feature of equation \eqref{eq:surface} for $a(\mathbf{x},t)$, as recently shown, for instance, by Singh et al. \citep{Singh2021} and Miller et al. \citep{Miller2022} for the reduced surface reaction-diffusion model \eqref{eq:reducedmodel}. In our work we discussed the importance of the bulk component $b(\mathbf{x},t)$ in regulating the final position of the polarised patch as well as its transition speed.
While the overall system dynamics in the reduced case \eqref{eq:reducedmodel} often constitute a good qualitative representation of the complete system \eqref{eq:bulk}-\eqref{eq:f(a,b)}, accounting for the bulk component $b(\mathbf{x},t)$ may still be of quantitative importance from the biological point of view of cell polarisation. Moreover, in some cases, transient polarisation can result directly from bulk heterogeneity. The reduced surface reaction-diffusion model \eqref{eq:reducedmodel} constitutes an even more minimal model for cell polarisation than the bulk-surface wave pinning model; however, it is important to keep in mind that it is based on the assumption that nothing hinders or limits bulk diffusion. In other cases, the assumption that bulk proteins are highly abundant leads one to consider the bulk component constant in both space and time. Even under this assumption polarisation can occur, as shown for similar cases in \citep{bialecki2020traveling, ramirez2015dendritic, berestycki2016front}. Clearly, these assumptions depend on the question that a modeller formulates and wants to answer. However, when obstacles to bulk diffusion exist, such as holes or tethers, different polarisation dynamics might arise \citep{Giese2015, houk2012membrane}. In the same way, as we have seen, restrictions of the domain $\Omega$, which might represent contractions of the cell membrane, can induce polarisation through bulk patterning. This suggests that, on a moving and deforming domain, the bulk component might be of fundamental importance. \section*{Acknowledgments} This project (DC,AM) has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 642866. DC is supported by Fundação para a Ci\^encia e a Tecnologia under the project UIDB/00208/2020. AM acknowledges support from the EPSRC grant (EP/T00410X/1): UK-Africa Postgraduate Advanced Study Institute in Mathematical Sciences (UK-APASI). The work of AM was partly supported by the Health Foundation (1902431), the NIHR (NIHR133761) and by an individual grant from the Dr Perry James (Jim) Browne Research Centre on Mathematics and its Applications (University of Sussex). AM acknowledges support from a Royal Society Wolfson Research Merit Award (2016-2021) funded generously by the Wolfson Foundation. AM is a Distinguished Visiting Scholar to the Department of Mathematics, University of Johannesburg, South Africa. \section*{Conflict of interest} The authors declare no conflict of interest.
\section{Notation} As usual, $\mathbb{R}^d$ denotes the Euclidean $d$-space, $\mathbb{T} = [0, 2 \pi]$ is the unit circle, and ${{\Bbb N}}_0 = {{\Bbb N}}\cup \{0\}$. For $1 \leq p \leq \infty$, $p'$ is defined by $\frac{1}{p} + \frac{1}{p'}=1$. By $Q$ or $Q_0$ we denote cubes in $\mathbb{R}^d$ with sides parallel to the coordinate axes. Moreover, $2 Q$ stands for the cube which has the same center as $Q$ and whose side length is twice that of $Q$. Let $\mu$ be a positive Radon measure on $\mathbb{R}^d$. Throughout this paper we assume that $\mu$ satisfies the doubling condition, i.e., there exists a constant $c_\mu > 0$ such that \begin{equation*} \mu (2Q) \leq c_\mu \,\mu(Q) \end{equation*} for all cubes $Q$. The distribution function of a scalar-valued measurable function $f$ defined on $\mathbb{R}^d$ is \begin{equation*} \mu_f(t) = \mu \{x \in \mathbb{R}^d : |f(x)| > t\}, \quad t >0, \end{equation*} the non-increasing rearrangement is given by \begin{equation*} f^*_\mu(t) = \inf \{\lambda \geq 0 : \mu_f(\lambda) \leq t\}, \end{equation*} and \begin{equation*} f^{**}_\mu(t) = \frac{1}{t} \int_0^t f^*_\mu(s) \, ds. \end{equation*} In the special case that $\mu = |\cdot|_d$, the Lebesgue measure on $\mathbb{R}^d$, we write simply $f^*$ and $f^{**}$, respectively. Throughout, $A\lesssim B$ means that $A\leq C B$ with a positive constant $C$ depending only on nonessential parameters; if $A\lesssim B\lesssim A$, then $A\asymp B$. \subsection{Maximal functions} For $f \in L_1(Q_0, \mu)$, the local sharp maximal function is defined by \begin{equation*} f^{\#}_{Q_0;\mu}(x) = \sup_{x \in Q, Q \subset Q_0} \frac{1}{\mu(Q)} \int_Q |f(y) - f_{Q;\mu}| \, d\mu (y), \quad x \in Q_0, \end{equation*} where $f_{Q;\mu} = \frac{1}{\mu(Q)} \int_Q f d\mu$. The space $\text{BMO}(Q_0,\mu)$ consists of all $f \in L_1(Q_0,\mu)$ such that \begin{equation*} \|f\|_{\text{BMO}(Q_0,\mu)} = \sup_{x \in Q_0} f^{\#}_{Q_0;\mu}(x) < \infty. \end{equation*} Similarly, given a locally integrable function $f$ on $\mathbb{R}^d$, we define \begin{equation*} f^{\#}_\mu(x) = \sup_{x \in Q} \frac{1}{\mu(Q)} \int_Q |f(y) - f_{Q;\mu}| \, d \mu(y), \quad x \in \mathbb{R}^d, \end{equation*} and $\|f\|_{\text{BMO}(\mathbb{R}^d, \mu)} = \sup_{x \in \mathbb{R}^d}f^{\#}_\mu(x) < \infty.$ When $\mu = |\cdot|_d$, we use the notation $f_Q, f_{Q_0}^{\#}, f^{\#}, \text{BMO}(Q_0)$, and $\text{BMO}(\mathbb{R}^d)$. Let $s \in (0,1)$. The Str\"omberg--Jawerth--Torchinsky local maximal operator \cite{Stromberg, JawerthTorchinsky} is defined by \begin{equation*} M^{\#}_{s, Q_0;\mu} f (x) = \sup_{x \in Q, Q \subset Q_0} \inf_{c \in \mathbb{R}} \inf \big\{\alpha \geq 0 : \mu\{y \in Q: |f(y) - c| > \alpha\} < s \mu(Q)\big\}, \quad x \in Q_0. \end{equation*} Again, if $\mu$ is the Lebesgue measure, we write $M^{\#}_{s, Q_0} f$. The relationship between $f^{\#}_{Q_0;\mu}$ and $M^{\#}_{s, Q_0;\mu} f$ is given by the well-known equivalence \begin{equation}\label{EquivMax} (f^{\#}_{\mu})^*_\mu (t) \asymp (M^{\#}_{s;\mu} f)^{**}_\mu(t) \end{equation} provided that $s$ is small enough; see \cite[Lemma 3.4]{JawerthTorchinsky}. We also consider the modification of the Str\"omberg--Jawerth--Torchinsky maximal function given by \begin{equation*} \overline{M}^{\#}_{s, Q_0} f (x) = \sup_{x \in Q, Q \subset Q_0} \inf \big\{\alpha \geq 0 : \big|\{y \in Q: |f(y) - f_Q| > \alpha\}\big|_d \leq s |Q|_d\big\}, \quad x \in Q_0.
\end{equation*} \subsection{Function spaces}\label{SectionFunctionSpaces} If $0 < p, q \leq \infty$ and $-\infty < b < \infty$, then $L_{p,q}(\log L)_b(\mathbb{R}^d,\mu)$ stands for the \emph{Lorentz--Zygmund space} formed by all (equivalence classes of) measurable functions $f$ defined on $\mathbb{R}^d$ such that \begin{equation}\label{LZ} \|f\|_{L_{p,q}(\log L)_b(\mathbb{R}^d,\mu)} = \Big(\int_0^\infty (t^{1/p} (1 + |\log t|)^b f^*_\mu(t))^q \frac{dt}{t} \Big)^{1/q} < \infty \end{equation} (where the integral should be replaced by the supremum if $q=\infty$). When $p=\infty$ we assume that $b < -1/q \, (b \leq 0 \text{ if } q=\infty)$, otherwise $L_{\infty,q}(\log L)_b(\mathbb{R}^d,\mu)=\{0\}$. The quasi-norm \eqref{LZ} can be equivalently characterized in terms of $\mu_f(t)$ (cf. \cite[Proposition 2.2.5]{Carro}). Again the symbol $\mu$ will be omitted in the notation of Lorentz--Zygmund spaces when we deal with the Lebesgue measure. The Lorentz--Zygmund spaces $L_{p,q}(\log L)_b(Q_0,\mu)$, as well as their periodic counterparts for the Lebesgue measure $L_{p,q}(\log L)_b(\mathbb{T})$, are introduced in a similar way but now the integration in \eqref{LZ} extends over the interval $(0,1)$. For more details on Lorentz--Zygmund spaces, we refer to \cite{BennettRudnick, BennettSharpley, Carro}. In particular, setting $b=0$ in $L_{p,q}(\log L)_b(Q_0,\mu)$ we obtain the classical Lorentz spaces $L_{p,q}(Q_0,\mu)$ and \begin{equation*} \|f\|_{L_{p,q}(Q_0,\mu)} \asymp \Big(\int_0^\infty t^q (\mu_f(t))^{q/p} \frac{d t}{t} \Big)^{1/q}. \end{equation*} Moreover, letting $p=q < \infty$ in $L_{p,q}(\log L)_b(Q_0,\mu)$ we recover the Zygmund spaces $L_{p} (\log L)_b(Q_0,\mu)$ and \begin{equation*} \|f\|_{L_{p} (\log L)_b(Q_0,\mu)} \asymp \Big(\int_{Q_0} [|f(x)| \log^b (e + |f(x)|)]^p \, d \mu(x) \Big)^{1/p}. \end{equation*} For $b=0$, $L_{p}(\log L)_b(Q_0,\mu)= L_p(Q_0,\mu)$. If $p=\infty$ and $b < 0$, then $L_\infty (\log L)_b(Q_0,\mu)$ coincides with the Orlicz space of exponentially integrable functions $\text{exp}\, L^{-1/b}(Q_0,\mu)$, i.e., \begin{equation*} \text{exp}\, L^{-1/b}(Q_0,\mu) = \Big\{f : \int_{Q_0} \exp (\lambda |f(x)|^{-1/b}) \, d \mu (x) < \infty \quad \text{for some} \quad \lambda = \lambda(b, f) >0 \Big\}. \end{equation*} In the special case $b=-1$ we simply write $\text{exp}\, L (Q_0,\mu)$. Let us now define the smooth function spaces that are useful in this paper. Assume that $1 < p < \infty, 1 \leq q \leq \infty$, and $-\infty < s, b < \infty$. Let $\dot{H}^s L_{p,q} (\log L)_b(\mathbb{R}^d)$ be the \emph{Riesz-potential space} defined as the completion of $C^\infty_0(\mathbb{R}^d)$ with respect to the functional \begin{equation*} \|f\|_{\dot{H}^s L_{p,q}(\log L)_b(\mathbb{R}^d)} = \|(-\Delta)^{s/2} f\|_{L_{p,q}(\log L)_b(\mathbb{R}^d)}. \end{equation*} In particular, setting $q=p$ and $b=0$ one recovers the classical space $\dot{H}^s_p(\mathbb{R}^d) = \dot{H}^s L_p(\mathbb{R}^d)$. Note that if $k \in {{\Bbb N}}$ then $\dot{H}^k L_{p,q}(\log L)_b(\mathbb{R}^d)$ coincides (up to equivalence constants of semi-quasi-norms) with the \emph{(homogeneous) Lorentz--Zygmund--Sobolev space} $\dot{W}^k L_{p,q}(\log L)_b(\mathbb{R}^d)$, where \begin{equation*} \|f\|_{\dot{W}^k L_{p,q} (\log L)_b(\mathbb{R}^d)} = \|\, |\nabla^k f| \, \|_{L_{p,q}(\log L)_b(\mathbb{R}^d)} \end{equation*} and $|\nabla^k f (x) | = \Big(\sum_{|\alpha|_1 = k} |D^\alpha f (x)|^2 \Big)^{1/2}$. 
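To fix ideas, let us record a standard model example in the Lorentz--Zygmund scale introduced above; the same computation will reappear, in a local form, in the optimality discussions below. If $f(x) = |x|^{-d/p} (1 - \log |x|)^{-\varepsilon}$ on the unit cube, then elementary computations with the distribution function show that $f^*(t) \asymp t^{-1/p} (1-\log t)^{-\varepsilon}$ for small $t$, and hence, for $q < \infty$,
\begin{equation*}
\int_0^1 \big(t^{1/p} (1-\log t)^{b} f^*(t)\big)^q \, \frac{dt}{t} \asymp \int_1^\infty s^{(b - \varepsilon) q} \, ds < \infty \quad \Longleftrightarrow \quad \varepsilon > b + \frac{1}{q},
\end{equation*}
that is, $f \in L_{p,q}(\log L)_b(Q_0)$ if and only if $\varepsilon > b + 1/q$.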
We shall also need the inhomogeneous counterparts of these Sobolev spaces, $W^k L_{p,q} (\log L)_b(\mathcal{X}), \, \mathcal{X} \in \{\mathbb{R}^d, Q_0\},$ formed by all those $k$-times weakly differentiable functions $f$ on $\mathcal{X}$ such that \begin{equation*} \|f\|_{W^k L_{p,q} (\log L)_b(\mathcal{X})} = \sum_{m=0}^k \|\, |\nabla^m f| \,\|_{ L_{p,q} (\log L)_b(\mathcal{X})} < \infty. \end{equation*} In particular, the choice $p=q$ and $b=0$ yields the classical Sobolev spaces $W^k_p(\mathcal{X}) = W^k L_p(\mathcal{X})$. For $k \in {{\Bbb N}}$, we let $\omega_k(f,t)_{p,q,b; \mathbb{R}^d}$ denote the $k$-th order \emph{modulus of smoothness} of $f \in L_{p,q} (\log L)_b(\mathbb{R}^d)$ defined by \begin{equation*} \omega_k(f,t)_{p,q,b;\mathbb{R}^d} = \sup_{|h| \leq t} \| \Delta^k_h f\|_{L_{p,q}(\log L)_b (\mathbb{R}^d)}, \qquad t > 0, \end{equation*} where $\Delta^k_h f$ is the $k$-th difference of $f$ with step $h$, that is, \begin{equation*} \Delta^1_h f(x) = \Delta_h f(x) = f(x+h)-f(x), \quad \Delta^k_h = \Delta_h \Delta^{k-1}_h, \quad h \in \mathbb{R}^d. \end{equation*} As usual, in the definition of $\omega_k(f,t)_{p,q,b;\mathbb{T}}$ in the case of $2 \pi$-periodic functions, the norm is taken over all of $\mathbb{T}$, while the modulus of smoothness on the cube $Q$ is given by $ \omega_k(f,t)_{p,q,b;Q}= \sup_{|h| \leq t} \| \Delta^k_h f\|_{L_{p,q}(\log L)_b(Q_{kh})}, $ where $Q_{kh}:=\{x : x,\, x+kh\in Q\}.$ To simplify notation, we write $\omega_k(f,t)_{p,q,b}=\omega_k(f,t)_{p,q,b; \mathcal{X}}$ for $f \in L_{p,q} (\log L)_b(\mathcal{X}), \, \mathcal{X} \in \{\mathbb{R}^d, \mathbb{T}, Q\}$. Although we use the same notation for the moduli of smoothness of functions on $\mathbb{R}^d, \mathbb{T}$ and $Q$, this should cause no confusion, as the meaning will be clear from the context. Let $s > 0,$ $-\infty < b, \xi < \infty$, and $0 < r \leq \infty$. The \emph{(homogeneous) Besov-type space} $\dot{B}^{s,\xi}_r L_{p,q} (\log L)_b(\mathbb{R}^d)$ is defined as the completion of $C^\infty_0(\mathbb{R}^d)$ with respect to the (quasi-semi)-norm \begin{equation*} \|f\|_{\dot{B}^{s,\xi}_r L_{p,q} (\log L)_b(\mathbb{R}^d);k} = \Big(\int_0^\infty (t^{-s} (1 + |\log t|)^\xi \omega_k (f,t)_{p,q,b})^r \frac{dt}{t} \Big)^{1/r} < \infty \end{equation*} (with the usual modification if $r=\infty$). Let $B^{s,\xi}_r L_{p,q} (\log L)_b(\mathbb{R}^d)$ be the corresponding inhomogeneous space, endowed with \begin{equation*} \|f\|_{B^{s,\xi}_r L_{p,q} (\log L)_b(\mathbb{R}^d);k} = \|f\|_{L_{p,q} (\log L)_b(\mathbb{R}^d)} + \Big(\int_0^1 (t^{-s} (1 + |\log t|)^\xi \omega_k (f,t)_{p,q,b})^r \frac{dt}{t} \Big)^{1/r} < \infty \end{equation*} (with the usual modification if $r=\infty$). It is well known that these definitions are independent of $k$, in the sense that different choices of $k \in {{\Bbb N}}$ with $k > s$ give equivalent (quasi-semi-)norms on Besov spaces. Nevertheless, our notation is justified by the fact that the equivalence constants may depend on $k$, which plays a key role in establishing extrapolation estimates. Letting $p=q$ and $b=0$ in $\dot{B}^{s,\xi}_r L_{p,q}(\log L)_b(\mathbb{R}^d)$ we recover the logarithmic Besov space $\dot{B}^{s,\xi}_r L_{p}(\mathbb{R}^d) = \dot{B}^{s,\xi}_{p,r}(\mathbb{R}^d)$ (cf. \cite{DominguezTikhonov} and the list of references given there). If $\xi=0$ and $b=0$ then $\dot{B}^{s,\xi}_r L_{p,q}(\log L)_b(\mathbb{R}^d) = \dot{B}^s_r L_{p,q}(\mathbb{R}^d)$, a Lorentz--Besov space (cf.
\cite{GogatishviliOpicTikhonovTrebels, SeegerTrebels} and the references therein); if moreover $p=q$ then we arrive at the classical Besov space $\dot{B}^s_{p,r}(\mathbb{R}^d)$ (see \cite{BennettSharpley, BerghLofstrom, Triebel83}). Similarly, one can define the periodic spaces $\dot{B}^{s,\xi}_r L_{p,q}(\log L)_b(\mathbb{T}), \, B^{s,\xi}_r L_{p,q}(\log L)_b(\mathbb{T})$ and the spaces $\dot{B}^{s,\xi}_r L_{p,q}(\log L)_b(Q_0), \, B^{s,\xi}_r L_{p,q}(\log L)_b(Q_0)$ for cubes $Q_0 \subset \mathbb{R}^d$. \subsection{Limiting interpolation} Let $(A_0, A_1)$ be a compatible pair of quasi-Banach spaces. For all $f \in A_0+A_1$ and $t > 0$, the \emph{Peetre $K$-functional} is defined by \begin{equation*} K(t,f) = K(t, f; A_0, A_1) = \inf_{\substack{f = f_0 + f_1 \\ f_i \in A_i, i=0,1}} \{\|f_0\|_{A_0} + t \|f_1\|_{A_1}\}. \end{equation*} Let $0 < \theta < 1, -\infty < b < \infty$, and $0 < q \leq \infty$. The \emph{real interpolation space} $(A_0,A_1)_{\theta,q;b}$ is the collection of all those $f \in A_0+A_1$ for which \begin{equation}\label{interpolationnorm} \|f\|_{(A_0,A_1)_{\theta,q;b}} = \left(\int_0^\infty (t^{-\theta} (1 + |\log t|)^b K(t,f))^q \frac{dt}{t} \right)^{1/q} < \infty \end{equation} (with the usual modification if $q=\infty$). See \cite{BrudnyiKrugljak, EvansOpicPick, GogatishviliOpicTrebels, Gustavsson}. In particular, letting $b=0$ we recover the classical interpolation space $(A_0, A_1)_{\theta,q}$ (cf. \cite{BennettSharpley, BerghLofstrom, Triebel78}). Given two (quasi-semi-)normed spaces $A_0$ and $A_1$, we write $A_0 \hookrightarrow A_1$ if $A_0 \subset A_1$ and the natural embedding from $A_0$ into $A_1$ is continuous. The space $A'$ is the dual space of the Banach space $A$. It is easy to see that if $\theta=0$ or $\theta=1$ in \eqref{interpolationnorm} then, in general, we only obtain trivial spaces; that is why the definition of meaningful limiting interpolation spaces requires some modifications of the classical interpolation norm \eqref{interpolationnorm}. For simplicity, assume that the pair $(A_0, A_1)$ is \emph{ordered}, that is, $A_1 \hookrightarrow A_0$. It is not difficult to check that \eqref{interpolationnorm} is equivalent to the expression obtained by replacing the interval $(0, \infty)$ by the smaller interval $(0,1)$. Namely, for $\theta\in (0,1)$, \begin{equation*} \|f\|_{(A_0,A_1)_{\theta,q;b}} \asymp \left(\int_0^1 (t^{-\theta} (1 + |\log t|)^b K(t,f))^q \frac{dt}{t} \right)^{1/q}. \end{equation*} This basic observation, together with the finer tuning given by logarithmic weights, is the key to introducing \emph{limiting interpolation methods}. The space $(A_0, A_1)_{(1,b),q}$ is formed by all those $f \in A_0$ satisfying \begin{equation}\label{DefLimInt} \|f\|_{(A_0,A_1)_{(1,b),q}} = \left(\int_0^1 (t^{-1} (1 + |\log t|)^{b} K(t,f))^q \frac{dt}{t} \right)^{1/q} < \infty. \end{equation} Here $b < -1/q \, (b \leq 0 \text{ if } q=\infty)$, since otherwise $(A_0, A_1)_{(1,b),q}$ becomes the trivial space, in the sense that it contains the zero element only. For our purposes, it is enough to work with ordered pairs $(A_0, A_1)$ and the limiting interpolation space with $\theta = 1$, but one may also consider limiting interpolation methods for general quasi-Banach pairs and $\theta=0$. For detailed information on limiting interpolation, we refer the reader to \cite{AstashkinLykovMilman, DominguezHaroskeTikhonov, DominguezTikhonov, EvansOpic, EvansOpicPick, GogatishviliOpicTrebels} and the references therein.
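As a simple illustration of how these constructions interact with the rearrangements introduced above, recall the classical formula for the pair $(L_1(\mathbb{R}^d), L_\infty(\mathbb{R}^d))$ (see, e.g., \cite{BennettSharpley}):
\begin{equation*}
K(t, f; L_1(\mathbb{R}^d), L_\infty(\mathbb{R}^d)) = \int_0^t f^*(s) \, ds = t f^{**}(t).
\end{equation*}
Accordingly, $\|f\|_{(L_1,L_\infty)_{\theta,q;b}} \asymp \big(\int_0^\infty (t^{1-\theta} (1 + |\log t|)^b f^{**}(t))^q \frac{dt}{t}\big)^{1/q}$, which, by a Hardy-type inequality, recovers the Lorentz--Zygmund space $L_{p,q}(\log L)_b(\mathbb{R}^d)$ with $1/p = 1-\theta \in (0,1)$.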
\bigskip \section{New estimates of the maximal functions} \subsection{Estimates of the maximal functions in terms of measures of smoothness}\label{SectionEstimSmooth} It is well known that there are strong relationships between the oscillation of a function and its smoothness properties, see, e.g., \cite{DeVore}, \cite{DeVoreSharpley}, \cite{Garsia}, \cite{JohnNirenberg}, \cite{Kolyada86}, \cite{Kolyada87}, \cite{Kolyada87b}, \cite{Kolyada}, \cite{Kolyada99}, \cite{KolyadaLerner}, \cite{MartinMilman}, \cite{MartinMilman14}. In this section we focus on the following well-known inequality for Besov functions on the real line: \begin{equation}\label{EmbD} \|f\|_{\text{BMO}(\mathbb{R})} \lesssim \|f \|_{\dot{B}^{1/p}_{p,\infty}(\mathbb{R});1}, \quad 1 < p < \infty. \end{equation} At the quantitative level, one may ask for pointwise estimates involving the maximal function $f^{\#}$ and the modulus of smoothness $\omega_1(f,t)_p$. This question was addressed by DeVore \cite[Theorem 1]{DeVore}. Namely, he showed that \begin{equation}\label{DeVMax} f^{\# *}(t) \lesssim \sup_{t < u < \infty} u^{-1/p} \omega_1(f,u)_{p}, \qquad t > 0, \end{equation} for $f \in L_p(\mathbb{R}), \, 1 \leq p < \infty$. It is clear that \eqref{DeVMax} implies \eqref{EmbD}. Inequality \eqref{DeVMax} has been extended in several ways by Kolyada. The limiting case $p=1$ in \eqref{DeVMax} is of special interest, since it yields the quantitative version of the known embedding $\text{BV}(\mathbb{R}) \hookrightarrow \text{BMO}(\mathbb{R})$. Note that in this case inequality \eqref{DeVMax} can be equivalently written as \begin{equation}\label{DeVMaxp=1} t f^{\# *}(t) \lesssim \omega_1(f,t)_{1}, \quad f \in L_1(\mathbb{R}), \end{equation} which complements the important inequality \begin{equation}\label{DeVMaxp=1*} f^{**} (t) \lesssim \int_{t^{1/d}}^\infty \frac{\omega_d(f,u)_{1}}{u^d} \frac{du}{u}, \quad f \in L_1(\mathbb{R}^d). \end{equation} The latter admits extensions to the moduli of smoothness based on $L_p(\mathbb{R}^d)$ (see, e.g., \cite[Chapter 5, Theorem 4.19]{BennettSharpley}) or, more generally, to r.i. spaces \cite[Corollary 1]{MartinMilman}. In the special case $d=1$, inequalities \eqref{DeVMaxp=1} and \eqref{DeVMaxp=1*} were improved by Kolyada and Lerner. Namely, they showed that (cf. \cite[(24)]{KolyadaLerner}) \begin{equation}\label{KL} t f^{**}(t) \lesssim \omega_1(f,t)_{1}, \quad f \in L_1(\mathbb{R}). \end{equation} In the multivariate case the following estimate was obtained by Kolyada \cite[Theorem 1]{Kolyada}: if $f \in L_1(\mathbb{R}^d)$ then \begin{equation}\label{KL--} t^d f^{**}(t^d) \lesssim t\|f\|_{L_1(\mathbb{R}^d)} + \omega_1(f,t)_{1}, \quad t \in \big(0, 2^{-1/d} \big). \end{equation} Our first result sharpens DeVore's inequality \eqref{DeVMax} for $1 < p < \infty$ and Kolyada's estimate \eqref{KL--} for $p=1$. \begin{thm}\label{ThmDeVoreLorentz} Let $k \in {{\Bbb N}}$ and $t > 0$. Assume $f \in L_{p,q}(\mathbb{R}^d)$ with $1 < p < \infty$ and $0 < q \leq \infty$. Then \begin{equation}\label{ThmDeVore*Lorentz} t^{-d/p} \Big(\int_0^{t^d} (u^{1/p} f^{\# *}(u))^q \frac{du}{u} \Big)^{1/q} \lesssim \sup_{ t < u < \infty} u^{-d/p} \omega_k(f,u)_{p,q} \qquad \text{if} \quad k > d/p, \end{equation} and \begin{equation}\label{ThmDeVore*Lorentz2} \Big(\int_0^{t^d} (u^{1/p} f^{\# *}(u))^q \frac{du}{u} \Big)^{1/q} \lesssim \omega_k(f,t)_{p,q} \qquad \text{if} \quad k\leq d/p \end{equation} (with the standard modifications when $q=\infty$). Assume $f \in L_1(\mathbb{R}^d)$.
Then \begin{equation}\label{ThmDeVore*New} t^d f^{**}(t^d) \lesssim \omega_d(f,t)_{1} \end{equation} and \begin{equation}\label{KolyadaNew} t^k \int_{t^d}^\infty u^{1-k/d} f^{**}(u) \frac{du}{u} \lesssim \omega_k(f,t)_1 \qquad \text{if} \quad k < d. \end{equation} \end{thm} Note that, by \eqref{EquivMax}, $f^{\#}$ can be replaced by $M^{\#}_{s} f$ for small enough $s>0$ in inequalities \eqref{ThmDeVore*Lorentz} and \eqref{ThmDeVore*Lorentz2}. On the other hand, the inequalities corresponding to \eqref{ThmDeVore*New} and \eqref{KolyadaNew} obtained by replacing $f^{**}$ by $f^{\#*}$ also hold true (cf. \eqref{ProofLem2.1} below). In particular, it is plain to see that \eqref{KolyadaNew} implies \begin{equation*} \int_0^{t^d} f^{\# *}(u) \, du \lesssim \omega_k(f,t)_{1} \qquad \text{if} \quad k < d \end{equation*} for $f \in L_1(\mathbb{R}^d)$; see \eqref{ThmDeVore*Lorentz2}. Taking $p=q$ in Theorem \ref{ThmDeVoreLorentz} permits us to significantly extend DeVore's inequality \eqref{DeVMax}. \begin{cor}\label{CorollaryThmDeVore} Let $k \in {{\Bbb N}}$ and $t > 0$. Assume $f \in L_p(\mathbb{R}^d)$ with $1 < p < \infty$. Then \begin{equation}\label{ThmDeVore*} t^{-d/p} \Big(\int_0^{t^d} (f^{\# *}(u))^p \, du \Big)^{1/p} \lesssim \sup_{ t < u < \infty} u^{-d/p} \omega_k(f,u)_{p} \qquad \text{if} \quad k > d/p, \end{equation} and \begin{equation}\label{ThmDeVore*newextreme} \Big(\int_0^{t^d} (f^{\# *}(u))^p \, du \Big)^{1/p} \lesssim \omega_k(f,t)_{p} \qquad \text{if} \quad k \leq d/p. \end{equation} \end{cor} Let us now discuss the optimality of Theorem \ref{ThmDeVoreLorentz} and its extension to cubes. \begin{rem}\label{RemarkCubes} (i) Assume $p > 1$. It is clear that \eqref{ThmDeVore*} strengthens \eqref{DeVMax}. Below (cf. Proposition \ref{ThmSharpnessAssertion}) we will construct a family of extremal functions for which \eqref{ThmDeVore*Lorentz} (in particular, \eqref{ThmDeVore*}) becomes an equivalence but \eqref{DeVMax} is not applicable, since \begin{equation*} \frac{f^{\# *}(t)}{\sup_{t < u < \infty} u^{-1/p} \omega_k(f,u)_{p}} \to 0 \quad \text{as} \quad t \to 0. \end{equation*} \\ (ii) Observe that inequality \eqref{ThmDeVore*Lorentz2} for $k < d/p$ follows from the sharper estimate $$ \Big(\int_0^{t^d} (u^{\frac{1}{p}} f^*(u))^q \frac{du}{u} \Big)^{1/q} + t^k \Big(\int_{t^d}^\infty (u^{\frac{1}{p} - \frac{k}{d}} f^*(u))^q \frac{du}{u} \Big)^{1/q} \lesssim \omega_k(f,t)_{p,q}. $$ \\ (iii) Inequalities \eqref{ThmDeVore*Lorentz} and \eqref{ThmDeVore*} do not depend on $k > d/p$ since \begin{equation*} \sup_{ t < u <\infty} u^{-d/p} \omega_k(f,u)_{p,q} \asymp \sup_{ t < u <\infty} u^{-d/p} \omega_l(f,u)_{p,q}, \qquad k, l > d/p. \end{equation*} This is an immediate consequence of the Marchaud inequalities for moduli of smoothness (see \cite{GogatishviliOpicTikhonovTrebels}) \begin{equation}\label{KL---} \omega_l(f,t)_{p,q} \lesssim \omega_k (f,t)_{p,q} \lesssim t^k \int_t^\infty \frac{\omega_l(f,u)_{p,q}}{u^k} \frac{du}{u}, \qquad k < l. \end{equation} \\ (iv) Theorem \ref{ThmDeVoreLorentz} shows the important role played by the relationships between $k, d$ and $p$ in maximal inequalities. Comparing the cases $k > d/p$ and $k \leq d/p$, the question arises whether \eqref{ThmDeVore*Lorentz} can be sharpened by removing the $\sup_{t < u < \infty}$. It will be shown below that this is not the case. \\ (v) Assume $p=1$. Inequality \eqref{ThmDeVore*New} sharpens \eqref{KL--} in the sense of the order of the moduli of smoothness, cf. \eqref{KL---}.
Furthermore, in Proposition \ref{SharpnessThmDeVore*New} below, we will show that \eqref{ThmDeVore*New} is sharp, that is, there exist functions for which the converse inequality is valid. On the other hand, the inequality \eqref{KolyadaNew} and its optimality are known (see \cite[Corollary 6]{Kolyada} and \cite[Theorem 4.3]{DominguezTikhonovB}), but it is incorporated into Theorem \ref{ThmDeVoreLorentz} for the sake of completeness. \\ (vi) Another interesting approach to estimates for maximal functions in terms of smoothness was proposed by Kolyada \cite{Kolyada87}. Among other results, he showed (cf. \cite[Theorem 1]{Kolyada87}) that if $f \in L_p(\mathbb{R}^d), \, 1 < p < \infty$, then \begin{equation}\label{Kolyada} \sum_{n \geq j : f^{\#*}(2^{-n d}) > f^{\#*}(2^{-(n+1) d})} 2^{-n d} (f^{\#*}(2^{-n d}))^p \lesssim \omega_1(f,2^{-j})_p^p, \quad j \in \mathbb{N}. \end{equation} Furthermore, if $1 < p < d$ then this estimate can be improved: \begin{equation}\label{Kolyada2} \sum_{n \geq j} 2^{-n d} (f^{\#*}(2^{-n d}))^p \lesssim \omega_1(f,2^{-j})_p^p, \quad j \in \mathbb{N}. \end{equation} To compare \eqref{Kolyada} with Corollary \ref{CorollaryThmDeVore}, it is convenient to rewrite \eqref{ThmDeVore*} with $k=1$ as follows: \begin{equation}\label{6656565} \sum_{n \geq j} 2^{-n d} (f^{\#*}(2^{-n d}))^p \lesssim 2^{-j d} \sup_{2^{-j} < u < \infty} u^{-d} \omega_1(f,u)_p^p, \qquad j \in {{\Bbb N}}, \qquad d < p. \end{equation} On the one hand, the left-hand side of \eqref{6656565} sharpens the corresponding one in \eqref{Kolyada}. On the other hand, the $\sup_{2^{-j} < u < \infty}$ in the right-hand side of \eqref{6656565} is attained at $u=2^{-j}$ in \eqref{Kolyada}. This raises the natural question whether the inequality \eqref{6656565} (or equivalently, \eqref{ThmDeVore*}) can be improved to \begin{equation}\label{66565651} \sum_{n \geq j} 2^{-n d} (f^{\#*}(2^{-n d}))^p \lesssim \omega_1(f,2^{-j})_p^p, \qquad j \in {{\Bbb N}}, \qquad d < p. \end{equation} However, this is not true. Indeed, assume that \eqref{66565651} holds or, equivalently, $\Big(\int_0^{t^d} (f^{\# *}(u))^p \, du \Big)^{1/p} \lesssim \omega_1(f,t)_{p}$ for $t \in (0,1)$. Since $\lim_{t \to 0+} \frac{\int_0^{t} (f^{\# *}(u))^p \, du }{t} = (f^{\#*}(0))^p = \|f\|^p_{\text{BMO}(\mathbb{R}^d)}$, we would obtain \begin{equation*} \|f\|^p_{\text{BMO}(\mathbb{R}^d)} \lesssim \lim_{t \to 0+} \frac{ \omega_1(f,t)_{p}}{t^{d/p}}, \quad d < p, \end{equation*} which is not true even for $f \in C^\infty_0(\mathbb{R}^d)$ (recall that $\omega_1(f,t)_p \asymp t$ for $t$ sufficiently small). Assume $d \geq 2$ and $p \in (1,d)$. Then inequalities \eqref{ThmDeVore*newextreme} with $k=1$ and \eqref{Kolyada2} coincide. Furthermore, \eqref{ThmDeVore*newextreme} shows that the limiting case of \eqref{Kolyada2} with $p=d \geq 2$ also holds. This was left open in \cite{Kolyada87}. \\ (vii) The counterpart of Theorem \ref{ThmDeVoreLorentz} for cubes reads as follows. Let $k \in {{\Bbb N}}$ and $t \in (0,1)$. Assume $f \in L_{p,q}(Q_0)$ with $1 < p < \infty$ and $0 < q \leq \infty$.
Then \begin{equation}\label{ThmDeVore*LorentzCubes} t^{-d/p} \Big(\int_0^{t^d} (u^{1/p} f_{Q_0}^{\# *}(u))^q \frac{du}{u} \Big)^{1/q} \lesssim \|f\|_{L_{p,q}(Q_0)} + \sup_{ t < u < 1} u^{-d/p} \omega_k(f,u)_{p,q} \qquad \text{if} \quad k > d/p, \end{equation} and \begin{equation}\label{ThmDeVore*Lorentz2Cubes} \Big(\int_0^{t^d} (u^{1/p} f_{Q_0}^{\# *}(u))^q \frac{du}{u} \Big)^{1/q} \lesssim t^k \|f\|_{L_{p,q}(Q_0)}+ \omega_k(f,t)_{p,q} \qquad \text{if} \quad k\leq d/p \end{equation} (with the standard modifications when $q=\infty$). Assume $f \in L_1(Q_0)$ and $t \in (0,1)$. Then \begin{equation}\label{ThmDeVore*NewCubes} t^d f^{**}(t^d) \lesssim t^d \|f\|_{L_1(Q_0)} + \omega_d(f,t)_{1} \end{equation} and \begin{equation}\label{KolyadaNew**} t^k \int_{t^d}^1 u^{1-k/d} f^{**}(u) \frac{du}{u} \lesssim t^k \|f\|_{L_1(Q_0)} + \omega_k(f,t)_1 \qquad \text{if} \quad k < d. \end{equation} Note that the inequalities \eqref{ThmDeVore*LorentzCubes}--\eqref{KolyadaNew**} are no longer true if we omit the terms involving $\|f\|_{L_{p,q}(Q_0)}$ or $\|f\|_{L_1(Q_0)}$ (take, e.g., polynomials). This is in sharp contrast with Theorem \ref{ThmDeVoreLorentz}, showing an important difference between inequalities for functions on cubes and on the Euclidean space. \end{rem} Working with the Lorentz norm in Theorem \ref{ThmDeVoreLorentz} allows us to improve not only the classical embedding \eqref{EmbD} but also the Stein--Zygmund embedding \cite{SteinZygmund} (see also \cite[p. 164]{Stein70}) \begin{equation*} \dot{H}^{d/p} L_{p,\infty}(\mathbb{R}^d) \hookrightarrow \text{BMO}(\mathbb{R}^d), \quad 1 < p < \infty, \end{equation*} by considering a finer scale of Lorentz--Besov spaces. A similar approach has recently been applied in, e.g., \cite{Brezis, Brue, GrafakosSlavikova, MartinMilman, MartinMilman14, SeegerTrebels, Stolyarov}. \begin{thm}\label{LemmaEmbBMOLorentzState} Assume $1 < p < \infty$. Then \begin{equation}\label{LemmaEmbBMOLorentz} \dot{B}^{d/p}_\infty L_{p,\infty}(\mathbb{R}^d) \hookrightarrow \emph{BMO}(\mathbb{R}^d). \end{equation} Consequently, \begin{equation}\label{LemmaEmbBMO*} \dot{H}^{d/p} L_{p,\infty}(\mathbb{R}^d) \hookrightarrow \emph{BMO}(\mathbb{R}^d) \end{equation} and \begin{equation}\label{LemmaEmbBMO} \dot{W}^k L_{d/k,\infty}(\mathbb{R}^d) \hookrightarrow \emph{BMO}(\mathbb{R}^d), \quad k < d. \end{equation} \end{thm} \begin{rem} (i) The Stein--Zygmund embedding \eqref{LemmaEmbBMO*} follows from \eqref{LemmaEmbBMOLorentz} since $\dot{H}^{d/p} L_{p,\infty}(\mathbb{R}^d) \hookrightarrow \dot{B}^{d/p}_\infty L_{p,\infty}(\mathbb{R}^d)$ (cf. \cite[Theorem 1.2]{SeegerTrebels}). (ii) Since \begin{equation*} \dot{B}^{d/p_0}_\infty L_{p_0,\infty}(\mathbb{R}^d) \hookrightarrow \dot{B}^{d/p_1}_\infty L_{p_1,\infty}(\mathbb{R}^d), \quad 1 < p_0 < p_1 < \infty, \end{equation*} (cf. \cite[Theorem 1.5]{SeegerTrebels}), the domain space in \eqref{LemmaEmbBMOLorentz} becomes larger as $p$ increases. A similar comment also applies to \eqref{LemmaEmbBMO*} and \eqref{LemmaEmbBMO} (cf. \cite[Theorem 1.6]{SeegerTrebels}). The limiting case $p=\infty$ in \eqref{LemmaEmbBMOLorentz} requires special care because, for smoothness zero, different definitions of Besov spaces lead to genuinely different spaces. Namely, the space $\dot{B}^{0}_\infty L_{\infty,\infty}(\mathbb{R}^d) = \dot{B}^0_{\infty,\infty}(\mathbb{R}^d)$ given in terms of the moduli of smoothness coincides with $C^\infty_0(\mathbb{R}^d)$ equipped with the $L_\infty(\mathbb{R}^d)$ norm.
In this case, we obviously have $\dot{B}^{0}_{\infty,\infty}(\mathbb{R}^d) \hookrightarrow \text{BMO}(\mathbb{R}^d).$ Moreover, the converse embedding holds true when the Besov space $\dot{B}^{0}_{\infty,\infty}(\mathbb{R}^d)$ is replaced by its Fourier-analytically defined counterpart (cf. \cite[p. 24]{Triebel20}). \end{rem} Embedding \eqref{LemmaEmbBMO} sharpens \begin{equation}\label{MP} \dot{W}^k L_{d/k,\infty}(\mathbb{R}^d) \hookrightarrow L(\infty,\infty)(\mathbb{R}^d), \quad k < d, \end{equation} (cf. \cite[Theorem 1.2]{Milman04} and \cite{Milman16}), where $L(\infty,\infty)(\mathbb{R}^d)$ is the so-called weak-$L_\infty$ space. Note that $L(\infty,\infty)(\mathbb{R}^d)$ is the r.i. hull of $\text{BMO}(\mathbb{R}^d)$ (cf. \cite{BennettDeVoreSharpley}). The quantitative version of \eqref{MP} (with $k=1$) is given by the oscillation inequality \begin{equation}\label{KolyadaGrad} f^{**}(t)-f^*(t) \lesssim t^{1/d} |\nabla f|^{**}(t), \quad f \in C^\infty_0(\mathbb{R}^d). \end{equation} This inequality plays a prominent role in the theory of Sobolev embeddings and isoperimetry, as can be seen in \cite{Bastero}, \cite{Kolyada89}, \cite{MartinMilman10} and the references quoted therein. Another goal of this paper is to complement \eqref{KolyadaGrad}, as well as Theorem \ref{ThmDeVoreLorentz}, with the quantitative counterpart of \eqref{LemmaEmbBMO} in terms of $f^{\#}$. More precisely, we show the following. \begin{thm}\label{ThmDeVoreDer} Let $1 < p < \infty, k \in {{\Bbb N}}$, and $r= \frac{d p}{d + k p}$. Assume that either of the following conditions is satisfied: \begin{enumerate}[\upshape(i)] \item $k < d (1-1/p)$ and $1 \leq q \leq \infty,$ \item $k= d(1-1/p)$ and $q=1$. \end{enumerate} Then, given any $f \in \dot{W}^k L_{r,q}(\mathbb{R}^d) + \dot{W}^k L_{d/k, \infty}(\mathbb{R}^d)$ and $t > 0$, we have \begin{align} t^{-1/p} \Big(\int_0^{t} (u^{1/p}f^{\# *}(u))^q \, \frac{du}{u} \Big)^{1/q} & \lesssim t^{-1/p} \Big(\int_0^{t} (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} \nonumber \\ & \hspace{1cm}+ \sup_{t < u < \infty} u^{k/d} |\nabla^k f|^*(u) \label{ThmDeVoreDer<} \end{align} (with the usual modification if $q=\infty$). The corresponding inequality for cubes reads as \begin{align} t^{-1/p} \Big(\int_0^{t} (u^{1/p}f^{\# *}_{Q_0}(u))^q \, \frac{du}{u} \Big)^{1/q} & \lesssim \sum_{l=0}^k \Big[ t^{-1/p} \Big(\int_0^{t} (u^{1/r} |\nabla^l f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} \nonumber \\ & \hspace{1cm}+ \sup_{t < u < 1} u^{k/d} |\nabla^l f|^*(u) \Big] \label{ThmDeVoreDer<Cubes} \end{align} for $f \in W^k L_{r,q}(Q_0)$ and $t \in (0,1)$. \end{thm} \begin{rem} (i) The two terms on the right-hand side of \eqref{ThmDeVoreDer<} are independent of each other. More precisely, let \begin{equation*} I(t) = t^{-1/p} \Big(\int_0^{t} (u^{1/r} g^*(u))^q \, \frac{du}{u} \Big)^{1/q} \quad \text{and} \quad J(t) = \sup_{t < u < \infty} u^{k/d} g^*(u). \end{equation*} If $g(x) = g_0(|x|)$ with $g_0(t) = t^{-d/r} (1 + |\log t|)^{-\varepsilon}, \, \varepsilon > 1/q$, then $J(t) \asymp t^{-1/p} (1 + |\log t|)^{-\varepsilon}$ and $I(t) \asymp t^{-1/p} (1 + |\log t|)^{-\varepsilon + 1/q}$. On the other hand, setting $g_0(t) = \chi_{(0,1)}(t)$ we have $I(t) \asymp t^{k/d}$ and $J(t) \asymp 1$ for $t$ small enough. \\ (ii) Inequality \eqref{ThmDeVoreDer<} is sharp in the sense that there exists a family of functions for which \eqref{ThmDeVoreDer<} becomes an equivalence (see Proposition \ref{ThmDeVoreDerSharpnessAssertion} below for the precise statement).
\\ (iii) Inequalities \eqref{KolyadaGrad} and \eqref{ThmDeVoreDer<} with $k=1$ are not comparable in general. On the one hand, comparing their left-hand sides, we have \begin{equation*} f^{**}(t)-f^*(t) \lesssim t^{-1/p} \Big(\int_0^{t} (u^{1/p}f^{\# *}(u))^q \, \frac{du}{u} \Big)^{1/q}. \end{equation*} This is an immediate consequence of the Bennett--DeVore--Sharpley inequality (cf. \eqref{BDSOsc} below) and monotonicity properties. On the other hand, concerning the right-hand sides, H\"older's inequality yields \begin{equation*} t^{1/d} |\nabla f|^{**}(t) \lesssim t^{-1/p} \Big(\int_0^{t} (u^{1/r} |\nabla f|^*(u))^q \, \frac{du}{u} \Big)^{1/q}. \end{equation*} Furthermore, it is plain to construct counterexamples showing that the converse estimate fails to be true. \\ (iv) Note that \eqref{ThmDeVoreDer<Cubes} is not valid if we replace the right-hand side by the corresponding expression involving only the last summand with $l=k$. \end{rem} \subsection{Rearrangement inequalities in terms of sharp maximal functions} To avoid unnecessary technicalities, the results of this section are stated for integrable functions on $Q_0$, but similar local results can be obtained for integrable functions on $\mathbb{R}^d$. It is well known that \begin{equation}\label{ProofLem2.1} (f^{\#}_{Q_0;\mu})^{*}_{\mu}(t) \lesssim f^{**}_\mu(t) \end{equation} for $f \in L_1(Q_0, \mu)$, see, e.g., \cite{Herz} and \cite[Theorem 3.8, p. 122]{BennettSharpley}. In many problems of harmonic analysis the converse estimates to \eqref{ProofLem2.1} are of great importance. The celebrated Bennett--DeVore--Sharpley inequality \cite[(3.3), p. 605]{BennettDeVoreSharpley} (complemented in \cite[(3.6), p. 227]{ShagerShvartsman}) partially provides such estimates: if $f \in L_1(Q_0,\mu)$ and $0 < t< \frac{\mu(Q_0)}{6}$ then \begin{equation}\label{BDSOsc} f^{**}_\mu(t) - f^{*}_\mu(t) \lesssim (f^{\#}_{Q_0;\mu})^*_{\mu}(t). \end{equation} In particular, this inequality yields that $\text{BMO}(Q_0,\mu)$ is contained in the weak $L_\infty(Q_0,\mu)$ space. Furthermore, it can be applied to derive such fundamental results as the John--Nirenberg inequality \cite{JohnNirenberg} and the Fefferman--Stein theorem \cite{FeffermanStein} on the equivalence between the $L_p$-norms of $f$ and $f^{\#}$, as well as various applications in interpolation theory, cf. \cite{BennettSharpley} and \cite{ShagerShvartsman}. As a consequence, inequality \eqref{BDSOsc} has received a lot of attention over the last few years. In this regard, we mention the work \cite{Lerner05} by Lerner, where he obtained an improvement of \eqref{BDSOsc} for non-doubling measures with the help of centered maximal functions, as well as weighted variants of \eqref{BDSOsc} involving $f^{\#}$ rather than $f^{\#}_{\mu}$. Related inequalities for the maximal function $M^{\#}_{s,Q_0}$ may be found in \cite{Cwikel, JawerthTorchinsky, Lerner98a}. In this paper we obtain a logarithmic version of the Bennett--DeVore--Sharpley inequality but, unlike \eqref{BDSOsc}, involving only $f^{*}_\mu$ in the lower bound. Here, we stress that the left-hand side of \eqref{BDSOsc} depends neither on the growth of $f^*_\mu$ nor on that of $f^{**}_\mu$, but rather on the oscillation, which sometimes causes additional obstacles for applications. Our result reads as follows. \begin{thm}\label{ThmBDS} Let $1 < p < \infty$ and $0 < r \leq \infty$. Assume that $f \in L_{p,r}(Q_0,\mu)$.
Then, for each $t \in (0,1)$, we have \begin{equation}\label{ThmBDS1} \int_0^{t (1-\log t)^{-p}} (u^{1/p} (f-f_{Q_0;\mu})^{*}_{\mu}(u))^r \, \frac{du}{u} \lesssim \int_0^{t} (u^{1/p} (f^{\#}_{Q_0;\mu})^*_\mu(u))^r \, \frac{du}{u} \end{equation} (with the usual modification if $r=\infty$). Furthermore, this inequality is optimal in the following sense: \begin{equation}\label{ThmBDS1Optim} \int_0^{t (1-\log t)^{-\lambda}} (u^{1/p} (f-f_{Q_0})^{*}(u))^r \, \frac{du}{u} \lesssim \int_0^{t} (u^{1/p} (f^{\#}_{Q_0})^*(u))^r \, \frac{du}{u} \iff \lambda \geq p. \end{equation} The same result holds for $M^{\#}_{s, Q_0;\mu}, \, 0 < s\leq s_0,$ where $s_0 > 0$ depends on $d$. \end{thm} \begin{rem}\label{Rem1} (i) The analogue of \eqref{ThmBDS1} for functions on $\mathbb{R}^d$ reads as follows. Let $1 < p < \infty$ and $0 < r \leq \infty$. Assume $f \in L_{p,r}(\mathbb{R}^d)+ \text{BMO}(\mathbb{R}^d)$. Then, given any cube $Q_0$ in $\mathbb{R}^d$, the following holds \begin{equation}\label{757575} \int_0^{t (1-\log t)^{-p}} (u^{1/p} ((f-f_{Q_0;\mu}) \chi_{Q_0})^{*}_{\mu}(u))^r \, \frac{du}{u} \lesssim \int_0^{t} (u^{1/p} (f^{\#}_{\mu})^*_\mu(u))^r \, \frac{du}{u}, \quad t \in (0,1). \end{equation} Note that the equivalence constants involved are independent of $f$ and $t$ (but may depend on $|Q_0|_d$). \\ (ii) It is a natural question to investigate the interrelation between our inequality \eqref{ThmBDS1} and good-$\lambda$ inequalities. A typical good-$\lambda$ inequality claims that there exists $B > 0$ such that for all $\varepsilon > 0, \lambda > 0$, and all locally integrable functions $f$ on $\mathbb{R}^d$, we have \begin{equation}\label{GoodLambda} \mu \{|f| > B f^{\#} + \lambda\} \leq \varepsilon \mu \{|f| > \lambda\}. \end{equation} The connection between this good-$\lambda$ inequality and the Bennett--DeVore--Sharpley inequality was shown by Kurtz \cite{Kurtz} (with \cite{Bagby} as a forerunner). He proved that \eqref{GoodLambda} implies the following variants of \eqref{BDSOsc}: \begin{equation*} f^*_\mu(t)-f^*_\mu(2t) \leq C (f^{\#}_\mu)^*_\mu\Big(\frac{t}{2}\Big) \end{equation*} and \begin{equation}\label{VariantBDS} f^{**}_\mu(t) - f^{*}_\mu(t) \leq 2 B (f^{\#}_{\mu})^{**}_{\mu}\Big(\frac{t}{4}\Big) = 8 B \frac{1}{t} \int_0^{t/4} (f^{\#}_\mu)^*_\mu(u) \,du. \end{equation} More general results may be found in \cite{Milman16}. Further, using Hardy's inequality and the fact that $f^{**}_\mu(t) = \int_t^\infty (f^{**}_\mu(u) - f^*_\mu(u)) \frac{du}{u}$ whenever $f^{**}_\mu(\infty)=0$ (e.g., if $f \in L_{p}(\mathbb{R}^d,\mu)$ for some $p < \infty$), \eqref{VariantBDS} yields the following Fefferman--Stein type inequality \begin{equation}\label{p} \int_0^\infty (u^{1/p} f^{*}_\mu(u))^r \, \frac{du}{u} \lesssim \int_0^{\infty} (u^{1/p} (f^{\#}_{\mu})^*_\mu(u))^r \, \frac{du}{u}, \quad 1 < p < \infty, \quad r > 0. \end{equation} Clearly, \eqref{ThmBDS1} also implies the counterpart of \eqref{p} for cubes. Taking $p=r$ and $\mu = |\cdot|_d$ we arrive at the classical Fefferman--Stein inequality \cite{FeffermanStein}. However, the previous argument cannot be applied to obtain the local refinement of \eqref{p}, i.e., with $\int_0^\infty$ replaced by $\int_0^t$. Moreover, by virtue of the sharpness assertion \eqref{ThmBDS1Optim}, the estimate \begin{equation}\label{ThmBDS1False} \int_0^{t} (u^{1/p} (f-f_{Q_0})^{*}(u))^r \, \frac{du}{u} \lesssim \int_0^{t} (u^{1/p} (f^{\#}_{Q_0})^*(u))^r \, \frac{du}{u}, \quad f \in L_{p,r}(Q_0), \end{equation} fails to be true.
The correct inequality is given by \eqref{ThmBDS1} and involves an additional logarithmic term. Moreover, it is not difficult to see that \eqref{ThmBDS1} implies the weaker estimate \begin{equation*} \int_0^{t} (u^{1/p} (f-f_{Q_0;\mu})_\mu^{*}(u))^r \, \frac{du}{u} \lesssim \int_0^{t} (u^{1/p} (1-\log u) (f^{\#}_{Q_0;\mu})_\mu^*(u))^r \, \frac{du}{u}, \quad f \in L_{p,r}(Q_0). \end{equation*} (iii) For $p > 1$, by Hardy's inequality, \eqref{ThmBDS1} can be equivalently written as \begin{equation*} \int_0^{t (1-\log t)^{-p}} (u^{1/p} (f-f_{Q_0;\mu})^{**}_\mu(u))^r \, \frac{du}{u} \lesssim \int_0^{t} (u^{1/p} (f^{\#}_{Q_0;\mu})_\mu^{*}(u))^r \, \frac{du}{u}, \quad f \in L_{p,r}(Q_0). \end{equation*} This estimate is sharp. Indeed, assume, without loss of generality, that $\mu = |\cdot|_d$, and define $f(u) = |u|^{-d/p} (1-\log |u|)^{-\varepsilon} + f_{Q_0}, \, u \in Q_0 = [-1,1]^d$, where $\varepsilon > 1/r$. Elementary computations show that $(f-f_{Q_0})^*(t) \asymp t^{-1/p} (1 - \log t)^{-\varepsilon}$ for $t$ sufficiently small and thus \begin{equation*} \int_0^{t(1-\log t)^{-p}} (u^{1/p} (f-f_{Q_0})^{**}(u))^r \, \frac{du}{u} \asymp (1-\log t)^{-\varepsilon r + 1} \end{equation*} and, by \eqref{ProofLem2.1}, \begin{equation*} \int_0^{t} (u^{1/p} (f^{\#}_{Q_0})^*(u))^r \, \frac{du}{u} \lesssim \int_0^{t} (u^{1/p} f^{**}(u))^r \, \frac{du}{u} \asymp (1-\log t)^{-\varepsilon r + 1}. \end{equation*} (iv) A weaker estimate than \eqref{BDSOsc}, namely $f^{**}(t) \lesssim \int_t^\infty (f^{\#})^*(u) \frac{du}{u}$ whenever $f \in L_1(\mathbb{R}^d) + L_\infty(\mathbb{R}^d)$ and $f^{**}(\infty)=0$, was obtained in \cite[(4.15)]{BennettSharpley79} (cf. \cite[Chapter 5, Corollary 7.4, p. 379]{BennettSharpley} for its analogue for cubes). However, this inequality does not yield optimal estimates even for smooth functions. More precisely, for any non-zero $f \in C^\infty_0(\mathbb{R}^d),$ we have $f^{**}(t) \to \|f\|_{L_\infty(\mathbb{R}^d)}$ and $\int_t^\infty (f^{\#})^*(u) \frac{du}{u} \to \infty$ as $t \to 0+$. Note that this obstruction is not present in inequality \eqref{ThmBDS1}. \end{rem} We also establish the endpoint case $p=\infty$ in Theorem \ref{ThmBDS} with the help of the maximal function $\overline{M}^{\#}_{s, Q_0} f$. \begin{thm}\label{ThmGJ} If $f \in \emph{BMO}(Q_0)$, then \begin{equation}\label{ThmGJInequal} \sup_{0 < u < t} (1-\log u)^{-1} (f-f_{Q_0})^{**}(u) \lesssim (1-\log t)^{-1} \|\overline{M}^{\#}_{t, Q_0} f \|_{L_\infty(Q_0)}, \quad t \in (0,1). \end{equation} \end{thm} \begin{rem}\label{RemLim} (i) Inequality \eqref{ThmGJInequal} is optimal in the sense that the equivalence holds for a certain function in $\text{BMO}(Q_0)$. To prove this assertion, we make use of the following estimate, which is an immediate consequence of the John--Nirenberg theorem. If $f \in \text{BMO}(Q_0)$, then \begin{equation*} \|\overline{M}^{\#}_{t, Q_0} f \|_{L_\infty(Q_0)} \lesssim (-\log t) \|f\|_{\text{BMO}(Q_0)}. \end{equation*} Consider $f(x) = |\log |x||, \, x \in Q_0 = [-1,1]^d$. Since $f \in \text{BMO}(Q_0)$ and $f^{**}(t) \asymp (-\log t)$, it follows that both sides in \eqref{ThmGJInequal} coincide. \\ (ii) Let us now show that \eqref{ThmGJInequal} provides a much stronger estimate than the inequality $(f-f_{Q_0})^{**}(t) \lesssim \|\overline{M}^{\#}_{t, Q_0} f \|_{L_\infty(Q_0)}$ (without the $\sup_{0 < u < t}$).
To be more precise, for each $t \in (0,1)$ there exists $f \in \text{BMO}(Q_0)$ (we may assume, without loss of generality, that $f_{Q_0} = 0$) such that \begin{equation*} \sup_{0 < u < t} (1-\log u)^{-1} f^{**}(u) \asymp (1-\log t)^{-1} \|\overline{M}^{\#}_{t, Q_0} f \|_{L_\infty(Q_0)} \end{equation*} but \begin{equation*} f^{**}(t) \asymp 1 , \quad \|\overline{M}^{\#}_{t, Q_0} f \|_{L_\infty(Q_0)} \asymp (- \log t). \end{equation*} Indeed, consider \begin{equation*} f_0(u) = \left\{\begin{array}{lcl} (-\log t) & , & 0 < u \leq \frac{t}{2}, \\ & & \\ \big(1- \frac{2 (1-(-\log t)^{-1})}{t} (u - \frac{t}{2})\big) (-\log t)& , & \frac{t}{2} < u < t, \\ & & \\ 0 &, & t \leq u \leq 1, \end{array} \right. \end{equation*} and let $f$ be such that $f^{**}(u) \asymp f_0(u)$. We have \begin{align*} \sup_{0 < u < t} (1-\log u)^{-1} f^{**}(u) &\asymp \sup_{0 < u < \frac{t}{2}} (-\log u)^{-1} (-\log t) + \\ &\hspace{1cm} \sup_{\frac{t}{2} < u < t} \big(1- \frac{2 (1-(-\log t)^{-1})}{t} (u - \frac{t}{2})\big) \asymp 1 \end{align*} and thus, by \eqref{ThmGJInequal}, \begin{equation*} (-\log t) \lesssim \|\overline{M}^{\#}_{t, Q_0} f \|_{L_\infty(Q_0)} \lesssim \|f\|_{L_\infty(Q_0)} \asymp (-\log t). \end{equation*} \end{rem} \bigskip \section{Applications and discussions} \subsection{Fefferman--Stein's inequality}\label{SectionFS} As usual, a weight is a non-negative integrable function $w$ on $Q_0$. Given a measurable set $E \subset Q_0$, let $w (E) = \int_E w(x) \, dx$. We say that $w$ belongs to the $A_\infty(Q_0)$ class if there exist positive constants $C_w$ and $\delta$ such that for all cubes $Q \subseteq Q_0$ and all measurable sets $E \subseteq Q$ we have $ \frac{w(E)}{w(Q)} \leq C_w \Big(\frac{|E|_d}{|Q|_d}\Big)^\delta. $ It is well known that if $w \in A_\infty(Q_0)$ then $w$ is doubling; also $L_\infty(Q_0, w) = L_\infty(Q_0)$. The Fefferman--Stein inequality \cite{FeffermanStein} asserts that \begin{equation}\label{FSNew} \inf_{c \in \mathbb{R}}\|f-c\|_{L_p(Q_0,w)} \lesssim \|f^{\#}_{Q_0}\|_{L_p(Q_0,w)}, \quad 0 < p < \infty, \quad w \in A_\infty(Q_0); \end{equation} see also \cite{Stromberg}. The stronger version of this inequality, obtained by replacing $f^{\#}_{Q_0}$ by $M^{\#}_{s,Q_0} f$, also holds true (cf. \cite{JawerthTorchinsky, Lerner98}). Inequality \eqref{FSNew} plays a central role in Fourier analysis (see, e.g., \cite{Grafakos, Stein}) and interpolation theory (see \cite{BennettSharpley}). In particular, it is strongly related to Coifman--Fefferman inequalities, cf. \cite{Kurtz,Lerner, LernerPre1, LernerPre2}. To obtain \eqref{FSNew}, one can use various methods including duality arguments \cite{Stein}, rearrangement inequalities \cite{BennettSharpley79, DeVoreSharpley}, good-$\lambda$ inequalities \cite{Grafakos}, or Garsia--Rodemich spaces \cite{AstashkinMilman}. Recently, great interest has been directed toward studies of \eqref{FSNew} in more general function spaces, see \cite{AstashkinMilman,Lerner, LernerPre2}. In particular, it was shown that the Coifman--Fefferman inequality is equivalent to the Fefferman--Stein inequality on a certain class of Banach function spaces. Moreover, the Fefferman--Stein inequality holds on r.i. spaces if and only if their lower Boyd index is positive. These techniques, however, cannot be applied to investigate function spaces that are close to $L_\infty(Q_0)$ (i.e., those whose lower Boyd index is $0$).
A natural example of such spaces, widely used in harmonic analysis, is provided by the Lorentz--Zygmund spaces $L_{\infty, q}(\log L)_b(Q_0, w)$ and, in particular, the exponential classes $\text{exp}\, L^{\lambda}(Q_0,w)$ (see Section \ref{SectionFunctionSpaces}). Clearly, inequality \eqref{FSNew} cannot remain true if $L_p(Q_0,w)$ is replaced by $L_{\infty, q}(\log L)_b(Q_0, w)$. Below we answer the following question: What is the best possible target space $\mathbb{X}= \mathbb{X}(Q_0)$ within Lorentz--Zygmund spaces so that the inequality \begin{equation*} \inf_{c \in \mathbb{R}}\|f-c\|_{\mathbb{X}} \le C \|f^{\#}_{Q_0}\|_{L_{\infty, q}(\log L)_b(Q_0, w)} \end{equation*} holds? \begin{cor}\label{TheoremSharpLimiting} Let $0 < q \leq \infty$ and $b < -1/q$. Assume $w \in A_\infty(Q_0)$ and $f \in L_p(Q_0, w)$ for some $1 < p < \infty$. Then \begin{equation}\label{FSLim} \inf_{c \in \mathbb{R}} \bigg(\int_0^1 (1 - \log t)^{b q} \Big(\sup_{t <u <1} (1-\log u)^{-1} (f-c)_w^*(u) \Big)^{q} \frac{dt}{t} \bigg)^{1/q} \lesssim \|f^{\#}_{Q_0}\|_{L_{\infty,q}(\log L)_b(Q_0,w)}. \end{equation} In particular, we have \begin{equation*} \inf_{c \in \mathbb{R}} \|f-c\|_{L_{\infty,q}(\log L)_{b-1}(Q_0,w)} \lesssim \|f^{\#}_{Q_0}\|_{L_{\infty,q}(\log L)_b(Q_0,w)} \end{equation*} and \begin{equation*} \inf_{c \in \mathbb{R}} \|f-c\|_{\emph{exp} \, L^{\frac{\lambda}{\lambda+1}} (Q_0,w)} \lesssim \|f^{\#}_{Q_0}\|_{\emph{exp} \, L^\lambda (Q_0,w) }, \quad \lambda >0. \end{equation*} The corresponding result for $M^{\#}_{s,Q_0} f$ with $s$ small enough also holds true. \end{cor} \begin{rem} (i) The norm on the left-hand side of \eqref{FSLim} appears frequently in the study of optimal embedding theorems in limiting cases (cf. \cite{BennettRudnick}, \cite{Pustylnik} and \cite{DominguezHaroskeTikhonov}) and interpolation theorems (cf. \cite{EvansOpic, EvansOpicPick}). (ii) The sharp version of Corollary \ref{TheoremSharpLimiting} (see Proposition \ref{PropOptimFS} below) states that for $0 < q \leq \infty$ and $b < -1/q$ the inequality \begin{equation*} \inf_{c \in \mathbb{R}}\left(\int_0^1 (1 - \log t)^{b q} \left(\int_t^1 (u^{1/p} (1-\log u)^{\xi} (f-c)_w^*(u))^r \frac{du}{u} \right)^{q/r} \frac{dt}{t} \right)^{1/q} \lesssim \|f^{\#}_{Q_0}\|_{L_{\infty,q}(\log L)_b(Q_0,w)} \end{equation*} holds if and only if \begin{equation*} \left\{\begin{array}{lcl} p < \infty, & r \leq \infty, & -\infty < \xi < \infty, \\ & & \\ p=\infty,& r < \infty, & \xi < -1 - \frac{1}{r}, \\ & & \\ p=\infty, & r= \infty, & \xi \leq -1. \end{array} \right. \end{equation*} \end{rem} \subsection{Calder\'on--Scott type results} The strong connection between estimates of maximal functions of smooth functions and Sobolev embeddings was established by Calder\'on and Scott \cite{CalderonScott} and further developed in DeVore and Sharpley \cite{DeVoreSharpley}. Let us illustrate it with a simple example given in \cite[p. 84]{CalderonScott}. In order to guarantee that $f^{\#} \in L_{\xi,r}(\mathbb{R}^d),\, 1 <\xi < \infty, 0 < r \leq \infty$, it suffices to assume that $f \in \dot{B}^{d (\frac{1}{p} - \frac{1}{\xi})}_{p, r}(\mathbb{R}^d)$ where $1 < p < \xi$. Moreover, we have \begin{equation}\label{Blowupnew} \|f^{\#}\|_{L_{\xi,r}(\mathbb{R}^d)} \leq C(\xi) \|f\|_{\dot{B}^{d (\frac{1}{p} - \frac{1}{\xi})}_{p, r}(\mathbb{R}^d);k}, \quad k > d \Big(\frac{1}{p} - \frac{1}{\xi}\Big), \end{equation} where $C(\xi)$ is a positive constant which depends, among other parameters, on $\xi$ but is independent of $f$. 
Indeed, this is an immediate consequence of \eqref{ProofLem2.1} and the classical Sobolev inequality for Besov functions $\|f\|_{L_{\xi,r}(\mathbb{R}^d)} \lesssim \|f\|_{\dot{B}^{d (\frac{1}{p} - \frac{1}{\xi})}_{p, r}(\mathbb{R}^d);k}$. Furthermore, the same argument, now invoking the Sobolev inequality for Lorentz--Besov spaces (cf. \cite{Martin}), shows that \eqref{Blowupnew} also holds true when the Besov space $\dot{B}^{d (\frac{1}{p} - \frac{1}{\xi})}_{p, r}(\mathbb{R}^d)$ is replaced by $\dot{B}^{d (\frac{1}{p} - \frac{1}{\xi})}_{r} L_{p,q}(\mathbb{R}^d), \, 0 < q \leq \infty$. Note that $\dot{B}^{d (\frac{1}{p} - \frac{1}{\xi})}_{p, r}(\mathbb{R}^d) \subsetneq \dot{B}^{d (\frac{1}{p} - \frac{1}{\xi})}_{r} L_{p,q}(\mathbb{R}^d), \, q > p$. The corresponding analysis for function spaces that are close to $L_\infty$, for example $\text{exp} \, L^\lambda$ or more generally $L_{\infty, r}(\log L)_b$, is much more delicate. Among other adjustments, it is convenient to switch from homogeneous Besov spaces on $\mathbb{R}^d$ to inhomogeneous Besov spaces on bounded domains. Otherwise, the expected counterpart of \eqref{Blowupnew} will no longer involve only a classical function norm on the left-hand side, but a sum of function norms; this phenomenon already occurs in Sobolev inequalities for limiting cases (cf. \cite[Theorem 8.1]{DeVoreRiemenschneiderSharpley}). Clearly, inequality \eqref{Blowupnew} does not hold for functions in $\dot{B}^{d (\frac{1}{p} - \frac{1}{\xi})}_{p, r}(Q_0)$ (consider, e.g., polynomials), so that one is forced to work with its inhomogeneous counterpart $B^{d (\frac{1}{p} - \frac{1}{\xi})}_{p, r}(Q_0)$. Furthermore, we note that the Fefferman--Stein inequality \eqref{FSNew} does not hold for $p=\infty$; we only have the trivial reverse estimate $\|f^{\#}\|_{L_{\infty,r}(\log L)_b(Q_0)} \lesssim \|f\|_{L_{\infty,r}(\log L)_b(Q_0)}$. Then, following reasoning similar to that given above for the non-limiting case (i.e., $\xi < \infty$), but now relying on the corresponding Sobolev inequality (cf. \cite[Corollary 5.5]{DeVoreRiemenschneiderSharpley} and \cite[Theorem 2]{Martin}) \begin{equation}\label{Blowup3} \|f\|_{L_{\infty, r}(\log L)_b(Q_0)} \lesssim \|f\|_{B^{d/p, b+ 1/\min\{1,r\}}_{p,r} (Q_0);k}, \quad b < -1/r, \quad k > d/p, \end{equation} we derive \begin{equation}\label{Blowup4} \|f^{\#}_{Q_0}\|_{L_{\infty, r}(\log L)_b(Q_0)} \lesssim \|f\|_{B^{d/p, b+ 1/\min\{1,r\}}_{p,r} (Q_0);k}. \end{equation} This implies a loss of logarithmic smoothness of order $\frac{1}{\min\{1,r\}}$ in order to guarantee that $f^{\#}_{Q_0} \in L_{\infty, r}(\log L)_b(Q_0)$. One of our goals in this paper is to show that the standard methods described above, which reduce estimates for maximal functions (cf. \eqref{Blowup4}) to Sobolev inequalities (cf. \eqref{Blowup3}), are far from being optimal and can be considerably improved by using new extrapolation estimates based on Theorem \ref{ThmDeVoreLorentz}. These extrapolation results are interesting in their own right, and we postpone their detailed discussion to Section \ref{SectionExtrapol} below. As an application of these extrapolation arguments, we are in a position to improve \eqref{Blowup4}. \begin{cor}\label{CorollaryLimitingBesovMax} Let $1 < p < \infty, 0 < q, r \leq \infty, k > d/p$, and $b < -1/r$. Then \begin{equation}\label{CorollaryLimitingBesovMax1New*} \|f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b (Q_0)} \lesssim \|f\|_{B^{d/p, b+1/r}_{r} L_{p,q}(Q_0);k}. 
\end{equation} In particular, \begin{equation}\label{CorollaryLimitingBesovMax1} \|f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b (Q_0)} \lesssim \|f\|_{B^{d/p, b+1/r}_{p, r}(Q_0);k} \end{equation} and if $\lambda > 0$ then \begin{equation*} \|f^{\#}_{Q_0}\|_{\emph{exp} \, L^\lambda (Q_0)} \lesssim \|f\|_{B^{d/p,-1/\lambda}_{p, \infty}(Q_0);k}. \end{equation*} \end{cor} \begin{rem} (i) Since $B^{d/p, b+1}_{p,r}(Q_0) \subsetneq B^{d/p, b+1/r}_{p, r}(Q_0), \, r > 1$, \eqref{CorollaryLimitingBesovMax1} sharpens \eqref{Blowup4}. \\ (ii) We will show in Proposition \ref{CorollaryLimitingBesovMaxOptimal} below that \eqref{CorollaryLimitingBesovMax1New*} is optimal, namely \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b (Q_0)} \lesssim \|f\|_{B^{d/p, b+\xi}_{r} L_{p,q}(Q_0);k } \iff \xi \geq 1/r. \end{equation*} \end{rem} Our method can also be applied to Sobolev spaces. Let us state the analogue of Corollary \ref{CorollaryLimitingBesovMax}. \begin{cor}\label{CorollaryLimitingBesovMaxSecond} Let $1 \leq r \leq \infty, k < d$, and $b < -1/r$. Then \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b (Q_0)} \lesssim \|f \|_{W^k L_{d/k, r} (\log L)_{b + 1/r}(Q_0)}. \end{equation*} \end{cor} \begin{rem} (i) In Proposition \ref{CorollaryLimitingBesovMaxSecondOptimal} below, we will establish the optimality of the previous inequality, i.e., \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b (Q_0)} \lesssim \|f \|_{W^k L_{d/k, r} (\log L)_{b + \xi}(Q_0)} \iff \xi \geq 1/r. \end{equation*} (ii) Comparing Corollaries \ref{CorollaryLimitingBesovMax} and \ref{CorollaryLimitingBesovMaxSecond}, we note that, to the best of our knowledge, the relationship between the spaces $W^k L_{d/k,r}(\log L)_{b+1/r}(Q_0)$ and $B^{d/p,b+1/r}_{r} L_{p,q}(Q_0)$ is not known; see \cite{SeegerTrebels}. \end{rem} \subsection{Extrapolation results}\label{SectionExtrapol} A natural question in harmonic analysis is to look for sharp bounds of the norms of classical operators in terms of some of the involved parameters (integrability, smoothness, $A_p$ characteristic of weights, etc.). This question is not only interesting in its own right (cf. \cite{HytonenPerez}), but is also useful for establishing borderline estimates via extrapolation methods (cf. \cite{JawerthMilman}). The aim of this section is to apply the pointwise estimates obtained in Section \ref{SectionEstimSmooth} to derive several sharp estimates involving integrability properties of $f^{\#}$. As anticipated above, these estimates will be essential in the proofs of Corollaries \ref{CorollaryLimitingBesovMax} and \ref{CorollaryLimitingBesovMaxSecond}. According to \eqref{Blowupnew}, we have that \begin{equation}\label{Blowup2} \|f^{\#}\|_{L_{d/\varepsilon, r}(\mathbb{R}^d)} \leq C(\varepsilon) \|f\|_{\dot{B}^{d/p-\varepsilon}_{p, r}(\mathbb{R}^d);k}, \quad \varepsilon \to 0+. \end{equation} Invoking now Theorem \ref{ThmDeVoreLorentz}, we are able to determine the exact behaviour of the constant $C(\varepsilon)$. \begin{cor}\label{CorDeVoreExtrapol} Let $1 < p < \infty, 0 < q, r \leq \infty,$ and $k \in {{\Bbb N}}$. 
Then \begin{equation}\label{CorDeVoreExtrapol1} \|f^{\#}\|_{L_{d/\varepsilon, r}(\mathbb{R}^d)} \leq C_0 \, \varepsilon^{-1/r} \|f\|_{\dot{B}^{d/p-\varepsilon}_r L_{p, q}(\mathbb{R}^d); k}, \qquad \varepsilon \to 0+, \qquad k > d/p, \end{equation} and \begin{equation}\label{CorDeVoreExtrapol2} \|f^{\#}\|_{L_{d/\varepsilon, r}(\mathbb{R}^d)} \leq C_1 \|f\|_{\dot{B}^{d/p-\varepsilon}_r L_{p, q}(\mathbb{R}^d); k}, \qquad \varepsilon \to 0+, \qquad k = d/p, \end{equation} where $C_0$ and $C_1$ are positive constants which do not depend on $f$ and $\varepsilon$. The corresponding estimates for cubes read as follows: \begin{equation}\label{CorDeVoreExtrapol1cube} \|f^{\#}_{Q_0}\|_{L_{d/\varepsilon, r}(Q_0)} \leq C_2 \, \varepsilon^{-1/r} \|f\|_{B^{d/p-\varepsilon}_r L_{p, q}(Q_0); k}, \qquad \varepsilon \to 0+, \qquad k \geq d/p, \end{equation} where $C_2$ does not depend on $f$ and $\varepsilon$. \end{cor} \begin{rem} (i) The previous result tells us that the asymptotic behaviour of $C(\varepsilon)$ in \eqref{Blowup2} strongly depends on the order $k$ of the fixed Besov (semi-)norm. On the one hand, in the limiting case $k=d/p$ we have, by \eqref{CorDeVoreExtrapol2}, $C(\varepsilon) = O(1)$. This corresponds to the fact that if $\varepsilon \to 0+$ then we approach the spaces $L_{\infty,r}(\mathbb{R}^d)$ and \begin{equation*} \Big\{f \in C^\infty_0(\mathbb{R}^d) : \int_0^\infty (t^{-k} \omega_k(f,t)_{p,q})^r \frac{dt}{t} < \infty \Big\}, \end{equation*} which are both trivial when $r < \infty$, while if $r=\infty$ one can easily check that inequality \eqref{CorDeVoreExtrapol2} can be rewritten as $\|f\|_{\text{BMO}(\mathbb{R}^d)} \lesssim \||\nabla^k f |\|_{L_{d/k, q}(\mathbb{R}^d) }$ (cf. \eqref{LemmaEmbBMO}). A similar comment also applies to \eqref{CorDeVoreExtrapol1} (i.e., $k > d/p$) with $r= \infty$ (cf. \eqref{LemmaEmbBMOLorentz}). On the other hand, a completely different phenomenon arises when $k > d/p$ and $r < \infty$ (see \eqref{CorDeVoreExtrapol1}). In this case, the blow-up $C(\varepsilon) = O (\varepsilon^{-1/r})$ is in fact sharp since (see Proposition \ref{CorDeVoreExtrapolSharp} below) \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{d/\varepsilon, r}(Q_0)} \leq C_0 \, \varepsilon^{-\xi} \|f\|_{B^{d/p-\varepsilon}_r L_{p, q}(Q_0); k} \iff \xi \geq 1/r. \end{equation*} (ii) Estimate \eqref{CorDeVoreExtrapol1cube} is formulated in terms of the inhomogeneous Besov norms rather than their homogeneous counterparts (see \eqref{CorDeVoreExtrapol1} and \eqref{CorDeVoreExtrapol2}). This modification is indeed necessary to obtain meaningful estimates. Moreover, the limiting case $k=d/p$ shows another distinction between Besov norms on $\mathbb{R}^d$ and cubes. Specifically, inequality \eqref{CorDeVoreExtrapol1cube} with $k=d/p$ is considerably worse than \eqref{CorDeVoreExtrapol2}. \end{rem} The counterpart of Corollary \ref{CorDeVoreExtrapol} for Sobolev spaces reads as follows. \begin{cor}\label{CorDeVoreDerExtrapol} Let $1 \leq r \leq \infty$ and $k < d$. Then \begin{equation}\label{CorDeVoreDerExtrapol1} \|f^{\#}\|_{L_{d/\varepsilon, r}(\mathbb{R}^d)} \leq C_0 \, \varepsilon^{-1/r} \|f \|_{\dot{W}^k L_{d/(k+\varepsilon), r} (\mathbb{R}^d)}, \qquad \varepsilon \to 0+, \end{equation} and \begin{equation}\label{CorDeVoreDerExtrapol1Cubes} \|f^{\#}_{Q_0}\|_{L_{d/\varepsilon, r}(Q_0)} \leq C_1 \, \varepsilon^{-1/r} \|f \|_{W^k L_{d/(k+\varepsilon), r} (Q_0)}, \qquad \varepsilon \to 0+, \end{equation} where $C_0$ and $C_1$ are positive constants which do not depend on $f$ and $\varepsilon$. 
\end{cor} \begin{rem} (i) The assumption $k < d$ guarantees that the right-hand sides of \eqref{CorDeVoreDerExtrapol1} and \eqref{CorDeVoreDerExtrapol1Cubes} are well defined for small enough $\varepsilon$. \\ (ii) Letting $C(\varepsilon) = \sup_{\|f\|_{W^k L_{d/(k+\varepsilon), r} (Q_0)} \leq 1} \|f^{\#}_{Q_0}\|_{L_{d/\varepsilon, r}(Q_0)}$ and $r < \infty$, the fact that $C(\varepsilon) \to \infty$ as $\varepsilon \to 0+$ is clear since $L_{\infty, r}(Q_0)$ becomes the trivial space. The novelty of Corollary \ref{CorDeVoreDerExtrapol} is to establish the blow-up $C(\varepsilon) = O (\varepsilon^{-1/r})$. Furthermore, by Proposition \ref{CorDeVoreDerExtrapolOptim} below this estimate is optimal, that is, \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{d/\varepsilon, r}(Q_0)} \leq C \, \varepsilon^{-\xi} \|f\|_{W^k L_{d/(k+\varepsilon), r} (Q_0)} \iff \xi \geq 1/r. \end{equation*} On the other hand, note that $C(\varepsilon)$ is uniformly bounded if $r=\infty$ (see \eqref{CorDeVoreDerExtrapol1}). This corresponds to the fact that $\|f\|_{\text{BMO}(Q_0)} \lesssim \|f\|_{W^k L_{d/k,\infty}(Q_0)}$ (cf. \eqref{LemmaEmbBMO} and \eqref{0030030} below.) \end{rem} \bigskip \section{Proofs of Theorems \ref{ThmBDS} and \ref{ThmGJ}}\label{section5} Proofs of Theorems \ref{ThmBDS} and \ref{ThmGJ} are based on a combination of known Fourier analytic tools such as the John--Nirenberg inequality and the Garnett--Jones theorem on $\text{BMO}(Q_0)$ as well as new arguments involving limiting interpolation techniques (see \eqref{DefLimInt}) and Holmstedt's reiteration formulas. In particular, the following charac\-te\-ri\-za\-tion of the Lorentz--Zygmund space $L_{\infty,q} (\log L)_{b}(Q_0)$ as a limiting interpolation space will be useful. \begin{lem}\label{LemInterp1} Let $0 < p < \infty, 0 < q, r \leq \infty,-\infty < c < \infty$, and $b < -1/q \, (b \leq 0 \text{ if } q=\infty)$. Then we have \begin{equation*} L_{\infty,q} (\log L)_{b}(Q_0,\mu) = (L_{p,r}(\log L)_c(Q_0,\mu), L_\infty(Q_0,\mu))_{(1,b),q} \end{equation*} with equivalent quasi-norms. The corresponding result for periodic spaces also holds true. \end{lem} \begin{proof} Assume first that $c=0$. Inserting the well-known estimate \begin{equation}\label{KFunctLorentz} K(t, f; L_{p,r}(Q_0,\mu), L_\infty(Q_0,\mu)) \asymp \left(\int_0^{t^p} (u^{1/p} f_\mu^*(u))^r \frac{du}{u} \right)^{1/r} \end{equation} (see \cite[Theorem 4.2]{Holmstedt}) into the definition of the limiting interpolation space, we obtain \begin{align*} \|f\|_{(L_{p,r}(Q_0,\mu), L_\infty(Q_0,\mu))_{(1,b),q}} & \asymp \left(\int_0^1 t^{-q/p} (1-\log t)^{b q} \left(\int_0^{t} (u^{1/p} f_\mu^*(u))^r \frac{du}{u} \right)^{q/r} \frac{dt}{t} \right)^{1/q} \\ & \asymp \left( \int_0^1 ((1-\log t)^b f_\mu^*(t))^q \frac{dt}{t}\right)^{1/q} = \|f\|_{L_{\infty,q}(\log L)_b( Q_0,\mu)}, \end{align*} where the last equivalence follows from Hardy's inequality \cite[Theorem 6.4]{BennettRudnick} and the monotonicity of $f_\mu^*$. The general case ($c \in \mathbb{R}$) can be reduced to the previous one via the trivial embeddings \begin{equation*} L_{p_0}(Q_0,\mu) \hookrightarrow L_{p,r}(\log L)_c(Q_0,\mu) \hookrightarrow L_{p_1}(Q_0,\mu), \quad 0 < p_1 < p < p_0 < \infty. \end{equation*} \end{proof} \begin{lem}[\cite{EvansOpicPick}]\label{LemInterp2} Let $(A_0,A_1)$ be a quasi-Banach pair with $A_1 \hookrightarrow A_0$. Let $K(t,f) = K(t, f; A_0, A_1), \, 0 < t < 1$. 
If $0 < q \leq \infty$ and $b < -1/q \, (b \leq 0 \text{ if } q= \infty)$, then \begin{align} K(t(1-\log t)^{-b-1/q}, f; A_0, (A_0, A_1)_{(1,b),q}) & \nonumber\\ &\hspace{-5cm} \asymp K(t,f) + t (1-\log t)^{-b-1/q} \Big(\int_t^1 (u^{-1} (1 - \log u)^b K(u,f))^q \frac{du}{u} \Big)^{1/q} \label{LemInterp2.1} \end{align} and \begin{equation}\label{LemInterp2.2} K((1-\log t)^{b+1/q}, f; (A_0, A_1)_{(1,b),q}, A_1) \asymp \Big(\int^t_0 (u^{-1} (1 - \log u)^b K(u,f))^q \frac{du}{u} \Big)^{1/q} \end{equation} (with the usual modifications if $q=\infty$). \end{lem} We are now ready to give the \begin{proof}[Proof of Theorem \ref{ThmBDS}] In light of the John--Nirenberg inequality \cite{JohnNirenberg}, we have \begin{equation}\label{JNInequality} \|f-f_{Q_0;\mu}\|_{\text{exp} \, L (Q_0,\mu)} \lesssim \|f\|_{\text{BMO}(Q_0,\mu)} \end{equation} which yields \begin{equation}\label{ProofThmBDS1} K(t,f-f_{Q_0;\mu}; L_{p,r}(Q_0,\mu), \text{exp} \, L (Q_0,\mu)) \lesssim K(t, f; L_{p,r}(Q_0,\mu), \text{BMO}(Q_0,\mu)). \end{equation} Since $\text{exp} \, L (Q_0,\mu) = L_\infty (\log L)_{-1}(Q_0,\mu)$ (see Section \ref{SectionFunctionSpaces}), we can apply Lemma \ref{LemInterp1} and relation \eqref{LemInterp2.1} to establish \begin{align*} K(t (1-\log t), f; L_{p,r}(Q_0,\mu), \text{exp} \, L (Q_0,\mu) ) & \\ &\hspace{-7cm}\asymp K(t (1-\log t), f; L_{p,r}(Q_0,\mu), (L_{p,r}(Q_0,\mu), L_\infty(Q_0,\mu))_{(1,-1),\infty}) \\ & \hspace{-7cm} \asymp K(t, f; L_{p,r}(Q_0,\mu), L_\infty(Q_0,\mu)) \\ & \hspace{-6cm} + t (1-\log t) \sup_{t \leq u \leq 1} u^{-1} (1-\log u)^{-1} K(u,f;L_{p,r}(Q_0,\mu), L_\infty(Q_0,\mu)) \\ & \hspace{-7cm} \gtrsim K(t, f; L_{p,r}(Q_0,\mu), L_\infty(Q_0,\mu)). \end{align*} This and \eqref{KFunctLorentz} imply \begin{equation}\label{ProofThmBDS2} K(t (1-\log t), f; L_{p,r}(Q_0,\mu), \text{exp} \, L (Q_0,\mu)) \gtrsim \left(\int_0^{t^p} (u^{1/p} f_\mu^*(u))^r \frac{du}{u} \right)^{1/r}. \end{equation} On the other hand, from \cite[Remark 3.7]{JawerthTorchinsky}, \begin{equation}\label{JT} K(t, f; L_{p,r}(Q_0,\mu), \text{BMO}(Q_0,\mu)) \asymp \left(\int_0^{t^p} (u^{1/p} (f^{\#}_{Q_0;\mu})_\mu^*(u))^r \, \frac{du}{u} \right)^{1/r}. \end{equation} Combining \eqref{ProofThmBDS1}--\eqref{JT} yields \eqref{ThmBDS1}. It remains to show the sharpness assertion \eqref{ThmBDS1Optim}. Suppose that there exists $\lambda > 0$ such that \begin{equation*} \int_0^{t (1-\log t)^{-\lambda}} (u^{1/p} (f-f_{Q_0})^{*}(u))^r \, \frac{du}{u} \lesssim \int_0^{t} (u^{1/p} f^{\#*}_{Q_0}(u))^r \, \frac{du}{u}, \end{equation*} or, equivalently, \begin{equation*} \int_0^{t} (u^{1/p} (f-f_{Q_0})^{*}(u))^r \, \frac{du}{u} \lesssim \int_0^{t (1-\log t)^{\lambda}} (u^{1/p} f^{\#*}_{Q_0}(u))^r \, \frac{du}{u}. \end{equation*} Therefore, using properties of rearrangements, we obtain \begin{align*} t^{1/p} (f-f_{Q_0})^*(t) &\asymp \left(\int_{t/2}^{t} (u^{1/p} (f-f_{Q_0})^{*}(u))^r \, \frac{du}{u} \right)^{1/r} \\ & \lesssim \left(\int_0^{t (1-\log t)^\lambda} (u^{1/p} f^{\#*}_{Q_0}(u))^r \, \frac{du}{u} \right)^{1/r} \\ & \lesssim t^{1/p} (1-\log t)^{\lambda/p} \|f\|_{\text{BMO}(Q_0)}, \end{align*} which yields that there exist positive constants $c_1, c_2$ such that \begin{equation*} |\{x \in Q_0 : |f(x)-f_{Q_0}| > \xi\}|_d \leq c_1 e^{-c_2 \big(\frac{\xi}{\|f\|_{\text{BMO}(Q_0)}}\big)^{\frac{p}{\lambda}}}, \end{equation*} that is, $\text{BMO}(Q_0) \hookrightarrow \text{exp} \, L^{p/\lambda}(Q_0)$. The latter embedding implies $\lambda \geq p$ because $\text{exp} \, L (Q_0)$ is the smallest rearrangement invariant space that contains $\text{BMO}(Q_0)$ (cf. \cite{Pustylnik}). 
\end{proof} A careful examination of the proof of Theorem \ref{ThmBDS} shows the following \begin{thm} Let $\lambda > 0$. The following statements are equivalent: \begin{enumerate}[\upshape(i)] \item John--Nirenberg inequality: for each $f \in \emph{BMO}(Q_0)$, there exist positive constants $c_1$ and $c_2$ such that \begin{equation*} |\{x \in Q_0 : |f(x)-f_{Q_0}| > \xi\}|_d \leq c_1 e^{-c_2 \big(\frac{\xi}{\|f\|_{\emph{BMO}(Q_0)}} \big)^{\frac{1}{\lambda}}}. \end{equation*} \item Oscillation inequality: for $1 < p < \infty$ and $0 < r \leq \infty$ we have \begin{equation*} \int_0^{t (1-\log t)^{-\lambda p}} (u^{1/p} (f-f_{Q_0})^{*}(u))^r \, \frac{du}{u} \lesssim \int_0^{t} (u^{1/p} f^{\#*}_{Q_0}(u))^r \, \frac{du}{u}, \quad f \in L_{p,r}(Q_0). \end{equation*} \item $\lambda \geq 1.$ \end{enumerate} \end{thm} This characterization fits into the research program developed in Mart\'in, Milman and Pustylnik \cite{MartinMilmanPustylnik} and Mart\'in and Milman \cite{MartinMilman}, where certain Sobolev--Poincar\'e inequalities for smooth functions are equivalently characterized in terms of oscillation inequalities. Note that $\text{BMO}(Q_0)$ can be considered as the limiting space of the scale of Lipschitz spaces $\text{Lip}^\alpha(Q_0), 0 < \alpha < 1$, via the Campanato--Meyers theorem. For further details, we refer the reader to \cite{DeVoreSharpley} and \cite{MartinMilman14}. \begin{proof}[Proof of Theorem \ref{ThmGJ}] Applying the John--Nirenberg inequality given by \eqref{JNInequality}, we have \begin{equation}\label{ThmGJProof1} K(t, f-f_{Q_0}; \text{exp} \, L (Q_0), L_\infty(Q_0)) \lesssim K(t, f; \text{BMO}(Q_0), L_\infty(Q_0)). \end{equation} In light of the Garnett--Jones characterization of the $K$-functional for the couple $(L_\infty(Q_0), \text{BMO}(Q_0))$ (cf. \cite{GarnettJones}, \cite{JawerthTorchinsky} and \cite{Shvartsman}) \begin{equation*} K(t, f; L_\infty(Q_0), \text{BMO}(Q_0)) \asymp \|\overline{M}^{\#}_{e^{-t}, Q_0} f \|_{L_\infty(Q_0)}, \qquad t > 1, \end{equation*} we derive \begin{align} K(t, f; \text{BMO}(Q_0), L_\infty(Q_0)) & = t K(t^{-1}, f; L_\infty(Q_0), \text{BMO}(Q_0)) \nonumber \\ & \hspace{-3.5cm} \asymp t \big\|\overline{M}^{\#}_{e^{-t^{-1}}, Q_0} f \big\|_{L_\infty(Q_0)}, \quad t \in (0,1). \label{ThmGJProof2} \end{align} On the other hand, since $\text{exp} \, L(Q_0) = (L_1(Q_0), L_\infty(Q_0))_{(1,-1), \infty}$ (by Lemma \ref{LemInterp1}), we make use of estimate \eqref{LemInterp2.2} together with the well-known formula $$K(t,f; L_1(Q_0), L_\infty(Q_0)) = t f^{**}(t)$$ (see, e.g., \cite{BennettSharpley}). Namely, we have, for each $t \in (0,1)$, \begin{equation}\label{ThmGJProof3} K((1-\log t)^{-1} ,f; \text{exp} \, L(Q_0), L_\infty(Q_0)) \asymp \sup_{0 < u < t} (1-\log u)^{-1} f^{**}(u). \end{equation} Inserting \eqref{ThmGJProof2} and \eqref{ThmGJProof3} into \eqref{ThmGJProof1}, we complete the proof. \end{proof} \bigskip \section{Proofs of Theorems \ref{ThmDeVoreLorentz}, \ref{LemmaEmbBMOLorentzState} and optimality assertions } \label{section4} To prove Theorem \ref{ThmDeVoreLorentz}, we use the new embeddings between Besov spaces and $\text{BMO}(\mathbb{R}^d)$ given in Theorem \ref{LemmaEmbBMOLorentzState}. These results are closely related to embedding theorems recently obtained by Seeger and Trebels \cite{SeegerTrebels}. \begin{proof}[Proof of Theorem \ref{LemmaEmbBMOLorentzState}] Let $H_1(\mathbb{R}^d)$ be the usual Hardy space. 
According to \cite[Theorem 1.2]{SeegerTrebels}, we have \begin{equation}\label{d} H_1(\mathbb{R}^d) \hookrightarrow \dot{B}^{-d/p}_1 L_{p',1}(\mathbb{R}^d), \end{equation} where $\dot{B}^{-d/p}_1 L_{p',1}(\mathbb{R}^d)$ is the Fourier-analytically defined Lorentz--Besov space. It is well known that $\text{BMO}(\mathbb{R}^d)$ is the dual space of $H_1(\mathbb{R}^d)$, i.e., $(H_1(\mathbb{R}^d))' = \text{BMO}(\mathbb{R}^d)$ (see \cite[Theorem 2, p. 145]{FeffermanStein}). On the other hand, we claim that \begin{equation}\label{d2} (\dot{B}^{-d/p}_1 L_{p',1}(\mathbb{R}^d))' = \dot{B}^{d/p}_\infty L_{p,\infty}(\mathbb{R}^d). \end{equation} Assuming this result momentarily, embedding \eqref{LemmaEmbBMOLorentz} follows from \eqref{d} via duality. We just outline the proof of \eqref{d2} and leave further details to the interested reader. The claim can be shown by using the retraction method in a similar way as the corresponding duality assertion for classical Besov spaces \cite[Theorem 2.11.2, pp. 178--180]{Triebel83} together with the fact that Fourier--Besov spaces with positive smoothness based on $L_{p,q}(\mathbb{R}^d), \, 1 < p < \infty,$ can be equivalently characterized in terms of the $L_{p,q}(\mathbb{R}^d)$ moduli of smoothness (see the proof of the corresponding result for classical Besov spaces with $p= q$ in \cite[Sections 1.13 and 2.5.1]{Triebel78}, where now Fourier multiplier assertions for $L_{p,q}(\mathbb{R}^d)$ are covered by interpolation of $L_p(\mathbb{R}^d)$). Embedding \eqref{LemmaEmbBMO*} is an immediate consequence of \eqref{LemmaEmbBMOLorentz} and the fact (see \cite[Theorem 1.2]{SeegerTrebels}) that $\dot{H}^{d/p} L_{p,\infty}(\mathbb{R}^d) \hookrightarrow \dot{B}^{d/p}_\infty L_{p,\infty}(\mathbb{R}^d)$. In particular, setting $p=d/k > 1$ we obtain \eqref{LemmaEmbBMO}. \end{proof} \begin{proof}[Proof of Theorem \ref{ThmDeVoreLorentz}] \textsc{Case 1: $1 < p < \infty, 0 < q \leq \infty$, and $k > d/p$.} In virtue of the known embedding for Lorentz spaces, Lemma \ref{LemmaEmbBMOLorentzState} implies that $\dot{B}^{d/p}_\infty L_{p,q}(\mathbb{R}^d) \hookrightarrow \text{BMO}(\mathbb{R}^d), \, 0 < q \leq \infty$. Then \begin{equation}\label{ProofThmDeVoreLorentz1} K(t, f; L_{p,q}(\mathbb{R}^d), \text{BMO}(\mathbb{R}^d)) \lesssim K(t, f; L_{p,q}(\mathbb{R}^d), \dot{B}^{d/p}_\infty L_{p,q}(\mathbb{R}^d)). \end{equation} Next we compute these $K$-functionals. Concerning the left-hand side, by \eqref{JT}, \begin{equation}\label{ProofThmDeVoreLorentz2} K(t, f; L_{p,q}(\mathbb{R}^d), \text{BMO}(\mathbb{R}^d)) \asymp \Big(\int_0^{t^p} (u^{1/p} f^{\#*}(u))^q \frac{du}{u} \Big)^{1/q}. \end{equation} On the other hand, it follows from \begin{equation}\label{ProofLemmaEmbBMOLorentzState1*} K(t^k, f; L_{p, q}(\mathbb{R}^d), \dot{W}^k L_{p, q}(\mathbb{R}^d)) \asymp \omega_k(f,t)_{p,q}, \quad f \in L_{p, q}(\mathbb{R}^d), \end{equation} (cf. \cite[3.9.4]{BrudnyiKrugljak}; see also \cite{GogatishviliOpicTikhonovTrebels} and \cite{MartinMilman14}) that \begin{equation}\label{ProofLemmaEmbBMOLorentzState1**} \dot{B}^{d/p}_\infty L_{p, q}(\mathbb{R}^d) = (L_{p,q}(\mathbb{R}^d), \dot{W}^k L_{p,q}(\mathbb{R}^d))_{\frac{d}{k p}, \infty}. \end{equation} Then we apply Holmstedt's reiteration formula \cite[Corollary 2.3, p. 
310]{BennettSharpley} to establish \begin{align} K(t^{d/k p}, f; L_{p,q}(\mathbb{R}^d), \dot{B}^{d/p}_\infty L_{p,q}(\mathbb{R}^d))&\asymp t^{d/k p} \sup_{t^{1/k} < u < \infty} u^{-d/ p} K(u^k, f ; L_{p, q}(\mathbb{R}^d), \dot{W}^k L_{p,q}(\mathbb{R}^d)) \nonumber\\ & \asymp t^{d/k p} \sup_{t^{1/k} < u < \infty} u^{-d/p} \omega_k (f,u)_{p, q}. \label{ProofThmDeVoreLorentz3} \end{align} Plugging \eqref{ProofThmDeVoreLorentz2} and \eqref{ProofThmDeVoreLorentz3} into \eqref{ProofThmDeVoreLorentz1}, we derive the desired estimate \eqref{ThmDeVore*Lorentz}. \textsc{Case 2: $1 < p < \infty, 0 < q \leq \infty$, and $k=d/p$.} By \eqref{ProofThmDeVoreLorentz2}, Lemma \ref{LemmaEmbBMOLorentzState} and \eqref{ProofLemmaEmbBMOLorentzState1*}, we have \begin{align*} \Big(\int_0^{t^d} (u^{k/d} f^{\#*}(u))^q \frac{du}{u} \Big)^{1/q} &\asymp K(t^k, f; L_{d/k, q}(\mathbb{R}^d), \text{BMO}(\mathbb{R}^d)) \\ &\hspace{-2.5cm} \lesssim K(t^k, f; L_{d/k, q}(\mathbb{R}^d), \dot{W}^k L_{d/k, q}(\mathbb{R}^d)) \asymp \omega_k (f, t)_{d/k, q}. \end{align*} \textsc{Case 3: $1 < p < \infty, 0 < q \leq \infty$ and $k < d/p$.} In light of the Sobolev inequality (see, e.g., \cite[Theorem 2]{Milman}) \begin{equation*} \dot{W}^k L_{p,q}(\mathbb{R}^d) \hookrightarrow L_{p^*,q}(\mathbb{R}^d), \qquad p^* = \frac{d p}{d-k p}, \end{equation*} we obtain \begin{equation}\label{hfhfhhf} K(t^k,f; L_{p,q}(\mathbb{R}^d), L_{p^*,q}(\mathbb{R}^d)) \lesssim K(t^k,f; L_{p,q}(\mathbb{R}^d), \dot{W}^k L_{p,q}(\mathbb{R}^d)). \end{equation} Inserting Holmstedt's formula \cite[Theorem 4.2]{Holmstedt} \begin{equation*} K(t^k,f; L_{p,q}(\mathbb{R}^d), L_{p^*,q}(\mathbb{R}^d)) \asymp \Big(\int_0^{t^d} (u^{\frac{1}{p}} f^*(u))^q \frac{du}{u} \Big)^{1/q} + t^k \Big(\int_{t^d}^\infty (u^{\frac{1}{p} - \frac{k}{d}} f^*(u))^q \frac{du}{u} \Big)^{1/q} \end{equation*} and \eqref{ProofLemmaEmbBMOLorentzState1*} in \eqref{hfhfhhf}, we infer that \begin{equation}\label{dhhdshahs} \Big(\int_0^{t^d} (u^{\frac{1}{p}} f^*(u))^q \frac{du}{u} \Big)^{1/q} + t^k \Big(\int_{t^d}^\infty (u^{\frac{1}{p} - \frac{k}{d}} f^*(u))^q \frac{du}{u} \Big)^{1/q} \lesssim \omega_k(f,t)_{p,q} \end{equation} and, in particular, \begin{equation*} \Big(\int_0^{t^d} (u^{\frac{1}{p}} f^*(u))^q \frac{du}{u} \Big)^{1/q} \lesssim \omega_k(f,t)_{p,q}. \end{equation*} Applying now \eqref{ProofLem2.1} and Hardy's inequality, we get \begin{align*} \Big(\int_0^{t^d} (u^{1/p} f^{\# *}(u))^q \frac{du}{u} \Big)^{1/q} &\lesssim \Big(\int_0^{t^d} (u^{1/p} f^{* *}(u))^q \frac{du}{u} \Big)^{1/q} \\ & \hspace{-3cm}\lesssim \Big(\int_0^{t^d} (u^{1/p} f^{*}(u))^q \frac{du}{u} \Big)^{1/q} \lesssim \omega_k(f,t)_{p,q}. \end{align*} \textsc{Case 4: $p=q=1$ and $k=d$.} It follows from the fundamental theorem of calculus that $\dot{W}^d_1(\mathbb{R}^d) \hookrightarrow L_\infty(\mathbb{R}^d)$, and thus \begin{equation*} K(t^d, f; L_1(\mathbb{R}^d), L_\infty(\mathbb{R}^d)) \lesssim K(t^d, f; L_1(\mathbb{R}^d), \dot{W}^d_1(\mathbb{R}^d)). \end{equation*} In light of $ K(t^d, f; L_1(\mathbb{R}^d), \dot{W}^d_1(\mathbb{R}^d)) \asymp \omega_d(f,t)_{1},$ $f \in L_{1}(\mathbb{R}^d), $ and \begin{equation*} K(t, f; L_1(\mathbb{R}^d), L_\infty(\mathbb{R}^d)) = t f^{**}(t) \end{equation*} (see, e.g., \cite[Theorem 1.6, Chapter 5, p. 298]{BennettSharpley}), we arrive at \eqref{ThmDeVore*New}. \textsc{Case 5: $p=q=1$ and $k < d$.} See Remark \ref{RemarkCubes}(iv). \end{proof} The proof of the optimality of Theorem \ref{ThmDeVoreLorentz} relies on a study of Fourier series with monotone-type coefficients. 
We obtain sharp estimates for oscillations and smoothness of such functions. Our method is based on realization results for moduli of smoothness (cf. Lemma \ref{LemModuliLorentz} below), the lower bound for the sharp maximal function given in Theorem \ref{ThmBDS}, and limiting interpolation techniques. Note that the characterization of functions from $\text{BMO}(\mathbb{T})$ in terms of their Fourier coefficients was given by a celebrated result of Fefferman \cite{SleddStegenga} under the additional assumption that the Fourier coefficients are non-negative. Furthermore, various characterizations of $\text{BMO}$ functions under special conditions (e.g., lacunary Fourier series, power series) are also known (see, e.g., \cite{ChamizoCordobaUbis, KolyadaLeindler} among others). However, these results cannot be applied to establish pointwise estimates of both the oscillation (i.e., the sharp maximal function) and the moduli of smoothness. The next result confirms that inequalities \eqref{ThmDeVore*Lorentz} and \eqref{ThmDeVore*Lorentz2} are optimal and provide a non-trivial improvement of \eqref{DeVMax}. \begin{prop}\label{ThmSharpnessAssertion} Let $1 < p < \infty, 0 < q \leq \infty,$ and $k \in {{\Bbb N}}$. Let $b$ be a positive slowly varying function on $(1,\infty)$ such that \begin{equation}\label{ThmSharpnessAssertionAssump0} \int_1^\infty (b(u))^q \frac{du}{u} < \infty \end{equation} (where the integral should be replaced by the supremum if $q=\infty$). Set \begin{equation}\label{Auxb} \tilde{b}_q(t) = \Big(\int_t^\infty (b(u))^q \frac{du}{u} \Big)^{1/q}, \qquad t > 1. \end{equation} Assume that \begin{equation}\label{ThmSharpnessAssertionAssump2} \tilde{b}_q(t) \lesssim \tilde{b}_q(t(\log t)^p) \quad \text{as} \quad t \to \infty. \end{equation} Let $f \in L_1(\mathbb{T})$ and \begin{equation}\label{FouSer} f (x) \sim \sum_{n=1}^\infty n^{-1 + 1/p} b(n) \cos nx, \qquad x \in \mathbb{T}. \end{equation} Then \begin{equation}\label{ThmSharpnessAssertion1} \frac{f^{\# *}(t)}{\sup_{t < u < 1} u^{-1/p} \omega_k(f,u)_{p,q}} \to 0 \quad \text{as} \quad t \to 0 \end{equation} and \begin{equation}\label{ThmSharpnessAssertion2} t^{-1/p} \Big(\int_0^{t} (u^{1/p} f^{\# *}(u))^q \, \frac{du}{u} \Big)^{1/q} \asymp \sup_{t < u < 1} u^{-1/p} \omega_k(f,u)_{p,q}. \end{equation} \end{prop} \begin{rem}\label{remark-slowly-var} (i) We will show in Appendix A that condition \eqref{ThmSharpnessAssertionAssump2} is in fact essential. \\ (ii) As examples of slowly varying functions satisfying \eqref{ThmSharpnessAssertionAssump0} and \eqref{ThmSharpnessAssertionAssump2}, take $b (t) = (\log t)^{-\varepsilon} \psi(\log t)$ with $\varepsilon > 1/p$ and $\psi$ being a broken logarithmic function, or $b(t) = (\log t)^{-1/p} (\log (\log t))^{-\beta}$ with $\beta > 1/p$. See \cite{Bingham} for more examples of slowly varying functions. \\ (iii) The class of functions \eqref{FouSer} can easily be extended to cover more general Fourier series $f (x) \sim \sum_{n=1}^\infty n^{-1 + 1/p} (b_1(n)\cos nx+ b_2(n)\sin nx)$. \end{rem} The proof of Proposition \ref{ThmSharpnessAssertion} is based on technical Lemmas \ref{LemModuliLorentz}--\ref{Lem3}, which are of interest in their own right and deal with Fourier series with general monotone coefficients. A sequence $a=\{a_n\}$ is called \emph{general monotone}, written $a \in GM$, if there is a constant $C > 0$ such that \begin{equation*} \sum_{\nu=n}^{2n-1} |\Delta a_\nu| \leq C |a_n| \quad \text{for all} \quad n \in \mathbb{N}. 
\end{equation*} Here $\Delta a_\nu = a_\nu - a_{\nu+1}$ and the constant $C$ is independent of $n$. It is proved in \cite[p. 725]{Tikhonov} that $a \in GM$ if and only if \begin{equation}\label{GM1} |a_\nu| \lesssim |a_n| \quad \text{for} \quad n \leq \nu \leq 2n \end{equation} and \begin{equation}\label{GM2} \sum_{\nu=n}^N |\Delta a_\nu| \lesssim |a_n| + \sum_{\nu=n+1}^N \frac{|a_\nu|}{\nu} \quad \text{for any} \quad n < N. \end{equation} Examples of $GM$ sequences include decreasing sequences, increasing sequences satisfying the condition $a_{2n}\lesssim a_n$, quasi-monotone sequences (i.e., those for which $a_n/n^\tau$ is decreasing for some $\tau \geq 0$), and many others; see \cite{Tikhonov}. For instance, a non-negative decreasing sequence belongs to $GM$ with $C=1$, since the sum in the definition telescopes: $\sum_{\nu=n}^{2n-1} |\Delta a_\nu| = a_n - a_{2n} \leq a_n$. \begin{lem}\label{LemModuliLorentz} Let $1 < p < \infty, 0 < q \leq \infty$, and $k \in {{\Bbb N}}$. Assume that $f \in L_1(\mathbb{T})$ and \begin{equation*} f (x) \sim \sum_{n=1}^\infty (a_n \cos nx + b_n \sin nx), \qquad x \in \mathbb{T}, \end{equation*} where $\{a_n\}, \{b_n\}$ are nonnegative general monotone sequences. For every $j \in {{\Bbb N}}_0$, we have \begin{equation}\label{ModLorentzGM} \omega_k(f, 2^{-j})_{p,q} \asymp 2^{-j k} \Big(\sum_{\nu=0}^j 2^{\nu(k + 1/p') q} (a_{2^\nu}^q + b_{2^\nu}^q) \Big)^{1/q} +\Big(\sum_{\nu=j}^\infty 2^{\nu q/p'} (a_{2^\nu}^q + b_{2^\nu}^q) \Big)^{1/q} \end{equation} (with the usual modification if $q=\infty$). \end{lem} This result is known for Lebesgue spaces (i.e., $p=q$) \cite[Theorem 6.1]{GorbachevTikhonov}. \begin{proof}[Proof of Lemma \ref{LemModuliLorentz}] Since the periodic Hilbert transform is bounded in ${L_{p,q}(\mathbb{T})}$, we may assume that $b_n =0, \, n \in {{\Bbb N}}$. We will apply the following realization result for moduli of smoothness \cite[Lemma 3.1]{GogatishviliOpicTikhonovTrebels} \begin{equation}\label{realization} \omega_k(f, 2^{-j})_{p,q} \asymp \|f - S_{2^j} f\|_{L_{p,q}(\mathbb{T})} + 2^{-j k} \|(S_{2^j} f)^{(k)}\|_{L_{p,q}(\mathbb{T})}, \end{equation} where $S_{2^j} f (x) \sim \sum_{n=1}^{2^j} a_n \cos nx$. Although the proof given in \cite{GogatishviliOpicTikhonovTrebels} is stated only for $1 \leq q \leq \infty$, it also holds in the case $0 < q \leq \infty$. To verify \eqref{ModLorentzGM}, in view of \eqref{realization} and \eqref{GM1}, it suffices to show the following estimates: \begin{equation}\label{Claim1} \Big(\sum_{i=2^{j+1}} ^\infty i^{q/p'-1} a_i^q\Big)^{1/q} \lesssim \|f- S_{2^j} f\|_{L_{p,q}(\mathbb{T})} \lesssim \Big(\sum_{i=2^{j-1}} ^\infty i^{q/p'-1} a_i^q\Big)^{1/q} \end{equation} and \begin{equation}\label{Claim2} \|(S_{2^j} f)^{(k)}\|_{L_{p,q}(\mathbb{T})} \asymp \Big(\sum_{i=1}^{2^j} i^{(k + 1/p')q - 1} a_i^{q} \Big)^{1/q}. \end{equation} Suppose that $q < \infty$. To estimate $\|f - S_{2^j} f\|_{L_{p,q}(\mathbb{T})}$, we make use of the inequality \cite[Theorem 2.4]{Sagher} \begin{equation}\label{sagJMAA} \Big\|\Big(\sup_{n \geq i} \Big|\frac{1}{n} \sum_{l=1}^n a_l \Big| \Big)_{i \in {{\Bbb N}}} \Big\|_{\ell_{p',q}} \lesssim \|f\|_{L_{p,q}(\mathbb{T})} \end{equation} for $f(x) \sim \sum_{n=1}^\infty a_n \cos nx$. Here, $\ell_{p',q}$ denotes the Lorentz sequence space (see \cite{BennettRudnick, BennettSharpley}). Applying \eqref{sagJMAA} to $f - S_{2^j} f$, we obtain \begin{equation}\label{sag} \Big\|\Big(\sup_{ n \geq i} \frac{1}{n} \sum_{l=1}^n \bar{a}_l \Big)_{i \in {{\Bbb N}}} \Big\|_{\ell_{p',q}} \lesssim \|f- S_{2^j} f\|_{L_{p,q}(\mathbb{T})}, \quad \bar{a}_l = \left\{\begin{array}{lcl} 0 & , & l \leq 2^j,\\ a_l & , & l \geq 2^j + 1. \end{array} \right. 
\end{equation} We split the left-hand side of \eqref{sag} into two terms $K_1 + K_2$, where \begin{equation*} K_1 = \bigg(\sum_{i=1}^{2^j} i^{q/p' - 1} \Big(\sup_{n \geq i} \frac{1}{n} \sum_{l=1}^n \bar{a}_l \Big)^q \bigg)^{1/q} \, \, \text{and} \, \, K_2 = \bigg(\sum_{i=2^j + 1}^\infty i^{q/p' - 1} \Big(\sup_{n \geq i} \frac{1}{n} \sum_{l=1}^n \bar{a}_l \Big)^q \bigg)^{1/q}. \end{equation*} Applying \eqref{GM1}, we obtain \begin{equation*} K_1 \geq \Big(\sum_{i=1}^{2^j} i^{q/p' - 1} \Big)^{1/q} \sup_{n \geq 2^j + 1} \frac{1}{n} \sum_{l=2^j + 1}^n a_l \gtrsim 2^{j/p'} a_{2^{j+1}} \end{equation*} and \begin{align*} K_2 &= \bigg(\sum_{i=2^j + 1}^\infty i^{q/p' - 1} \Big(\sup_{n \geq i} \frac{1}{n} \sum_{l=2^j + 1}^n a_l \Big)^q \bigg)^{1/q} \geq \bigg(\sum_{i=2^{j + 1}+2}^\infty i^{q/p' - 1} \Big( \frac{1}{i} \sum_{l=[i/2]}^i a_l \Big)^q \bigg)^{1/q}\\ & \gtrsim \Big(\sum_{i=2^{j+1} + 2} ^\infty i^{q/p'-1} a_i^q\Big)^{1/q}, \end{align*} where as usual $[x]$ denotes the integer part of $x \in \mathbb{R}$. Inserting these estimates into \eqref{sag}, we obtain the first inequality in \eqref{Claim1}. Next we prove the upper estimate in \eqref{Claim1}. Using summation by parts, for $x \neq 0$ and $N \geq j$, \begin{align*} \Big| \sum_{\nu=2^j+1}^\infty a_\nu \cos \nu x \Big| & \lesssim \sum_{\nu=2^j}^{2^N} a_\nu + \frac{1}{|x|} \sum_{\nu=2^N+1}^\infty |\Delta a_\nu| \\ & \lesssim \sum_{\nu=2^j}^{2^N} a_\nu + \frac{1}{|x|} \sum_{\nu=2^{N-1}}^\infty \frac{a_\nu}{\nu}, \end{align*} where the last estimate follows from \eqref{GM1}, \eqref{GM2}. Consequently, \begin{equation}\label{EstimRearrange} (f- S_{2^j} f)^*(t) \leq C \left\{\begin{array}{lcl} \sum_{\nu=2^j}^{2^N} a_\nu + \frac{1}{t} \sum_{\nu=2^{N-1}}^\infty \frac{a_\nu}{\nu} & , & N \geq j, \, \quad t > 0, \\ & & \\ \frac{1}{t} \sum_{\nu=2^{j-1}}^\infty \frac{a_\nu}{\nu}& , & t > 0, \end{array} \right. \end{equation} where $C$ is a positive constant which is independent of $f, t, j$, and $N$. These estimates imply, together with monotonicity properties, that \begin{equation}\label{Estim1} \|f - S_{2^j} f\|_{L_{p,q}(\mathbb{T})} \lesssim \Big(\sum_{i=0}^\infty 2^{-i q/p} ((f- S_{2^j} f)^*(2^{-i-1}))^q \Big)^{1/q} \lesssim D_1 + D_2, \end{equation} where \begin{equation*} D_1 = 2^{j/p'} \sum_{\nu= j-1}^\infty a_{2^\nu} \quad \text{and} \quad D_2 = \bigg(\sum_{i=j}^\infty 2^{-i q/p} \Big(\sum_{\nu=j}^{i} 2^\nu a_{2^\nu} + 2^{i} \sum_{\nu=i-1}^\infty a_{2^\nu} \Big)^q \bigg)^{1/q}. \end{equation*} Further, applying H\"older's inequality if $q \geq 1$ and the embedding $\ell_q \hookrightarrow \ell_1$ if $q < 1$, we have \begin{equation}\label{Estim2} D_1 \lesssim \Big(\sum_{\nu=j-1}^\infty 2^{\nu q/p'} a_{2^\nu}^q \Big)^{1/q}. \end{equation} On the other hand, using Hardy's inequality, we estimate \begin{equation}\label{Estim3} D_2 \lesssim \Big(\sum_{i=j}^\infty 2^{i q/p'} a_{2^{i}}^q \Big)^{1/q}. \end{equation} Therefore, in virtue of \eqref{Estim1}--\eqref{Estim3}, the second inequality in \eqref{Claim1} is shown. Finally, we proceed to prove \eqref{Claim2}. Invoking again \eqref{sagJMAA}, \begin{align*} \|(S_{2^j} f)^{(k)}\|_{L_{p,q}(\mathbb{T})} \gtrsim \bigg(\sum_{i=1}^{2^j} i^{q/p'-1} \Big(\frac{1}{i} \sum_{l=1}^i l^k a_l \Big)^q \bigg)^{1/q} \gtrsim \Big(\sum_{i=1}^{2^j} i^{(k + 1/p')q - 1} a_i^{q} \Big)^{1/q}, \end{align*} where the last step follows from \eqref{GM1}. 
To establish the converse estimate, one can use an argument similar to the one above (see \eqref{EstimRearrange}) to show \begin{equation*} \big((S_{2^j} f)^{(k)}\big)^*(t) \lesssim \left\{\begin{array}{lcl} \sum_{\nu=1}^{2^N} \nu^k a_\nu + \frac{1}{t} \sum_{\nu=2^{N-1}}^{2^j} \frac{\nu^k a_\nu}{\nu} & , & N < j, \, \quad t > 0, \\ & & \\ \sum_{\nu=1}^{2^j} \nu^k a_\nu& , & t > 0. \end{array} \right. \end{equation*} Thus \begin{equation*} \|(S_{2^j} f)^{(k)}\|_{L_{p,q}(\mathbb{T})} \lesssim \Big(\sum_{i=0}^\infty 2^{-i q/p} \big(\big((S_{2^j} f)^{(k)}\big)^*(2^{-i-1})\big)^q \Big)^{1/q}\lesssim E_1 + E_2, \end{equation*} where \begin{equation*} E_1 = 2^{-j/p} \sum_{\nu=0}^{j} 2^{\nu (k+1)} a_{2^\nu} \,\, \text{and} \,\, E_2 = \bigg(\sum_{i=0}^j 2^{-i q/p} \Big(\sum_{\nu=0}^{i} 2^{\nu (k+1)} a_{2^\nu} + 2^{i} \sum_{\nu=i-1}^{j} 2^{\nu k} a_{2^\nu} \Big)^q \bigg)^{1/q}. \end{equation*} By reasoning in the same way as in \eqref{Estim2} and \eqref{Estim3}, one has $E_1 + E_2 \lesssim \Big(\sum_{i=0}^j 2^{i (k + 1/p') q} a_{2^{i}}^q\Big)^{1/q}$. This concludes the proof of \eqref{Claim2}. Similarly, \eqref{ModLorentzGM} also holds for $q=\infty$. \end{proof} As a byproduct of Lemma \ref{LemModuliLorentz} we obtain the following characterization of Lorentz--Besov norms. \begin{lem}\label{Lem1} Let $1 < p < \infty, 0 < q, r \leq \infty, k \in {{\Bbb N}}$, and $0 < s < k$. Assume that $f \in L_1(\mathbb{T})$ and \begin{equation}\label{Assumption1} f (x) \sim \sum_{n=1}^\infty (a_n \cos nx + b_n \sin nx), \qquad x \in \mathbb{T}, \end{equation} where $\{a_n\}, \{b_n\}$ are nonnegative general monotone sequences. For every $j \in {{\Bbb N}}$, we have \begin{align} \Big(\int_{2^{-j}}^1 (u^{-s} \omega_k(f,u)_{p,q})^r \frac{du}{u} \Big)^{1/r} & \asymp \Big(\sum_{\nu=0}^j 2^{\nu (s + 1/p') r} (a_{2^\nu}^r + b_{2^\nu}^r) \Big)^{1/r} \nonumber \\ & \hspace{1cm}+ 2^{j s} \Big(\sum_{\nu=j}^\infty 2^{\nu q/p'} (a_{2^{\nu}}^q + b_{2^\nu}^q) \Big)^{1/q}. \label{Lem1.1} \end{align} \end{lem} \begin{rem} Passing to the limit $j \to \infty$ in \eqref{Lem1.1}, we derive that the Besov seminorm \begin{equation*} \vertiii{f}_{\dot{B}^s_r L_{p,q}(\mathbb{T});k} := \Big(\int_0^1 (u^{-s} \omega_k(f,u)_{p,q})^r \frac{du}{u} \Big)^{1/r} \asymp \Big(\sum_{\nu=0}^\infty 2^{\nu (s + 1/p') r} (a_{2^\nu}^r + b_{2^\nu}^r) \Big)^{1/r}. \end{equation*} Thus the fine integrability parameter $q$ does not play a role when working with the global Lorentz--Besov norm. In particular, we have \begin{equation*} \vertiii{f}_{B^s_r L_{p,q}(\mathbb{T});k} \asymp \vertiii{f}_{B^s_{p,r}(\mathbb{T});k}; \end{equation*} see also \cite[Theorem 4.22]{DominguezTikhonov}. However, the quantitative estimate \eqref{Lem1.1} strongly depends on $q$, since it is easily seen that the terms \begin{equation*} \Big(\sum_{\nu=0}^j 2^{\nu (s + 1/p') r} (a_{2^\nu}^r + b_{2^\nu}^r) \Big)^{1/r}\qquad \text{and} \qquad 2^{j s} \Big(\sum_{\nu=j}^\infty 2^{\nu q/p'} (a_{2^{\nu}}^q + b_{2^\nu}^q) \Big)^{1/q}, \end{equation*} are not comparable. 
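A simple illustration: take $b_n = 0$ and $a_n = n^{-(s + 1/p')}$, $n \in {{\Bbb N}}$ (a non-negative decreasing, hence general monotone, sequence), so that $a_{2^\nu} = 2^{-\nu(s + 1/p')}$. Then, for $q, r < \infty$, the first term equals $(1+j)^{1/r}$, which is unbounded in $j$, while the second one stays bounded, since \begin{equation*} 2^{j s} \Big(\sum_{\nu=j}^\infty 2^{\nu q/p'} a_{2^\nu}^q \Big)^{1/q} = 2^{j s} \Big(\sum_{\nu=j}^\infty 2^{-\nu s q} \Big)^{1/q} \asymp 1 \end{equation*} because $s > 0$. 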
\end{rem} \begin{proof}[Proof of Lemma \ref{Lem1}] It follows from Lemma \ref{LemModuliLorentz} that $ \Big(\int_{2^{-j}}^1 (u^{-s} \omega_k(f,u)_{p,q})^r \frac{du}{u} \Big)^{1/r} \asymp K_1 + K_2 + K_3, $ where \begin{equation*} K_1 = \Big(\sum_{\nu=0}^j \big(2^{\nu (s -k) q}\sum_{i = 0}^\nu 2^{i(k + 1/p')q} (a_{2^{i}}^q + b_{2^{i}}^q) \big)^{r/q} \Big)^{1/r}, \end{equation*} \begin{equation*} K_2 = \Big(\sum_{\nu=0}^j\big(2^{\nu s q} \sum_{i=\nu}^j 2^{i q/p'} (a_{2^{i}}^q + b_{2^{i}}^q) \big)^{r/q} \Big)^{1/r}, \quad K_3 = 2^{j s} \Big(\sum_{\nu=j}^\infty 2^{\nu q/p'} (a_{2^{\nu}}^q + b_{2^\nu}^q) \Big)^{1/q}. \end{equation*} Applying Hardy's inequality (since $0 < s < k$), \begin{equation*} K_1 \asymp K_2 \asymp \Big(\sum_{\nu=0}^j 2^{\nu (s + 1/p') r} (a_{2^\nu}^r + b_{2^\nu}^r) \Big)^{1/r} \end{equation*} and thus we arrive at \eqref{Lem1.1}. \end{proof} Our next result provides an upper estimate of the sharp maximal function $f^{\#}$ in terms of Fourier coefficients of $f$. \begin{lem}\label{Lem2} Assume that $f \in L_1(\mathbb{T})$ and \begin{equation*} f (x) \sim \sum_{n=1}^\infty (a_n \cos nx + b_n \sin nx), \qquad x \in \mathbb{T}, \end{equation*} where $\{a_n\}, \{b_n\}$ are nonnegative general monotone sequences. Then \begin{equation*} f^{\#*}(2^{-j}) \lesssim \sum_{n=0}^j 2^n (a_{2^n} + b_{2^n})+ 2^j \sum_{n=j}^\infty \sum_{\nu=n}^\infty (a_{2^\nu} + b_{2^\nu}) \end{equation*} for $j \in {{\Bbb N}}.$ \end{lem} \begin{proof} For simplicity, we assume that $b_n = 0$. Taking into account \eqref{ProofLem2.1}, it suffices to show that \begin{equation*} f^{**}(2^{-j}) \lesssim \sum_{n=0}^j 2^n a_{2^n}+ 2^j \sum_{n=j}^\infty \sum_{\nu=n}^\infty a_{2^\nu}. \end{equation*} Arguing along similar lines as in \eqref{EstimRearrange}, we obtain \begin{equation*} f^*(t) \leq C \left( \sum_{n=1}^m a_n + \frac{1}{t} \Big(a_{m+1} + \sum_{n=m+2}^\infty \frac{a_n}{n} \Big) \right), \quad t > 0, \quad m \in {{\Bbb N}}. \end{equation*} Applying this estimate together with monotonicity properties (see \eqref{GM1}), we establish \begin{align*} f^{**}(2^{-j}) &= 2^j \int_0^{2^{-j}} f^*(u) \, du \asymp 2^j \sum_{\nu=j}^\infty 2^{-\nu} f^*(2^{-\nu}) \\ & \lesssim 2^j \sum_{\nu=j}^\infty 2^{-\nu} \left(\sum_{n=0}^{\nu} 2^n a_{2^n} + 2^\nu \sum_{n={\nu-1}}^\infty a_{2^n} \right) \\ & \asymp \sum_{n=0}^j 2^n a_{2^n}+ 2^j \sum_{n=j}^\infty \sum_{\nu=n}^\infty a_{2^\nu}. \end{align*} \end{proof} Concerning the lower bound of $f^{\#}$, we obtain the following \begin{lem}\label{Lem3} Let $1 < p < \infty$ and $0 < q \leq \infty$. Assume that $f \in L_1(\mathbb{T})$ and \begin{equation*} f (x) \sim \sum_{n=1}^\infty (a_n \cos nx + b_n \sin nx), \qquad x \in \mathbb{T}, \end{equation*} where $\{a_n\}, \{b_n\}$ are nonnegative general monotone sequences. Then \begin{equation*} \sum_{n=j}^\infty 2^{-n q/p} n^{-q} \big(2^{n} n^p (a_{[2^n n^p]} + b_{[2^n n^p]})\big)^q \lesssim \int_0^{2^{-j}} (u^{1/p} f^{\#*}(u))^q \, \frac{du}{u}, \quad j \in {{\Bbb N}}. \end{equation*} \end{lem} \begin{proof} Let $f_-(x)=\frac{f(x)-f(-x)}2$ and $f_+(x)=f(x)-f_-(x)$. By \eqref{GM1}, making use of the method of \cite{tikhonov1}, it is plain to check that $b_n \lesssim \int_0^{\frac{\pi}{n}} f_-(u) \, du$. Moreover, noting that \begin{equation*} \int_0^{t} \int_0^{x} f_+(u) \, du\, dx=2\sum_{n=1}^\infty\frac{a_n}{n^2}\sin^2 \frac{nt}2, \end{equation*} we also derive $a_n \lesssim \int_0^{\frac{\pi}{n}} |f_+(u)| \, du$. 
Thus we have \begin{equation*} a_n + b_n \lesssim \int_{-\pi/n }^{\pi/n } |f(u)| \, du. \end{equation*} Basic properties of rearrangements \cite[Lemma 2.1, Chapter 2, p. 44]{BennettSharpley} now imply \begin{equation}\label{ProofLem3.1} n (a_n + b_n) \lesssim n \int_{-\pi/n }^{\pi/n} |f(u)| \, du \lesssim n \int_0^{\pi/n} f^*(u) \, du \lesssim f^{**}\Big(\frac{1}{n}\Big). \end{equation} Invoking Theorem \ref{ThmBDS} (see also Remark \ref{Rem1}(iii)), we obtain \begin{align*} \int_0^{2^{-j}} (u^{1/p} f^{\#*}(u))^q \, \frac{du}{u} & \gtrsim \int_0^{2^{-j} j^{-p}} (u^{1/p} f^{**}(u))^q \, \frac{du}{u} \gtrsim \sum_{n=j}^\infty (2^{-n} n^{-p})^{q/p} (f^{**}(2^{-n} n^{-p}))^q \\ &\gtrsim \sum_{n=j}^\infty 2^{-n q/p} n^{-q} \big(2^{n} n^p (a_{[2^n n^p]} + b_{[2^n n^p]})\big)^q, \end{align*} where the last step follows from \eqref{ProofLem3.1}. \end{proof} We are now in a position to prove Proposition \ref{ThmSharpnessAssertion}. \begin{proof}[Proof of Proposition \ref{ThmSharpnessAssertion}] Let $j \in {{\Bbb N}}$. Applying Lemma \ref{Lem1} with $s=1/p$ and $r=\infty$ together with basic properties of slowly varying functions (see \cite{Bingham}), we have \begin{align} \sup_{2^{-j} < u < 1} u^{-1/p} \omega_k(f,u)_{p,q} &\asymp \sup_{\nu=0, \ldots, j} 2^{\nu/p} b(2^\nu) + 2^{j/p} \Big(\sum_{\nu=j}^\infty b(2^\nu)^q \Big)^{1/q} \asymp 2^{j/p} \tilde{b}_q(2^j). \label{ProofThmSharpnessAssertion1} \end{align} Further, in light of Lemma \ref{Lem2}, \begin{equation}\label{ProofThmSharpnessAssertion2} f^{\#*}(2^{-j}) \lesssim \sum_{n=0}^j 2^{n/p} b(2^n) + 2^j \sum_{n=j}^\infty \sum_{\nu=n}^\infty 2^{\nu(-1+1/p)} b(2^\nu) \asymp 2^{j/p} b(2^j). \end{equation} On the other hand, Lemma \ref{Lem3} yields \begin{align} \left(\int_0^{2^{-j}} (u^{1/p} f^{\#*}(u))^q \, \frac{du}{u} \right)^{1/q} &\gtrsim \left(\sum_{n=j}^\infty 2^{-n q/p} n^{-q} (2^{n} n^p (2^n n^p)^{-1 + 1/p}b(2^n n^p))^q \right)^{1/q} \nonumber \\ & = \left(\sum_{n=j}^\infty (b(2^n n^p))^q \right)^{1/q} \asymp \tilde{b}_q(2^j j^p). \label{ProofThmSharpnessAssertion3} \end{align} By \eqref{ProofThmSharpnessAssertion1} and \eqref{ProofThmSharpnessAssertion2}, \begin{equation*} \frac{f^{\#*}(2^{-j}) }{\sup_{2^{-j} < u < 1} u^{-1/p} \omega_k(f,u)_{p,q}} \lesssim \frac{b(2^j)}{\tilde{b}_q(2^j)} \end{equation*} and thus \eqref{ThmSharpnessAssertion1} holds, since \begin{equation*} \lim_{t \to \infty} \frac{b(t)}{\tilde{b}_q(t)} = 0 \end{equation*} (see \cite[Propositions 1.3.6(ii) and 1.5.9b]{Bingham}). Further, taking into account assumption \eqref{ThmSharpnessAssertionAssump2}, it follows from \eqref{ProofThmSharpnessAssertion3} and \eqref{ProofThmSharpnessAssertion1} that \begin{align*} 2^{j/p} \left(\int_0^{2^{-j}} (u^{1/p} f^{\#*}(u))^q \, \frac{du}{u} \right)^{1/q} & \gtrsim 2^{j/p} \tilde{b}_q(2^j j^p) \gtrsim 2^{j/p} \tilde{b}_q(2^j) \\ &\hspace{-3cm} \asymp \sup_{2^{-j} < u < 1} u^{-1/p} \omega_k(f,u)_{p,q}. \end{align*} Hence, assertion \eqref{ThmSharpnessAssertion2} now follows from \eqref{ThmDeVore*Lorentz}. \end{proof} Our next objective is to show the sharpness of \eqref{ThmDeVore*New}, i.e., $t f^{**}(t) \lesssim \omega_1(f,t)_{1}$. \begin{prop}\label{SharpnessThmDeVore*New} Let $k\in {{\Bbb N}}$ and $f \in L_1(\mathbb{T})$ be such that \begin{equation*} f (x) \sim \sum_{n=1}^\infty a_n \cos n x, \qquad x \in \mathbb{T}, \end{equation*} where $\{a_n\}$ is a decreasing convex sequence satisfying, for some $\varepsilon > 0$, \begin{equation}\label{SharpnessThmDeVore*New1--}a_n n^{k-\varepsilon}\lesssim a_m m^{k-\varepsilon},\qquad n\le m. 
\end{equation} Then \begin{equation}\label{SharpnessThmDeVore*New1} \omega_k(f, t)_1 \lesssim t f^{**}(t). \end{equation} \end{prop} A typical example of $\{a_n\}$ satisfying \eqref{SharpnessThmDeVore*New1--} is $a_n=n^{-\varkappa}d_n$ with $0 < \varkappa < k$ and $d_n=d(n)$ with a slowly varying function $d(x)$. Note that in this case, if $0 < \varkappa < 1$, then $f(x)\sim x^{\varkappa-1} d(1/x) \Gamma(1-\varkappa)\sin \frac{\pi \varkappa}2$ as $x\to 0+$ (cf. \cite{Tikhonov}). The proof of Proposition \ref{SharpnessThmDeVore*New} is based on the following estimates of moduli of smoothness of Fourier series with convex coefficients. \begin{lem}[{\cite{Aljancic}}]\label{LemmaAljancic} Assume that $f \in L_1(\mathbb{T})$ is such that $f(x) \sim \sum_{n=1}^\infty a_n \cos nx$, $x \in \mathbb{T}$, where $\{a_n\}$ is a decreasing convex sequence. Then, for $k \in {{\Bbb N}}$, \begin{equation*} \omega_k\Big(f, \frac{1}{n}\Big)_1 \lesssim \frac{1}{n^k} \sum_{\nu=1}^n \nu^{k-1} a_\nu, \qquad n \in {{\Bbb N}}. \end{equation*} \end{lem} \begin{proof}[Proof of Proposition \ref{SharpnessThmDeVore*New}] Applying Lemma \ref{LemmaAljancic} and taking into account condition \eqref{SharpnessThmDeVore*New1--}, we obtain \begin{equation*} \omega_k \Big( f, \frac{1}{n}\Big)_1 \lesssim \frac{1}{n^k} \sum_{\nu=1}^n \nu^{k-1} a_\nu \asymp a_n. \end{equation*} On the other hand, by \eqref{ProofLem3.1}, \begin{equation*} \frac{1}{n} f^{**} \Big( \frac{1}{n}\Big) \gtrsim a_n. \end{equation*} The desired estimate \eqref{SharpnessThmDeVore*New1} now follows from monotonicity properties of moduli of smoothness and $f^{**}$. \end{proof} \begin{proof}[Proof of Remark \ref{RemarkCubes} \emph{(iv)}] Let $\text{Ext}$ be the linear extension operator for Sobolev spaces given by Calder\'on--Stein \cite[Chapter VI, Section 3, pp. 180--192]{Stein70}. Then \begin{equation*} \text{Ext}: L_p(Q_0) \to L_p(\mathbb{R}^d) \quad \text{and} \quad \text{Ext}: W^k_p(Q_0) \to W^k_p(\mathbb{R}^d) \end{equation*} for $k \in {{\Bbb N}}$ and $1 \leq p \leq \infty$. By the interpolation properties of Sobolev spaces \cite{DeVoreScherer}, the previous estimates admit extensions to Lorentz--Sobolev spaces \begin{equation}\label{112} \text{Ext}: W^k L_{p,q}(Q_0) \to W^k L_{p,q}(\mathbb{R}^d), \qquad 1 < p < \infty, \quad 0 < q \leq \infty. \end{equation} Thus, for each $t \in (0,1)$, \begin{equation*} K(t^k,\text{Ext} f; L_{p,q}(\mathbb{R}^d), W^k L_{p,q}(\mathbb{R}^d)) \lesssim K(t^k,f; L_{p,q}(Q_0), W^k L_{p,q}(Q_0)) \end{equation*} or, equivalently (see \eqref{ProofLemmaEmbBMOLorentzState1*}), \begin{equation}\label{333222} t^k \|\text{Ext} f\|_{L_{p,q}(\mathbb{R}^d)} + \omega_k(\text{Ext} f, t)_{p,q} \lesssim t^k \|f\|_{L_{p,q}(Q_0)} + \omega_k(f, t)_{p,q}. \end{equation} On the other hand, taking into account the trivial estimate $\|\text{Re} f\|_{\text{BMO}(Q_0)} \leq \|f\|_{\text{BMO}(\mathbb{R}^d)}$, we derive \begin{equation*} K(t, \text{Re} f; L_{p,q}(Q_0), \text{BMO}(Q_0)) \leq K(t, f; L_{p,q}(\mathbb{R}^d), \text{BMO}(\mathbb{R}^d)), \quad t \in (0,1). \end{equation*} According to \eqref{JT} and \eqref{ProofThmDeVoreLorentz2}, \begin{equation}\label{7474747} \int_0^{t} (u^{1/p} (\text{Re} f)^{\#*}_{Q_0}(u))^q \frac{du}{u} \lesssim \int_0^{t} (u^{1/p} f^{\#*}(u))^q \frac{du}{u}, \quad t \in (0,1). \end{equation} Assume $k > d/p$. 
Combining \eqref{ThmDeVore*Lorentz}, \eqref{333222} and \eqref{7474747}, we arrive at \begin{align*} t^{-d/p} \Big(\int_0^{t^d} (u^{1/p} f_{Q_0}^{\# *}(u))^q \frac{du}{u} \Big)^{1/q} &\lesssim t^{-d/p} \Big(\int_0^{t^d} (u^{1/p} (\text{Ext} f)^{\# *}(u))^q \frac{du}{u} \Big)^{1/q} \\ & \lesssim \sup_{ t < u < \infty} u^{-d/p} \omega_k(\text{Ext} f,u)_{p,q} \\ & \lesssim \| \text{Ext} f\|_{L_{p,q}(\mathbb{R}^d)} + \sup_{ t < u < 1} u^{-d/p} \omega_k(\text{Ext} f,u)_{p,q} \\ & \lesssim \|f\|_{L_{p,q}(Q_0)} + \sup_{ t < u < 1} u^{-d/p} \omega_k(f,u)_{p,q}. \end{align*} This proves \eqref{ThmDeVore*LorentzCubes}. The proofs of \eqref{ThmDeVore*Lorentz2Cubes} and \eqref{ThmDeVore*NewCubes} are similar but now using \eqref{ThmDeVore*Lorentz2} and \eqref{ThmDeVore*New}, respectively. \end{proof} \bigskip \section{Proof of Theorem \ref{ThmDeVoreDer} and its optimality} \begin{proof}[Proof of Theorem \ref{ThmDeVoreDer}] We start by proving \eqref{ThmDeVoreDer<}. According to \eqref{LemmaEmbBMO}, we have \begin{equation}\label{000202002} \dot{W}^k L_{d/k, \infty}(\mathbb{R}^d) \hookrightarrow \text{BMO}(\mathbb{R}^d). \end{equation} On the other hand, in light of the Sobolev embedding (see, e.g., \cite[Theorem 2]{Milman}), we obtain \begin{equation}\label{ThmDeVoreDer1*} \dot{W}^k L_{r,q}(\mathbb{R}^d) \hookrightarrow L_{p,q}(\mathbb{R}^d), \qquad r= \frac{d p}{d + k p}, \end{equation} provided that either $k < d(1-1/p)$ or $k=d(1-1/p)$ and $q=1$. Hence, we have \begin{equation}\label{ThmDeVoreDer1} K(t, f; L_{p,q}(\mathbb{R}^d), \text{BMO}(\mathbb{R}^d)) \lesssim K(t, f;\dot{W}^k L_{dp/(d+k p),q}(\mathbb{R}^d), \dot{W}^k L_{d/k, \infty}(\mathbb{R}^d)). \end{equation} By \eqref{ProofThmDeVoreLorentz2}, \begin{equation}\label{ThmDeVoreDer2} K(t, f; L_{p,q}(\mathbb{R}^d), \text{BMO}(\mathbb{R}^d)) \asymp \left(\int_0^{t^p} (u^{1/p} f^{\#*}(u))^q \, \frac{du}{u} \right)^{1/q}. \end{equation} Applying the characterization of the $K$-functional for Sobolev spaces given in \cite{DeVoreScherer} (cf. also \cite{CalderonMilman} and \cite{DeVoreSharpley}) together with Holmstedt's formula \cite[Theorem 4.2]{Holmstedt}, we see that \begin{align} K(t, f; \dot{W}^k L_{r,q}(\mathbb{R}^d), \dot{W}^k L_{d/k, \infty}(\mathbb{R}^d)) &\asymp K(t, |\nabla^k f|; L_{r,q}(\mathbb{R}^d), L_{d/k, \infty}(\mathbb{R}^d)) \nonumber \\ &\hspace{-5cm} \asymp \Big(\int_0^{t^p} (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} + t \sup_{t^p < u < \infty} u^{k/d} |\nabla^k f|^*(u). \label{ThmDeVoreDer3} \end{align} Inserting the estimates \eqref{ThmDeVoreDer2} and \eqref{ThmDeVoreDer3} into \eqref{ThmDeVoreDer1} we derive \begin{align*} \left(\int_0^{t} (u^{1/p} f^{\#*}(u))^q \, \frac{du}{u} \right)^{1/q} & \lesssim \Big(\int_0^{t} (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} \\ & \hspace{1cm} + t^{1/p} \sup_{t < u < \infty} u^{k/d} |\nabla^k f|^*(u). \end{align*} The proof of \eqref{ThmDeVoreDer<Cubes} follows the same lines as above, but now relying on the inhomogeneous counterparts of \eqref{000202002} and \eqref{ThmDeVoreDer1*}, i.e., \begin{equation}\label{0030030} W^k L_{d/k, \infty}(Q_0) \hookrightarrow \text{BMO}(Q_0) \end{equation} and the classical embedding \begin{equation*} W^k L_{r,q}(Q_0) \hookrightarrow L_{p,q}(Q_0), \qquad r= \frac{d p}{d + k p}. \end{equation*} The proof of \eqref{0030030} is an immediate consequence of \eqref{000202002} and the fact that (see \eqref{112}) \begin{equation*} \text{Ext}: W^k L_{d/k,\infty}(Q_0) \to W^k L_{d/k,\infty}(\mathbb{R}^d). 
\end{equation*} \end{proof} To show the optimality of Theorem \ref{ThmDeVoreDer}, and to avoid unnecessary complications, we will only concern ourselves with the case $k=1$. \begin{prop}\label{ThmDeVoreDerSharpnessAssertion} Let $1 < p < \infty$ and $r= \frac{dp}{d+p}$. Assume that either of the following conditions is satisfied: \begin{enumerate}[\upshape(i)] \item $1 < d (1-1/p)$ and $1 \leq q \leq \infty$, \item $1= d(1-1/p)$ and $q=1$. \end{enumerate} Let $b$ be a positive slowly varying function on $(0,1)$ satisfying \begin{equation*} \int_0^1 (b(u))^q \frac{du}{u} < \infty \end{equation*} (where the integral should be replaced by the supremum if $q=\infty$). Set \begin{equation*} \bar{b}_q(t) = \Big(\int^t_0 (b(u))^q \frac{du}{u} \Big)^{1/q}, \qquad t < 1. \end{equation*} Assume that \begin{equation}\label{ThmDeVoreDerSharpnessAssertion1} \bar{b}_q(t) \lesssim \bar{b}_q(t(-\log t)^{-p/d}) \quad \text{as} \quad t \to 0. \end{equation} Define \begin{equation*} f(x) = \int_{|x|}^1 u^{-d/p} b(u) \frac{du}{u}, \qquad 0 < |x| < 1, \end{equation*} and $f(x)=0$ otherwise. Then \begin{align*} t^{-d/p} \Big(\int_0^{t^d} (u^{1/p}f^{\# *}(u))^q \, \frac{du}{u} \Big)^{1/q} & \asymp t^{-d/p} \Big(\int_0^{t^d} (u^{1/r} |\nabla f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} \\ & \hspace{1cm}+ \sup_{t^d < u < \infty} u^{1/d} |\nabla f|^*(u) \end{align*} for $t$ sufficiently small. \end{prop} Regarding condition \eqref{ThmDeVoreDerSharpnessAssertion1} and examples of functions $b$, see Remark \ref{remark-slowly-var}(i) and (ii). \begin{proof}[Proof of Proposition \ref{ThmDeVoreDerSharpnessAssertion}] We define $f_0 : [0,\infty) \to [0, \infty)$ by \begin{equation*} f_0(t) = \int_t^1 u^{-d/p} b(u) \frac{du}{u}, \qquad t \in (0, 1), \end{equation*} and $f_0(t)=0$ otherwise. Noting that $f(x) = f_0(|x|)$ and $f_0$ is a decreasing function, it is readily seen that $f^*(t) = f_0 \left((t/\omega_d)^{1/d} \right)$, where $\omega_d$ denotes the volume of the $d$-dimensional unit ball. It follows from basic properties of slowly varying functions that \begin{equation*} f^*(t) \asymp t^{-1/p} b(t^{1/d}), \qquad t \in (0,1/2). \end{equation*} Further, elementary computations yield that \begin{equation*} |\nabla f|^*(t) \asymp t^{-1/p -1/d} b(t^{1/d}), \qquad t \in (0,1). \end{equation*} Assume $q < \infty$. If $t$ is sufficiently small then \begin{equation*} t^{-d/p} \Big(\int_0^{t^d} (u^{1/r} |\nabla f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} + \sup_{t^d < u < \infty} u^{1/d} |\nabla f|^*(u) \asymp t^{-d/p} \bar{b}_q(t), \end{equation*} where we have used the fact that $\lim_{t \to 0 +} \frac{\bar{b}_q(t)}{b(t)} = \infty$ (see \cite[Propositions 1.3.6(ii) and 1.5.9b]{Bingham}). On the other hand, applying Theorem \ref{ThmBDS} (see also \eqref{757575} with $Q_0 = [-1,1]^d$) and making use of assumption \eqref{ThmDeVoreDerSharpnessAssertion1}, \begin{equation*} t^{-d/p} \Big(\int_0^{t^d} (u^{1/p} f^{\#*}(u))^q \frac{du}{u} \Big)^{1/q} \gtrsim t^{-d/p} \bar{b}_q(t (1-\log t)^{-p/d}) \gtrsim t^{-d/p} \bar{b}_q(t). \end{equation*} Combining the above results and \eqref{ThmDeVoreDer<}, we complete the proof. The case $q=\infty$ is easier and we omit further details.
\end{proof} \bigskip \section{Proof of Corollary \ref{TheoremSharpLimiting} and its optimality} \begin{proof}[Proof of Corollary \ref{TheoremSharpLimiting}] According to Theorem \ref{ThmBDS}, there exists $s$ such that the sharp maximal function $M^{\#}_{s,Q_0;w}f$ obeys the estimate \begin{equation*} \int_0^{t (1-\log t)^{-p}} ((f-f_{Q_0;w})^{*}_w(u))^p \, du \lesssim \int_0^{t} ((M^{\#}_{s,Q_0;w}f)_w^*(u))^p \, du, \quad \end{equation*} and thus, applying Hardy's inequality, we derive \begin{align*} \bigg(\int_0^1 t^{-q/p} (1 - \log t)^{b q} \Big(\int_0^{t (1-\log t)^{-p}} ((f - f_{Q_0;w})_w^{*}(u))^p \, du \Big)^{q/p} \frac{dt}{t} \bigg)^{1/q} & \\ & \hspace{-8cm}\lesssim\bigg(\int_0^1 t^{-q/p} (1 - \log t)^{b q} \Big( \int_0^{t} ((M^{\#}_{s,Q_0;w}f)_w^*(u))^p \, du\Big)^{q/p} \frac{dt}{t} \bigg)^{1/q} \\ & \hspace{-8cm} \lesssim \bigg( \int_0^1 ((1-\log t)^{b} (M^{\#}_{s,Q_0;w}f)_w^*(t))^q \frac{dt}{t} \bigg)^{1/q}. \end{align*} Next we show that \begin{align} \bigg(\int_0^1 (1 - \log t)^{b q} \Big(\sup_{t <u <1} (1-\log u)^{-1} (f - f_{Q_0;w})_w^*(u) \Big)^{q} \frac{dt}{t} \bigg)^{1/q} & \lesssim \nonumber \\ &\hspace{-8cm} \bigg(\int_0^1 t^{-q/p} (1 - \log t)^{b q} \Big(\int_0^{t (1-\log t)^{-p}} ((f-f_{Q_0;w})^{*}_w(u))^p \, du \Big)^{q/p} \frac{dt}{t} \bigg)^{1/q}\label{71} \end{align} and \begin{equation}\label{72} \bigg( \int_0^1 ((1-\log t)^{b} (M^{\#}_{s,Q_0;w}f)_w^*(t))^q \frac{dt}{t} \bigg)^{1/q} \lesssim \bigg( \int_0^1 ((1-\log t)^{b} (f_{Q_0}^{\#})^*_w(t))^q \frac{dt}{t} \bigg)^{1/q}. \end{equation} Note that in the right-hand side of \eqref{72} we work with the function $f^{\#}_{Q_0}$ taken with respect to the Lebesgue measure. Using monotonicity properties and changing the order of summation, we infer that \begin{align*} \bigg(\int_0^1 (1 - \log t)^{b q} \Big(\sup_{t < u < 1} (1-\log u)^{-1} (f- f_{Q_0;w})_w^*(u) \Big)^{q} \frac{dt}{t} \bigg)^{1/q} & \asymp \\ & \hspace{-9cm} \bigg(\sum_{j=0}^\infty 2^{j(b+1/q) q} \Big(\sup_{\nu = 0, \ldots, j} 2^{-\nu} (f- f_{Q_0;w})_w^*(2^{-2^\nu})\Big)^q \bigg)^{1/q} \\ &\hspace{-9cm} \leq \bigg(\sum_{j=0}^\infty 2^{j(b+1/q) q} \sum_{\nu=0}^j 2^{-\nu q} ((f- f_{Q_0;w})_w^*(2^{-2^\nu}))^q \bigg)^{1/q} \\ &\hspace{-9cm} \asymp \bigg(\sum_{\nu=0}^\infty 2^{\nu((b-1) q + 1)} ((f- f_{Q_0;w})_w^*(2^{-2^\nu}))^q \bigg)^{1/q} \\ &\hspace{-9cm} \asymp \bigg(\int_0^1 ((1-\log t)^{b-1} (f- f_{Q_0;w})_w^*(t))^q \frac{dt}{t} \bigg)^{1/q} \\ &\hspace{-9cm} \lesssim \bigg(\int_0^1 t^{-q/p} (1-\log t)^{b q} \Big(\int_0^{t(1-\log t)^{-p}} ((f- f_{Q_0;w})_w^*(u))^p \, du \Big)^{q/p} \frac{dt}{t} \bigg)^{1/q}. \end{align*} The proof of \eqref{71} is complete. Concerning \eqref{72}, we first observe that, by the $A_\infty(Q_0)$-condition, there are $\delta, C>0$ such that \begin{equation}\label{anew} f^*_w(t) \leq C f^*(t^{1/\delta}) \end{equation} for all measurable functions $f$ on $Q_0$. In particular, this yields $$M^{\#}_{s,Q_0;w}f (x) \leq C M^{\#}_{\frac{(s w(Q_0))^{1/\delta}}{|Q_0|_d},Q_0}f (x).$$ On the other hand, it is plain to see that $M^{\#}_{s,Q_0}f (x) \leq (s |Q_0|_d)^{-1} f^{\#}_{Q_0} (x)$. Combining these two inequalities, we arrive at $M^{\#}_{s,Q_0;w}f (x) \lesssim f^{\#}_{Q_0} (x)$ and thus \eqref{72} follows. \end{proof} Our next result establishes the optimality of Corollary \ref{TheoremSharpLimiting}. \begin{prop}\label{PropOptimFS} Let $0 < q \leq \infty$ and $b < -1/q$. Assume $w \in A_\infty(Q_0)$ and $f \in L_p(Q_0,w)$ for some $1 < p < \infty$. 
Then \begin{equation}\label{As1} \left(\int_0^1 (1 - \log t)^{b q} \left(\int_t^1 (u^{1/p} (1-\log u)^{\xi} (f-f_{Q_0;w})_w^*(u))^r \frac{du}{u} \right)^{q/r} \frac{dt}{t} \right)^{1/q} \lesssim \|f^{\#}_{Q_0}\|_{L_{\infty,q}(\log L)_b(Q_0,w)} \end{equation} if and only if \begin{equation*} \left\{\begin{array}{lcl} p < \infty, & r \leq \infty, & -\infty < \xi < \infty, \\ & & \\ p=\infty,& r < \infty, & \xi < -1 - \frac{1}{r}, \\ & & \\ p=\infty, & r= \infty, & \xi \leq -1. \end{array} \right. \end{equation*} \end{prop} \begin{proof} The backward implication follows from \eqref{FSLim} and simple computations. The proof of the forward implication with $w = |\cdot|_d$ hinges on the following estimate \begin{equation}\label{key} \|f^{\#}_{Q_0}\|_{L_{\infty,q}(\log L)_b(Q_0)} \lesssim \sum_{l=0}^1 \bigg(\int_0^1 \Big((1-\log t)^b \sup_{t < u < 1} u^{1/d} |\nabla^l f|^*(u) \Big)^{q} \frac{dt}{t} \bigg)^{1/q}. \end{equation} To show \eqref{key}, we invoke \eqref{ThmDeVoreDer<Cubes} to derive \begin{equation*} f^{\#*}_{Q_0}(t) \lesssim \sum_{l=0}^1 \Big[t^{-1/p_0} \Big(\int_0^{t} (u^{1/r} |\nabla^l f|^*(u))^{p_0} \, \frac{du}{u} \Big)^{1/p_0} + \sup_{t < u < 1} u^{1/d} |\nabla^l f|^*(u) \Big], \end{equation*} where $p_0' < d$ and $r = \frac{d p_0}{d+p_0}$. Thus, applying Hardy's inequality we conclude that \begin{align*} \|f^{\#}_{Q_0}\|_{L_{\infty,q}(\log L)_b(Q_0)} & \lesssim \sum_{l=0}^1 \bigg[ \bigg(\int_0^1 t^{-q/p_0} (1-\log t)^{b q} \Big(\int_0^{t} (u^{1/r} |\nabla^l f|^*(u))^{p_0} \, \frac{du}{u} \Big)^{q/p_0} \frac{dt}{t} \bigg)^{1/q} \\ & \hspace{1cm}+ \bigg(\int_0^1 \Big((1-\log t)^b \sup_{t < u < 1} u^{1/d} |\nabla^l f|^*(u) \Big)^{q} \frac{dt}{t} \bigg)^{1/q} \bigg] \\ & \asymp \sum_{l=0}^1 \bigg[ \bigg(\int_0^1 (t^{1/d} (1-\log t)^b |\nabla^l f|^*(t))^q \frac{dt}{t} \bigg)^{1/q} \\ & \hspace{1cm}+ \bigg(\int_0^1 \Big((1-\log t)^b \sup_{t < u < 1} u^{1/d} |\nabla^l f|^*(u) \Big)^{q} \frac{dt}{t} \bigg)^{1/q} \bigg] \\ & \asymp \sum_{l=0}^1 \bigg(\int_0^1 \Big((1-\log t)^b \sup_{t < u < 1} u^{1/d} |\nabla^l f|^*(u) \Big)^{q} \frac{dt}{t} \bigg)^{1/q}. \end{align*} Assume that \eqref{As1} holds. There is no loss of generality in fixing $Q_0 = [-\frac{1}{2},\frac{1}{2}]^d$. Firstly, we suppose $p=\infty$ and $r=\infty$. Then we will prove that the condition $\xi \leq -1$ is necessary if \eqref{As1} holds. Indeed, if $\xi > -1,$ then we choose $\beta$ such that $\max\{-b-1/q-\xi-1, 0\} < \beta < -b-1/q$ and define $f(x) = f_0(|x|)$, where \begin{equation*} f_0(t) = \int_t^1 (1-\log u)^\beta \frac{du}{u}, \qquad t \in (0, 1/2), \end{equation*} and $f_0(t)=0$ otherwise. Elementary computations yield that \begin{equation*} f^*(t) \asymp (1-\log t)^{\beta + 1} \quad \text{and} \quad |\nabla f|^*(t) \asymp t^{-1/d} (1-\log t)^\beta \end{equation*} for $t \in (0,1)$. According to \eqref{key}, we have \begin{align*} \|f^{\#}_{Q_0}\|_{L_{\infty,q}(\log L)_b(Q_0)} & \lesssim \bigg(\int_0^1 \Big((1-\log t)^b \sup_{t < u < 1} u^{1/d} (1- \log u)^{\beta + 1} \Big)^{q} \frac{dt}{t} \bigg)^{1/q} \\ & \hspace{1cm}+ \bigg(\int_0^1 \Big((1-\log t)^b \sup_{t < u < 1} (1-\log u)^\beta \Big)^{q} \frac{dt}{t} \bigg)^{1/q} < \infty \end{align*} but \begin{align*} \left(\int_0^1 (1 - \log t)^{b q} \Big(\sup_{t < u <1} (1-\log u)^{\xi} (f-f_{Q_0})^*(u) \Big)^{q} \frac{dt}{t} \right)^{1/q} = \infty, \end{align*} which contradicts \eqref{As1}. Secondly, we will show that \eqref{As1} with $p=\infty$ and $r < \infty$ implies $\xi < -1-1/r$. We again argue by contradiction. 
Assume that \eqref{As1} holds with $p=\infty, r < \infty$ and $\xi = -1-1/r$. For each $j \in {{\Bbb N}}$, we consider the function $f_j(x) = f_j(|x|), \, x \in Q_0,$ given by \begin{equation*} f_j(t) = \int_t^1 \frac{du}{u}, \qquad t \in (0, 2^{-j}), \end{equation*} and $f_j(t)=0$ otherwise. Denoting by $ \chi_{(0, 2^{-j})}$ the characteristic function of the set $(0, 2^{-j})$, it is readily seen that \begin{equation*} f_j^*(t) \asymp (-\log t) \chi_{(0, 2^{-j})}(t) \quad \text{and} \quad |\nabla f_j|^*(t) \asymp t^{-1/d} \chi_{(0, 2^{-j})}(t) \end{equation*} uniformly with respect to $j$. Therefore, we obtain \begin{align*} \left(\int_0^1 (1 - \log t)^{b q} \left(\int_t^1 ((1-\log u)^{-1-1/r} (f_j-f_{j;Q_0})^*(u))^r \frac{du}{u} \right)^{q/r} \frac{dt}{t} \right)^{1/q} & \gtrsim \\ & \hspace{-7cm} \left(\int_0^{2^{-j}} (- \log t)^{b q} \bigg(\int_t^{2^{-j}} \frac{du}{u (-\log u)} \bigg)^{q/r} \frac{dt}{t} \right)^{1/q} \\ & \hspace{-7cm} \gtrsim \left(\int_0^{2^{-j}} (-\log t)^{b q} (\log (-\log t))^{q/r} \frac{dt}{t} \right)^{1/q} \asymp j^{b+1/q} (\log j)^{1/r} \end{align*} and (cf. \eqref{key}) \begin{align*} \|(f_j)^{\#}_{Q_0}\|_{L_{\infty,q}(\log L)_b(Q_0)} & \lesssim 2^{-j/d} j \bigg(\int_0^{2^{-j}} (-\log t)^{b q} \frac{dt}{t} \bigg)^{1/q} + \bigg(\int_0^{2^{-j}} (-\log t)^{b q} \frac{dt}{t} \bigg)^{1/q} \\ & \asymp j^{b+1/q} (2^{-j/d} j + 1) \asymp j^{b+1/q}. \end{align*} Combining these estimates with \eqref{As1}, we arrive at $j^{b+1/q} (\log j)^{1/r} \lesssim j^{b+1/q}$, which leads to a contradiction. Furthermore, the failure of \eqref{As1} with $p=\infty, r < \infty$, and $\xi > -1-1/r$ can be obtained from the previous case using the trivial estimates. The general case $w \in A_\infty(Q_0)$ can be reduced to the Lebesgue setting using \eqref{anew} and a simple change of variables. Further details are left to the reader. \end{proof} \bigskip \section{Proofs of Corollaries \ref{CorollaryLimitingBesovMax}, \ref{CorollaryLimitingBesovMaxSecond}, \ref{CorDeVoreExtrapol}, and \ref{CorDeVoreDerExtrapol} and their optimality} \begin{proof}[Proof of Corollary \ref{CorDeVoreExtrapol}] We concentrate only on \eqref{CorDeVoreExtrapol1cube} and leave to the reader the proofs of \eqref{CorDeVoreExtrapol1} and \eqref{CorDeVoreExtrapol2}. Without loss of generality, we may assume $0 < \varepsilon < d/p$. Suppose $r < \infty$. If $k > d/p$, then we apply monotonicity properties of $f^{\#*}_{Q_0}$, \eqref{ThmDeVore*LorentzCubes} and Fubini's theorem to derive \begin{align*} \|f^{\#}_{Q_0}\|_{L_{d/\varepsilon, r}(Q_0)} & \lesssim \bigg(\int_0^1 t^{(\varepsilon -d/p) r} \Big(\int_0^{t^d} (u^{1/p} f^{\#*}_{Q_0}(u))^q \, \frac{du}{u} \Big)^{r/q} \frac{dt}{t} \bigg)^{1/r} \\ & \lesssim \Big(\int_0^1 t^{\varepsilon r} \frac{dt}{t} \Big)^{1/r} \|f\|_{L_{p,q}(Q_0)} + \Big(\int_0^1 t^{\varepsilon r} \int_t^1 (u^{-d/p} \omega_k(f,u)_{p, q})^r \frac{du}{u} \frac{dt}{t} \Big)^{1/r} \\ & \asymp \varepsilon^{-1/r} \|f\|_{L_{p,q}(Q_0)} + \Big(\int_0^1 (u^{-d/p} \omega_k(f,u)_{p, q})^r \int_0^{u} t^{\varepsilon r} \frac{dt}{t} \frac{du}{u} \Big)^{1/r} \\ & \asymp \varepsilon^{-1/r} \|f\|_{B^{d/p-\varepsilon}_r L_{p,q}(Q_0);k}. \end{align*} Let $k= d/p$. In light of \eqref{ThmDeVore*Lorentz2Cubes}, we have $f^{\#*}_{Q_0} (t^d) \lesssim \|f\|_{L_{p,q}(Q_0)} + t^{-d/p} \omega_k(f,t)_{p,q}$. 
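For the estimate that follows, note that, after the change of variables $u=t^{d}$ and up to constants depending only on $d$ and $|Q_0|_d$, \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{d/\varepsilon, r}(Q_0)} \asymp \Big(\int_0^1 \big(t^{\varepsilon} f^{\#*}_{Q_0}(t^d)\big)^r \, \frac{dt}{t} \Big)^{1/r}. \end{equation*}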
Therefore, \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{d/\varepsilon, r}(Q_0)} \lesssim \varepsilon^{-1/r} \|f\|_{L_{p,q}(Q_0)} + \Big(\int_0^1 (t^{\varepsilon-d/p} \omega_k(f,t)_{p,q})^r \frac{dt}{t} \Big)^{1/r} \leq \varepsilon^{-1/r} \|f\|_{B^{d/p-\varepsilon}_r L_{p,q}(Q_0); k}, \end{equation*} that is, \eqref{CorDeVoreExtrapol1cube} holds true. The case $r=\infty$ can be treated similarly. We omit the details. \end{proof} \begin{proof}[Proof of Corollary \ref{CorollaryLimitingBesovMax}] Assume $r < \infty$. Let $J \in {{\Bbb N}}$ be such that $2^{-J} < 1/p$. Setting $\varepsilon= 2^{-j} d, \, j \geq J$, in \eqref{CorDeVoreExtrapol1cube}, we have \begin{equation*} 2^{j b} \|f^{\#}_{Q_0}\|_{L_{2^j, r}(Q_0)} \leq C 2^{j (b+1/r)} \|f\|_{B^{d(1/p-2^{-j})}_r L_{p, q}(Q_0); k}, \end{equation*} where $C > 0$ is independent of $j$. Therefore, \begin{equation}\label{ProofCorollaryLimitingBesovMax1} \Big(\sum_{j=J}^\infty ( 2^{j b} \|f^{\#}_{Q_0}\|_{L_{2^j, r}(Q_0)})^r \Big)^{1/r} \lesssim \Big(\sum_{j=J}^\infty (2^{j (b+1/r)} \|f\|_{B^{d(1/p-2^{-j})}_r L_{p, q}(Q_0); k})^r \Big)^{1/r}. \end{equation} Applying Fubini's theorem yields \begin{equation}\label{ProofCorollaryLimitingBesovMax2} \sum_{j=J}^\infty 2^{j b r} \|f^{\#}_{Q_0}\|_{L_{2^j, r}(Q_0)}^r = \int_0^1 V(t) ((f^{\#}_{Q_0})^*(t))^r \frac{dt}{t} \end{equation} and, since $b < -1/r$, \begin{equation}\label{ProofCorollaryLimitingBesovMax3} \sum_{j=J}^\infty 2^{j (b+1/r) r} \|f\|_{B^{d(1/p-2^{-j})}_r L_{p, q}(Q_0); k}^r \asymp \|f\|_{L_{p,q}(Q_0)}^r + \int_0^1 t^{-d r/p} W(t) \omega_k (f,t)_{p,q}^r \frac{dt}{t}, \end{equation} where $V(t) = \sum_{j=J}^\infty 2^{j b r} t^{2^{-j} r}$ and $W(t) = \sum_{j=J}^\infty 2^{j(b+1/r) r} t^{2^{-j} d r}$. Next we estimate $V(t)$ and $W(t)$. For a fixed $t \in (0,1)$, changing variables, we obtain \begin{equation}\label{0030303} V(t) \asymp \int^{2^{-J}}_0 t^{\sigma r} \sigma^{-b r} \frac{d \sigma}{\sigma} = (-\log t)^{b r} \int_0^{2^{-J} (-\log t)} e^{-\sigma r} \sigma^{-b r} \frac{d \sigma}{\sigma} \asymp (-\log t)^{b r}, \end{equation} and, similarly, $W(t) \asymp (-\log t)^{(b+1/r)r}$. Inserting these estimates into \eqref{ProofCorollaryLimitingBesovMax2} and \eqref{ProofCorollaryLimitingBesovMax3}, we have \begin{equation*} \sum_{j=J}^\infty 2^{j b r} \|f^{\#}_{Q_0}\|_{L_{2^j, r}(Q_0)}^r \asymp \|f^{\#}_{Q_0}\|_{L_{\infty,r} (\log L)_b(Q_0)}^r \end{equation*} and \begin{align*} \sum_{j=J}^\infty 2^{j (b+1/r) r} \|f\|_{B^{d(1/p-2^{-j})}_r L_{p, q}(Q_0); k}^r & \asymp \|f\|_{L_{p,q}(Q_0)}^r + \int_0^1 t^{-d r/p} (-\log t)^{(b+1/r) r} \omega_k(f,t)_{p,q}^r \frac{dt}{t} \\ & \hspace{-4cm}\asymp \|f\|^r_{B_r^{d/p, b+ 1/r} L_{p,q}(Q_0);k}. \end{align*} Thus, by \eqref{ProofCorollaryLimitingBesovMax1}, we conclude the proof of \eqref{CorollaryLimitingBesovMax1New*}. The case $r=\infty$ can be done in a similar way. \end{proof} The optimal statement of Corollary \ref{CorollaryLimitingBesovMax} reads as follows. \begin{prop}\label{CorollaryLimitingBesovMaxOptimal} Let $1 < p < \infty, 0 < q, r \leq \infty, k > d/p, b < -1/r$, and $-\infty < \xi < \infty$. Then \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b (Q_0)} \lesssim \|f\|_{B^{d/p, b+\xi}_{r} L_{p,q}(Q_0);k} \iff \xi \geq 1/r. \end{equation*} \end{prop} The proof is based on the limiting interpolation technique and Fefferman--Stein inequalities for Lorentz--Zygmund spaces recently obtained in \cite{AstashkinMilman,Lerner}. In particular, we will need the following interpolation formulas for Lorentz--Zygmund spaces and Lorentz--Besov spaces. 
\begin{lem}\label{Lemma8.2} Let $0 < p < \infty, 0 < r_0, r_1, r \leq \infty, -\infty < b_0, b < \infty, b_1 < -1/r$, and $0 < \theta < 1$. Then \begin{equation*} (L_{p,r_0}(\log L)_{b_0}(Q_0), L_{\infty, r_1}(\log L)_{b_1}(Q_0))_{\theta, r;b} = L_{p/(1-\theta), r} (\log L)_{(1-\theta) b_0 + \theta (b_1 + 1/r_1) + b}(Q_0). \end{equation*} \end{lem} \begin{lem}\label{Lemma8.3} Let $1 < p < \infty, 0 < q \leq \infty,$ $0 < r_0, r_1, r \leq \infty$, $0 < s_0 < s_1 < \infty$, $-\infty < b_0, b_1, b < \infty,$ and $0 < \theta < 1$. Then \begin{equation*} (B^{s_0,b_0}_{r_0} L_{p,q}(Q_0), B^{s_1,b_1}_{r_1} L_{p,q}(Q_0))_{\theta,r;b} = B^{(1-\theta) s_0 + \theta s_1, (1-\theta) b_0 + \theta b_1 + b}_r L_{p,q}(Q_0). \end{equation*} The corresponding formulas for homogeneous/inhomogeneous Besov spaces on $\mathbb{R}^d$ also hold true. \end{lem} The proofs of Lemmas \ref{Lemma8.2} and \ref{Lemma8.3} follow from abstract reiteration formulas (see \cite{EvansOpicPick}). \begin{lem} Let $0 < \theta_0, \theta_1, \theta < 1, \theta_0 \neq \theta_1, 0 < q_0, q_1, q \leq \infty, -\infty < b_0, b_1, b < \infty$. Then \begin{equation}\label{ReiterationOptim} ((A_0, A_1)_{\theta_0, q_0; b_0}, (A_0, A_1)_{\theta_1, q_1; b_1})_{\theta, q; b} = (A_0, A_1)_{(1-\theta) \theta_0 + \theta \theta_1, q; (1-\theta) b_0 + \theta b_1+b} \end{equation} and if, additionally, $b_1 < -1/q_1$, then \begin{equation}\label{ReiterationOptim2} (A_0, (A_0, A_1)_{(1, b_1), q_1})_{\theta, q; b} = (A_0, A_1)_{\theta, q; \theta (b_1 + 1/q_1) + b}. \end{equation} \end{lem} \begin{proof}[Proof of Lemma \ref{Lemma8.2}] According to Lemma \ref{LemInterp1}, we have \begin{equation*} L_{\infty, r_1}(\log L)_{b_1}(Q_0) = (L_{p,r_0}(\log L)_{b_0}(Q_0), L_\infty(Q_0))_{(1,b_1), r_1}. \end{equation*} Thus, by \eqref{ReiterationOptim2}, \begin{align*} (L_{p,r_0}(\log L)_{b_0}(Q_0), L_{\infty, r_1}(\log L)_{b_1}(Q_0))_{\theta, r;b} & = \\ & \hspace{-4.5cm} (L_{p,r_0}(\log L)_{b_0}(Q_0), (L_{p,r_0}(\log L)_{b_0}(Q_0), L_\infty(Q_0))_{(1,b_1), r_1})_{\theta, r;b} \\ & \hspace{-4.5cm} = (L_{p,r_0}(\log L)_{b_0}(Q_0), L_\infty(Q_0))_{\theta, r; \theta (b_1 + 1/r_1) + b} \\ & \hspace{-4.5cm} = L_{p/(1-\theta), r; (1-\theta) b_0 + \theta(b_1 + 1/r_1) + b}(Q_0), \end{align*} where the last step follows from the well-known interpolation properties of Lorentz--Zygmund spaces (see, e.g., \cite[Corollary 5.3]{GogatishviliOpicTrebels}). \end{proof} \begin{proof}[Proof of Lemma \ref{Lemma8.3}] Let $k \in {{\Bbb N}}$ be such that $k > s_1$. It follows from the well-known formula (see, e.g., \cite[(3.5)]{Martin} and \cite[Chapter 11]{MartinMilman14}; see also \eqref{ProofLemmaEmbBMOLorentzState1*}) \begin{equation}\label{747474} K(t^k, f;L_{p,q}(Q_0), W^k L_{p,q}(Q_0)) \asymp t^k \|f\|_{L_{p,q}(Q_0)} + \omega_k(f,t)_{p,q}, \quad t \in (0,1), \end{equation} and $K(t,f;L_{p,q}(Q_0), W^k L_{p,q}(Q_0)) \asymp \|f\|_{L_{p,q}(Q_0)}, \, t > 1$, that \begin{equation}\label{3993949} B^{s_i,b_i}_{r_i} L_{p,q}(Q_0) = (L_{p,q}(Q_0), W^k L_{p,q}(Q_0))_{\frac{s_i}{k},r_i;b_i}, \quad i=0, 1. 
\end{equation} Then, by \eqref{3993949} and \eqref{ReiterationOptim}, \begin{align*} (B^{s_0,b_0}_{r_0} L_{p,q}(Q_0), B^{s_1,b_1}_{r_1} L_{p,q}(Q_0))_{\theta,r;b} &= \\ & \hspace{-4cm} ((L_{p,q}(Q_0), W^k L_{p,q}(Q_0))_{\frac{s_0}{k},r_0;b_0}, (L_{p,q}(Q_0), W^k L_{p,q}(Q_0))_{\frac{s_1}{k},r_1;b_1})_{\theta,r;b} \\ & \hspace{-4cm} = (L_{p,q}(Q_0), W^k L_{p,q}(Q_0))_{((1-\theta) s_0 + \theta s_1)/k, r; (1-\theta) b_0 + \theta b_1 + b} \\ & \hspace{-4cm} = B^{(1-\theta) s_0 + \theta s_1, (1-\theta) b_0 + \theta b_1 + b}_r L_{p,q}(Q_0). \end{align*} \end{proof} Now we are in a position to give \begin{proof}[Proof of Proposition \ref{CorollaryLimitingBesovMaxOptimal}] In view of Corollary \ref{CorollaryLimitingBesovMax}, it only remains to show that the inequality $\|f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b (Q_0)} \lesssim \|f\|_{B^{d/p, b+\xi}_{r} L_{p,q}(Q_0);k}$ implies $\xi \geq 1/r$. Let us denote by $T$ the sublinear operator mapping $f \in L_1(Q_0)$ to $f^{\#}_{Q_0}$, that is, $T (f) = f^{\#}_{Q_0}$. Then, taking into account our assumptions (see the discussion after \eqref{Blowupnew}), we have \begin{equation*} T: B^{d/p, b+\xi}_{r} L_{p,q}(Q_0) \to L_{\infty, r} (\log L)_b (Q_0) \quad \text{and} \quad T: B^s_r L_{p,q}(Q_0) \to L_{d p/(d -s p), r}(Q_0), \end{equation*} where $0 < s < d/p$. Given any $\theta \in (0,1)$, by interpolation, we arrive at \begin{equation}\label{ProofCorollaryLimitingBesovMaxOptimal1} T : (B^s_r L_{p,q}(Q_0), B^{d/p, b+\xi}_{r} L_{p,q}(Q_0))_{\theta, r} \to (L_{d p/(d -s p), r}(Q_0), L_{\infty, r} (\log L)_b (Q_0))_{\theta,r}. \end{equation} Invoking Lemmas \ref{Lemma8.2} and \ref{Lemma8.3}, \begin{equation} (B^s_r L_{p,q}(Q_0), B^{d/p, b+\xi}_{r} L_{p,q}(Q_0))_{\theta, r} = B^{s_0, \theta (b + \xi)}_r L_{p,q}(Q_0), \label{ProofCorollaryLimitingBesovMaxOptimal2} \end{equation} where $s_0 = (1-\theta) s + \theta d/p \in (s, d/p)$, and \begin{equation}\label{ProofCorollaryLimitingBesovMaxOptimal3} (L_{d p/(d -s p), r}(Q_0), L_{\infty, r} (\log L)_b (Q_0))_{\theta,r} = L_{r_0, r} (\log L)_{\theta (b+1/r)}(Q_0), \end{equation} where $1/r_0 = (1-\theta) (d-s p)/d p$. According to \eqref{ProofCorollaryLimitingBesovMaxOptimal1}--\eqref{ProofCorollaryLimitingBesovMaxOptimal3}, we derive \begin{equation*} T: B^{s_0, \theta (b + \xi)}_r L_{p,q}(Q_0) \to L_{r_0, r} (\log L)_{\theta (b+1/r)}(Q_0) \end{equation*} with $s_0 -d/p = -d/r_0$. Moreover, if $s_0 -d/p = -d/r_0$, the r.i. hull of $B^{s_0, \theta (b + \xi)}_r L_{p,q}(Q_0)$ is the space $L_{r_0, r} (\log L)_{\theta (b+ \xi)}(Q_0)$ (see \cite[Theorem 3]{Martin}). Thus, in light of the Fefferman--Stein inequality for the space $L_{r_0, r} (\log L)_{\theta (b+1/r)}(Q_0)$ (cf. \cite{AstashkinMilman,Lerner}), we derive the embedding $L_{r_0, r} (\log L)_{\theta (b+ \xi)}(Q_0) \hookrightarrow L_{r_0, r} (\log L)_{\theta (b+1/r)}(Q_0)$, which implies $\xi \geq 1/r$. \end{proof} A careful examination of the proof of \eqref{CorollaryLimitingBesovMax1New*} given above shows that if \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{d/\varepsilon, r}(Q_0)} \lesssim \varepsilon^{-\xi} \|f\|_{B^{d/p-\varepsilon}_r L_{p, q}(Q_0); k}, \quad \varepsilon \to 0+, \end{equation*} holds for some $\xi > 0$, then \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b (Q_0)} \lesssim \|f\|_{B^{d/p, b+\xi}_{r} L_{p,q}(Q_0);k}. \end{equation*} Hence, the optimality of \eqref{CorDeVoreExtrapol1cube} in Corollary \ref{CorDeVoreExtrapol} is a straightforward consequence of Proposition \ref{CorollaryLimitingBesovMaxOptimal}. 
More precisely, we have established the following \begin{prop}\label{CorDeVoreExtrapolSharp} Let $1 < p < \infty, k > d/p, 0 < q, r \leq \infty$ and $\xi > 0$. Then \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{d/\varepsilon, r}(Q_0)} \leq C \, \varepsilon^{-\xi} \|f\|_{B^{d/p-\varepsilon}_r L_{p, q}(Q_0); k} \iff \xi \geq 1/r. \end{equation*} \end{prop} Next we prove Corollaries \ref{CorollaryLimitingBesovMaxSecond} and \ref{CorDeVoreDerExtrapol}. \begin{proof}[Proof of Corollary \ref{CorDeVoreDerExtrapol}] We begin by proving \eqref{CorDeVoreDerExtrapol1Cubes}. Take any $p > d/(d - k)$. It follows from \eqref{ThmDeVoreDer<Cubes} that \begin{equation*} f^{\#*}_{Q_0}(t) \lesssim \sum_{l=0}^k \Big[ t^{-1/p} \Big(\int_0^{t} (u^{1/\nu} |\nabla^l f|^*(u))^r \, \frac{du}{u} \Big)^{1/r} + \sup_{t < u < 1} u^{k/d} |\nabla^l f|^*(u) \Big], \end{equation*} where $\nu = \frac{d p}{d + k p}$. There is no loss of generality in supposing that $0 < \varepsilon < d/p$. Let $r < \infty$. Therefore, applying monotonicity properties of the rearrangements and changing the order of integration, we obtain \begin{align*} \|f^{\#}_{Q_0}\|_{L_{d/\varepsilon, r}(Q_0)} & \lesssim \sum_{l=0}^k \Big[ \Big(\int_0^1 t^{(\varepsilon/d - 1/p) r} \int_0^t (u^{1/\nu} |\nabla^l f|^*(u))^r \, \frac{du}{u} \frac{dt}{t} \Big)^{1/r} \\ &\hspace{1cm}+ \bigg(\int_0^1 \Big(t^{\varepsilon/d} \sup_{t < u < 1} u^{k/d} |\nabla^l f|^*(u) \Big)^r \frac{dt}{t} \bigg)^{1/r} \Big] \\ &\lesssim \sum_{l=0}^k \Big[ \Big(\int_0^1 (u^{1/\nu} |\nabla^l f|^*(u))^r \int_u^1 t^{(\varepsilon/d - 1/p) r} \frac{dt}{t} \frac{du}{u} \Big)^{1/r} \\ & \hspace{1cm} + \Big(\int_0^1 t^{\varepsilon r/d} \int_t^1 (u^{k/d} |\nabla^l f|^*(u))^r \frac{du}{u} \frac{dt}{t} \Big)^{1/r} \Big] \\ & \asymp (1 + \varepsilon^{-1/r} ) \sum_{l=0}^k \Big(\int_0^1 (u^{ (k + \varepsilon)/d} |\nabla^l f|^*(u))^r \frac{du}{u} \Big)^{1/r} \\ & \asymp \varepsilon^{-1/r} \|f \|_{W^k L_{d/(k + \varepsilon), r}(Q_0)}. \end{align*} The case $r=\infty$ is easier and we omit further details. Estimate \eqref{CorDeVoreDerExtrapol1} is proved similarly but now applying \eqref{ThmDeVoreDer<}. \end{proof} \begin{proof}[Proof of Corollary \ref{CorollaryLimitingBesovMaxSecond}] Let $r < \infty$. In view of \eqref{CorDeVoreDerExtrapol1Cubes}, we have \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{2^j, r}(Q_0)} \leq C \, 2^{j/r} \|f\|_{W^k L_{\frac{d}{k+ 2^{-j} d}, r} (Q_0)}, \end{equation*} where $C$ does not depend on $j$. Hence, \begin{equation*} \Big(\sum_{j=0}^\infty 2^{j b r} \|f^{\#}_{Q_0}\|_{L_{2^j, r}(Q_0)}^r \Big)^{1/r} \lesssim \Big(\sum_{j=0}^\infty 2^{j (b + 1/r) r} \|f\|_{W^k L_{\frac{d}{k+ 2^{-j} d}, r} (Q_0)}^r \Big)^{1/r}. \end{equation*} These extrapolation spaces can be easily computed by applying Fubini's theorem. Namely, using $\sum_{j=0}^\infty 2^{jA} t^{B/2^j}\asymp (-\log t)^A$, $0<t<1/2$, for $A<0$ and $B>0$ (cf. \eqref{0030303}), we have \begin{equation*} \Big(\sum_{j=0}^\infty 2^{j b r} \|f^{\#}_{Q_0}\|_{L_{2^j, r}(Q_0)}^r \Big)^{1/r} \asymp \| f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b(Q_0)} \end{equation*} and \begin{equation*} \Big(\sum_{j=0}^\infty 2^{j (b + 1/r) r} \|f\|_{W^k L_{\frac{d}{k+ 2^{-j} d}, r} (Q_0)}^r \Big)^{1/r} \asymp \|f\|_{W^k L_{d/k, r} (\log L)_{b+1/r} (Q_0)}. \end{equation*} Hence $ \| f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b(Q_0)} \lesssim \|f\|_{W^k L_{d/k, r} (\log L)_{b+1/r} (Q_0)}. $ We omit the proof in the case $r=\infty$, since it is similar to that in the case $r < \infty$.
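Let us only sketch the required modification, under the same assumptions: for $r=\infty$ the $\ell_r$-sums over $j$ are replaced by suprema, and in place of \eqref{0030303} one uses \begin{equation*} \sup_{j \geq 0} 2^{j b} t^{2^{-j}} \asymp (-\log t)^{b}, \qquad 0<t<1/2, \quad b<0, \end{equation*} which follows by optimizing over $j \asymp \log_2(-\log t)$.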
\end{proof} The next result shows that Corollary \ref{CorollaryLimitingBesovMaxSecond} is in fact sharp. \begin{prop}\label{CorollaryLimitingBesovMaxSecondOptimal} Let $1 \leq r \leq \infty, k < d, b < -1/r$, and $-\infty < \xi < \infty$. Then \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b (Q_0)} \lesssim \|f \|_{W^k L_{d/k, r} (\log L)_{b + \xi}(Q_0) } \iff \xi \geq 1/r. \end{equation*} \end{prop} \begin{proof} Corollary \ref{CorollaryLimitingBesovMaxSecond} implies the ``if part". Let us show that if the inequality \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b (Q_0)} \lesssim \|f \|_{W^k L_{d/k, r} (\log L)_{b + \xi}(Q_0) } \end{equation*} holds true, then necessarily $\xi \geq 1/r$. As in the proof of Proposition \ref{CorollaryLimitingBesovMaxOptimal}, for $Tf=f^{\#}_{Q_0}$, we have $T: W^k L_{d/k, r} (\log L)_{b + \xi}(Q_0) \to L_{\infty, r} (\log L)_b (Q_0)$. By \eqref{ProofLem2.1} and Hardy's inequality (noting that $d/k > 1$), the operator $T$ acts boundedly on $L_{d/k, r} (\log L)_{b + \xi}(Q_0)$. For $\theta \in (0,1)$ and $q_0 \in (1,\infty)$, we set \begin{equation*} X = (L_{d/k, r} (\log L)_{b + \xi}(Q_0), W^k L_{d/k, r} (\log L)_{b + \xi}(Q_0))_{\theta, q_0} \end{equation*} and \begin{equation*} Y = (L_{d/k, r} (\log L)_{b + \xi}(Q_0), L_{\infty, r} (\log L)_b (Q_0))_{\theta, q_0}. \end{equation*} Therefore, by the interpolation property, we derive \begin{equation}\label{CorollaryLimitingBesovMaxSecondOptimalProof1} T : X \to Y. \end{equation} It remains to identify the interpolation spaces $X$ and $Y$. It is an immediate consequence of the well-known characterization (cf. \eqref{747474}) \begin{align*} K(t^k, f; L_{d/k, r}(\log L)_{b + \xi}(Q_0), W^k L_{d/k, r}(\log L)_{b + \xi}(Q_0)) &\asymp \\ & \hspace{-6cm} \min\{1,t^k\} \|f\|_{ L_{d/k, r}(\log L)_{b + \xi}(Q_0)} + \omega_k(f,t)_{d/k,r, b + \xi} \end{align*} for $f\in L_{d/k, r}(\log L)_{b + \xi}(Q_0)$ and $t > 0$ that $X = B^{\theta k}_{q_0} L_{d/k, r}(\log L)_{b + \xi}(Q_0).$ On the other hand, taking into account Lemma \ref{Lemma8.2}, $Y = L_{d/((1-\theta) k), q_0} (\log L)_{b + \xi +\theta (1/r-\xi)}(Q_0).$ Hence \eqref{CorollaryLimitingBesovMaxSecondOptimalProof1} can be rewritten as \begin{equation*} T: B^{\theta k}_{q_0} L_{d/k, r}(\log L)_{b + \xi}(Q_0) \to L_{d/((1-\theta) k), q_0} (\log L)_{b + \xi +\theta (1/r-\xi)}(Q_0), \end{equation*} or, equivalently, in light of the Fefferman--Stein inequality for Lorentz--Zygmund spaces, \begin{equation*} B^{\theta k}_{q_0} L_{d/k, r}(\log L)_{b + \xi}(Q_0) \hookrightarrow L_{d/((1-\theta) k), q_0} (\log L)_{b + \xi +\theta (1/r-\xi)}(Q_0). \end{equation*} Finally, it follows from the fact that $L_{d/((1-\theta) k), q_0} (\log L)_{b+\xi}(Q_0)$ is the r.i. hull of the Besov space $B^{\theta k}_{q_0} L_{d/k, r}(\log L)_{b + \xi}(Q_0)$ (see \cite[Theorem 3]{Martin}) that \begin{equation*} L_{d/((1-\theta) k), q_0} (\log L)_{b+\xi}(Q_0) \hookrightarrow L_{d/((1-\theta) k), q_0} (\log L)_{b + \xi +\theta (1/r-\xi)}(Q_0), \end{equation*} which yields $\xi \geq 1/r$. \end{proof} We are now in a position to prove the sharpness of Corollary \ref{CorDeVoreDerExtrapol}. Indeed, assume that there exists $\xi > 0$ such that \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{\frac{d}{\varepsilon}, r}(Q_0)} \leq C \, \varepsilon^{-\xi} \|f\|_{W^k L_{\frac{d}{k+\varepsilon}, r} (Q_0)}, \quad \varepsilon \to 0+. 
\end{equation*} Then, following step by step the proof of Corollary \ref{CorollaryLimitingBesovMaxSecond} given above, we arrive at \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{\infty, r} (\log L)_b (Q_0)} \lesssim \|f\|_{W^k L_{d/k, r} (\log L)_{b + \xi}(Q_0)}, \quad b < -1/r. \end{equation*} In light of Proposition \ref{CorollaryLimitingBesovMaxSecondOptimal}, we have shown the following \begin{prop}\label{CorDeVoreDerExtrapolOptim} Let $1 \leq r \leq \infty, k < d$, and $\xi > 0$. Then \begin{equation*} \|f^{\#}_{Q_0}\|_{L_{\frac{d}{\varepsilon}, r}(Q_0)} \leq C \, \varepsilon^{-\xi} \|f\|_{W^k L_{\frac{d}{k+\varepsilon}, r} (Q_0)} \iff \xi \geq 1/r. \end{equation*} \end{prop} \bigskip \section{Comparison between Theorems \ref{ThmDeVoreLorentz} and \ref{ThmDeVoreDer}} Let $1 < p, q < \infty$. Assume $f \in C^\infty_0(\mathbb{R}^d)$. We have shown in Theorems \ref{ThmDeVoreLorentz} and \ref{ThmDeVoreDer} that \begin{equation}\label{Comp1} t^{-d/p} \Big(\int_0^{t^d} (u^{1/p} f^{\# *}(u))^q \, \frac{du}{u} \Big)^{1/q} \lesssim \sup_{ t < u < \infty} u^{-d/p} \omega_k(f,u)_{p,q}, \quad k \geq d/p, \end{equation} and \begin{align} t^{-d/p} \Big(\int_0^{t^d} (u^{1/p} f^{\# *}(u))^q \, \frac{du}{u} \Big)^{1/q} & \lesssim t^{-d/p} \Big(\int_0^{t^d} (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} \nonumber\\ & \hspace{1cm}+ \sup_{t^d < u < \infty} u^{k/d} |\nabla^k f|^*(u), \quad k < d (1-1/p), \label{Comp2} \end{align} where $1/r = 1/p + k/d$. The goal of this section is to study the interrelations between these estimates. We distinguish four cases. \\ \textsc{Case 1:} If $p > 2$ and $k \in \big(\frac{d}{p}, d \big(1 - \frac{1}{p} \big)\big)$, then \eqref{Comp1} provides a sharper estimate than \eqref{Comp2}. More precisely, there holds \begin{align} \sup_{ t < u < \infty} u^{-d/p} \omega_k(f,u)_{p,q} & \lesssim t^{-d/p} \Big(\int_0^{t^d} (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} \nonumber \\ &\hspace{1cm} + \sup_{t^d < u < \infty} u^{k/d} |\nabla^k f|^*(u). \label{Comp3} \end{align} Let us show \eqref{Comp3}. According to \cite[Theorem 1.2]{SeegerTrebels}, we have \begin{equation*} \dot{W}^k L_{d/k,\infty}(\mathbb{R}^d) \hookrightarrow \dot{B}^{d/p}_{\infty} L_{p,q}(\mathbb{R}^d) \end{equation*} since $d/p <k < d$. Then this embedding and \eqref{ThmDeVoreDer1*} imply \begin{equation}\label{Comp4} K(t, f; L_{p,q}(Q_0), \dot{B}^{d/p}_{\infty} L_{p,q}(\mathbb{R}^d)) \lesssim K(t, f; \dot{W}^k L_{r,q}(\mathbb{R}^d) ,\dot{W}^k L_{d/k,\infty}(\mathbb{R}^d) ). \end{equation} Recall that (see \eqref{ProofThmDeVoreLorentz3} and \eqref{ThmDeVoreDer3}) \begin{equation}\label{Comp5} K(t, f; L_{p,q}(\mathbb{R}^d), \dot{B}^{d/p}_{\infty} L_{p,q}(\mathbb{R}^d)) \asymp t \sup_{t^{p/d} < u <\infty} u^{-d/p} \omega_k(f,u)_{p,q} \end{equation} and \begin{align} K(t, f; \dot{W}^k L_{r,q}(\mathbb{R}^d) ,\dot{W}^k L_{d/k,\infty}(\mathbb{R}^d) ) & \asymp \Big(\int_0^{t^p} (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} \nonumber \\ & \hspace{1cm}+ t \sup_{t^p < u < \infty} u^{k/d} |\nabla^k f|^*(u). \label{Comp6} \end{align} Combining \eqref{Comp4}--\eqref{Comp6}, we conclude the desired estimate \eqref{Comp3}. \\ \textsc{Case 2:} Let $p > 2$ and $k=d/p$ (and so, $k < d(1-1/p)$ and $r=p/2$). Under these assumptions, it turns out that \eqref{Comp1} and \eqref{Comp2} are independent of each other. 
More precisely, we will show that the right-hand side expressions in \eqref{Comp1} and \eqref{Comp2}, i.e., \begin{equation*} I(t) = t^{-k} \omega_{k}(f,t)_{p,q} \end{equation*} and \begin{equation*} J(t) =t^{-d/p} \Big(\int_0^{t^d} (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} + \sup_{t^d < u < \infty} u^{k/d} |\nabla^k f|^*(u), \end{equation*} are not comparable. This will be shown by contradiction. Assume first that \begin{equation}\label{Comp7} I(t) \leq C J(t), \end{equation} where $C$ is a positive constant which is independent of $t \in (0,1)$. Since \begin{equation*} t^{-d/p} \Big(\int_0^{t^d} (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} \lesssim \sup_{0 < u < \infty} u^{k/d} |\nabla^k f|^*(u), \end{equation*} we have \begin{equation*} \sup_{t^d < u < \infty} u^{k/d} |\nabla^k f|^*(u) \leq J(t) \lesssim \sup_{0 < u < \infty} u^{k/d} |\nabla^k f|^*(u). \end{equation*} Letting $t\to 0$, we derive (recall that $k=d/p$) \begin{equation*} \lim_{t \to 0+} J(t) \asymp \sup_{0 < u < \infty} u^{k/d} |\nabla^k f|^*(u) = \|\,|\nabla^k f| \,\|_{L_{p,\infty}(\mathbb{R}^d)}. \end{equation*} On the other hand, by \eqref{ProofLemmaEmbBMOLorentzState1*}, we find that \begin{equation*} \lim_{t \to 0+} I(t) \asymp \lim_{t \to 0+} t^{-1} K(t, f; L_{p,q}(\mathbb{R}^d), \dot{W}^k L_{p,q}(\mathbb{R}^d)) \asymp \|\,|\nabla^k f| \,\|_{L_{p,q}(\mathbb{R}^d)}, \end{equation*} where in the last step we used the fact that the space $\dot{W}^k L_{p,q}(\mathbb{R}^d)$ is reflexive (see \cite[Theorem 1.4, p. 295]{BennettSharpley}). Then, by \eqref{Comp7}, we arrive at \begin{equation*} \dot{W}^k L_{p,\infty}(\mathbb{R}^d) \hookrightarrow \dot{W}^k L_{p,q}(\mathbb{R}^d), \end{equation*} which fails to be true because $q < \infty$. Suppose now that \begin{equation}\label{Comp8} J(t) \leq C I(t). \end{equation} We observe that \begin{equation*} \sup_{t^d < u < \infty} u^{k/d} |\nabla^k f|^*(u) \lesssim t^{-d/p} \Big(\int_0^{\infty} (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q}. \end{equation*} This yields \begin{equation*} \Big(\int_0^{t^d} (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} \leq t^k J(t) \lesssim \Big(\int_0^\infty (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} \end{equation*} and thus \begin{equation*} \lim_{t \to \infty} t^k J(t) \asymp \Big(\int_0^\infty (u^{1/r} |\nabla^k f|^*(u))^q \, \frac{du}{u} \Big)^{1/q} = \|\,|\nabla^k f| \,\|_{L_{r,q}(\mathbb{R}^d)}. \end{equation*} Therefore passing to the limit as $t \to \infty$ in \eqref{Comp8}, we obtain \begin{equation*} \|\,|\nabla^k f| \,\|_{L_{r,q}(\mathbb{R}^d)} \lesssim \lim_{t \to \infty} \omega_k(f,t)_{p,q} \lesssim \|f\|_{L_{p,q}(\mathbb{R}^d)}, \end{equation*} which yields the desired contradiction. \\ \textsc{Case 3:} Assume that either $p > 2$ and $k \not \in \big[\frac{d}{p}, d \big(1 - \frac{1}{p} \big)\big)$ or $p=2$. Then in the case when $k \geq d \big(1 - 1/p \big)$ we apply \eqref{Comp1} and if $k < d/p$ we use \eqref{Comp2}. \\ \textsc{Case 4:} Suppose $1 < p < 2$ (and so, $d(1-1/p) < d/p$). On the one hand, if $k \geq d/p$ then \eqref{Comp1} holds true. On the other hand, \eqref{Comp2} can be invoked whenever $k < d(1-1/p)$. In the remaining parameter range $k \in \big[d(1-1/p), d/p\big)$, \eqref{Comp1} and \eqref{Comp2} cannot be applied. \section*{Appendix A. 
Some integral properties of slowly varying functions} Here we show that for any $0 < q \leq \infty$ there exists a slowly varying function $b$ defined on $(B,\infty)$ such that $\int_B^{\infty} (b(u))^q \frac{du}{u} < \infty$ and, for any $p>0$, \begin{equation*} \frac{\int_t^{\infty} (b(u))^q \frac{du}{u}}{\int_{t (\log t)^p}^{\infty} (b(u))^q \frac{du}{u}}\rightarrow \infty \quad\mbox{as}\quad t\to\infty. \end{equation*} \begin{proof} Let $q < \infty$. Applying a change of variables, matters reduce to the case $q=1$. Consider $$b(x)=\exp\left\{-\int_A^{\log x}\frac{1}{\sqrt{\log t}} dt\right\}, \quad x > e^{A}.$$ It is clearly a slowly varying function since $$1\leqslant \frac{b(x)}{b(cx)} \leq \exp\left\{\log c \frac{1}{\sqrt{\log\log x}}\right\} \rightarrow 1\quad\mbox{as}\quad x\to\infty$$ for all $c \geq 1$. Moreover, the condition $\int_B^{\infty} b(u) \frac{du}{u} < \infty$ follows from the estimate $b(x)\leq (\log x)^{-2}$ for sufficiently large $x$, which can be checked straightforwardly. Now we fix $p>0$ and $C>0$. We note that \begin{equation}\label{sv1} \frac{b(x)}{b(x (\log x)^p)}\rightarrow \infty \quad\mbox{as}\quad x\to\infty. \end{equation} Then there is $M>0$ (assume that $M>e^{2^{1/p}}$) such that for $x>M$ there holds $b(x)>C b(x (\log x)^p)$. We denote $h_0(t)=t,\;h_{k+1}(t)=h_k(t)(\log h_k(t))^p$ for $k\geq 0$. Then $t (\log t)^p >2t$ for large $t$ implies $h_{k+1}(t) > 2 h_k(t)$ for $k\geq 0$ and then $$\int_t^{\infty} b(u) \frac{du}{u}=\sum_{k=0}^{\infty}\int_{h_k(t)}^{h_{k+1}(t)} b(u) \frac{du}{u}=:\sum_{k=0}^{\infty} H_k(t),$$ $$\int_{t (\log t)^p}^{\infty} b(u)\frac{du}{u}=\sum_{k=1}^{\infty}\int_{h_k(t)}^{h_{k+1}(t)} b(u) \frac{du}{u}=\sum_{k=1}^{\infty} H_k(t). $$ Further, for $x>M$ we have $$ \int_x^{x (\log x)^p} b(u) \frac{du}{u} \geq C \int_x^{x (\log x)^p} b(u (\log u)^p) \frac{du}{u}\geq \frac{C}{2} \int_{x (\log x)^p}^{x (\log x)^p (\log(x (\log x)^p))^p} b(u) \frac{du}{u}. $$ Since $h_k(t)\geq t$ for any $k\geq 0$, this yields that for $t>\max\{e,e^p,M\}$ there holds $H_k(t)\geq \frac{C}{2}H_{k+1}(t),$ which implies $$\int_t^{\infty} b(u) \frac{du}{u}\geq \frac{C}{2} \int_{t (\log t)^p}^{\infty} b(u) \frac{du}{u}.$$ Since, by (\ref{sv1}), $C>0$ is arbitrary, we conclude the proof. The previous reasoning can be easily adapted to the case $q=\infty$. \end{proof} {\bf{Acknowledgements}} We would like to thank Kristina Oganesyan for useful remarks. The first author was partially supported by MTM 2017-84058-P. The second author was partially supported by MTM 2017-87409-P, 2017 SGR 358, and the CERCA Programme of the Generalitat de Catalunya. Part of this work was done during the visit of the authors to the Isaac Newton Institute for Mathematical Sciences, Cambridge, EPSRC Grant no. EP/K032208/1.
\section*{Notation} Our main reference for the basic facts and related notation on BV functions is \cite{FP}. Let us recall that a real valued function $f$ defined on an interval $I$ is of Bounded Variation (we often simply write BV) if the so-called \emph{pointwise variation} $\pV(f,I)$ of $f$ on $I$, given by \[\pV(f,I):=\sup\left\{\sum_{0\le i<n}|f(t_{i+1})-f(t_i)|:\, t_i\in I, \, t_0<t_1<\cdots <t_n\right\}\] is finite. In this case there exist two increasing and bounded functions $f_1, f_2:I\to \mathbb{R}$ satisfying \begin{equation}\label{tag:bv} f=f_1-f_2,\qquad \pV(f,I)=\pV(f_1,I)+\pV(f_2,I). \end{equation} In particular, every function of bounded variation is locally integrable. The left and right limits of a BV function $f$ at $c$ will be denoted, respectively, by $f(c^-)$ and $f(c^+)$. We find it useful here to adopt the following sum notation, which is quite common in the field of Discrete Calculus: if $a<b$ are natural numbers we set \[\dsum_{a\le k<b}f(k):=\dsum_{k=a}^{b-1}f(k).\] Moreover, we set $\left[f\right]_a^b=f(b)-f(a)$. \section{Introduction} The first-order Euler-Maclaurin formula for a smooth function $f:[a,b]\to\mathbb R$ ($a<b$ in $\Z$) states that \begin{equation}\label{tag:em1} \sum_{a\le k<b}f(k)=\int_a^bf(t)\,dt-\dfrac12\left[f\right]_a^b+R,\quad R=\int_a^bf'(t)B_1(t-[t])\,dt, \end{equation} where $B_1(t)=t-\dfrac12$ is the first Bernoulli polynomial. The formula is useful in the approximation of finite sums, and in relating the convergence of generalized integrals to that of numerical series: we refer to \cite{MT, KP, L} for a survey on the subject. Since $|B_1|\le\dfrac12$ on $[0,1]$ it follows that the remainder $R$ is bounded above by $\displaystyle\dfrac12\int_a^b|f'(t)|\,dt$, so that if $f$ is monotonic one has \[|R|\le \dfrac12|f(b)-f(a)|.\] The proof is based on a simple, though smart, integration by parts and begins by assuming that $f$ is defined on $[0,1]$: since $B_1'=1$, writing that \[\int_0^1f(t)\,dt=\int_0^1f(t)B_1'(t)\,dt=[fB_1]_0^1-\int_0^1f'(t)B_1(t)\,dt\] yields \[f(0)=\int_0^1f(t)\,dt-\dfrac 12\left[f\right]_0^1+\int_0^1f'(t)B_1(t)\,dt,\] which is (\ref{tag:em1}) when $a=0$ and $b=1$. In Theorem~\ref{remark:critint4} we show that if $f$ is just of bounded variation (BV) on $[a,b]$ then (\ref{tag:em1}) holds, except that the remainder $R$ is now bounded above by $\dfrac12\pV(f, [a,b])$. The proof of the result is elementary: indeed one can first deal with monotonic functions, adapting the same arguments that are involved in the proof of the integral criterion for the convergence of a series with monotonic terms; part of the material arises from the thesis \cite{DZ}. In the final part of Section~\ref{subsect:monotonic} we obtain, for a function that is assumed here to be just BV, the results that traditionally accompany the Euler-Maclaurin formula for a smooth function: the approximation of the partial sums of the series $\dsum_{k=0}^Nf(k)$ in terms of $\dsum_{k=0}^nf(k)$ ($n<N$), the existence of the Euler constant with a related asymptotic formula for $\dsum_{k=0}^nf(k)$ as $n\to +\infty$, and a generalization to BV functions of the integral test for the convergence of a series. \begin{comment} The BV version of \eqref{tag:em1} is formulated in Theorem~\ref{thm:emmid}: the new formula takes into account the possible lack of continuity of the function $f$, and relates the sum of the averages of the left and right limits of $f$ in an interval of integers with the Euler-Maclaurin first-order development $\displaystyle\int_a^bf(t)\,dt-\dfrac12(f(b^-)-f(a^-))$.
The remainder, the analogue of $R$ in \eqref{tag:em1}, is here the explicit integral of the mid-value modification of $B_1(t-[t])$, with respect to the Lebesgue-Stieltjes measure associated to $f$. Quite surprisingly, deducing Theorem~\ref{remark:critint4} from the Euler-Maclaurin formula for BV functions as stated in Theorem~\ref{thm:emmid} is not straightforward.\end{comment} In Section~\ref{sec:3} we prove a version of \eqref{tag:em1} based on a partial integration formula for BV functions; in this formula the measure theoretic variation of the function is involved, which may be smaller than the point variation for discontinuous functions, and to deduce \eqref{tag:em1} from it we need to make explicit the formula that connects the two variations; this is done in Proposition~\ref{prop:pVlambda}. We are not aware of other formulations of the Euler-Maclaurin formulas for BV functions in the spirit of Theorem~\ref{remark:critint4}. Instead, the approximation formula for the sum of a series (Corollary~\ref{es:criterio integrale}) was established in a more general setting in \cite{TrigubP}, \cite[4.1.5]{TrigubB} for functions whose $r$-th derivative is BV. The methods involved there arise from Fourier analysis, far from our elementary approach. An extension, comparing in the multidimensional case the Fourier integral of a function of bounded variation and the corresponding trigonometric series with its Fourier coefficients, was recently established in \cite{Lifl}: we thank Elijah Liflyand for sharing the above results with us. \section{An Euler-Maclaurin type formula for BV functions and its consequences}\label{subsect:monotonic} \subsection{An Euler-Maclaurin type formula} \begin{theorem}[Euler-Maclaurin type formula for BV functions]\label{remark:critint4} Let $a<b$ in $\mathbb Z$ and let $f:[a,b]\to\mathbb R$ be a function of bounded variation. Then \begin{equation}\label{tag:critint_new}\begin{aligned} \sum_{a\le k<b}f(k)&=\int_a^bf(x)\,dx-\dfrac12\left[f\,\right]_a^b+R, &|R|\le \dfrac12\pV(f,[a,b]).\end{aligned}\end{equation} \end{theorem} \begin{proof} Assume first that $f$ is monotonic increasing. On every interval $[k, k+1]$ ($k\in\mathbb Z$) contained in $[a,b]$ one has \[f(k)\le f(t)\le f(k+1)\qquad \forall t\in [k, k+1],\] from which it follows that \[f(k)=\int_k^{k+1}f(k)\,dt\le \int_k^{k+1}f(t)\,dt\le \int_k^{k+1}f(k+1)\,dt=f(k+1).\] Summing the terms of the foregoing inequalities, as $k$ varies between $a$ and $b-1$, one obtains \[\dsum_{a\le k<b}f(k)\le \int_a^bf(t)\,dt\le \dsum_{a\le k<b}f(k)+f(b)-f(a).\] Subtracting the term $\dfrac 12(f(b)-f(a))$ from the members of the preceding inequalities one finds \[\begin{aligned}\dsum_{a\le k<b}f(k)-\dfrac 12(f(b)-f(a))&\le \int_a^bf(t)\,dt-\dfrac 12(f(b)-f(a))\\ &\le \dsum_{a\le k<b}f(k)+\dfrac 12(f(b)-f(a)),\end{aligned}\] from which the conclusion follows.
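Explicitly, this means that $R:=\dsum_{a\le k<b}f(k)-\displaystyle\int_a^bf(t)\,dt+\dfrac12\left[f\right]_a^b$ satisfies \[|R|\le \dfrac12(f(b)-f(a))=\dfrac12\pV(f,[a,b]),\] which is precisely (\ref{tag:critint_new}) for increasing $f$.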
\noindent If $f$ is of bounded variation, let $f_1,f_2$ be as in (\ref{tag:bv}): since \[\dsum_{a\le k<b}f_i(k)=\int_a^bf_i(x)\,dx-\dfrac12\left[f_i\,\right]_a^b+R_i, \quad |R_i|\le \dfrac12\pV(f_i,[a,b]) \quad (i=1,2)\] by subtracting term by term we get \[\dsum_{a\le k<b}f(k)=\int_a^bf(x)\,dx-\dfrac12\left[f\,\right]_a^b+R,\qquad R=R_1-R_2,\] so that \[|R|\le |R_1|+|R_2|\le \dfrac12\pV(f_1,[a,b])+\dfrac12 \pV(f_2,[a,b])=\dfrac12\pV(f,[a,b]).\] \end{proof} \begin{remark}\label{rem:monotonic} If $f$ is monotonic on $[a,b]$ then $\pV(f,[a,b])=|f(b)-f(a)|$, so the remainder term $R$ can be estimated by $\dfrac12|f(b)-f(a)|$: this fact is well known as a consequence of the Euler-Maclaurin formula when $f$ is monotonic or of class $C^1$ \cite{MT}. \end{remark} \begin{corollary}[The approximation formula for finite sums]\label{coro:stimasummonotonic} Let $f:[0,+\infty[\to\mathbb R$ be of bounded variation. For every $N\ge n$ the following \textbf{approximation formula} holds: \begin{equation}\label{tag:stima_ridotte_monotone}\begin{aligned} \dsum_{0\le k<N}f(k)&=\dsum_{0\le k<n}f(k)+\int_n^Nf(x)\,dx-\dfrac12\left[f\,\right]_n^N+\varepsilon_1(n,N),\\ &|\varepsilon_1(n,N)|\le \dfrac12\pV(f,[n,N])\le \dfrac12\pV(f,[n, +\infty[).\end{aligned}\end{equation} \end{corollary} \begin{proof} It is enough to remark that \[\dsum_{0\le k<N}f(k)-\dsum_{0\le k<n}f(k)=\dsum_{n\le k<N}f(k)\] and to apply (\ref{tag:critint_new}) with $a=n$ and $b=N$. \end{proof} \subsection{A generalization of the integral criterion for the convergence of a series} Let $f:[0, +\infty[\to\mathbb R$ be locally integrable. We set \[\gamma^f_n\index{$\gamma^f_n$}:=\sum_{0\le k<n}f(k)-\int_0^nf(x)\,dx\qquad\forall n\in\mathbb N.\] Notice that, if $f$ is of bounded variation, then $f(\infty):=\displaystyle\lim_{x\to +\infty}f(x)$ exists and is finite. \begin{theorem}[The Euler constant]\label{Maclaurin1monotasint} Let $f:[0, +\infty[\to \mathbb R$ be of bounded variation. The Euler constant of $f$ defined by $\gamma^f:=\displaystyle\lim_{n\to +\infty}\gamma^f_n$ exists and is finite, and the following \textbf{estimate}\index{Euler!constant!approximation!monotonic functions} of $\gamma^f$ holds: \begin{equation}\label{tag:approx_eulero_monotonic} \gamma^f=\gamma^f_n-\dfrac12\left[f\,\right]_n^{\infty}+\varepsilon_1(n),\quad |\varepsilon_1(n)| \le \dfrac{1}{2}\pV(f,[n, +\infty[)\qquad \forall n\in\mathbb N. \end{equation} \end{theorem} \begin{proof} Given $n,N\in\mathbb N$ with $N>n$, by Theorem~\ref{remark:critint4} we have \begin{equation}\label{tag:bepi} \gamma_N^f-\gamma_n^f=\dsum_{n\le k<N}f(k)-\int_n^Nf(x)\,dx=-\dfrac12\left[f\,\right]_n^N+R(n,N), \end{equation} with $|R(n,N)|\le \dfrac12\pV(f,[n,N])$. \noindent Since the limits $\dlim_{N\to +\infty}f(N)$ and $\dlim_{N\to +\infty}\pV(f,[0,N])=\pV(f,[0, +\infty[)$ are both finite, and $\pV(f,[n,N])=\pV(f,[0,N])-\pV(f,[0,n])$, it follows from the necessary part of the Cauchy convergence criterion that \[\dlim_{n,N\to +\infty}\left(-\dfrac12\left[f\,\right]_n^N+R(n,N)\right)=0.\] The sufficiency part of the very same criterion thus implies that the limit $\dlim_{n\to +\infty}\gamma^f_n$ exists and is finite. Passing to the limit as $N\to +\infty$ in (\ref{tag:bepi}) we get \[\gamma^f-\gamma_n^f=-\dfrac12(f(\infty)-f(n))+\varepsilon_1(n),\] where $\varepsilon_1(n):=\dlim_{N\to +\infty}R(n,N)$ is dominated by $\dfrac12\pV(f,[n,+\infty[)$.
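Note that the limit defining $\varepsilon_1(n)$ indeed exists: by (\ref{tag:bepi}), $R(n,N)=\gamma_N^f-\gamma_n^f+\dfrac12\left[f\,\right]_n^N$, and both $\gamma^f_N$ and $f(N)$ converge as $N\to +\infty$.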
\end{proof} An immediate consequence of Theorem~\ref{Maclaurin1monotasint} is the following generalization of the well-known integral criterion for the convergence of the series $\dsum_{k=0}^{\infty}f(k)$ for bounded and monotonic functions. \begin{corollary}[Integral criterion for series and approximation of its sum]\label{es:criterio integrale}\index{criterion!integral!series with monotonic terms} Let $f:[0,+\infty[\to \mathbb R$ be of bounded variation. \begin{enumerate} \item The series $\dsum_{k=0}^{\infty}f(k)$ and the generalized integral $\displaystyle\int_0^{+\infty}f(x)\,dx$ have the same behavior: either both converge or both diverge. \item Assume that the series $\dsum_{k=0}^{\infty}f(k)$ converges. For every $n\in\mathbb N$ the following \textbf{approximation}\index{series!monotonic terms!sum!approximation} holds: \begin{equation}\label{tag:stima_ridotte_monotone_convergenti}\begin{aligned} \dsum_{k=0}^{\infty}f(k)&=\dsum_{0\le k<n}f(k)+\int_n^{+\infty}f(x)\,dx-\dfrac12\left[f\,\right]_n^{\infty}+\varepsilon_1(n),\\ &|\varepsilon_1(n)|\le \dfrac12\pV(f,[n,+\infty[).\end{aligned}\end{equation} \end{enumerate} \end{corollary} \begin{proof} 1. We know from Theorem~\ref{Maclaurin1monotasint} that \[\gamma^f=\lim_{n\to\infty}\left(\dsum_{0\le k<n}f(k)-\int_0^n\!\!f(x)\,dx\right)\in\mathbb R.\] Thus $\dsum_{k=0}^{\infty}f(k)$ and the limit $\displaystyle\lim_{\substack{n\to +\infty\\n\in\mathbb N}}\int_0^{n}f(x)\,dx$ have the same behavior. Since $f(\infty)$ belongs to $\mathbb R$, the value of $\displaystyle\lim_{\substack{n\to +\infty\\n\in\mathbb N}}\int_0^{n}f(x)\,dx$ coincides with that of $\displaystyle\int_0^{+\infty}f(x)\,dx$ (indeed, if either limit is finite then necessarily $f(\infty)=0$, so that $\sup_{n\le x\le n+1}\left|\int_n^xf(t)\,dt\right|\to 0$ as $n\to+\infty$): the conclusion follows. \noindent 2. It follows from (\ref{tag:stima_ridotte_monotone}) that for every $N\ge n$ we have \begin{equation}\label{tag:woihfeo}\dsum_{0\le k<N}f(k)=\dsum_{0\le k<n}f(k)+\int_n^Nf(x)\,dx-\dfrac12\left[f\,\right]_n^N+\varepsilon_1(n,N),\end{equation} with $|\varepsilon_1(n,N)|\le \dfrac12\pV(f,[n,N])\le \dfrac12\pV(f,[n,+\infty[)$. From Point 1 we know that $f$ is integrable in a generalized sense on $[0, +\infty[$. Passing to the limit as $N\to +\infty$ in (\ref{tag:woihfeo}) we deduce that $\varepsilon_1(n):=\displaystyle\lim_{N\to +\infty}\varepsilon_1(n,N)$ is finite, whence the validity of (\ref{tag:stima_ridotte_monotone_convergenti}). \end{proof} \begin{remark} The approximation formula \eqref{tag:stima_ridotte_monotone_convergenti} was established, for a wider class of functions and with an explicit form of the remainder, in \cite{TrigubP}, \cite[4.1.5]{TrigubB} by means of Fourier analysis methods. \end{remark} \subsection{Asymptotic formulas} \begin{theorem}[Asymptotic formulas]\label{coro:stimaasint_ordine1_monotona} Let $f:[0, +\infty[\to \mathbb R$ be a function. \begin{enumerate} \item If $f$ is of bounded variation, then for every $n\in\mathbb N$ \[ \label{tag:asymptotic_monotoinic3435} \dsum_{0\le k<n}f(k)=\gamma^f+\int_0^nf(x)\,dx+\varepsilon'_1(n),\qquad |\varepsilon'_1(n)|\le \pV(f,[n,+\infty[). \] \item If $f$ is monotonic and unbounded, then \[ \dsum_{0\le k<n}f(k)=\int_0^nf(x)\,dx+O\left(f(n)\right)\qquad\mbox{as}\quad n\to +\infty. \] \end{enumerate} \end{theorem} \begin{proof}1.
From (\ref{tag:approx_eulero_monotonic}) we obtain \[\gamma^f_n=\gamma^f+\varepsilon'_1(n),\] where $\varepsilon'_1(n):=\dfrac12\left[f\,\right]_n^{\infty}-\varepsilon_1(n)$ and since $|\varepsilon_1(n)| \le \dfrac{1}{2}\pV(f,[n, +\infty[)$, the following estimate holds \[|\varepsilon'_1(n)|=\left|\dfrac12\left[f\,\right]_n^{\infty}-\varepsilon_1(n)\right|\le \pV(f,[n, +\infty[):\] the conclusion follows. \noindent 2. It follows from Theorem~\ref{remark:critint4}, together with Remark~\ref{rem:monotonic}, that for every $n\in\mathbb N$ \[\dsum_{0\le k<n}f(k)=\int_0^nf(x)\,dx-\dfrac12(f(n)-f(0))+ R(n),\] with $|R(n)|\le \dfrac12|f(n)-f(0)|$. Since $\displaystyle\lim_{n\to +\infty}f(n)=\pm\infty$, we have \[f(n)-f(0)= O(f(n))\qquad n\to +\infty,\] whence $-\dfrac12(f(n)-f(0))+ R(n)=O(f(n))$ for $n\to +\infty$: the conclusion follows. \end{proof} \section{The Euler-Maclaurin formula for BV functions: a more measure theoretic look}\label{sec:3} \subsection{Variation and point variation} A function of locally bounded variation (i.e. of bounded variation on every bounded interval) $f:\R\to\R$ provides a finite signed measure $\mu_f$ on the $\s$-algebra of Borel subsets of any subinterval of $\R$ on which $f$ is bounded, in particular on any bounded interval. Denoting by $f(x^-)$ (resp. $f(x^+)$) the left (resp. right) limit of $f$ at a point $x$, the measures of bounded intervals with end-points $c<d$ are: \[\mu_f\big(]c,d[\big)=f(d^-)-f(c^+),\, \mu_f\big([c,d]\big)=f(d^+)-f(c^-),\] \[\mu_f\big([c,d[\big)=f(d^-)-f(c^-),\,\mu_f\big(]c,d]\big)=f(d^+)-f(c^+),\] and for $c=d$ we have $\mu_f\big(\{c\}\big)=f(c^+)-f(c^-)$, the jump of $f$ at $c$. As for every signed measure, the {\em total variation measure} $|\mu_f|$ of a Borel set $E$ is \[|\mu_f|(E)=\sup\left\{\sum_{k=1}^m|\mu_f(A_k)|: \,A_1,\dots,A_m\subseteq E \text{ disjoint and Borel}\right\}.\] When $E$ is an interval one can prove that the same supremum is obtained if $A_1,\dots,A_m$ range only over subintervals of $E$, so that for an interval $E$ \begin{multline*}|\mu_f|(E)\!=\!\sup\left\{\sum_{k=1}^m|\mu_f\big(]x_{k-1}, x_k[\big)|+\sum_{k=0}^m|\mu_f\big(\{x_k\}\big)|:\,x_k\in E,\,x_0<\dots<x_m\right\}\\ =\sup\left\{\sum_{k=1}^m|f(x_k^-)-f(x_{k-1}^+)|+\sum_{k=0}^m|f(x_k^+)-f(x_k^-)|:\,x_k\in E,\,x_0<\dots<x_m\right\}.\end{multline*} If, moreover, $E$ is open and bounded, then $|\mu_f|(E)$ coincides with the \emph{variation} $V(f, E)$ of $f$ on $E$ \cite{FP}, given by \[V(f, E):=\sup\left\{\int_Ef(x)\phi'(x)\,dx:\, \phi\in C^1_{\text{c}}(E),\,|\phi|\le1\right\}.\] If $f:\R\to\R$ is locally BV it is convenient to introduce the function \[\rho_f(x):=|f(x^+)-f(x)|+|f(x)-f(x^-)|-|f(x^+)-f(x^-)|\,\quad\forall x\in\mathbb R.\] Notice that $\rho_f(x)$ equals twice the distance from $f(x)$ to the interval whose end-points are $f(x^-)$ and $f(x^+)$. \noindent Here is how the pointwise variation of a BV function on a bounded \emph{open} interval is related to its variation. \begin{proposition}\label{prop:pVlambda} Let $f:\R\to\R$ be locally of bounded variation. Then for every bounded open interval $E$: \[\label{tag:pvv}\pV(f,E)=|\mu_f|(E)+\sum_{x\in E}\rho_f(x). \] \end{proposition} \begin{proof} Given $\varepsilon>0$ we can find $x_0<x_1<\dots< x_m$ in $E$ such that \[\pV(f,E)-\varepsilon<\sum_{k=1}^m|f(x_k)-f(x_{k-1})|;\] now for every $k\in\{0,\dots,m\}$ we pick $x'_k,\,x''_k\in E$ such that \[x'_0<x_0;\,x_m<x''_m; \quad x_{k-1}<x''_{k-1}<x'_k<x_k\] for every $k=1,\dots,m$.
Consider now the set $\{x'_k,x_k,x''_k:\, k=0,\dots,m\}$; by the triangle inequality we get \[\begin{aligned}\pV(f,E)-\varepsilon&<\displaystyle\sum_{k=1}^m|f(x_k)-f(x_{k-1})|\\ &\le\sum_{k=0}^m(|f(x_k)-f(x'_k)|+|f(x''_k)-f(x_k)|)+\sum_{k=1}^m|f(x'_k)-f(x''_{k-1})|\\&\le\pV(f,E);\end{aligned}\] taking limits in the preceding inequality as $x'_k$ increases to $x_k$ and $x''_k$ decreases to $x_k$ we get \[\begin{aligned}\pV(f,E)-\varepsilon&<\displaystyle\sum_{k=0}^m(|f(x_k)-f(x^-_k)|+|f(x^+_k)-f(x_k)|)+\sum_{k=1}^m|f(x^-_k)-f(x^+_{k-1})|\\ &\le\pV(f,E),\end{aligned}\] which immediately yields \[\begin{aligned}\pV(f,E)-\varepsilon&<\displaystyle\left(\sum_{k=1}^m|f(x^-_k)-f(x^+_{k-1})|+\sum_{k=0}^m|f(x_k^+)-f(x_k^-)|\right)+\sum_{k=0}^m\rho_f(x_k)\\ &\le\pV(f,E);\end{aligned}\] taking suprema over $\{x_0,\dots,x_m\}$ this easily gives \[\pV(f,E)-\varepsilon<|\mu_f|(E)+\dsum_{x\in E}\rho_f(x)\le\pV(f,E),\] and ends the proof. \end{proof} \begin{remark}\label{nota} Notice that the claim of Proposition~\ref{prop:pVlambda} does not hold, in general, if $E$ is not open. It is easy to see that for a {\em compact} interval $[a,b]$ ($a<b$) we have \[\begin{aligned}\pV(f,[a,b])&=\pV(f,]a,b[)+|f(a)-f(a^+)|+|f(b)-f(b^-)|\\ &=|\mu_f|(]a,b[)+\sum_{x\in]a,b[}\rho_f(x)+|f(a)-f(a^+)|+|f(b)-f(b^-)|.\end{aligned}\] This actually proves that $\pV(f,I)$ and $|\mu_f|(I)$ coincide for every bounded interval $I$ if and only if $f$ is continuous; thus $\pV(f,I)$ gives rise to a measure if and only if $f$ is continuous. \end{remark} \subsection{The Euler-Maclaurin formula} Let $f\in\ope{BV}_{\rm loc}(\R)$. The {\em mid-value modification $f_m$} for $f$ is the function defined by \[f_{m}(x):=\dfrac{f(x^-)+f(x^+)}2.\] The following version of the integration by parts formula for BV functions will be used in the sequel. \begin{lemma}[Integration by parts for BV functions] If $f,\,g:\mathbb R\to\R$ are locally of bounded variation then, for every $a<b$: \begin{equation}\label{tag:parts}\int_{[a,b[}g_m(x)\,d\mu_f(x)=g(b^-)f(b^-)-g(a^-)f(a^-)- \int_{[a,b[}f_m(x)\,d\mu_g(x).\end{equation} \end{lemma} \begin{proof} By following the lines of the proof of \cite[Theorem 3.36]{F} one gets \[\int_{[a,b[}g(x^-)\,d\mu_f(x)=g(b^-)f(b^-)-g(a^-)f(a^-)- \int_{[a,b[}f(x^+)\,d\mu_g(x),\] \[\int_{[a,b[}g(x^+)\,d\mu_f(x)=g(b^-)f(b^-)-g(a^-)f(a^-)- \int_{[a,b[}f(x^-)\,d\mu_g(x).\] The result is obtained by summing up term by term the members of the above equalities, and dividing by 2. \end{proof} The following Euler-Maclaurin formula for the sums $\displaystyle\sum_{a\le k<b}f_m(k)$ holds: in contrast with the classical one, the sums involve the mid-value modification of $f$, due to its possible discontinuities. The first Bernoulli polynomial $B_1(x)=x-\dfrac12$, restricted to $[0,1]$, is involved in the first-order Euler-Maclaurin formula for smooth functions \cite[Theorem 12.27]{MT}; we will use here the mid-value modification of its extension by periodicity $\beta_1:\R\to\R$ defined by \[\beta_1(x):=\begin{cases}B_1(x-[x])&\text{ if }x\notin \Z,\\ 0&\text{ otherwise}. \end{cases}\] \begin{theorem}[First-order Euler-Maclaurin formula for BV functions]\label{thm:emmid} Assume that $f:\R\to\R$ is locally of bounded variation. Then, for any $a<b$ in $\mathbb Z$, \begin{equation}\label{tag:nuovo}\sum_{a\le k<b}f_m(k)=\int_a^bf(x)\,dx-\dfrac12(f(b^-)-f(a^-))+\int_{]a,b[}\beta_1(x)\,d\mu_f(x).
\end{equation} \end{theorem} \begin{proof} The proof of Theorem~\ref{thm:emmid} goes formally as that of the classical first-order Euler-Maclaurin formula. Clearly $\b_1$ is locally of bounded variation; plainly $\mu_{\beta_1}=\lambda_1-\displaystyle\sum_{n\in\Z}\de_{n}$, where $\lambda_1$ is the Lebesgue measure. Since $(\beta_1)_m=\beta_1$, applying formula (\ref{tag:parts}) with $g=\b_1$ we get \[\begin{aligned}\int_{[a,b[}\b_1(x)\,d\mu_f(x)&=\b_1(b^-)\,f(b^-)-\b_1(a^-)\,f(a^-)-\int_{[a,b[}f_m(x)\,d\mu_{\beta_1}(x)\\ &=\dfrac{f(b^-)-f(a^-)}2-\int_a^bf(x)\,dx+\int_{[a,b[}f_m(x)\,d\left(\displaystyle\sum_{k\in\Z}\de_{k}\right)(x)\\ &=\dfrac{f(b^-)-f(a^-)}2-\int_a^bf(x)\,dx+\sum_{a\le k<b}f_m(k),\end{aligned}\] which we can rewrite \[\sum_{a\le k<b}f_m(k)=\int_a^bf(x)\,dx-\dfrac{f(b^-)-f(a^-)}2+\int_{[a,b[}\beta_1(x)\,d\mu_f(x);\] since $\b_1(a)=0$, we get $\displaystyle\int_{[a,b[}\beta_1(x)\,d\mu_f(x)=\int_{]a,b[}\beta_1(x)\,d\mu_f(x)$. \end{proof} Theorem~\ref{thm:emmid} yields an alternative proof of (\ref{tag:critint_new}). \begin{proof}[Alternative proof of Theorem~\ref{remark:critint4}.] To deduce \eqref{tag:critint_new} from the preceding theorem we rewrite \begin{equation}\notag\sum_{a\le k<b}f(k)=\int_a^bf(x)\,dx-\dfrac{f(b)-f(a)}2+R,\end{equation} \[\begin{aligned} R:&=\int_{]a,b[}\b_1(x)\,d\mu_f(x)+\sum_{a\le k<b}f(k)-\sum_{a\le k<b}f_m(k)+\dfrac12\left[f\right]_a^b-\dfrac{f(b^-)-f(a^-)}2\\ &=\int_{]a,b[}\b_1(x)\,d\mu_f(x)+\dfrac12\sum_{a<k<b}\big((f(k)-f(k^-))+(f(k)-f(k^+))\big)+\\ &\phantom{AAAAAAAAAAAAAAAAAAAA}+\dfrac12\big((f(a)-f(a^+))+(f(b)-f(b^-))\big), \end{aligned}\] so that \begin{multline}\label{tag:B} |R|\le \left|\int_{]a,b[}\b_1(x)\,d\mu_f(x)\right|+\dfrac12\sum_{a<k<b}\big(|f(k)-f(k^-)|+|f(k)-f(k^+)|\big)+\\ +\dfrac12\big(|f(a)-f(a^+)|+|f(b)-f(b^-)|\big). \end{multline} Now, since $\displaystyle\int_{]a,b[}|\b_1(x)|\,d|\mu_f|(x)$ lacks the contribution of the jumps of $f$ on the integers and $|\b_1|\le 1/2$, \[\begin{aligned}\int_{]a,b[}|\b_1(x)|\,d|\mu_f|(x)&\le \dfrac12|\mu_f|\big(]a,b[\setminus\mathbb Z\big)\\ &= \dfrac12|\mu_f|\big(]a,b[\big)-\dfrac12\sum_{a< k<b}{|f(k^+)-f(k^-)|}. \end{aligned}\] It follows from \eqref{tag:B} and Proposition~\ref{prop:pVlambda}, taking account of Remark \ref{nota}, that \[\begin{aligned}|R|&\le \dfrac12\Big(|\mu_f|\big(]a,b[\big)+\sum_{a<k<b}\rho_f(k)\Big)+\dfrac12\big(|f(a)-f(a^+)|+|f(b)-f(b^-)|\big)\\ &\le \dfrac12\pV(f, ]a,b[) +\dfrac12\big(|f(b)-f(b^-)|+|f(a^+)-f(a)|\big)=\dfrac12\pV(f, [a,b]). \end{aligned}\] \end{proof} \section*{References} \bibliographystyle{elsarticle-num}
\section{\label{sec:Intro}Introduction}% One of the most original and successful attempts to describe the low-energy regime of the theory of strong interactions comes from an idea suggested by Skyrme \cite{Skyrme:1958vn, *Skyrme:1961vq, *Skyrme:1961vr, *Skyrme:1962vh} that baryons (and nuclei) are topological soliton solutions arising from an effective Lagrangian of mesons. The proposal is supported by the work of Witten \cite{Witten:1979kh} who realized that the large $N_{c}$ limit of QCD points towards such an interpretation. More recently, an analysis of the low energy hadron physics in holographic QCD \cite{Sakai:2004cn} has led to a similar picture, i.e. the Skyrme Model. The model, in its original form, succeeds in predicting the properties of the nucleon within a precision of 30\% \cite{Adkins:1983ya}. This is considered a rather good agreement for a model which involves only two parameters. Some attempts to improve the model have given birth to a number of extensions or generalizations. Most of them rely, to some extent, on our ignorance of the exact form of the low-energy effective Lagrangian of QCD, namely the structure of the mass term \cite{Marleau:1990nh,Bonenfant:2009zz,Kopeliovich:2005vg}, the contribution of other vector mesons \cite{Sutcliffe:2008sk, Adkins:1983nw} or simply the addition of higher-order terms in derivatives of the pion fields \cite{Marleau:1990nh}. Unfortunately, one of the recurring problems of Skyrme-like Lagrangians is that they almost inevitably give nuclear binding energies that are too large by at least an order of magnitude. Perhaps a better approach would be to construct an effective Lagrangian with soliton solutions that nearly saturate the Bogomol'nyi bound. If this is indeed the case, then the classical static energy of such BPS-Skyrmions (Bogomol'nyi-Prasad-Sommerfield) grows linearly with the baryon number $A$ (or mass number) much like the nuclear mass. Support for this idea comes from a recent result from Sutcliffe \cite{Sutcliffe:2010et} who found that BPS-type Skyrmions seem to emerge for the original Skyrme Model when a large number of vector mesons are added. The additional degrees of freedom bring the mass of the soliton down to the saturation of the Bogomol'nyi bound. A more direct approach to construct BPS-Skyrmions was also proposed by Adam, Sanchez-Guillen, and Wereszczynski (ASW) \cite{Adam:2010fg}. Their prototype model consists of only two terms: one of order six in derivatives of the pion fields \cite{Jackson:1985yz} and a second term, called the potential, which is chosen to be the customary mass term for pions in the Skyrme Model \cite{Adkins:1983hy}. The model leads to BPS-type compacton solutions with size and mass growing as $A^{\frac{1}{3}}$ and $A$, respectively, a result in general agreement with experimental observations. However, the connection between the ASW model and pion physics, or the Skyrme Model, is more obscure due to the absence of the nonlinear $\sigma$ and so-called Skyrme terms which are of order 2 and 4 in derivatives, respectively. Pursuing this direction, some of us \cite{Bonenfant:2010ab,Bonenfant:2012kt} reexamined a more realistic generalization of the Skyrme Model which includes terms up to order six in derivatives \cite{Jackson:1985yz}, considering the regime where the nonlinear $\sigma$ and Skyrme terms are small perturbations, referred to in what follows as the near-BPS Skyrme Model.
In that limit, given an appropriate choice of potential, it is possible to find well-behaved analytical solutions for the static solitons. Since they saturate the Bogomol'nyi bound, their static energy is directly proportional to $A$ and one recovers some of the results of Ref. \cite{Adam:2010fg}. In fact, these solutions allow computing the mass of the nuclei, including static, rotational, Coulomb and isospin breaking energies. Adjusting the four parameters of the model to fit the resulting binding energies per nucleon with respect to the experimental data of the most abundant isotopes leads to an impressive agreement. These results support the idea of a BPS-type Skyrme Model as the dominant contribution to an effective theory for the properties of nuclear matter. However, a few issues remain to be addressed before such a model is considered viable. One of them concerns the shape of the energy and baryon densities. As for most extensions of the Skyrme Model, the BPS-type models in Refs. \cite{Adam:2010fg}, \cite{Bonenfant:2010ab} and \cite{Bonenfant:2012kt} generate compact, shell-like or gaussian-like configurations for the energy and baryon densities, respectively, as opposed to what experimental data suggests, i.e. almost constant densities in the nuclei. The purpose of this work is to show that it is possible to construct an effective Lagrangian which leads to a uniform baryon density and still preserves the agreement with nuclear mass data. It may be noted that near-BPS Skyrme models form a much bigger set than previously thought, as suggested by the recent discovery of topological energy bounds \cite{Harland:2013rxa, Adam:2013tga} or different extensions \cite{Bednarski:2013yca}. \section{\label{sec:Skyrme}The near-BPS Skyrme Model} We consider an extension of the original Skyrme Model that consists of the Lagrangian density% \begin{equation} \mathcal{L}=\mathcal{L}_{0}+\mathcal{L}_{2}+\mathcal{L}_{4}+\mathcal{L}_{6} \label{model0to6}% \end{equation} with \begin{align} \mathcal{L}_{0} & =-\mu^{2}V(U)\label{L0}\\ \mathcal{L}_{2} & =-\alpha\ \text{Tr}\left[ L_{\mu}L^{\mu}\right] \label{L2}\\ \mathcal{L}_{4} & =\beta\ \text{Tr}\left[ f_{\mu\nu}f^{\mu\nu}\right] \label{L4}\\ \mathcal{L}_{6} & =-\frac{3}{2}\frac{\lambda^{2}}{16^{2}}\text{Tr}\left[ f_{\mu\nu}f^{\nu\lambda}f_{\lambda}^{\ \ \mu}\right] \label{L6}% \end{align} where $L_{\mu}=U^{\dagger}\partial_{\mu}U$ is the left-handed current and we write, for simplicity, the commutators as $f_{\mu\nu}=\left[ L_{\mu},L_{\nu }\right] .$ Here the pion fields are represented by the $SU(2)$ matrix $U=\phi_{0}+i\tau_{i}\phi_{i}$ and obey the nonlinear condition $\phi_{0}% ^{2}+\phi_{i}^{2}=1$. The subscript $i$ in $\mathcal{L}_{i}$ denotes the number of derivatives of the pion fields, which determines how each term changes with respect to a scale transformation. In the original Skyrme Model, only the nonlinear $\sigma$ term, $\mathcal{L}% _{2},$ and the Skyrme term, $\mathcal{L}_{4},$ contribute. This implies that $\alpha,\beta>0$; otherwise the static solution would not be stable against scale transformations. A mass term --- or potential term --- $\mathcal{L}% _{0},$ is often added to take into account chiral symmetry breaking so as to generate a pion mass term for small fluctuations of the chiral field in $V(U)$.
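As a quick consistency check of the normalizations adopted here (nothing beyond the standard small-field expansion is assumed, and the identification $F_{\pi}=4\sqrt{\alpha}$ is used again later when comparing with the original Skyrme Model), one may write $U=\exp(2i\tau_{a}\pi_{a}/F_{\pi})$ and expand to lowest order, so that $L_{\mu}\simeq(2i/F_{\pi})\tau_{a}\partial_{\mu}\pi_{a}$ and Tr$\left[ \tau_{a}\tau_{b}\right] =2\delta_{ab}$ give \[ \mathcal{L}_{2}=-\alpha\ \text{Tr}\left[ L_{\mu}L^{\mu}\right] \simeq\frac{8\alpha}{F_{\pi}^{2}}\,\partial_{\mu}\mathbf{\pi}\cdot\partial^{\mu}\mathbf{\pi}, \] which reproduces the canonical pion kinetic term $\frac{1}{2}\partial_{\mu}\mathbf{\pi}\cdot\partial^{\mu}\mathbf{\pi}$ provided $\alpha=F_{\pi}^{2}/16$.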
We shall analyze the potential term in more detail in the coming sections but, as it turns out, the choice of potential $V(U)$ will have a direct bearing on the form of the solutions and on the predictions of our model. Finally, the term of order six in derivatives of the pion fields, $\mathcal{L}_{6},$ is equivalent to $\mathcal{L}_{J6}=-\varepsilon_{J6}\mathcal{B}^{\mu}% \mathcal{B}_{\mu}$ with $\varepsilon_{J6}=9\pi^{4}\lambda^{2}/4$, which was first proposed by Jackson et al.\ \cite{Jackson:1985yz} to take into account $\omega$-meson interactions. Here, $\mathcal{B}^{\mu}$ stands for the topological current density \begin{equation} \mathcal{B}^{\mu}=\frac{\epsilon^{\mu\nu\rho\sigma}}{24\pi^{2}}\text{Tr}% \left( L_{\nu}L_{\rho}L_{\sigma}\right) . \end{equation} The constants $\mu,$ $\alpha$, $\beta,$ and $\lambda$ are left as free parameters although we shall focus on the regime where $\alpha$ and $\beta$ are relatively small, i.e. in the limit where the solutions remain close to those of the BPS solitons. It is well known that setting the boundary condition for $U$ at infinity to a constant in order to get finite energy solutions for the Skyrme fields also characterizes such solutions by a conserved topological charge, which Skyrme identified as the baryon number $\mathcal{B}$ (or mass number $A$ in the context of nuclei) \begin{equation} \mathcal{B}=\int d^{3}r\mathcal{B}^{0}=-\frac{\epsilon^{ijk}}{24\pi^{2}}\int d^{3}r\text{Tr}\left( L_{i}L_{j}L_{k}\right) . \label{Bint}% \end{equation} Note that the static energy arising from $\mathcal{L}_{6}$ is given by the integral of the square of the baryon density% \[ E_{6}=\frac{9\pi^{4}\lambda^{2}}{4}\int\left( \mathcal{B}^{0}\left( \mathbf{r}\right) \right) ^{2}d^{3}r. \] It is often associated with the energy that would emerge if the Skyrme field were coupled to the $\omega$-meson \cite{Zahed:1986qz}% \[ E_{\omega}=\frac{1}{2}\frac{g_{\omega}^{2}}{4\pi}\int\mathcal{B}^{0}\left( \mathbf{r}\right) \frac{e^{-m_{\omega}\left\vert \mathbf{r}-\mathbf{r}% ^{\prime}\right\vert }}{\left\vert \mathbf{r}-\mathbf{r}^{\prime}\right\vert }\mathcal{B}^{0}\left( \mathbf{r}^{\prime}\right) d^{3}rd^{3}r^{\prime}, \] where, instead of following the $e^{-m_{\omega}\left\vert \mathbf{r}% -\mathbf{r}^{\prime}\right\vert }/\left\vert \mathbf{r}-\mathbf{r}^{\prime }\right\vert \ $law, the interaction is replaced by a $\delta$-function $\delta^{3}\left( \mathbf{r}-\mathbf{r}^{\prime}\right) $ in order to recover $E_{6}$. Historically, $\mathcal{L}_{0}$ and $\mathcal{L}_{6}$ were introduced to provide a more general effective Lagrangian than the original Skyrme Model and indeed, the Lagrangian in (\ref{model0to6}) represents the most general $SU(2)$ model with at most two time derivatives. Since one generally relies on the standard Hamiltonian interpretation for the quantization procedure, higher-order time derivatives are usually avoided. On the other hand, it should be kept in mind that an effective theory based on the $1/N_{c}$ expansion of QCD should, in principle, include terms with higher-order derivatives of the fields. The model (\ref{model0to6}) has been studied rather extensively in the sector where the values of the parameters $\mu,$ $\alpha$, $\beta,$ and $\lambda$ are close to those of the original Skyrme Model \cite{Jackson:1985yz, Floratos:2001ih, *Floratos:2001bz, *Kopeliovich:2004pd, *Kopeliovich:2005hs}.
Clearly these choices were made so that $\mathcal{L}_{2}$ and $\mathcal{L}_{4}$ would continue to have a significant contribution to the mass of the baryons and thereby preserve the relative successes of the Skyrme Model in predicting nucleon properties and their link to soft-pion theorems ($\alpha$ is proportional to the square of the pion decay constant $F_{\pi}$). Yet this sector of the theory fails to provide an accurate description of the binding energy of heavy nuclei. Noting that this caveat may come from the fact that the solitons of the Skyrme Model do not saturate the Bogomol'nyi bound, ASW proposed a toy model \cite{Adam:2010fg} (equivalent to setting $\alpha=\beta=0)$ whose solutions are just BPS solitons. In principle, however, the model cannot lead to stable nuclei since BPS-soliton masses are exactly proportional to the topological number, so $\mathcal{B}>1$ solutions have no binding energies. A more realistic approach was proposed in Refs. \cite{Bonenfant:2010ab, Bonenfant:2012kt} where the Lagrangian (\ref{model0to6}) is assumed to be in the sector where $\alpha$ and $\beta$ are relatively small, treating these two terms as perturbations. The solutions almost saturate, without reaching, the Bogomol'nyi bound, so that small but non-zero binding energies are allowed. However, in spite of a very good agreement with experimental nuclear masses, there remain a few obstacles to the acceptance of such a model. For instance, nuclear matter is believed to be uniformly distributed inside a nucleus whereas the solutions of the aforementioned models \cite{Adam:2010fg, Bonenfant:2010ab, Bonenfant:2012kt} display either compact, shell-like or gaussian-like baryon and energy densities, respectively. The main purpose of this work is to demonstrate that it is possible to construct an effective Lagrangian which leads to uniform densities and still preserves the agreement with nuclear mass data. Let us consider the static solution for $U$. It can be written in the general form \begin{equation} U=e^{i\mathbf{n}\cdot\mathbf{\tau}F}=\cos F+i\mathbf{n}\cdot\mathbf{\tau}\sin F \label{Hedgehog}% \end{equation} where $\mathbf{\hat{n}}$ is the unit vector \begin{equation} \mathbf{\hat{n}}=\left( \sin\Theta\cos\Phi,\sin\Theta\sin\Phi,\cos \Theta\right) \end{equation} and $F,\Theta,$ and $\Phi$ depend in general on the spherical coordinates $r,\theta,$ and $\phi$. We first consider the model in (\ref{model0to6}) in the limit where $\alpha$ and $\beta$ are small. For that purpose, we introduce the axial solutions for the $\alpha=\beta=0$ case,% \begin{equation} F=F(r),\qquad\Theta=\theta,\qquad\Phi=A\phi\label{axialsolution}% \end{equation} where $A$ is an integer that corresponds to the baryon number or mass number of a nucleus. A word of warning is in order here. The solution (\ref{axialsolution}) is only one of an infinite-dimensional family of solutions of the BPS model and is not expected to be the true minimizing solution of the static energy of the model or, for that matter, of the total energy, which also includes the (iso)rotational energy, the Coulomb energy and an isospin symmetry breaking term. Since $\alpha$ and $\beta$ are assumed to be small, the nonlinear $\sigma$ and Skyrme terms are not expected to be a determining factor in minimizing the total energy. In fact, the dominant effect should come from the repulsive Coulomb energy, which would have a tendency to favor the most symmetric configuration.
Which form is the true minimizer remains an open question only to be answered by heavy numerical calculations. In the absence of such an analysis and for the sake of simplicity, we choose to consider the ansatz (\ref{axialsolution}), which allows us to estimate easily all the contributions to the mass of the nuclei. From here on, we shall use whenever possible the dimensionless variable $x=ar$, where $a=\left( \mu/18A\lambda\right) ^{1/3}$, in order to factor out the explicit dependence on the model parameters $\mu,\alpha,\beta,$ and $\lambda$ and on the baryon number $A$. In fact, most of the relevant quantities can be written in terms of three fundamental objects \begin{align} \left( \mathbf{\nabla}F\right) ^{2} & =\left( a\partial_{x}F\right) ^{2}\nonumber\\ \left( \sin F\mathbf{\nabla}\Theta\right) ^{2} & =\left( a\frac{\sin F}{x}\right) ^{2}\label{gradients}\\ \left( \sin F\sin\Theta\mathbf{\nabla\Phi}\right) ^{2} & =\left( aA\frac{\sin F}{x}\right) ^{2}\nonumber \end{align} The total static energy $E_{\text{s}}$ gets a contribution from each term in (\ref{model0to6}), respectively, \begin{align} E_{0} & =4\pi\left( \frac{\mu^{2}}{a^{3}}\right) I_{0}^{V}\nonumber\\ E_{2} & =4\pi\left( \frac{2\alpha}{a}\right) \left( I_{200}^{0}% +I_{020}^{0}+I_{002}^{0}\right) \label{Estatx}\\ E_{4} & =4\pi\left( 16\beta a\right) \left( I_{220}^{0}+I_{202}% ^{0}+I_{022}^{0}\right) \nonumber\\ E_{6} & =4\pi\left( \frac{9}{16}\lambda^{2}a^{3}\right) I_{222}% ^{0}\nonumber \end{align} where $I_{lmn}^{k}$ are parameter-free integrals given by \begin{align} I_{lmn}^{k}(z) & =\int_{0}^{z}dx\ x^{2}\mathcal{I}_{lmn}^{k}(x)\quad \text{with }\quad\mathcal{I}_{lmn}^{k}(x)=x^{k}\left( \partial_{x}F\right) ^{l}\left( \frac{\sin F}{x}\right) ^{m}\left( A\frac{\sin F}{x}\right) ^{n}\label{Ilmn}\\ I_{0}^{V} & =\int_{0}^{\infty}x^{2}dx\ V(F)=\sum_{m}C_{m}^{V}I_{0m0}^{m} \label{IV}% \end{align} and write $I_{lmn}^{k}=I_{lmn}^{k}(\infty)$ for simplicity. Note that some of these integrals are related in our case since $\mathcal{I}_{lmn}^{k}% =A^{n}\mathcal{I}_{l,m+n,0}^{k}$. In the last equality, we assume that one can recast $V(F)$ as a power series of $\sin F,$ i.e. $V(F)=\sum_{m}C_{m}^{V}% \sin^{m}F$ as suggested in Ref. \cite{Marleau:1990nh}. The terms $E_{0}$ and $E_{6}$ are proportional to the baryon number $A$ as one expects from solutions that saturate the Bogomol'nyi bound, whereas the small perturbations $E_{2}=A^{1/3}(a_{2}+b_{2}A^{2})$ and $E_{4}=A^{-1/3}(a_{4}+b_{4}A^{2})$ have a more complex dependence. Part of this behavior, the overall factor $A^{\pm1/3},$ is due to the scaling. The additional factor of $A^{2}$ comes from the axial symmetry of the solution (\ref{axialsolution}) that can be factored out from $I_{lm2}^{k}=A^{2}I_{l,m+2,0}^{k}.$% The topological charge also simplifies to \begin{equation} A=\int d^{3}x\mathcal{B}^{0}(\mathbf{x})=-\frac{2}{\pi}I_{111}^{0}% \end{equation} The root mean square radius of the baryon density is given by \begin{equation} \left\langle r^{2}\right\rangle ^{\frac{1}{2}}=\frac{1}{2\pi a}\left( -2I_{120}^{2}\right) ^{1/2} \label{r2baryon}% \end{equation} which is consistent with experimental observation for the charge distribution of nuclei $\left\langle r^{2}\right\rangle ^{\frac{1}{2}}=r_{0}A^{\frac{1}{3}% }$.% The minimization of the static energy for $\alpha=\beta=0$ leads to the differential equation for $F:$ \begin{equation} \frac{\sin^{2}F}{288x^{2}}\partial_{x}\left( \frac{\sin^{2}F}{x^{2}}% \partial_{x}F\right) -\frac{\partial V}{\partial F}=0.
\label{minimisation}% \end{equation} Multiplying by $\partial_{x}F,$ this expression can be integrated% \begin{equation} \left( \frac{\sin^{2}F}{x^{2}}\partial_{x}F\right) ^{2}=576V \label{equipartition}% \end{equation} which leads to% \begin{equation} \int\frac{\sin^{2}F}{8\sqrt{V}}dF=\pm\left( x^{3}-x_{0}^{3}\right) \label{Fz}% \end{equation} where $x_{0}$ is an integration constant. Finally, the expression for $F(x)$ can be found analytically provided the integral on the left-hand side is an invertible function of $F.$ For example, assuming that the potential may be written in the form \begin{equation} \sqrt{V}=\frac{u\left( 1-u^{2}\right) }{g^{\prime}(\sqrt{1-u^{2}})} \label{Vgu}% \end{equation} where $u=\cos\left( F/2\right) $ and $g^{\prime}(u)=\partial g/\partial u$, equation (\ref{Fz}) leads to \begin{equation} \sqrt{1-u^{2}}=\sin\left( F/2\right) =g^{-1}\left( \mp\left( x^{3}% -x_{0}^{3}\right) \right) \label{Fr}% \end{equation} Such solutions saturate the Bogomol'nyi bound \cite{Adam:2010fg}, so their static energy is proportional to the baryon number $A$. One would like ultimately to reproduce the observed structure of nuclei, i.e. a roughly constant baryon density becoming diffuse at the nuclear surface, which is characterized by a skin of constant thickness. Unfortunately the chiral angle $F$ in (\ref{Fr}) cannot reproduce this last feature since $F$\ can only be a function of the ratio $r/A^{1/3}$. So the resulting thickness parameter is not constant and should scale like $A^{1/3}.$ It is interesting to note that (\ref{equipartition}) implies that for the minimum energy solutions% \begin{equation} V(x)=\frac{1}{576}\left( \frac{\sin F}{x}\right) ^{4}\left( \partial _{x}F\right) ^{2}\label{Vx}% \end{equation} so \[ E_{0}=4\pi\left( \frac{\lambda\mu}{32A}\right) I_{222}^{0}=E_{6}% \] where the last equality arises from Derrick scaling. Furthermore, according to (\ref{Bint}) and (\ref{Vx}), the square root of the potential \[ \sqrt{V(x)}=-\frac{1}{24}\frac{\sin^{2}F}{x^{2}}\partial_{x}F=\frac{\pi}% {48A}\mathcal{B}^{0}(x) \] where $\mathcal{B}^{0}(x)$ corresponds to the radial baryon density $\mathcal{B}^{0}(x)=\int d\Omega\ \mathcal{B}^{0}(\mathbf{x})$. Thus, in order to obtain a nonshell baryon density, it suffices to construct a potential $V$ that does not vanish at small $x$ or, equivalently, a solution such that $\partial_{x}F(0)\neq0.$ Expression (\ref{Vgu}) must be used with caution: it only applies for potentials $V$ which turn out to be functions of $u$ alone or, in other words, for potentials that depend on the real part of the pion field matrix $U$, i.e. on Tr$\,U.$ On the other hand, $\mathcal{L}_{0}$ in (\ref{model0to6}) needs to be explicitly written in terms of the field $U$. A simple but not unique approach to construct such a potential is to identify $u=\cos(F/2)$ with the expression \[ 2U_{+}=u^{2}I \] where $U_{\pm}=(2I\pm U\pm U^{\dagger})/8$ and $I$ is the $2\times2$ identity matrix. Then, a convenient expression for $V(U)$ is given by% \begin{equation} V(U)=\frac{4\,\text{Tr}\left[ U_{+}U_{-}^{2}\right] }{\left[ g^{\prime }\left( \left( \text{Tr}\left[ U_{-}\right] \right) ^{1/2}\right) \right] ^{2}}\nonumber \end{equation} In the context of the BPS-Skyrme Model, not only does the potential $V$ appear as one of the dominant terms in the static energy, but it is also a key ingredient in the determination of the solution.
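For completeness, let us spell out how (\ref{Fr}) follows from (\ref{Fz}); this is a short consistency check which introduces no new ingredients. Setting $s=\sin\left( F/2\right) $, so that $1-u^{2}=s^{2}$ and $dF=2\,ds/u$, the form (\ref{Vgu}) reads $\sqrt{V}=us^{2}/g^{\prime}(s)$ and the integrand of (\ref{Fz}) becomes \[ \frac{\sin^{2}F}{8\sqrt{V}}\,dF=\frac{4u^{2}s^{2}}{8\sqrt{V}}\,\frac{2\,ds}{u}=g^{\prime}(s)\,ds, \] so that the left-hand side of (\ref{Fz}) integrates to $g\left( \sin\left( F/2\right) \right) $ and (\ref{Fr}) follows by inversion.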
In principle, the full effective theory including the potential should emerge from the low-energy limit of QCD, but apart from a few symmetry arguments, little is known about the exact form of $V$. A most simple expression for $V$, that reads \begin{equation} V_{\text{ASW}}(U)=\text{Tr}\left[ U_{-}\right] =1-u^{2} \label{V1mcos}% \end{equation}% was first proposed by Adkins et al. \cite{Adkins:1983hy} and served as an additional term to the original Skyrme Lagrangian. Its main purpose was to recover the chiral symmetry breaking pion mass term $-\frac{1}{2}m_{\pi}% ^{2}\mathbf{\pi}\cdot\mathbf{\pi}$ \ in the limit of small pion field fluctuations $U=\exp(2i\tau_{a}\pi_{a}/F_{\pi}).$ It is sometimes useful to recast the potential in the form \cite{Marleau:1990nh} \begin{equation} \mu^{2}V=\sum_{k}C_{k}\text{Tr}\left[ 2I-U^{k}-U^{\dagger k}\right] \label{mumpi}% \end{equation} Taking the limit of small pion field fluctuations, this allows fixing the parameter $\mu$ in terms of the pion mass $m_{\pi}$ through the relation \[ \sum_{k=1}^{\infty}k^{2}C_{k}=\frac{m_{\pi}^{2}F_{\pi}^{2}}{16}. \] The choice of potential (\ref{V1mcos}) corresponds to the choice $g(u)=u^{3}/3$ in (\ref{Fr}) and solving for $F$ leads to the BPS-compacton solution of ASW \cite{Adam:2010fg}: \begin{equation} F_{\text{ASW}}(x)=\left\{ \begin{tabular} [c]{lll}% $2\arccos\left( 3^{1/3}x\right) $ & $\qquad\text{for}$ & $x\in\left[ 0,3^{-1/3}\right] $\\ $0$ & $\qquad\text{for}$ & $x\geq3^{-1/3}$% \end{tabular} \right. \end{equation} Note here that $\partial_{x}F(x)$ diverges as $\ x\rightarrow3^{-1/3}$ which implies that $E_{2}$ and $E_{4}$ are not well defined. Unfortunately, this solution as well as those arising from other similar models \cite{Adam:2012md} saturate the Bogomol'nyi bound and as such, they give no binding energies for the classical solitons with $B>1$. Several alternatives to (\ref{V1mcos}) have also been proposed \cite{Marleau:1990nh,Kopeliovich:2005vg} but recently, the major role played by the potential in the predictions for BPS-Skyrme Models was realized and it has led to a few interesting cases: \begin{itemize} \item One such example is a potential based on Ref. \cite{Bonenfant:2010ab} \[ V_{\text{BoM}}(U)=8\,\text{Tr}\left[ U_{+}U_{-}^{3}\right] \]% which corresponds to the choice $C_{1}=C_{2}=-C_{3}=4C_{4}=\mu^{2}/128$ and $C_{k>4}=0$ in (\ref{mumpi}). It leads to well-behaved solutions \begin{equation} F_{\text{BoM}}(x)=\pi\mp2\arccos\left[ \exp\left( -x^{3}\right) \right] \label{exp3}% \end{equation} where $\partial_{x}F$ remains negative and finite for all $x.$ In order to set the baryon number to $A,$ the boundary conditions are chosen to be $F(0)=\pm \pi$ and $F(\infty)=0$ for positive and negative baryon number, respectively. Note that the exponential fall-off of $F$ at large $x$ prevents some quantities such as the moments of inertia from becoming infinite. However, $\partial_{x}F(x)$ vanishes at $x=0$ and so does the baryon density, leading to an unsatisfactory shell-like configuration. \item In that regard, a solution similar to that proposed in Ref. \cite{Bonenfant:2012kt} seems more appropriate \begin{equation} F_{\text{BHM}}(x)=\pi\mp2\arccos\left[ \exp\left( -x^{2}\right) \right] \end{equation} since it possesses the kind of non-shell-like baryon density configurations observed in nature.
It emerges from the potential of the form \[ V_{\text{BHM}}(U)=-\frac{64}{9}\frac{\text{Tr}\left[ U_{+}U_{-}^{3}\right] }{\ln\left( \text{Tr}\left[ U_{-}\right] \right) }% \]% \end{itemize} These models display compact, shell-like or gaussian-like baryon and energy densities (see Figs. \ref{F} and \ref{B}). However here, we shall demonstrate that it is possible to construct an effective Lagrangian which leads to a uniform baryon density and still preserves and even improves the agreement with nuclear mass data. \begin{figure}[ptbh] \centering\includegraphics[width=0.65\textwidth]{pub20131v6figF.pdf}\caption{Profile $F(x)$ for models ASW (dotdashed), BoM (dashed), BHM (dotted) and BeM (solid).}% \label{F}% \end{figure} \begin{figure}[ptbh] \centering\includegraphics[width=0.65\textwidth]{pub20131v6figB.pdf}\caption{Radial baryon density $B(x)$ for models ASW (dotdashed), BoM (dashed), BHM (dotted) and BeM (solid).}% \label{B}% \end{figure} If we assume for now that the observed baryon density can be appropriately approximated by the parametrization $\rho_{B}(r,A)$, then one is looking for a solution for $F(r)$ such that% \begin{equation} \rho_{B}(r,A)=-\frac{A}{2\pi^{2}}\frac{\sin^{2}F}{r^{2}}F^{\prime} \label{rhorn}% \end{equation} Separating variables and integrating both sides of the equation \[ -\frac{2\pi^{2}}{A}r^{2}\rho_{B}(r,A)dr=\sin^{2}FdF \] we get an expression of the form \begin{equation} F(r)=G^{-1}(Z(r)) \label{Fremp}% \end{equation} where% \begin{align*} G(F) & \equiv\frac{1}{2}F-\frac{1}{4}\sin2F\\ Z(r) & =-\frac{2\pi^{2}}{A}\int r^{2}\rho_{B}(r,A)dr \end{align*} In order to be consistent, the boundary conditions for $Z$ must obey $Z(\infty)-Z(0)=-\pi/2.$ Matching expressions (\ref{Fr}) and (\ref{Fremp}) then provides an approach to construct a model, i.e. to choose a potential $V$, that reproduces the empirical baryon density $\rho_{B}$. Again we stress that our model leads to BPS-Skyrmions with a profile $F$ that must be a function of the ratio $r/A^{1/3}.$\ Unfortunately, this excludes most parametrizations in the literature, for example, densities such as the 2-parameter Fermi or Woods-Saxon form% \[ \rho_{B}^{\text{2pF}}(r)=\rho_{0}\frac{1+e^{-c/\tau}}{1+e^{\left( r-c\right) /\tau}}% \] since they tend to reproduce two empirical observations: (a) a baryon density that is roughly constant for all nuclei up to their boundary where (b) it is suppressed within a thickness $t\approx4.4\tau$ that is practically constant. The last behavior is inconsistent with the $r/A^{1/3}$ dependence of $F.$ Let us instead construct our model by modifying the gaussian-like profile $F_{\text{BHM}}(x)$ in such a way that the baryon density $\mathcal{B}^{0}(x)$ is approximately constant. The solution $F_{\text{BHM}}(x)$ leads to a nonshell baryon density but it falls off too rapidly. In order to suppress this behavior we propose a solution of the form (see Figs. \ref{F} and \ref{B}) \begin{equation} F_{\text{BeM}}(x)=\pi\mp2\arccos\left[ \exp\left( -x^{2}-a_{4}x^{4}\right) \right] \label{FBeM}% \end{equation} and fix the coefficient $a_{4}=7/5$ by setting to zero the coefficient of $x^{2}$ in the series expansion of $\mathcal{B}^{0}(x)$ near $x=0$. (Note that we could, in principle, extend this procedure by changing the argument of the exponential to a truncated series $X(x)=x^{2}+\sum_{i=2}^{N}a_{2i}x^{2i}$. Imposing that the density remains constant further from the core would require setting $a_{6}=1384/525,$ $a_{8}=6302/1125,$ and so on.).
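The value $a_{4}=7/5$ can be recovered analytically; we sketch the expansion here as a check, with no ingredients beyond the profile (\ref{FBeM}) itself. Writing $X(x)=x^{2}+a_{4}x^{4}$ and $\sin\left( F/2\right) =e^{-X}$, the radial baryon density is proportional to $-\left( \sin^{2}F/x^{2}\right) \partial_{x}F=8e^{-3X}\sqrt{1-e^{-2X}}\,X^{\prime}/x^{2}$, and expanding each factor near $x=0$ gives \[ \mathcal{B}^{0}(x)\propto1+\left( 2a_{4}+\frac{a_{4}-1}{2}-3\right) x^{2}+O(x^{4}), \] so that the coefficient of $x^{2}$ vanishes precisely for $a_{4}=7/5$.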
It is easy to find a potential that would allow such a solution% \[ V_{\text{BeM}}(U)=\frac{448}{45}\text{Tr}\left[ U_{+}U_{-}^{3}\right] \frac{1-\left( 14/5\right) \ln\left( \text{Tr}\left[ U_{-}\right] \right) }{\sqrt{1-\left( 14/5\right) \ln\left( \text{Tr}\left[ U_{-}\right] \right) }-1}% \] Note that in the limit of small pion field fluctuations $U=\exp(2i\tau_{a}% \pi_{a}/F_{\pi})$, the potential has no quadratic term in the pion field, i.e. the pion mass remains zero in this model. Using the profile $F$ in (\ref{FBeM}), the static energy in (\ref{Estatx}) can be calculated. Recalling that $I_{lmn}^{k}=A^{n}I_{l,m+n,0}^{k}$ for the form of axial solution at hand, we need to evaluate numerically only five parameter-free integrals: \begin{align*} I_{200}^{0} & =2.68798\qquad I_{020}^{0}=0.48504\qquad I_{220}^{0}=5.13755\\ I_{040}^{0} & =1.88156\qquad I_{240}^{0}=20.27798. \end{align*} In order to represent physical nuclei, we take into account their rotational and isorotational degrees of freedom and quantize the solitons. The standard procedure is to use semiclassical quantization, which is described in the next section. \section{\label{sec:Quantization}Quantization} Skyrmions are not pointlike particles so we resort to a semiclassical quantization method which consists in adding an explicit time dependence to the zero modes of the Skyrmions and applying time-dependent (iso)rotations to the Skyrme fields via the $SU(2)$ matrices $A_{1}(t)$ and $A_{2}(t)$ \begin{equation} \tilde{U}(\mathbf{r},t)=A_{1}(t)U(R(A_{2}(t))\mathbf{r})A_{1}^{\dag}(t) \end{equation} where $R(A_{2}(t))$ is the associated $SO(3)$ rotation matrix. The approach assumes that the Skyrmion behaves as a rigid rotator. Upon insertion of this ansatz in the time-dependent part of the full Lagrangian (\ref{model0to6}), we can write the (iso)rotational Lagrangian as \begin{equation} \mathcal{L}_{\text{r}}=\frac{1}{2}a_{i}U_{ij}a_{j}-a_{i}W_{ij}b_{j}+\frac {1}{2}b_{i}V_{ij}b_{j}, \end{equation} where $a_{k}=-i$Tr$\tau_{k}A_{1}^{\dag}\dot{A}_{1}$ and $b_{k}=i$Tr$\tau _{k}\dot{A}_{2}A_{2}^{\dag}$. The moment of inertia tensors $U_{ij}$ are given by% \begin{align} U_{ij} & =\int d^{3}r\ \mathcal{U}_{ij}=-\frac{1}{a}\int d^{3}x\left[ \frac{2\alpha}{a^{2}}\text{Tr}\left( T_{i}T_{j}\right) \right. \nonumber\\ & +4\beta\text{Tr}\left( \left[ L_{p},T_{i}\right] \left[ L_{p}% ,T_{j}\right] \right) \nonumber\\ & +\left. \frac{9\lambda^{2}}{16^{2}}a^{2}\text{Tr}\left( \left[ T_{i},L_{p}\right] \left[ L_{p},L_{q}\right] \left[ L_{q},T_{j}\right] \right) \right] \label{MInertia}% \end{align} where $T_{i}=iU^{\dagger}\left[ \frac{\tau_{i}}{2},U\right] $. The expressions for $W_{ij}$ and $V_{ij}$ are similar except that the isorotational operator $T_{i}$ is replaced by a rotational analog $S_{i}=-\epsilon_{ikl}x_{k}L_{l}$ as follows: \begin{align} W_{ij} & =\int d^{3}r\ \mathcal{W}_{ij}=\int d^{3}r\ \mathcal{U}_{ij}% (T_{j}\rightarrow S_{j})\label{Wij}\\ V_{ij} & =\int d^{3}r\ \mathcal{V}_{ij}=\int d^{3}r\ \mathcal{U}_{ij}% (T_{j}\rightarrow S_{j},T_{i}\rightarrow S_{i}). \label{Vij}% \end{align} Following the calculations in \cite{Bonenfant:2010ab} for axial solution of the form (\ref{axialsolution}), we find that all off-diagonal elements of the inertia tensors vanish. Furthermore, one can show that $U_{11}=U_{22}$ and $U_{33}$ can be obtained by setting $A=1$ in the expression for $U_{11}$.
Similar identities hold for the $V_{ij}$ and $W_{ij}$ tensors. Finally the general expressions for the moments of inertia coming from each piece of the Lagrangian read% \begin{align} U_{11} & =\frac{4\pi}{3a}\left( \frac{8\alpha}{a^{2}}I_{020}^{2}% +16\beta\left( 4I_{220}^{2}+3I_{022}^{2}+I_{040}^{2}\right) +\frac {9\lambda^{2}a^{2}}{16}\left( 3I_{222}^{2}+I_{240}^{2}\right) \right) \label{U11}\\ V_{11} & =\frac{4\pi}{3a}\left( \frac{2\alpha}{a^{2}}\left( I_{002}% ^{2}+3I_{020}^{2}\right) +16\beta\left[ \left( I_{202}^{2}+3I_{220}% ^{2}\right) +4I_{022}^{2}\right] +\frac{9\lambda^{2}a^{2}}{4}I_{222}% ^{2}\right) \label{V11}% \end{align} where due to the axial form of our solution, we can extract an explicit dependence on $A$ through the relation $I_{lmn}^{k}=A^{n}I_{l,m+n,0}^{k}.$ The axial symmetry of the solution imposes the constraint $L_{3}+AK_{3}=0$ which is simply the statement that a spatial rotation by an angle $\theta$ about the axis of symmetry can be compensated by an isorotation of $-A\theta$ about the $\tau_{3}$ axis. It follows from expressions (\ref{MInertia}% )-(\ref{Vij}) that $W_{11}=W_{22}=0$ for $\left\vert A\right\vert \geq2$ and $A^{2}U_{33}=AW_{33}=V_{33}$. Otherwise, for $\left\vert A\right\vert =1$, the solution has spherical symmetry and \begin{equation} W_{11}=\frac{4\pi}{3a}\left( \frac{8\alpha}{a^{2}}I_{020}^{2}+64\beta\left( I_{220}^{2}+I_{040}^{2}\right) +\frac{9\lambda^{2}a^{2}}{4}I_{240}% ^{2}\right) . \label{W11}% \end{equation} where $A=1$ is used in $a$ as well.% The general form of the rotational Hamiltonian is given by \cite{Houghton:2005iu} \begin{equation} H_{\text{r}}=\frac{1}{2}% {\displaystyle\sum\limits_{i=1,2,3}} \left[ \frac{\left( L_{i}+W_{ii}\frac{K_{i}}{U_{ii}}\right) ^{2}}% {V_{ii}-\frac{W_{ii}^{2}}{U_{ii}}}+\frac{K_{i}^{2}}{U_{ii}}\right] \label{Hrot}% \end{equation} where $K_{i}$ ($L_{i}$) is the body-fixed isorotation (rotation) momentum canonically conjugate to $a_{i}$ ($b_{i}$). It is also easy to calculate the rotational energies for nuclei with winding number $\left\vert A\right\vert \geq2$% \begin{equation} H_{\text{r}}=\frac{1}{2}\left[ \frac{\mathbf{L}^{2}}{V_{11}}+\frac {\mathbf{K}^{2}}{U_{11}}+\xi K_{3}^{2}\right] \end{equation} with% \[ \xi=\frac{1}{U_{33}}-\frac{1}{U_{11}}-\frac{A^{2}}{V_{11}}% \] These momenta are related to the usual space-fixed isospin ($\mathbf{I}$) and spin ($\mathbf{J}$) by the orthogonal transformations \begin{align} I_{i} & =-\frac{1}{2}\text{Tr}\left( \tau_{i}A_{1}\tau_{j}A_{1}^{\dag }\right) K_{j}=-R(A_{1})_{ij}K_{j},\label{eq:I}\\ J_{i} & =-\frac{1}{2}\text{Tr}\left( \tau_{i}A_{2}\tau_{j}A_{2}^{\dag }\right) ^{\text{T}}L_{j}=-R(A_{2})_{ij}^{\text{T}}L_{j}. \label{eq:J}% \end{align} According to (\ref{eq:I}) and (\ref{eq:J}), we see that the Casimir invariants satisfy $\mathbf{K}^{2}=\mathbf{I}^{2}$ and $\mathbf{L}^{2}=\mathbf{J}^{2}$ so the rotational Hamiltonian is given by \begin{equation} H_{\text{r}}=\frac{1}{2}\left[ \frac{\mathbf{J}^{2}}{V_{11}}+\frac {\mathbf{I}^{2}}{U_{11}}+\xi K_{3}^{2}\right] . \label{Erot}% \end{equation} We are looking for the lowest eigenvalue of $H_{\text{r}}$ which depends on the dimension of the spin and isospin representation of the nucleus eigenstate $|N\rangle\equiv|i,i_{3},k_{3}\rangle|j,j_{3},l_{3}\rangle$. For $\alpha =\beta=0,$ we can show that $\xi$ is negative and we shall assume that this remains true for small values of $\alpha$ and $\beta$.
Then, for a given spin $j$ and isospin $i$, the eigenvalue $k_{3}$ must take its largest possible magnitude. Note that $\mathbf{K}^{2}=\mathbf{I}^{2}$ and $\mathbf{L}% ^{2}=\mathbf{J}^{2},$ so the state with highest weight is characterized by $k_{3}=i$ and $l_{3}=j.$ Furthermore, since nuclei are built out of $A$ fermions, the eigenvalues $k_{3}$ are limited to $k_{3}\leq i\leq A/2.$ On the other hand, the axial symmetry of the static solution (\ref{axialsolution}) implies that $k_{3}=-l_{3}/A\leq j/A$ where $j\leq A/2$ as well. In order to minimize $H_{\text{r}}$, we need the largest possible eigenvalue $k_{3}$, so for even $A$ nuclei, $\kappa$ must be an integer such that \[ \kappa=\max(\left\vert k_{3}\right\vert )=\min\left( i,\left[ j/A\right] \right) . \] Similarly for odd nuclei, $\left\vert k_{3}\right\vert $ must be a positive half-integer so the only possible value is \[ \kappa=\min\left( i,\left[ j/A\right] +\frac{1}{2}\right) =\frac{1}{2}% \] This last relation only holds for the largest possible spin eigenstate $j=A/2$, which is not the most stable in general, and so it signals that the ansatz (\ref{axialsolution}) may not be the most appropriate for odd nuclei. The axial symmetry may however be only marginally broken if we consider the odd nucleus as a combination of an additional nucleon with an even nucleus, especially for large nuclei. Nonetheless, we shall retain the ansatz (\ref{axialsolution}) for both even and odd nuclei and choose the largest possible eigenvalue $k_{3}$ for the most stable isotopes as% \begin{equation} \kappa=\left\{ \begin{tabular} [c]{l}% $0\qquad$for $A=$ even\\ $\frac{1}{2}\qquad$for $A=$ odd \end{tabular} \right. \label{kappa}% \end{equation} The lowest eigenvalue of the rotational Hamiltonian $H_{\text{r}}$ for a nucleus is then given by \cite{Bonenfant:2010ab} \begin{equation} E_{\text{r}}=\frac{1}{2}\left[ \frac{j(j+1)}{V_{11}}+\frac{i(i+1)}{U_{11}% }+\xi\kappa^{2}\right] \label{Erotijk}% \end{equation} The spins of the most abundant isotopes are well known. This is not the case for the isospins, so we resort to the usual assumption that the most abundant isotopes correspond to states with lowest isorotational energy. Since $i\geq\left\vert i_{3}\right\vert $, the lowest value that $i$ can take is simply $\left\vert i_{3}\right\vert $ where $i_{3}=Z-A/2.$ For example, the nucleon and deuteron rotational energies reduce, respectively, to% \begin{align} E_{\text{r}}^{N} & =\frac{3}{8U_{11}}\quad A=1,\ j=i=\kappa=1/2\text{ }\\ E_{\text{r}}^{D} & =\frac{1}{V_{11}}\quad A=2,\ j=1,\ i=\kappa=0\text{ }% \end{align} The explicit calculations of the rotational energy of each nucleus then require the numerical evaluation of the following four parameter-free integrals in (\ref{U11}), (\ref{V11}) and (\ref{W11}) which, in our model, turn out to be \begin{align*} I_{020}^{2} & =0.142868\qquad I_{220}^{2}=1.43364\\ I_{040}^{2} & =0.352712\qquad I_{240}^{2}=3.94598. \end{align*} So far, both contributions to the mass of the nucleus, $E_{\text{s}}$ and $E_{\text{r}},$ are charge invariant. Since this is a symmetry of the strong interaction, it is reflected in the construction of the Lagrangian (\ref{model0to6}) and one expects that the two terms form the dominant portion of the mass. However, isotope masses differ by a few percent so this symmetry is broken for physical nuclei.
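Before moving on, a simple illustration of (\ref{Erotijk}): for an even nucleus with zero spin and isospin such as Calcium-40 ($i=j=\kappa=0$), the (iso)rotational energy vanishes identically, $E_{\text{r}}=0$, so that the charge-invariant part of its mass reduces to the static energy $E_{\text{s}}$ alone, up to the electromagnetic and isospin breaking corrections discussed below.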
In the next section, we consider two additional contributions to the mass, the Coulomb energy associated with the charge distribution inside the Skyrmion and an isospin breaking term\ that may be attributed to the up and down quark mass difference. \section{\label{sec:Coulomb}Coulomb energy and isospin breaking} The electromagnetic and isospin breaking contributions to the mass have been thoroughly studied for $A=1$, mostly in the context of the computation of the proton-neutron mass difference \cite{Durgut:1985mu, *Kaelbermann:1986ne, *Ebrahim:1987mu,*Jain:1989kn, *Weigel:1989eb,Rathske:1988qt,Meissner:2009hm}, but are usually neglected, to a first approximation, for higher $A$ since they are not expected to overcome the large binding energies predicted by the model. There are also practical reasons why they are seldom taken into account. The higher baryon number configurations of the original Skyrme Model are nontrivial (toroidal shape for $A=2$, tetrahedral for $A=3$, etc.) and finding them exactly either requires heavy numerical calculations (see for example \cite{Longpre:2005fr}) or some kind of clever approximation like rational maps \cite{Houghton:1997kg}. In our case, however, we are interested in a precise calculation of the nuclear masses, so an estimate of the Coulomb energy is desirable, even more so in our model, which generates nonshell configurations. It turns out that the axial symmetry of the solution and the relatively simple form of the chiral angle $F(r)$ in (\ref{FBeM}) simplify the computation of the Coulomb energy. Let us first consider the charge density inside Skyrmions. Following Adkins et al. \cite{Adkins:1983ya}, we write the electromagnetic current \begin{equation} J_{EM}^{\mu}=\frac{1}{2}\mathcal{B}^{\mu}+J_{V}^{\mu3}, \end{equation} with $\mathcal{B}^{\mu}$ the baryon density and $J_{V}^{\mu3}$ the vector current density. The conserved electric charge is given by \begin{equation} Z=\int d^{3}rJ_{EM}^{0}=\int d^{3}r\left( \frac{1}{2}\mathcal{B}^{0}% +J_{V}^{03}\right) \label{charge}% \end{equation} The vector current is then defined as the sum of the left- and right-handed currents \[ J_{V}^{\mu i}=J_{R}^{\mu i}+J_{L}^{\mu i}% \] which are invariant under $SU(2)_{L}\otimes SU(2)_{R}$ transformations of the form $U\rightarrow LUR^{\dagger}.$ More explicitly, we get% \begin{equation} J_{V}^{0i}=-\frac{1}{2}\{R(A_{1})_{ij},\left( \mathcal{U}_{jk}a_{k}% -\mathcal{W}_{jk}b_{k}\right) \}\label{J3V}% \end{equation} where $\mathcal{U}_{ij}$ and $\mathcal{W}_{ij}$ are the moment of inertia densities in (\ref{MInertia})-(\ref{Vij}). The calculation of the Coulomb energy here follows that of Refs. \cite{Adam:2013tda, *Adam:2013wya}; it differs from that of Ref. \cite{Bonenfant:2012kt}, where only the body-fixed charge density was considered. The anticommutator is introduced to ensure that $J_{V}^{0i}$ is a Hermitian operator. In the quantized version, $a_{j}\ $and $b_{j}$ are expressed in terms of the conjugate operators $K_{i}$ and $L_{i}.$ Here we only need the relation \[ K_{i}=U_{ij}a_{j}-W_{ij}b_{j}% \] Since the solution is axially symmetric, the off-diagonal elements of $U_{ij}$ and $W_{ij}$ vanish, $W_{11}=W_{22}=0$ for $\left\vert A\right\vert \geq2$, and $AU_{33}=W_{33}$.
We then have \[ a_{1}=\frac{K_{1}}{U_{11}},\qquad a_{2}=\frac{K_{2}}{U_{22}},\qquad a_{3}=\frac{K_{3}}{U_{33}}+Ab_{3}% \] Inserting $a_{i}$ in (\ref{J3V}), the isovector electric current density reduces to \[ J_{V}^{03}=-\frac{1}{2}\{R(A_{1})_{3i},\frac{\mathcal{U}_{ii}}{U_{ii}}K_{i}\} \] where $\mathcal{U}_{ii}/U_{ii}$ may be interpreted here as a normalized moment of inertia density for the $i^{\text{th}}$ component of isospin in the body-fixed frame. The expectation values of $R(A_{1})_{31}K_{1}\ $and $R(A_{1})_{32}K_{2}$ for the eigenstate $|N\rangle=|i,i_{3},k_{3}\rangle |j,j_{3},l_{3}\rangle$ are equal, so that we may simplify% \begin{equation} \left\langle N\right\vert J_{V}^{03}|N\rangle=\frac{\mathcal{U}_{11}% +\mathcal{U}_{22}}{2U_{11}}i_{3}+\left[ \frac{\mathcal{U}_{11}+\mathcal{U}% _{22}}{2U_{11}}-\frac{\mathcal{U}_{33}}{U_{33}}\right] \left\langle N\right\vert R(A_{1})_{33}K_{3}|N\rangle\label{J03VN}% \end{equation} where we have used relation (\ref{eq:I}). The moment of inertia densities are given by \begin{align} \mathcal{U}_{11}+\mathcal{U}_{22} & =4\alpha\mathcal{I}_{020}^{2}(1+\cos ^{2}\theta)+32\beta a^{2}\left( \mathcal{I}_{220}^{2}(1+\cos^{2}% \theta)+\mathcal{I}_{040}^{2}\left( A^{2}+\cos^{2}\theta\right) \right) \nonumber\\ & +\frac{9\lambda^{2}}{8}a^{4}\mathcal{I}_{240}^{2}\left( A^{2}+\cos ^{2}\theta\right) \label{U11dens}\\ \mathcal{U}_{33} & =\left( 4\alpha\mathcal{I}_{020}^{2}+32\beta a^{2}\left( \mathcal{I}_{220}^{2}+\mathcal{I}_{040}^{2}\right) +\frac{9\lambda^{2}}% {8}a^{4}\mathcal{I}_{240}^{2}\right) \sin^{2}\theta\label{U33dens}% \end{align} The expression in brackets in equation (\ref{J03VN}) integrates to zero so that one recovers the relation $Z=A/2+i_{3}$ as expected. But while this term does not contribute to the total charge, its contribution to the charge density is not zero everywhere. Let us examine this contribution in more detail. Since the electric charge does not depend on the angular momentum, we can limit our analysis to the isospin wavefunctions. Following Adkins \cite{Adkins:1987kj} we write the wavefunctions $\left\langle A_{1}\right. |i,i_{3},k_{3}\rangle$ in terms of the Wigner functions $D_{mm^{\prime}}^{n}:$ \[ \left\langle A_{1}\right. |i,i_{3},k_{3}\rangle=\left( \frac{2i+1}{2\pi^{2}% }\right) ^{1/2}D_{k_{3}i_{3}}^{i}\left( A_{1}\right) \] Similarly the matrix $R(A_{1})_{33}$ corresponds to a spin zero and isospin zero transition that can be written \[ R(A_{1})_{33}=D_{00}^{1}\left( A_{1}\right) \] The appropriate expectation value is then given by% \begin{align*} \left\langle i,i_{3},k_{3}\right\vert R(A_{1})_{33}K_{3}|i,i_{3},k_{3}\rangle & =k_{3}\int dA_{1}\left( \frac{2i+1}{2\pi^{2}}\right) \left( D_{k_{3}i_{3}}^{i}\left( A_{1}\right) \right) ^{\ast}D_{00}^{1}\left( A_{1}\right) D_{k_{3}i_{3}}^{i}\left( A_{1}\right) \\ & =k_{3}\left( -1\right) ^{2(k_{3}+1-i)}\left\langle 1,0;i,k_{3}% |i,k_{3}\right\rangle \left\langle 1,0;i,i_{3}|i,i_{3}\right\rangle \\ & =\left\{ \begin{array} [c]{c}% \frac{i_{3}k_{3}^{2}}{i(i+1)}\qquad\text{ for }i\neq0\\ 0\qquad\text{ for }i=0 \end{array} \right. \end{align*} where the last two expressions on the second line are Clebsch-Gordan coefficients.
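As a quick check of the last expression, for the nucleon state ($i=1/2$, $\left\vert k_{3}\right\vert =1/2$) simple arithmetic gives \[ \left\langle R(A_{1})_{33}K_{3}\right\rangle =\frac{i_{3}\left( 1/4\right) }{\left( 1/2\right) \left( 3/2\right) }=\frac{i_{3}}{3}, \] which follows directly from the formula above with no further input.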
Recalling that we have imposed the condition $\left\vert k_{3}\right\vert =\kappa=0$ or $1/2$ for even and odd nuclei, respectively, and fixed the value of the isospin to $i=\left\vert i_{3}\right\vert $, we find \begin{equation} \rho\equiv\frac{1}{2}\mathcal{B}^{0}+\frac{\mathcal{U}_{11}+\mathcal{U}_{22}% }{2U_{11}}i_{3}+\left[ \frac{\mathcal{U}_{11}+\mathcal{U}_{22}}{2U_{11}% }-\frac{\mathcal{U}_{33}}{U_{33}}\right] \frac{i_{3}\kappa^{2}}% {i(i+1)}\label{rhocharge}% \end{equation} The last term drops out for even nuclei ($\kappa=0$). For odd nuclei, the cancellation in the brackets leads to a relatively small contribution, which is further suppressed by the factor $\kappa^{2}/\left( i+1\right) $ for large nuclei. It is indicative of the asymmetry in the moments of inertia.% The Coulomb energy associated with a given charge distribution $\rho (\mathbf{r})$ takes the usual form \begin{equation} E_{\text{C}}=\frac{1}{2}\frac{1}{4\pi}\int\rho\left( \mathbf{r}\right) \frac{1}{\left\vert \mathbf{r}-\mathbf{r}^{\prime}\right\vert }\rho\left( \mathbf{r}^{\prime}\right) d^{3}rd^{3}r^{\prime} \label{ECoulomb}% \end{equation} Since we have at hand an axially symmetric distribution, it is convenient to expand $\rho(\mathbf{r})$ in terms of normalized spherical harmonics to perform the angular integrations \begin{equation} \rho(\mathbf{r})=a^{3}\rho(\mathbf{x})=a^{3}\sum_{l,m}\rho_{lm}(x)Y_{l}% ^{m\ast}(\theta,\phi). \label{Ylm}% \end{equation} Following the approach described in \cite{Carlson:1963mr}, we define the quantities \begin{equation} Q_{lm}(r)=\int_{0}^{r}d\tilde{r}\ \tilde{r}^{l+2}\rho_{lm}(\tilde{r}% )=a^{-l}Q_{lm}(x) \end{equation} which, at large distance, are equivalent to the multipole moments of the distribution. The total Coulomb energy is given by% \[ E_{\text{C}}=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}U_{lm}% \] where \[ U_{lm}=\left( 2\pi\alpha_{em}\right) a\int_{0}^{\infty}dx\ x^{-2l-2}% |Q_{lm}(x)|^{2}\ \] The isoscalar part of the charge distribution is a spherically symmetric contribution% \[ \mathcal{B}^{0}(\mathbf{r})=a^{3}\mathcal{B}^{0}(\mathbf{x})=-\frac{a^{3}% }{2\pi^{2}}\mathcal{I}_{111}^{0}(x) \] where $\mathcal{I}_{lmn}^{k}$ is defined in (\ref{Ilmn}). On the other hand, the isovector contribution in (\ref{U11dens})-(\ref{U33dens}) possesses a simple angular dependence, so that the summation (\ref{Ylm}) consists of only two terms in $Y_{0}^{0\ast}$ and $Y_{2}^{0\ast}$. The moments $Q_{00}$ and $Q_{20}$ are then given by% \begin{align*} Q_{00}(x) & =\frac{2\sqrt{\pi}}{3}\left( -\frac{3A}{4\pi^{2}}I_{120}% ^{0}(x)+\frac{i_{3}}{a}\left( \frac{8\alpha}{a^{2}}I_{020}^{2}(x)C_{-}% +16\beta\left( 4I_{220}^{2}(x)C_{-}+C_{A}I_{040}^{2}(x)\right) \right. \right. \\ & +\left. \left. \frac{9\lambda^{2}a^{2}}{16}C_{A}I_{240}^{2}(x)\right) \right) \end{align*}% \[ Q_{20}(x)=\frac{4}{3}\sqrt{\frac{\pi}{5}}\frac{i_{3}}{a}C_{+}\left( \frac{2\alpha}{a^{2}}I_{020}^{4}(x)+16\beta\left( I_{220}^{4}(x)+I_{040}% ^{4}(x)\right) +\frac{9\lambda^{2}}{16}aI_{240}^{4}(x)\right) \] where \begin{align*} C_{\pm} & =\frac{1+C}{U_{11}}+\frac{C}{2U_{33}}\pm\frac{3C}{2U_{33}}\\ C_{A} & =(3A^{2}+1)\left( \frac{1+C}{U_{11}}\right) -\frac{4C}{U_{33}}% \end{align*} and $C=k_{3}^{2}/i(i+1)$.
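Note in passing that for even nuclei, where $\kappa=k_{3}=0$ so that $C=0$, these coefficients simplify considerably, \[ C_{\pm}=\frac{1}{U_{11}},\qquad C_{A}=\frac{3A^{2}+1}{U_{11}}, \] so that only the moment of inertia $U_{11}$ enters the moments $Q_{00}$ and $Q_{20}$ in that case; this follows directly from the definitions above.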
Finally, the Coulomb energy takes the form% \begin{equation} E_{\text{C}}=\left( 2\pi\alpha_{em}\right) a\int_{0}^{\infty}(Q_{00}% ^{2}x^{-4}+Q_{20}^{2}x^{-8})\ x^{2}dx\label{EC}% \end{equation} It is again convenient to regroup the model parameters in the dimensionless quantities \[ \mathbf{p}_{0}=\left[ A,C_{-}\frac{\alpha}{a^{3}}i_{3},C_{A}\frac{\beta}% {a}i_{3},C_{-}\frac{\beta}{a}i_{3},C_{A}\lambda^{2}ai_{3}\right] \] \[ \mathbf{p}_{2}=C_{+}i_{3}\left[ \frac{\alpha}{a^{3}},\frac{\beta}{a}% ,\lambda^{2}a\right] \] such that we may write% \begin{equation} E_{\text{C}}=2\pi\alpha_{em}a\ \left( p_{0}^{i}M_{00}^{ij}p_{0}^{j}+p_{2}% ^{i}M_{20}^{ij}p_{2}^{j}\right) . \end{equation} Here, each element of $M_{00}^{ij}$ ($M_{20}^{ij}$) comes from squaring $Q_{00}$ ($Q_{20}$) in (\ref{EC}) and depends only on the form of the profile $F(x)$ and baryon number $A$ according to \[ M_{l0}^{ij}=\int_{0}^{\infty}v_{l}^{i}v_{l}^{j}x^{-2-2l}dx \] where \begin{align*} \mathbf{v}_{0} & =\frac{2\sqrt{\pi}}{3}\left( -\frac{3}{4\pi^{2}}% I_{120}^{0}(x),8I_{020}^{2}(x),16I_{040}^{2}(x),64I_{220}^{2}(x),\frac{9}% {16}I_{240}^{2}(x)\right) \\ \mathbf{v}_{2} & =\frac{4}{3}\sqrt{\frac{\pi}{5}}\left( 2I_{020}% ^{4}(x),16\left( I_{220}^{4}(x)+I_{040}^{4}(x)\right) ,\frac{9}{16}% I_{240}^{4}(x)\right) \end{align*} For the solutions at hand (\ref{FBeM}), we get% \[ \mathbf{M}_{00}=\left( \begin{array} [c]{ccccc}% 0.035244 & 0.295938 & 1.67062 & 24.5793 & 0.65734\\ 0.295938 & 2.6131624 & 14.1112 & 215.6395 & 5.56078\\ 1.67062 & 14.1112 & 79.5851 & 1173.4095 & 31.3461\\ 24.5793 & 215.6395 & 1173.4095 & 17835.4373 & 462.538\\ 0.65734 & 5.56078 & 31.3461 & 462.538 & 12.3494 \end{array} \right) \]% \[ \mathbf{M}_{20}=\left( \begin{array} [c]{ccc}% 0.0156167 & 1.62666 & 0.126600028\\ 1.62666 & 173.309 & 13.9867\\ 0.126600028 & 13.9867 & 1.20944 \end{array} \right) \] The Coulomb energy can explain part of the isotope mass differences, but it is certainly not sufficient. For example, for the nucleon, the Coulomb energy would suggest that the neutron mass is smaller than that of the proton. Of course, one can invoke the fact that isospin is not an exact symmetry to improve the predictions. Several attempts have been proposed to parametrize the isospin symmetry breaking term within the Skyrme Model \cite{Rathske:1988qt,Meissner:2009hm}. Here we shall assume for simplicity that this results in a contribution proportional to the third component of isospin% \begin{equation} E_{\text{I}}=a_{I}i_{3} \label{EI}% \end{equation} where the parameter $a_{I}$ is fixed by setting the neutron-proton mass difference to its experimental value $\Delta M_{n-p}^{\text{expt}}=1.293$ MeV. Since both of them have the same static and rotational energies, we find% \begin{equation} a_{I}=\left( E_{\text{C}}^{n}-E_{\text{C}}^{p}\right) -\Delta M_{n-p}% ^{\text{expt}} \label{aI}% \end{equation} where $E_{\text{C}}^{n}$ and $E_{\text{C}}^{p}$ are the neutron and proton Coulomb energy, respectively. Summarizing, the mass of a nucleus reads% \begin{equation} E(A,i,j,k_{3},i_{3})=E_{\text{s}}(A)+E_{\text{r}}(A,i,j,k_{3})+E_{\text{C}% }(A,i_{3})+E_{\text{I}}(A,i_{3}) \label{Etot}% \end{equation} where $E_{\text{s}}\,\ $is the total static energy. The prediction depends on the parameters of the model $\mu,$ $\alpha,\beta,$ and $\lambda$ and the relevant quantum numbers of each nucleus as shown in (\ref{Etot}).
\section{\label{sec:Model}Results and discussion} The values of the parameters $\mu,$ $\alpha,\beta$ and $\lambda$\ remain to be fixed. Let us first consider the case where $\alpha=\beta=0$. This should provide us with a good estimate for the values of $\mu,\alpha,\beta,$ and $\lambda$ required in the 4-parameter model (\ref{model0to6}) and, in any case, it corresponds to the limit where the minimization of the static energy leads to the exact analytical BPS solution in (\ref{FBeM}). For simplicity, we choose the mass of the nucleon and that of a nucleus$\ X$ with no (iso)rotational energy (i.e. a nucleus with zero spin and isospin)\ as input parameters. Neglecting for now the Coulomb and isospin breaking energies, the masses of these two states are, according to expression (\ref{Etot}), \begin{align*} E_{N} & =15.92628\lambda\mu+0.026426\mu^{-1/3}\lambda^{-5/3}\\ E_{X} & =15.92628A\lambda\mu \end{align*} For example, if the nucleus $X$ is Calcium-40, a doubly magic nucleus, with mass $E_{\text{Ca}}=37214.7$ MeV, then solving for $\lambda$ and $\mu,$ we get the numerical values $\mu=12322.3$ MeV$^{2}$, $\alpha=\beta=0$ and $\lambda=0.00474078$ MeV$^{-1}$, which we shall refer to as Set~I. The masses of the nuclei are then computed using Eq. (\ref{Etot}), which results in predictions that are accurate to within $0.6\%$, even for heavier nuclei. This precision is somewhat expected since the static energy of a BPS-type solution is proportional to $A$ so if it dominates, the nuclear masses should follow approximately the same pattern. However, the predictions remain surprisingly good compared to those of the original Skyrme Model, another 2-parameter model$.$ Perhaps even more relevant are the predictions of the binding energy per nucleon $B/A=\left( Zm_{p}+(A-Z)m_{n}-E\right) /A$, in which case the calculation simplifies. For example, subtracting the static energy of a nucleus from that of its constituents, we find that the binding energy does not depend on the static energies $E_{0}$ or $E_{6},$ \begin{align*} \Delta E_{\text{s}} & =AE_{\text{s}}(1)-E_{\text{s}}(A)\\ & =4\pi\left( A-1\right) \left( \frac{2\alpha}{a}\left( I_{200}% ^{0}-\left( A-1\right) I_{020}^{0}\right) -16a\beta\left( \left( A-1\right) I_{220}^{0}+AI_{040}^{0}\right) \right) \end{align*} whereas the contribution from $E_{\text{I}}$ simply cancels out. The dominant contributions come from the (iso)rotational and Coulomb energy differences, respectively,% \[ \Delta E_{\text{r}}=AE_{\text{r}}^{N}-E_{\text{r}}(A,i,j,k_{3}) \] dominated by $AE_{\text{r}}^{N}$ for large nuclei, and% \[ \Delta E_{\text{C}}=ZE_{\text{C}}^{p}+(A-Z)E_{\text{C}}^{n}-E_{\text{C}% }(A,i_{3}) \] which is, of course, negative due to the repulsive nature of the Coulomb force between nucleons. The results for $B/A$ are presented in Fig. \ref{FigBoverA} (dashed line). They are compared to the experimental values (empty circles). We show here only a subset of the table of nuclei in \cite{Audi:2002rp} composed of the most abundant 140 isotopes. The parameters of Set~I lead to a sharp rise of the binding energy per nucleon at small $A$ followed by a slow linear increase for larger nuclei. The accuracy is found to be roughly within $10\%$, which is relatively good considering that the model involves only two parameters at this point and the calculation involves a mass difference between the nucleus and its constituents.
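Incidentally, the numerical values of Set~I quoted above can be checked by simple arithmetic (taking the nucleon mass input to be $E_{N}\approx939$ MeV; the precise input value adopted here is our assumption). The Calcium-40 constraint fixes the energy scale \[ \lambda\mu=\frac{37214.7\ \text{MeV}}{40\times15.92628}\approx58.4\ \text{MeV}, \] and the nucleon mass then requires $0.026426\,\mu^{-1/3}\lambda^{-5/3}=E_{N}-15.92628\,\lambda\mu\approx8.6$ MeV, which indeed reproduces $\lambda\approx4.74\times10^{-3}$ MeV$^{-1}$ and hence $\mu\approx1.23\times10^{4}$ MeV$^{2}$.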
Experimentally, the charge radius of the nucleus is known to behave approximately as $\left\langle r_{\text{em}}^{2}\right\rangle ^{\frac{1}{2}% }=r_{0}A^{\frac{1}{3}}$ with $r_{0}=1.23$~fm. It is straightforward to calculate the root mean square radius of the baryon density [see Eq. (\ref{r2baryon})] which leads to $\left\langle r^{2}\right\rangle ^{\frac {1}{2}}=\left( 2.007\text{~fm}\right) A^{\frac{1}{3}}$. On the other hand, the charge radius $\left\langle r_{\text{em}}^{2}\right\rangle ^{\frac{1}{2}}$ displays a\ more complex dependence on $A$ since it involves an additional isovector contribution (\ref{rhocharge}) \begin{equation} \left\langle r_{\text{em}}^{2}\right\rangle =\frac{\int d^{3}r\ r^{2}% \rho(\mathbf{r})}{\int d^{3}r\rho(\mathbf{r})}=\frac{A}{2Z}\left\langle r^{2}\right\rangle +\frac{i_{3}}{Z}\left\langle r_{V}^{2}\right\rangle \label{r2Z}% \end{equation} where $\rho(\mathbf{r})$ is given in expression (\ref{Ylm}) and $\left\langle r_{V}^{2}\right\rangle $ is given by% \[ \left\langle r_{V}^{2}\right\rangle =\frac{U_{11}^{(2)}}{a^{2}U_{11}}, \] where for the sake of conciseness we wrote $\left\langle r_{V}^{2}% \right\rangle $ in terms of $U_{11}^{(2)}=U_{11}\left( I_{lmn}^{2}\rightarrow I_{lmn}^{4}\right) $, i.e. the radial integrals in $U_{11}^{(2)}$ contain an extra factor of $r^{2}$. Our computation verifies that the charge radius obeys roughly the proportionality relation $\sim r_{0}A^{\frac{1}{3}}$ but overestimates the experimental value of $r_{0}$ by about $80\%$ with parameter Set~I.% Let us now release the constraint $\alpha=\beta=0,$ and allow for small perturbations from the nonlinear $\sigma$ and Skyrme terms. In order to estimate the magnitude of the parameters $\alpha$ and $\beta$ in a real physical case, we perform two fits of the four parameters $\mu$, $\alpha,$ $\beta$ and $\lambda$: Set~II optimizes the masses of the nuclei while Set~III reaches the best agreement with respect to the binding energy per nucleon, $B/A$. Both fits are performed with data from the same subset of the most abundant 140 isotopes as before. The best fits in both cases would lead to small negative values for $\beta$, similar to those of Refs. \cite{Bonenfant:2010ab,Bonenfant:2012kt}. However, since the classical (static) energy of the model is unbounded below if $\alpha,\beta<0$, we impose the constraint $\alpha,\beta\geq0$ from here on to avoid stability problems. (Note that in principle $\beta$ could take small negative values as long as the Skyrme term is overcome by the repulsive Coulomb energy, in which case the physical nuclei would be stable but not the classical soliton.) A summary of the results is presented in Table I while Fig. \ref{FigBoverA} displays the general behavior of $B/A$ as a function of the baryon number for Sets I, II, III, and experimental values.
Note that the proton and neutron masses differ slightly across Sets I, II and III, so for the sake of comparison we use their experimental values in calculating $B/A.$ \begin{figure}[ptbh] \centering\includegraphics[width=0.65\textwidth]{pub20131v6figBsurAall.pdf}\caption{Binding energy per nucleon $B/A$ as a function of the baryon number $A$: The experimental data (empty circles) are shown along with predicted values for the parametrization of Set~I with $\alpha=\beta=0$ (dashed line), for Set~II, the best fit for nuclear masses (dotted line), and for Set~III, the best fit for $B/A$ (solid line), respectively.}%
\label{FigBoverA}%
\end{figure}%
\[%
\begin{tabular}
[c]{|c|c|c|c|c|}\hline\hline
\multicolumn{5}{|c|}{Table I: Sets of parameters}\\\hline\hline
\ & $\quad$Set~I$\quad$ & $\quad$Set~II$\quad$ & $\quad$Set~III$\quad$ & Expt.\\\hline\hline
$\mu$ $(10^{4}$ MeV$^{2})$ & $1.23223$ & $1.02259$ & $1.33515$ & ---\\
$\alpha$ $(10^{-3}$ MeV$^{2})$ & $0$ & $1.48244$ & $0.508933$ & ---\\
$\beta$ $(10^{-8}$ MeV$^{0})$ & $0$ & $1.20427$ & $1.31582$ & ---\\
$\lambda$ $(10^{-3}$ MeV$^{-1})$ & $4.74078$ & $5.70373$ & $4.36994$ & ---\\
$F_{\pi}$ (MeV) & $0$ & $0.15401$ & $0.0902381$ & $186$\\
$m_{\pi}$ (MeV) & $0$ & $0$ & $0$ & $138$\\
$e^{2}$ ($10^{6}$) & --- & $2.59492$ & $2.37494$ & ---\\
$r_{0}$ (fm) & $2.00667$ & $2.27113$ & $1.90139$ & $1.23$\\\hline
\end{tabular}
\]
We find that the two new sets of parameters are very close to Set~I. In order to make a relevant comparison, we look at the relative importance of the four terms in (\ref{model0to6}) and how they scale with respect to the parameters of the model, namely%
\[%
\begin{tabular}
[c]{rccccccc} & $\mu\lambda$ & $:$ & $\alpha\left( \lambda/\mu\right) ^{1/3}$ & $:$ & $\beta\left( \mu/\lambda\right) ^{1/3}$ & $:$ & $\mu\lambda$\\
$\text{Set~I\qquad}$ & $58.42$ & $:$ & $0$ & $:$ & $0$ & $:$ & $58.42$\\
$\text{Set~II\qquad}$ & $58.33$ & $:$ & $1.226\times10^{-5}$ & $:$ & $1.463\times10^{-6}$ & $:$ & $58.33$\\
$\text{Set~III\qquad}$ & $58.35$ & $:$ & $3.507\times10^{-6}$ & $:$ & $1.909\times10^{-6}$ & $:$ & $58.35$%
\end{tabular}
\]
for $\mathcal{L}_{0},\mathcal{L}_{2},\mathcal{L}_{4},$ and $\mathcal{L}_{6},$ respectively. So the nonlinear $\sigma$ and Skyrme terms are found to be very small compared to those of $\mathcal{L}_{0}$ and $\mathcal{L}_{6},$ i.e. by at least five orders of magnitude. This provides support to the assumption that (\ref{FBeM}) is a good approximation to the exact solution. The energy scale $\mu\lambda$ remains approximately the same for all the sets, while the values of $\mu$ and $\lambda$ show noticeable differences. In particular, the fit involving $B/A$ turns out to be somewhat sensitive to these variations, mostly because it involves a mass difference. We also note some variation in the baryonic charge radius $r_{0}=1.3982\left( \lambda/\mu\right) ^{1/3};$ all sets overestimate the experimental value by roughly 80\%. Since setting the parameters mainly involves fixing the relevant energy scale $\mu\lambda$, the process may not be as sensitive to setting a proper length scale for the nucleus, so the predicted value of $r_{0}$ should probably be taken as an estimate rather than a firm prediction.
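The scaling comparison above is easy to reproduce. The short sketch below recomputes the four combinations $\mu\lambda : \alpha(\lambda/\mu)^{1/3} : \beta(\mu/\lambda)^{1/3} : \mu\lambda$ and the radius $r_{0}=1.3982(\lambda/\mu)^{1/3}$ (converted to fm with $\hbar c = 197.327$ MeV\,fm) from the Table I entries.
\begin{verbatim}
# Sketch: term hierarchy and baryonic radius r0 from the Table I parameters.
HBARC = 197.327  # MeV*fm

sets = {  # mu [MeV^2], alpha [MeV^2], beta [dimensionless], lambda [MeV^-1]
    "Set I":   (1.23223e4, 0.0,         0.0,        4.74078e-3),
    "Set II":  (1.02259e4, 1.48244e-3,  1.20427e-8, 5.70373e-3),
    "Set III": (1.33515e4, 0.508933e-3, 1.31582e-8, 4.36994e-3),
}

for name, (mu, alpha, beta, lam) in sets.items():
    scale_L0 = mu * lam                       # L0 and L6 contributions
    scale_L2 = alpha * (lam / mu)**(1.0/3.0)  # nonlinear sigma term
    scale_L4 = beta * (mu / lam)**(1.0/3.0)   # Skyrme term
    r0 = 1.3982 * (lam / mu)**(1.0/3.0) * HBARC
    print(f"{name}: {scale_L0:.2f} : {scale_L2:.3e} : {scale_L4:.3e}"
          f" : {scale_L0:.2f},  r0 = {r0:.3f} fm")
# Reproduces 58.42/58.33/58.35 and r0 = 2.007/2.271/1.901 fm.
\end{verbatim}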
Matching the parameters of the model with those of the original Skyrme Model, we identify $F_{\pi}=4\sqrt{\alpha},\ e^{2}=1/32\beta$, whereas $m_{\pi}=0$ due to the form of the potential. The quantities $F_{\pi}$ and $e^{2}$ take values which are orders of magnitude away from those obtained for the Skyrme Model (see Table I), but this is not surprising since we have assumed from the start that $\alpha$ and $\beta$ are relatively small. Unfortunately, one of the successes of the original Skyrme Model is that it established a link with soft-pion physics by providing realistic values for $F_{\pi}$, $m_{\pi}$ and baryon masses. Such a link here is more obscure. The departure could come from the fact that the parameters of the model are merely bare parameters, and they could differ significantly from their renormalized physical values. In other words, we may have to consider two quite different sets of parameters: a first one, relevant to the perturbative regime of pion physics, where $F_{\pi}$ and $m_{\pi}$ are closer to their experimental values, and a second set which applies to the nonperturbative regime in the case of solitons. In our model, this remains an open question. The model clearly improves the prediction of the nuclear masses and binding energies in the regime where $\alpha$ and $\beta$ are small. Let us look more closely at the results presented in Fig. \ref{FigBoverA}. The experimental data (empty circles) are shown along with predicted values for parametrizations Set~I, Set~II and Set~III (dashed, dotted and solid lines, respectively). Setting $\alpha=\beta=0$ (Set~I) leads to a sharp increase of $B/A$ at low baryon number followed by a regular but slow growth in the heavy nuclei sector. This suggests that heavier nuclei should be more stable, in contradiction to observation. However, the agreement remains within $\sim10\%$ with regard to the prediction of the nuclear masses. This is significantly better than what is obtained with the original Skyrme Model, which overestimates $B/A$ by an order of magnitude. Since $B/A$ depends on the difference between the mass of a nucleus and that of its constituents, it is sensitive to small variations of the nuclear masses, so the results for $B/A$ may be considered as rather good. The second fit (Set~II) is optimized for nuclear masses. The behavior at small $A$ is similar to that of Set~I (as well as that of Set~III), while it reproduces almost exactly the remaining experimental values ($A\gtrsim40$). Finally, the optimization of $B/A$ (Set~III) provides a somewhat better representation for light nuclei at the expense of some of the accuracy found in Set~II for $A\gtrsim40$. Overall, the binding energy is rather sensitive to the choice of parameters. This is partly because the otherwise dominant contributions of $E_{0}$ and $E_{6}$ to the total mass of the nucleus simply cancel out in $B/A$. The difference in behavior between light and heavy nuclei shown by the model may be partly attributed to the (iso)rotational contribution to the mass. The spin of the most abundant isotopes remains small, while isospin can take relatively large values due to the growing disequilibrium between the number of protons and the number of neutrons in heavy nuclei. On the other hand, the moments of inertia increase with $A,$ so the total effect leads to a (iso)rotational energy $E_{\text{r}}<1$ MeV for $A>10$ for all sets of parameters considered, and its contribution to $B/A$ decreases rapidly as $A$ increases.
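Since $\Delta E_{\text{r}}$ is dominated by $AE_{\text{r}}^{N}$ for large nuclei, the large-$A$ plateau of $B/A$ is essentially the nucleon's (iso)rotational energy $E_{\text{r}}^{N}=0.026426\,\mu^{-1/3}\lambda^{-5/3}$, reduced by the (negative) Coulomb difference. A minimal sketch of this estimate, neglecting the Coulomb correction altogether:
\begin{verbatim}
# Sketch: large-A plateau of B/A ~ E_r^N = 0.026426 * mu**(-1/3) * lam**(-5/3),
# i.e. the nucleon's (iso)rotational energy (Coulomb correction neglected).
sets = {  # (mu [MeV^2], lambda [MeV^-1]) from Table I
    "Set I":   (1.23223e4, 4.74078e-3),
    "Set II":  (1.02259e4, 5.70373e-3),
    "Set III": (1.33515e4, 4.36994e-3),
}

for name, (mu, lam) in sets.items():
    ErN = 0.026426 * mu**(-1.0/3.0) * lam**(-5.0/3.0)  # MeV
    print(f"{name}: B/A plateau ~ {ErN:.2f} MeV (before Coulomb reduction)")
# Gives values from a few MeV up to ~10 MeV, in the ballpark of the
# experimental ~8 MeV plateau shown in Fig. (FigBoverA).
\end{verbatim}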
On the contrary, for $A<10,$ the rotational energy is responsible for a larger part of the binding energy, which means that $B/A$ should be sensitive to the way the rotational energy is computed. So clearly, the variations in shape of the baryon density have some bearing on the predictions for the small-$A$ sector, and not only the values of the parameters. To summarize, the main purpose of this work is to propose a model in a regime where the nuclei are described by near-BPS solitons with an approximately constant baryon density configuration. This is achieved with a 4-term generalization of the Skyrme Model in the regime where the nonlinear $\sigma$ and Skyrme terms are considered small. The choice of an appropriate potential $V$ allows one to build constant baryon density near-BPS solitons, i.e. a more realistic description of nuclei as opposed to the more complex configurations found in most extensions of the Skyrme Model (e.g. $A=2$ toroidal, $A=3$ tetrahedral, $A=4$ cubic, ...). Fitting the model parameters, we find a remarkable agreement for the binding energy per nucleon $B/A$ with respect to experimental data. On the other hand, there remain some caveats. First, the Skyrme Model provides a simultaneous description of perturbative pion interactions and nonperturbative baryon physics with realistic values for $F_{\pi}$, $m_{\pi}$ and baryon masses. The connection between the two sectors here seems to be much more intricate. Secondly, there may be room for improvement by proposing more appropriate solutions that would describe equally well the light and heavy nuclei. Finally, the model seems unable to reproduce a constant skin thickness in the baryon or charge density and the experimental size of the nucleus correctly. On the other hand, the concept of BPS-type Skyrmions also arises when one adds a large number of vector mesons to the Skyrme Model, as suggested by recent results based on holographic QCD from Sutcliffe \cite{Sutcliffe:2010et}. Unfortunately, the emerging large-$A$ Skyrmion configurations are rather complicated or simply unknown, so that it has so far been impossible to perform an analysis of the nuclear properties comparable to that presented in this work. More recently, Adam, Naya, Sanchez-Guillen and Wereszczynski \cite{Adam:2013tda, *Adam:2013wya} considered the special case of the pure BPS model ($\alpha=\beta=0$) using the potential $V_{\text{ASW}}$. Although their treatment differs slightly, they find a similar agreement for the binding energy per nucleon. Yet, all approaches clearly suggest that nuclei could be treated as near-BPS Skyrmions. This work was supported by the Natural Sciences and Engineering Research Council of Canada.
\section{Introduction}\label{sec:1} ``What is dark matter (DM)?'' and ``Where does DM come from?'' are two fundamental questions that drive countless particle physicists and cosmologists to work day and night to solve them.\,\,As of now, the only thing we know for sure is that it contributes about 26\,\% of the energy density in the present universe.\,\,The remaining energy density is dominantly attributed to dark energy, which is another mystery physicists aim to understand. The first question concerns the particle nature of DM, such as its mass, spin, and fundamental interactions.\,\,Firstly, the mass of DM can spread over a very broad range from $10^{-15}\,\tx{GeV}$ to $10^{15}\,\tx{GeV}$\,\cite{Baer:2014eja}.\,\,Secondly, it could be a scalar boson, a vector boson, a Dirac fermion, a Majorana fermion, or a Rarita-Schwinger fermion.\,\,Thirdly, it may possess interactions with ordinary matter other than the gravitational interaction.\,\,The second question asks about the production mechanism of DM.\,\,As we know, it can be produced thermally or non-thermally in the early universe.\,\,Lastly, there is a possibility that the universe contains more than one kind of DM, just as the visible world contains many stable particles such as the electron, proton, and neutrinos.\,\,Indeed, there are many efforts along this direction~\cite{Hochberg:2014kqa,Katz:2020ywn,Choi:2021yps,Baek:2013dwa,Aoki:2016glu,Daido:2019tbm,Herms:2019mnu,Yaguna:2021rds}. The most popular thermally-produced DM candidates are weakly interacting massive particles (WIMPs) \cite{Lee:1977ua}, where the annihilation cross sections of DM pairs into standard model (SM) particles determine the DM relic abundance.\,\,Nonetheless, the null results of direct detection experiments have pushed the WIMP scenario into a corner, which motivates physicists to come up with new prospects for DM.\,\,The so-called secluded WIMP scenarios are still viable since they are not strongly constrained by direct detection experiments~\cite{Pospelov:2007mp,Pospelov:2008jd}. Strongly interacting massive particles (SIMPs) \cite{Hochberg:2014dra} are an alternative thermal DM scenario that has attracted attention due to its exotic dynamics, where the DM relic abundance is set by the self-annihilation cross sections of DM number-changing processes.\,\,In particular, a SIMP with a large self-interacting cross section can relax some inconsistencies between N-body simulations and astrophysical observations at small-scale structures (\,$\lesssim$\,1 Mpc) of the universe.\,\,For instance, collisionless cold DM predicts a cuspy density profile in the center of dwarf galaxy halos.\,\,However, what we observe is a relatively flat distribution \cite{Tulin:2017ara}.\,\,This is known as the core-vs-cusp problem.\,\,Besides, collisionless cold DM also predicts dozens of large sub-halos with speeds $v > 25$ km/s in the Milky Way and M31, but no such halos have been discovered \cite{Brooks:2012vi}.\,\,This is commonly named the too-big-to-fail problem.
With the above considerations, we studied in Ref.\,\cite{Ho:2021ojb} the multi-component SIMP scenario by using the effective operator method.\,\,As in the single-component SIMP scenario, the DM relic abundance is determined by the reaction rate of the $3 \to 2$ process as shown in the left graph of Fig.\,\ref{fig:multiSIMP}.\,\,Surprisingly, we noticed that in this scenario there is an irreducible two-loop induced $2 \to 2$ number-conserving process\footnote{Here number-conserving means that the total DM number is conserved.\,\,However, the individual DM densities would change due to the $2 \to 2$ processes.} (see the right graph of Fig.\,\ref{fig:multiSIMP}) that would reshuffle the DM number densities after the chemical freeze-out of DM.\,\,We thus dub this scenario reshuffled SIMP ($r$SIMP) DM.\,\,Note that in the single-component SIMP scenario, since the external legs of such a two-loop diagram are the same particles, there is no redistribution of DM number densities due to this diagram.\,\,Intuitively, one may think that this $2 \to 2$ process is suppressed by the two-loop factor.\,\,However, for a $3 \to 2$ process to take place, it has to capture one extra DM particle whose number yield is Boltzmann-suppressed.\,\,It turns out that the reaction rate of the $2 \to 2$ process dominates over that of the $3 \to 2$ process.\,\,Furthermore, we found that the masses of the DM particles must be nearly degenerate to weaken the reshuffling effect.\,\,Otherwise, the $2 \to 2$ process would efficiently convert the heavy SIMP particle into the light one, leaving essentially no heavy SIMP DM.\footnote{In our perspective, each DM component should have a sizable amount in multi-component DM scenarios.} In order to make our analysis of the $r$SIMP scenario more robust and reliable, we build up a UV complete model in this paper instead of the effective theory.\,\,We consider a two-component SIMP DM model (hereafter called the $r$SIMP model), where the DM is comprised of a complex scalar and a vector-like fermion.\footnote{The two-component SIMP model with a complex scalar and a vector-like fermion is constructed in this paper for the first time.\,\,In Ref.\,\cite{Choi:2021yps}, such a possibility based on U$(1)^{}_\tf{D} \to \mathbb{Z}^{}_2 \times \mathbb{Z}^{}_3$ was mentioned in footnote 2, but without explicit construction.}\,\,In this model, the DM particles carry an accidental $\,\mathbb{Z}^{}_4$ charge after a U$(1)^{}_\tf{D}$ symmetry breaking.\footnote{Note that this discrete symmetry is not inherited from a gauge symmetry in the Krauss-Wilczek manner \cite{Krauss:1988zc}.}\,\,If this U$(1)^{}_\tf{D}$ symmetry is promoted to a gauge symmetry, then a vector-portal interaction naturally arises between the SIMP DM and SM particles.\,\,This interaction is necessary in the SIMP scenario to prevent the heating up of DM due to the $3 \to 2$ process before the chemical freeze-out of DM.\,\,These requirements are known as the SIMP conditions~\cite{Hochberg:2015vrg}. \begin{figure}[t!] 
\centering \hs{0.5cm} \includegraphics[width=0.55\textwidth]{SIMPEO.pdf} \vs{-0.3cm} \caption{The Feynman diagrams of the $3 \to 2$ and the two-loop induced $2 \to 2$ processes in the $r$SIMP scenario, where ${\cal X}^{}_i$ denotes the SIMP particle and the arrow represents the dark charge flow.} \label{fig:multiSIMP} \end{figure} Following this setup, we explicitly compute the annihilation cross sections of the $3 \to 2$ and $2 \to 2$ processes and solve the coupled Boltzmann equations numerically to get the correct number densities of DM.\,\,We find that the reshuffling phenomenon still occurs in the UV complete model.\,\,Thus, our previous effective operator analysis of the $r$SIMP scenario is valid.\,\,Also, the form of the $2 \to 2$ annihilation cross section derived with the effective operator is consistent with the one in this UV complete model if we treat the cut-off scale as the mediator mass in the two-loop diagram.\,\,Again, we emphasize that the $2 \to 2$ process in the multi-component SIMP scenario is generic and cannot be ignored in DM phenomenology, especially in estimating the DM relic abundance.\,\,Adding number-conserving $2 \to 2$ processes to number-changing $3 \to 2$ processes in multi-component SIMP models will not only change the fractions of DM particles but also the total DM number densities.\,\,It can dramatically modify the model parameters that accommodate the correct relic density compared with the case involving only $3 \to 2$ processes. In most SIMP models, the DM is assumed to be a complex scalar in order for the DM number-changing $3 \rightarrow 2$ processes to be allowed.\,\,Typically, one has to choose large enough quartic or cubic couplings of the scalar DM to satisfy the DM relic density and the vacuum stability.\,\,With such couplings, the predicted DM self-interacting cross section may be too big to be compatible with the astrophysical observations from the Bullet and Abell 3827 clusters \cite{Markevitch:2003at,Clowe:2003tk,Massey:2015dkw,Kahlhoefer:2015vua}.\,\,However, in the two-component SIMP model with complex scalar and vector-like fermion DM, this tension can be eased thanks to the reshuffling effect.\,\,For example, if the complex scalar is heavier than the vector-like fermion, the DM self-interacting cross section can be reduced since a portion of the complex scalars annihilate into the vector-like fermions due to the $2 \to 2$ process.\,\,Moreover, the self-interaction of the vector-like fermion corresponds to a four-fermion interaction, which is suppressed by the mass scale of the mediator at low energy.\,\,This is one of the interesting features of this model. 
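To get a rough feeling for why the two-loop $2 \to 2$ process can dominate over the tree-level $3 \to 2$ process, one can compare the rates $\Gamma_{3\to2} \sim n^2 \langle\sigma v^2\rangle$ and $\Gamma_{2\to2} \sim n \langle\sigma v\rangle$ around freeze-out, where the extra power of the Boltzmann-suppressed number density $n$ penalizes the $3 \to 2$ reaction. The sketch below makes this comparison using the cross-section formulas of Sec.\,\ref{sec:3}; all benchmark inputs ($m_X$, couplings, $r_N$, an off-resonance $r_S$, loop functions ${\cal I}_{1,2}\sim0.2$, and $g_{\star s}\simeq10.75$) are illustrative assumptions, not fitted values.
\begin{verbatim}
# Rough sketch: Gamma_{3->2} ~ n^2 <sigma v^2> vs Gamma_{2->2} ~ n <sigma v>
# around freeze-out (x = m_X/T ~ 20). All benchmark inputs are assumptions.
import numpy as np

mX, rN, rS = 30.0, 1.01, 3.2    # MeV; off-resonance rS for the plain formula
lam3, yN = 1.0, 1.0             # couplings (assumed)
I1 = I2 = 0.2                   # two-loop functions, O(0.1) per Fig. (I1,I2)
x, gs, gX = 20.0, 10.75, 2.0    # freeze-out time, g_*s, dof of X (assumed)

# Equilibrium number density of X: n = s(x) * Y_eq(x)
s = 2 * np.pi**2 / 45 * gs * mX**3 / x**3
Yeq = 45 * np.sqrt(2) / (8 * np.pi**3.5) * (gX / gs) * x**1.5 * np.exp(-x)
n = s * Yeq

# 3->2 cross section, Eq. (XXXNN), non-resonant form
sv2 = (lam3**2 * yN**2 / (128 * np.pi * mX**5)
       * (9 - 4*rN**2)**1.5 / (9 - rS**2)**2)

# two-loop 2->2 cross section, Eq. (cNNXX): s-wave + p-wave pieces
pref = (81 * lam3**4 * yN**4 * np.sqrt(rN**2 - 1)
        / (np.pi * (4*np.pi)**8 * rS**4 * mX**2 * rN))
sv = pref * ((rN**2 - 1) * I1**2
             + ((11 - 2*rN**2) * I1**2 + 6 * rN**2 * I2**2) / (4*x))

print(f"Gamma_3->2 / Gamma_2->2 ~ {n * sv2 / sv:.2f}")
# ~0.1 for these inputs: the 2->2 reshuffling rate wins after freeze-out.
\end{verbatim}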
The structure of this paper is as follows.\,\,In the next section, we introduce the $r$SIMP model and give a description of the relevant interactions and masses of the new particles.\,\,In Sec.\,\ref{sec:3}, we write down the formulas for the annihilation cross sections of the $3 \to 2$ and $2 \to 2$ processes.\,\,In Sec.\,\ref{sec:4}, we take into account various theoretical and experimental constraints on this model.\,\,In Sec.\,\ref{sec:5}, we evaluate the relic abundance of the $r$SIMP DM and explain the reshuffling mechanism.\,\,In Sec.\,\ref{sec:6}, we discuss the SIMP conditions.\,\,In Sec.\,\ref{sec:7}, we show the predictions for the DM self-interacting cross section in this model.\,\,Finally, we briefly mention some outlook of this model and conclude our study in Sec.\,\ref{sec:8}.\,\,In the appendices, we demonstrate the computations of the annihilation cross sections of the $3 \to 2$ and $2 \to 2$ processes in the $r$SIMP model. \section{$\bs{r}$SIMP model}\label{sec:2} To demonstrate the redistribution of DM mass densities in the $r$SIMP scenario, we consider one vector-like fermion, $N$, and three complex singlet scalars, $X, S$, and $\Phi$, in addition to the SM particles.\,\,These new particles have dark charges under a gauged U$(1)^{}_\tf{D}$ symmetry, while all SM particles are neutral under this U$(1)^{}_\tf{D}$ symmetry.\,\,We summarize the particle contents and their charge assignments in Tab.\,\ref{tab:1}.\,\,In our setup, $X$ and $N$ are the SIMP DM candidates, and $S$ is an unstable mediator connecting these two particles.\,\,In particular, the $\Phi$ particle triggers the breaking of the U$(1)^{}_\tf{D}$ symmetry.\,\,After the U$(1)^{}_\tf{D}$ symmetry breaking, these new particles possess an accidental ${}^{}\mathbb{Z}^{}_4$ symmetry, which stabilizes $X$ and $N$ and makes them DM candidates. \begin{table}[b!] 
\begin{center} \def\arraystretch{1.3} \begin{tabular}{|c||c||c|c|c|c|} \hline $\vphantom{|_|^|}$ & ~$H$~ & ~$N$~ & ~$X$~ & ~$S$~ & ~$\Phi$~ \\\hline\hline ~\,SU$(2)\vphantom{|_|^|}$~ & ~$\mathbf{2}$~ & ~$\mathbf{1}$~ & ~$\mathbf{1}$~ & ~$\mathbf{1}$~ & ~$\mathbf{1}$~ \\\hline ~\,U$(1)^{}_\tf{Y}\vphantom{|_|^|}$~ & ~$-{}^{}1/2$~ & ~$0$~ & ~$0$~ & ~$0$~ & ~$0$~ \\\hline ~\,U$(1)^{}_\tf{D}\vphantom{|_|^|}$~ & ~$0$~ & ~$-{}^{}1/8$~ & ~$+{}^{}1/12$~ & ~$+{}^{}1/4$~ & ~$-{}^{}1/2$~ \\\hline ~$\mathbb{Z}^{}_4$~ & ~$+{}^{}1$~ & ~$\pm{}^{}i$~ & ~$-1$~ & ~$-1$~ & ~$+{}^{}1$~ \\\hline \end{tabular} \caption{Charge assignments of the fermion and scalars in the $r$SIMP model, where $H$ is the SM Higgs doublet and $i =\sqrt{-1}$.} \vs{-1.0cm} \label{tab:1} \end{center} \end{table} The Lagrangian density for the scalar fields in this model is given by \begin{eqnarray} {\cal L}_\tf{scalar} \,=\, \big({\cal D}^\rho H\big)^{\hs{-0.05cm}\dag} {\cal D}_\rho {}^{} H + \big({\cal D}^\rho X\big)^{\hs{-0.05cm}\dag} {\cal D}_\rho {}^{} X + \big({\cal D}^\rho S\big)^{\hs{-0.05cm}\dag} {\cal D}_\rho {}^{} S + \big({\cal D}^\rho \Phi\big)^{\hs{-0.05cm}\dag} {\cal D}_\rho {}^{} \Phi \,- {\cal V}(H, X, S, \Phi) ~, \end{eqnarray} where ${}^{}{\cal D}_\rho = \partial_\rho + (i/2)^{} g^{}_\tf{W} \tau^a W^a_\rho + i g^{}_\tf{Y} {\cal Q}^{}_\tf{Y} B^{}_\rho + i g^{}_\tf{D} {\cal Q}^{}_\tf{D} C^{}_\rho{}^{}$ denotes the covariant derivative with $g^{}_\tf{W}\,(W^a_\rho)$, $g^{}_\tf{Y}\,(B^{}_\rho{}^{})$, and $g^{}_\tf{D}\,(C^{}_\rho)$ being the SU$(2)$, U$(1)^{}_\tf{Y}$, and U$(1)^{}_\tf{D}$ gauge couplings (fields), respectively; $\tau^a\,\big(a = 1, 2, 3\big)$ the Pauli matrices, and ${\cal Q}^{}_\tf{Y}\,({\cal Q}^{}_\tf{D})$ the hypercharge (dark charge) operator.\,\,The scalar potential \,${\cal V} = {\cal V}(H, X, S, \Phi)$\, is given by \begin{eqnarray}\label{potential} {\cal V} &=& \mu_h^2 {}^{} H^\dag \hs{-0.05cm} H + \mu_X^2 X^\ast \hs{-0.05cm} X + \mu_S^2 {}^{} S^\ast \hs{-0.05cm} S + \mu_\phi^2 {}^{} \Phi^\ast \hs{-0.02cm} \Phi \nonumber\\[0.1cm] && +\, \lambda^{}_h \big(H^\dag \hs{-0.05cm} H {}^{}\big)\raisebox{1pt}{}{\hs{-0.03cm}^2} + \lambda^{}_X \big(X^\ast \hs{-0.05cm} X\big)\raisebox{1pt}{}{\hs{-0.03cm}^2} + \lambda^{}_S \big(S^\ast \hs{-0.05cm} S {}^{} \big)\raisebox{1pt}{}{\hs{-0.03cm}^2} + \lambda^{}_\phi \big(\Phi^\ast \hs{-0.02cm} \Phi\big)\raisebox{1pt}{}{\hs{-0.03cm}^2} \nonumber\\[0.1cm] && +\, \lambda_{h X} \big(H^\dag \hs{-0.05cm} H{}^{}\big) \big(X^\ast \hs{-0.05cm} X\big) + \lambda_{h S} \big(H^\dag \hs{-0.05cm} H{}^{}\big) \big(S^\ast \hs{-0.05cm} S {}^{} \big) + \lambda_{h \phi} \big(H^\dag \hs{-0.05cm} H {}^{}\big) \big(\Phi^\ast \hs{-0.02cm} \Phi\big) \nonumber\\[0.15cm] && +\, \lambda_{X \hs{-0.03cm} S} \big(X^\ast \hs{-0.05cm} X\big) \big(S^\ast \hs{-0.05cm} S {}^{} \big) + \lambda_{X \hs{-0.03cm} \phi} \big(X^\ast \hs{-0.05cm} X\big) \big(\Phi^\ast \hs{-0.01cm} \Phi\big) + \lambda_{S \phi} \big(S^\ast \hs{-0.05cm} S {}^{} \big) \big(\Phi^\ast \hs{-0.02cm} \Phi\big) \nonumber\\[0.1cm] && +\, \sx{1.2}{\big(} \lambda^{}_3 {}^{} X^3 \hs{-0.03cm} S^\ast + \tfrac{1}{\sqrt2} {}^{} \kappa {}^{} \upsilon^{}_\phi {}^{} S^2 \Phi + \text{h.c.}^{} \sx{1.2}{\big)} ~, \end{eqnarray} where $\upsilon^{}_\phi$ is the vacuum expectation value (VEV) of $\Phi$.\,\,The Hermiticity of the scalar potential ${\cal V}$ implies that $\mu_{h,X,S,\phi}^2$ and $\lambda_{h,X,S,\phi, h X, h S, h \phi, X \hs{-0.03cm} S, X \hs{-0.03cm} \phi, S \phi}$ must be real.\,\,For simplicity, we will choose $\,\lambda^{}_3\,$ and \,$\kappa$\, 
to be real and positive because one can redefine the scalar fields $X$ and $\Phi$ to absorb the phases of $\lambda^{}_3$ and $\kappa$. Based on our setup, we require that the VEVs of the scalar fields in this model satisfy the following conditions: \begin{eqnarray} \langle H {}^{} \rangle \,=\, \frac{1}{\sqrt2} \begin{pmatrix} 0 \\ \,\upsilon^{}_h \, \end{pmatrix} ~,\quad \langle \Phi \rangle \,=\, \frac{1}{\sqrt2} {}^{} \upsilon^{}_\phi ~,\quad \langle X \rangle \,=\, \langle S \rangle \,=\, 0 ~, \end{eqnarray} where $\upsilon^{}_h\,\simeq\,246.22\,\,\rm{GeV}$ is the VEV of $H$.\,\,On the other hand, the $\kappa {}^{} \upsilon^{}_\phi$ terms in the potential cause a mass splitting between the real and imaginary parts of the $S$ field.\,\,Thus, after spontaneous symmetry breaking, we can expand the scalar fields around the VEVs as \begin{eqnarray} H \,=\, \frac{1}{\sqrt2} \begin{pmatrix} 0 \\ \upsilon^{}_h + h' \end{pmatrix} ~,\quad \Phi \,=\, \frac{1}{\sqrt2}\big(\upsilon^{}_\phi + \phi' {}^{}{}^{}\big) ~,\quad S \,=\, \frac{1}{\sqrt2}\big(S^{}_\tx{R} + i S^{}_\tx{I}\big) ~. \end{eqnarray} With these parametrizations, the minimum conditions for the scalar potential give \begin{eqnarray}\label{VEV} \frac{{\mathrm{d}} {\cal V}}{{\mathrm{d}} \phi'}\bigg|_\tx{VEV} = \upsilon^{}_\phi \Big({}^{} \mu_\phi^2 + \lambda^{}_\phi \upsilon_\phi^2 + \tfrac{1}{2} \lambda^{}_{h \phi} \upsilon_h^2 \Big) \,=\, 0 ~, \quad \frac{{\mathrm{d}} {\cal V}}{{\mathrm{d}} h'}\bigg|_\tx{VEV} = \upsilon^{}_h \Big({}^{} \mu_h^2 + \lambda^{}_h \upsilon_h^2 + \tfrac{1}{2} \lambda^{}_{h \phi} \upsilon_\phi^2 \Big) \,=\, 0 ~. \end{eqnarray} Solving these two equations, one can express the VEVs in terms of the quadratic and quartic couplings in the scalar potential as \begin{eqnarray} \upsilon^{}_\phi \,=\, \sqrt{ \frac{4 {}^{} \lambda^{}_h {}^{} \mu_\phi^2 - 2 {}^{} \lambda^{}_{h \phi} {}^{} \mu_h^2} {\lambda_{h \phi}^2 - 4 {}^{} \lambda^{}_ h \lambda^{}_\phi} } ~,\quad \upsilon^{}_h \,=\, \sqrt{ \frac {4 {}^{} \lambda^{}_\phi {}^{} \mu_h^2 - 2 {}^{} \lambda^{}_{h \phi} {}^{} \mu_\phi^2} {\lambda_{h \phi}^2 - 4 {}^{} \lambda^{}_ h \lambda^{}_\phi} } ~. \end{eqnarray} Besides, the masses of $X, S^{}_\tx{R}$, and $S^{}_\tx{I}$ are given by \begin{eqnarray} m_X^2 \,=\, \mu_X^2 + \tfrac{1}{2} \sx{1.1}{\big(} \lambda^{}_{h X} {}^{} \upsilon_h^2 + \lambda^{}_{X \hs{-0.03cm} \phi} \upsilon_\phi^2 {}^{} \sx{1.1}{\big)} ~,\quad m_{S_\tx{R},{}^{}S_\tx{I}}^2 \,=\, \mu_S^2 + \tfrac{1}{2} \sx{1.1}{\big(} \lambda^{}_{h S} {}^{} \upsilon_h^2 + \lambda^{}_{S \phi} \upsilon_\phi^2 {}^{} \sx{1.1}{\big)} \pm \kappa {}^{} \upsilon_\phi^2 ~. \end{eqnarray} Also, the $\lambda^{}_{h \phi}$ term in the scalar potential induces a mass mixing between $h'$ and $\phi'$.\,\,In the basis $\big({}^{}{}^{}h' \,\,\, \phi'{}^{}{}^{}\big)\raisebox{1pt}{}{\hs{-0.05cm}^\tf{T}}$, the corresponding mass mixing matrix is written as \begin{eqnarray}\label{mixing} M^2_{h \phi} \,=\, \begin{pmatrix} 2 {}^{} \lambda^{}_h \upsilon_h^2 & \lambda^{}_{h \phi} \upsilon^{}_h \upsilon^{}_\phi \\[0.1cm] \,\lambda^{}_{h \phi} \upsilon^{}_h \upsilon^{}_\phi & 2 {}^{} \lambda^{}_\phi \upsilon_\phi^2 \end{pmatrix} ~. 
\end{eqnarray} Here we have used the relations in Eq.\,\eqref{VEV} to simplify the form of $M^2_{h \phi}$.\,\,Upon diagonalizing $M^2_{h \phi}$, we obtain the mass eigenstates $h$ and $\phi$ with their respective masses $m^{}_h$ and $m^{}_\phi$ given by \begin{eqnarray} \begin{pmatrix} \, h' \, \\ \, \phi' \, \end{pmatrix} \,=\, \begin{pmatrix} \,\cos \alpha && -\sin\alpha \\ \,\sin\alpha && \cos \alpha \end{pmatrix} \begin{pmatrix} \, h \, \\ \, \phi \, \end{pmatrix} \,\equiv\, {\cal O}^{}_\alpha \begin{pmatrix} \, h \, \\ \, \phi \, \end{pmatrix} ~,\quad {\cal O}^\tf{T}_\alpha M^2_{h \phi} {\cal O}^{}_\alpha \,=\, \tx{diag} \big( m_h^2 {}^{},{}^{} m_\phi^2 {}^{} \big) ~, \end{eqnarray} \vs{-0.3cm} \begin{eqnarray} m_{h,\phi}^2 \,=\, \lambda^{}_h \upsilon_h^2 + \lambda^{}_\phi \upsilon_\phi^2 \pm \sqrt {\big( \lambda^{}_h \upsilon_h^2 - \lambda^{}_\phi \upsilon_\phi^2 {}^{} \big) \raisebox{0.5pt}{$\hs{-0.05cm}^2$} + \big( \lambda^{}_{h \phi} \upsilon^{}_h \upsilon^{}_\phi {}^{} \big) \raisebox{0.5pt}{$\hs{-0.05cm}^2$}} ~,\quad \tan (2 {}^{} \alpha) \,=\, \frac {\lambda^{}_{h \phi} \upsilon^{}_h \upsilon^{}_\phi} {\lambda^{}_h \upsilon_h^2 - \lambda^{}_\phi \upsilon_\phi^2} ~,\quad \end{eqnarray} where $h$ denotes the observed Higgs boson with $m^{}_h \simeq 125.1\,\tx{GeV}$, and $\phi$ is a new neutral scalar with $m_\phi$ as a free parameter.\,\,In our study, we will assume that the mass splitting of $S^{}_\tx{R}$ and $S^{}_\tx{I}$ and the mass mixing of $h$ and $\phi$ are negligibly small for simplicity.\,\,In such cases, the masses of $S^{}_\tx{R}, S^{}_\tx{I}, h$, and $\phi$ reduce to \begin{eqnarray} m_{S_\tx{R}}^2 \simeq\, m_{S_\tx{I}}^2 \equiv\, m_S^2 \,=\, \mu_S^2 + \tfrac{1}{2} \sx{1.1}{\big(} \lambda^{}_{h S} {}^{} \upsilon_h^2 + \lambda^{}_{S \phi} \upsilon_\phi^2 {}^{} \sx{1.1}{\big)} ~,\quad m_h^2 \,=\, 2 {}^{} \lambda^{}_h \upsilon_h^2 ~,\quad m_\phi^2 \,=\, 2 {}^{} \lambda^{}_\phi \upsilon_\phi^2 ~. \label{Scalar_mass} \end{eqnarray} The Lagrangian density responsible for the mass and the interactions of the newly added dark fermion $N$ is given by \begin{eqnarray}\label{Yukawa} {\cal L}^{}_N \,=\, \overline{N} \big( i \gamma^\rho {\cal D}_\rho - m^{}_N \big) N - \tfrac{1}{2} \sx{1.1}{\big(} \,y^{}_N \overline{N\raisebox{0.5pt}{$^\tf{c}$}} \hs{-0.03cm} N S + \tx{h.c.} \sx{1.1}{\big)} ~, \end{eqnarray} where $m^{}_N$ is the Dirac mass of $N$, $y^{}_N$ is the Yukawa coupling, and the superscript $\tf{c}$ refers to charge conjugation.\,\,Again, we will take $y^{}_N$ to be real and positive by absorbing its phase into the field $N$ or $S$ without loss of generality.\,\,Note that the $S$ particle can decay into a pair of $N$ if $m^{}_S > 2{}^{}m^{}_N$ and into three $X$ particles if $m^{}_S > 3{}^{}m^{}_X$.\,\,Therefore, even though $S$ has a $\,\mathbb{Z}^{}_4$ charge, it is still not suitable to be a DM candidate if $m^{}_S > 2 m^{}_N$ or $3{}^{}m^{}_X$. 
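As a cross-check of the $h$--$\phi$ mixing formulas, the snippet below builds $M^2_{h\phi}$ for a sample parameter point and verifies that its eigenvalues and mixing angle agree with the closed-form expressions for $m^2_{h,\phi}$ and $\tan(2\alpha)$ quoted above; the values of $\lambda_\phi$, $\lambda_{h\phi}$, and $\upsilon_\phi$ are placeholders.
\begin{verbatim}
# Sketch: verify the h-phi mass/mixing formulas for a sample point.
import numpy as np

vh, mh = 246.22, 125.1                      # GeV (SM inputs)
lam_h = mh**2 / (2 * vh**2)                 # from m_h^2 = 2*lam_h*vh^2
lam_phi, lam_hphi, vphi = 0.1, 1e-3, 10.0   # placeholder dark-sector values

M2 = np.array([[2 * lam_h * vh**2,     lam_hphi * vh * vphi],
               [lam_hphi * vh * vphi,  2 * lam_phi * vphi**2]])
eigvals = np.linalg.eigvalsh(M2)            # ascending: (m_phi^2, m_h^2) here

# Closed-form masses and mixing angle quoted in the text
avg = lam_h * vh**2 + lam_phi * vphi**2
rad = np.sqrt((lam_h * vh**2 - lam_phi * vphi**2)**2
              + (lam_hphi * vh * vphi)**2)
tan2a = lam_hphi * vh * vphi / (lam_h * vh**2 - lam_phi * vphi**2)

print(np.sqrt(eigvals), np.sqrt([avg - rad, avg + rad]))  # agree
print(f"mixing angle alpha ~ {0.5 * np.arctan(tan2a):.2e} rad")
\end{verbatim}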
The Lagrangian density for the ${}^{}\tx{SU}(2) \otimes \tx{U}(1)^{}_\tf{Y} \otimes \tx{U}(1)^{}_\tf{D}$ gauge bosons is given by \begin{eqnarray} {\cal L}_\tf{gauge} \,=\, -\,\tfrac{1}{4} {}^{} W^{3 \rho\sigma} W^3_{\rho \sigma} - \tfrac{1}{4} {}^{} B^{\rho \sigma} \hs{-0.05cm} B^{}_{\rho \sigma} - \tfrac{1}{4} {}^{} C^{\rho \sigma} \hs{-0.03cm} C^{}_{\rho \sigma} - \tfrac{1}{2} {}^{} s_\epsilon {}^{} B^{\rho \sigma} \hs{-0.03cm} C^{}_{\rho \sigma} - \tfrac{1}{2} {}^{} m_C^2 {}^{}{}^{} C^{\rho} C^{}_{\rho} ~, \end{eqnarray} where $W^3_{\rho \sigma} = \partial^{}_\rho W^3_\sigma - \partial^{}_\sigma W^3_\rho + g^{}_\tf{W} \big( W^1_\rho W^2_\sigma - W^1_\sigma W^2_\rho \big)$, $B^{}_{\rho \sigma} = \partial^{}_\rho B^{}_\sigma - \partial^{}_\sigma B^{}_\rho$, and $C^{}_{\rho \sigma} = \partial^{}_\rho C^{}_\sigma - \partial^{}_\sigma C^{}_\rho$ are the field strength tensors of the gauge bosons, $s_\epsilon \equiv \sin \epsilon$ is the kinetic mixing parameter, and $m^{}_C = \frac{1}{2} {}^{} g^{}_\tf{D} \upsilon^{}_\phi$ arises from the $|{\cal D}^\rho \Phi|^2$ term after the $\tx{U}(1)^{}_\tf{D}$ symmetry breaking. After the breakdown of the electroweak symmetry, the kinetic and mass mixing matrices of the gauge fields in the basis $\big(B \,\,\, W^3 \, C{}^{}{}^{}\big)\raisebox{1pt}{}{\hs{-0.05cm}^\tf{T}}$ are respectively given by \begin{eqnarray} K^{}_G \,=\, \begin{pmatrix} 1 & 0 & s_\epsilon\, \\ 0 & 1 & 0 \\ \,\,s_\epsilon & 0 & 1 \\ \end{pmatrix} ~,\quad M^2_G \,=\, \frac{1}{4} \begin{pmatrix} g_\tf{Y}^2 {}^{} \upsilon_h^2 & -{}^{}g^{}_\tf{W} {}^{} g^{}_\tf{Y} {}^{} \upsilon_h^2 & 0 \\[0.1cm] -{}^{}g^{}_\tf{W} {}^{} g^{}_\tf{Y} {}^{} \upsilon_h^2 & g_\tf{W}^2 {}^{} \upsilon_h^2 & 0 \\[0.1cm] 0 & 0 & g_\tf{D}^2 \upsilon_\phi^2\,{}^{}{}^{} \\ \end{pmatrix} ~. \end{eqnarray} To write the kinetic terms in canonical form, it is known that one can diagonalize the matrix $K^{}_G$ without changing the diagonal elements by utilizing a general linear transformation ${\cal T}$, and subsequently diagonalize $M^2_G$ with an orthogonal matrix ${\cal O}^{}_{\tf{W} \xi}$ as \begin{eqnarray}\label{TO} {\cal T} \,=\, \begin{pmatrix} \,\,1 & 0 & -{}^{}t_\epsilon{}^{}{}^{} \\ \,\,0 & 1 & 0 \\ \,\,0 & 0 & c_\epsilon \\ \end{pmatrix} ~,\quad {\cal O}^{}_{\tf{W} \xi} \,=\, \begin{pmatrix} \,c^{}_\tf{W} & -{}^{}s^{}_\tf{W} & 0\,{}^{} \\ \,s^{}_\tf{W} & c^{}_\tf{W} & 0\,{}^{} \\ \,0 & 0 & 1\,{}^{} \\ \end{pmatrix} \hs{-0.2cm} \begin{pmatrix} \,\,1& 0 & 0\,{}^{} \\ \,\,0 & c^{}_\xi & -{}^{}s^{}_\xi \\ \,\,0 & s^{}_\xi & c^{}_\xi \\ \end{pmatrix} ~, \end{eqnarray} where $t_\epsilon \equiv \tan \epsilon{}^{}{}^{}, c_\epsilon \equiv \cos \epsilon$, and $c^{}_\theta \equiv \cos \theta$ and $s^{}_\theta \equiv \sin \theta$ with $\theta = \tf{W}, \xi$.\,\,Upon diagonalizing $M^2_G{}^{}$, we get the mass eigenstates of the gauge bosons $A, Z$, and $Z'$ as \begin{eqnarray}\label{BWC} \begin{pmatrix} B \\ W^3 \\ C \end{pmatrix} \,=\, {\cal T} {\cal O}^{}_{\tf{W} \xi} \begin{pmatrix} A \\ Z \\ \,Z' \end{pmatrix} ~,\quad \big( {\cal T} {\cal O}^{}_{\tf{W} \xi} \big)\raisebox{1pt}{}{\hs{-0.05cm}^\tf{T}} M^2_G {}^{}{}^{} {\cal T}{\cal O}^{}_{\tf{W} \xi} \,=\, \tx{diag}\big( {}^{}{}^{} 0 {}^{},{}^{} m_Z^2 {}^{},{}^{} m_{Z'}^2 \big) ~, \end{eqnarray} \vs{-0.5cm} \begin{eqnarray}\label{mixG} \tan \tf{W} \,=\, \frac{g^{}_\tf{Y}}{g^{}_\tf{W}} ~,\quad \tan (2{}^{}\xi) \,=\, \frac{m_{\bar{Z}}^2 {}^{}{}^{} s^{}_{2\epsilon} {}^{}{}^{} s^{}_\tf{W}} {m_{\bar{Z}}^2 \big( c_\epsilon^2 - s_\epsilon^2 s^2_\tf{W} \big) - m_C^2} ~,\quad m_{\bar{Z}}^2 \,=\, 
\tfrac{1}{4} \big({}^{}{}^{} g_\tf{W}^2 + g_\tf{Y}^2 \big) \upsilon_h^2 ~, \end{eqnarray} where $s_\tf{W}^2 \simeq 0.23$, and the physical gauge boson masses are given by \begin{eqnarray}\label{mG2} m_A^2 \,=\, 0 ~,\quad m_{\white{\bar{\black{Z}}}}^2 \,=\, m_{\bar{Z}}^2 \big(1 + s^{}_\tf{W} {}^{} t_\epsilon {}^{} t_\xi {}^{} \big) ~,\quad m_{Z'}^2 \,=\, \frac{m_C^2}{c_\epsilon^2 \big(1 + s^{}_\tf{W} {}^{} t_\epsilon {}^{} t_\xi {}^{} \big)} \end{eqnarray} with $t^{}_\xi \equiv \tan \xi$.\,\,Here $A$ and $Z$ are the photon and the neutral massive gauge boson of the SM, respectively, and $Z'$ is a new massive gauge boson in the dark sector. As we shall see, we are interested in the case where $\epsilon \ll 1$ and $m_Z^2 \gg m_{Z'}^2$, for which the second equation in Eq.\,\eqref{mixG} together with Eq.\,\eqref{mG2} reduces to \begin{eqnarray} t^{}_\xi \,\simeq\, s^{}_\xi \,\simeq\, \frac{m_{\bar{Z}}^2}{m_{\bar{Z}}^2-m_C^2} {}^{} s^{}_\tf{W} \epsilon \,\simeq\, \frac{m_Z^2}{m_Z^2 - m_{Z'}^2} {}^{} s^{}_\tf{W} \epsilon \,\simeq\, s^{}_\tf{W} \epsilon ~. \end{eqnarray} With this approximation and Eqs.\,\eqref{TO} and \eqref{BWC}, the covariant derivative (here we only show the dark gauge interaction) becomes \begin{eqnarray}\label{Drho} {\cal D}_\rho \,\supset\, i \big({}^{}{}^{} g^{}_\tf{D} {\cal Q}^{}_\tf{D} - g^{}_e {}^{}{}^{} c^{}_\tf{W} \epsilon {}^{} {\cal Q}^{}_e \big) Z_\rho' ~, \end{eqnarray} where ${\cal Q}^{}_e = \frac{1}{2} \tau^3 + {\cal Q}^{}_Y$ is the electromagnetic charge operator in units of $g^{}_e = g^{}_\tf{W} s^{}_\tf{W} \sim 0.3$.\,\,This interaction is crucial when we discuss the kinetic equilibrium between the dark sector and the SM sector. \section{Annihilation cross sections in the dark sector}\label{sec:3} In this section, we present the formulas for the annihilation cross sections of the $3 \to 2$ and $2 \to 2$ processes in the dark sector.\,\,The detailed derivations of these cross sections can be found in the appendices.\,\,The relevant Lagrangian for the $3 \to 2$ and $2 \to 2$ processes is given by \begin{eqnarray} {\cal L}^{}_\tf{ann} \,=\, -\,\lambda^{}_3 \sx{1.2}{\big[} X^3 \hs{-0.03cm} S^\ast + (X^\ast)^3 \hs{-0.03cm} S {}^{}{}^{} \sx{1.2}{\big]} -\tfrac{1}{2}{}^{}{}^{}y^{}_N \sx{1.1}{\big(} \,{}^{} \overline{N\raisebox{0.5pt}{$^\tf{c}$}} \hs{-0.03cm} N S + \overline{N} N\raisebox{0.5pt}{$^\tf{c}$} \hs{-0.03cm} S^\ast \sx{1.1}{\big)} ~. \end{eqnarray} With these interactions and \,U$(1)^{}_\tf{D}$ charge conservation, the possible $3 \to 2$ processes are $X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}, X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}$, and $X\hs{-0.03cm}N\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{X}$ (here we have omitted their charge conjugation processes).\,\,For these processes to take place, the masses of $X$ and $N$ should satisfy the relation $3{}^{}m^{}_X > 2{}^{}m^{}_N > m^{}_X$, under which the $2 \to 3$ and $2 \to 4$ processes such as $\bar{N}\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}X\hs{-0.03cm}X$ and $X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}\hs{-0.03cm}N\hs{-0.03cm}\bar{N}$, etc., are kinematically forbidden.\,\,On the other hand, the $2 \to 2$ processes $N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}$ or $X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}$ can be induced via two-loop diagrams.\,\,The Feynman diagrams of these $3 \to 2$ and $2 \to 2$ processes are depicted in Fig.\,\ref{fig:ann}. \begin{figure}[t!] 
\hs{0.2cm} \centering \includegraphics[width=0.31\textwidth]{XXXNN.pdf} \hs{0.2cm} \includegraphics[width=0.31\textwidth]{XXNXN.pdf} \hs{0.2cm} \includegraphics[width=0.31\textwidth]{XNNXX.pdf} \\[0.5cm] \includegraphics[width=0.41\textwidth]{NNXX.pdf} \hs{0.3cm} \includegraphics[width=0.41\textwidth]{XXNN.pdf} \vs{-0.3cm} \caption{The Feynman diagrams of the $3 \to 2$ and $2 \to 2$ processes in the $r$SIMP model, where the arrows represent the direction of dark charge flow.} \label{fig:ann} \end{figure} First, the non-thermally-averaged $3 \to 2$ annihilation cross sections are computed as \begin{eqnarray} (\sigma v^2)_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}} &=& \frac{\lambda^2_3 {}^{}{}^{} y^2_N}{128 {}^{} \pi {}^{} m^5_X} \frac{\big({}^{}9 - 4{}^{} r_N^2\big)^{\hs{-0.05cm}3/2}}{\big({}^{}9 - r_S^2{}^{}\big) \raisebox{1pt}{$^{\hs{-0.05cm}2}$}} \label{XXXNN} ~,\quad \\[0.15cm] (\sigma v^2)_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}} &=& \frac{9\sqrt{3}\,\lambda^2_3 {}^{}{}^{} y^2_N}{32{}^{}\pi{}^{}m^5_X} \frac{\big(1 + r^{}_N\big)\big(1 + 2{}^{}r^{}_N + 2{}^{}r_N^2 \big) \sqrt{3 + 8{}^{}r^{}_N +4{}^{}r_N^2}} {\big(2 + r^{}_N\big)\raisebox{1pt}{$^{\hs{-0.05cm}2}$} \sx{1.0}{\big[} r_S^2\big(1 + r^{}_N\big) + 2{}^{}r^{}_N \sx{1.0}{\big]}\raisebox{1pt}{$^{\hs{-0.05cm}2}$}} ~, \label{XXNXN} \end{eqnarray} where $r^{}_{N,S} \equiv m^{}_{N,S}/m^{}_X$, and we demand that $3/2 > r^{}_N > 1/2$ and $r^{}_S > 2{}^{}r^{}_N$.\,\,Notice that $(\sigma v^2)_{\hs{-0.03cm}X\hs{-0.03cm}N\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{X}} = {\cal O}(v^2)$ is $p\,$-wave suppressed.\,\,Here we have applied the Feynman rules for fermion-number-violating interactions to derive these cross sections \cite{Denner:1992vza}.\,\,In our study, we will consider the resonant effect for SIMP DM \cite{Choi:2016hid,Ho:2017fte}, where $r^{}_S \simeq 3$, to reduce the required values of $\lambda^{}_3$ and $y^{}_N$ and thus evade the perturbativity bounds.\,\,For the resonant SIMP DM, we have to adopt the Breit-Wigner form of \eqref{XXXNN} with a nonvanishing velocity of DM in the center-of-mass energy, $(p^{}_1 + p^{}_2 + p^{}_3)^2 \simeq 9{}^{}m_X^2\big( 1+ 2\beta/3\big)$, as \cite{Gondolo:1990dk,Choi:2016hid} \begin{eqnarray} (\sigma v^2)^\tf{BW}_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}} &=& \frac{c^{}_X}{m_X^5} \frac{\gamma^2_S}{\big(\epsilon^{}_S - 2\beta/3 {}^{} \big)\raisebox{-0.5pt}{$^{\hs{-0.03cm}2}$} + \gamma_S^2} ~, \quad c^{}_X \,=\, \frac{2\pi {}^{} \lambda_3^2}{y_N^2} \frac {r_S^2 \big({}^{}9 - 4{}^{} r_N^2\big)^{\hs{-0.05cm}3/2}}{\big({}^{}r_S^2 - 4{}^{}r_N^2\big) \raisebox{1pt}{$^{\hs{-0.05cm}3}$}} ~, \end{eqnarray} where $\beta \equiv \frac{1}{2}\big(v_1^2+v_2^2+v_3^2 {}^{} \big)$ with $v^{}_i$ the speeds of the three initial $X$ particles.\,\,In this expression, $\epsilon^{}_S$ indicates the level of degeneracy between $m^{}_S$ and $3{}^{}m^{}_X$, and $\gamma^{}_S$ is the normalized dimensionless width of the resonance: \begin{eqnarray} \epsilon^{}_S &\equiv& \frac{m_S^2 - 9{}^{}m_X^2}{9{}^{}m_X^2} \,=\, \frac{r_S^2}{9}-1 ~,\quad \\[0.1cm] \gamma^{}_S &\hs{-0.2cm}\equiv\hs{-0.2cm}& \frac{m^{}_S \Gamma^{}_S}{9{}^{}m_X^2} \,=\, \frac{y_N^2 r_S^2}{144{}^{}\pi} \bigg(1-\frac{4{}^{}r_N^2}{r_S^2}\bigg)^{\hs{-0.15cm}3/2} ~. 
\end{eqnarray} Here the decay rate of the $S$ particle is given by\footnote{As mentioned in the previous section, the $S$ particle can also decay into three $X$ particles if this is kinematically allowed.\,\,However, since we are interested in the mass region where $m^{}_S \simeq 3{}^{}m^{}_X$, the decay rate of $S \to \bar{X}\hs{-0.03cm}\bar{X}\hs{-0.03cm}\bar{X}$ is suppressed by phase space even if $\lambda^{}_3 \sim {\cal O}(10)$.\,\,Thus, we ignore this decay mode in our numerical study.} \begin{eqnarray} \Gamma^{}_S \,=\, \Gamma \big(S \to \bar{N}\hs{-0.02cm}\bar{N} {}^{} \big) \,=\, \frac{y_N^2 m^{}_S}{16{}^{}\pi} \bigg(1-\frac{4{}^{}m_N^2}{m_S^2}\bigg)^{\hs{-0.15cm}3/2} ~. \end{eqnarray} Employing the formula in Ref.\,\cite{Choi:2017mkk}, the thermally-averaged annihilation cross section for the process $X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}$ near the resonance is then \begin{eqnarray} \langle \sigma v^2 \rangle_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}} \,=\, \frac{x^3}{2} \hs{-0.03cm} \int_{\hs{-0.03cm}0}^{\infty} \hs{-0.05cm} {\mathrm{d}} \beta \,(\sigma v^2)^\tf{BW}_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}} \, \beta^2 \exp\hs{-0.08cm}\big({-}{}^{}x \beta {}^{} \big) ~, \end{eqnarray} where $x \equiv m^{}_X/T$ is the dimensionless cosmic time variable with $T$ the thermal plasma temperature.\,\,For the process $X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}$, we simply take $\langle\sigma v^2\rangle_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}} \simeq (\sigma v^2)_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}}$. Next, the thermally-averaged cross sections for the two-loop induced $2 \to 2$ processes are calculated as \begin{eqnarray} \hs{-0.8cm} \langle \sigma v \rangle^{2\tf{-loop}}_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}} &\hs{-0.2cm}=\hs{-0.2cm}& \frac {81{}^{} \lambda_3^4 {}^{}{}^{} y_N^4 \sqrt{r_N^2-1}} {\pi{}^{}(4\pi)^8{}^{}r_S^4{}^{}{}^{}m_X^2 r^{}_N} \Bigg[\hs{-0.03cm} \big(r_N^2-1\big) |{}^{}{}^{}{\cal I}^{}_1|^2 + \frac {\big(11-2{}^{}r_N^2\big) |{}^{}{}^{}{\cal I}^{}_1|^2 + 6{}^{}r_N^2|{}^{}{}^{}{\cal I}^{}_2|^2} {4{}^{}x} \Bigg] \label{cNNXX} \,, \\[0.2cm] \hs{-0.8cm} \langle \sigma v \rangle^{2\tf{-loop}}_{\hs{-0.05cm}X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}} &\hs{-0.2cm}=\hs{-0.2cm}& \frac {81{}^{} \lambda_3^4 {}^{}{}^{} y_N^4 {}^{} r_N^2 \sqrt{1- r_N^2}} {\pi{}^{}(4\pi)^8{}^{}r_S^4{}^{}{}^{}m_X^2} \Bigg[\hs{-0.03cm} \big(1-r_N^2\big) |{}^{}{}^{}{\cal I}^{}_2|^2 + \frac {2\big(1+2{}^{}r_N^{-2}\big) |{}^{}{}^{}{\cal I}^{}_1|^2 + 3\big(5{}^{}r_N^2-2\big)|{}^{}{}^{}{\cal I}^{}_2|^2} {4{}^{}x} \Bigg] \label{cXXNN} \,, \end{eqnarray} where $\,{\cal I}^{}_{1,2} = {\cal I}^{}_{1,2}(r^{}_N,r^{}_S)$ are two-loop functions in the form of quintuple integrals, \begin{eqnarray}\label{I1I2} {\cal I}^{}_{1,2}(r^{}_N,r^{}_S) \,=\, \int_{\hs{-0.02cm}0}^1 \hs{-0.05cm} {\mathrm{d}} z^{}_1 \int_{\hs{-0.02cm}0}^1 \hs{-0.05cm} {\mathrm{d}} z^{}_2 \int_{\hs{-0.02cm}0}^{1-z^{}_2} \hs{-0.05cm} {\mathrm{d}} z^{}_3 \int_{\hs{-0.02cm}0}^{{}^{}z^{}_1(1-z^{}_1)} \hs{-0.05cm} {\mathrm{d}} z^{}_4 \int_{\hs{-0.02cm}0}^1 \hs{-0.05cm} {\mathrm{d}} z^{}_5 \,{}^{}{\cal F}^{}_{1,2}(r^{}_N,r^{}_S) \end{eqnarray} with \begin{eqnarray} {\cal F}^{}_1(r^{}_N,r^{}_S) &=& \frac {r_S^2 {}^{} z_5^2 \sx{1.1}{\big[} 2P^2 z_5^3 - \big(P^2 + 3{}^{}Q^2 \big) z_5^2 + \big(2{}^{}Q^2 + 3 \big) z^{}_5 - 2 {}^{} \sx{1.1}{\big]}} {2 \big(P^2 
z_5^2 - Q^2 z^{}_5 + 1 \big)\raisebox{1pt}{$^{\hs{-0.05cm}2}$}} ~, \\[0.15cm] {\cal F}^{}_2(r^{}_N,r^{}_S) &=& \frac{r_S^2 (1-z^{}_2-z^{}_3) {}^{} z_5^3 \big(2P^2 z_5^2 - 3{}^{}Q^2 z^{}_5 + 3 \big)} {2 \big(P^2 z_5^2 - Q^2 z_5 + 1 \big)\raisebox{1pt}{$^{\hs{-0.05cm}2}$}} ~, \end{eqnarray} \vs{-0.3cm} \begin{eqnarray} P^2 &=& \begin{cases} \,z^{}_4 {}^{} \sx{1.1}{\big[} {}^{} r_N^2 (z^{}_2-z^{}_3+1)(z^{}_2-z^{}_3-1) + 1 \sx{1.1}{\big]} &\,\,\text{for} \quad N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X} \\[0.3cm] \,z^{}_4 {}^{} \sx{1.1}{\big[} {}^{} r_N^2 (z^{}_2+z^{}_3 - 1)^2 - (2z^{}_2 - 1)(2z^{}_3 -1) \sx{1.1}{\big]} &\,\,\text{for} \quad X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N} \end{cases} ~,\quad \\[0.2cm] Q^2 &=& \begin{cases} \,1 + z^{}_4 {}^{} \sx{1.1}{\big[} 2{}^{}r_N^2 (z^{}_2+z^{}_3-1) - r_S^2 (z^{}_2+z^{}_3) + 1\sx{1.1}{\big]} &\text{for} \quad N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X} \\[0.3cm] \,1 + z^{}_4 {}^{} \sx{1.1}{\big[} \big(2 - r_S^2\big) (z^{}_2+z^{}_3) -1 \sx{1.1}{\big]} &\text{for} \quad X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N} \end{cases} ~. \end{eqnarray} We present typical values of $\,{\cal I}^{}_1$ and $\,{\cal I}^{}_2$ for $r^{}_S \simeq 3$ and $3/2 > r^{}_N > 1/2$ in Fig.\,\ref{fig:I1I2}.\,\,Note that $\langle \sigma v \rangle^{2\tf{-loop}}_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}}$ and $\langle \sigma v \rangle^{2\tf{-loop}}_{\hs{-0.03cm}X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}}$ are dominated by the $p\,$-wave contributions if the masses of $N$ and $X$ are degenerate.\,\,It is worth mentioning that the $2 \to 2$ annihilation cross sections in Eqs.\,\eqref{cNNXX} and \eqref{cXXNN} are in agreement with the ones derived by the effective operator approach, where we introduce $c/(2!\Lambda) X^3 \overline{N\raisebox{0.5pt}{$^\tf{c}$}} \hs{-0.03cm} N$ with $c$ the coupling constant and $\Lambda$ the cutoff scale of the theory \cite{Ho:2021ojb}.\footnote{For instance, in the case of $N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}$ with $r^{}_N \simeq 1$ and $\Lambda \sim m^{}_S \simeq 3{}^{}m^{}_X$, the two-loop induced annihilation cross sections in the UV complete model and the effective theory are approximately given by \begin{eqnarray} \langle \sigma v \rangle^\tf{UV}_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}} \approx \frac {243{}^{} \lambda_3^4 {}^{}{}^{} y_N^4 \sqrt{r_N^2-1}} {2\pi{}^{}(4\pi)^8{}^{}x{}^{}m_X^2} \bigg(\frac{m^{}_X}{m^{}_S}\bigg)^{\hs{-0.13cm}4} |{}^{}{}^{}{\cal I}^{}_2|^2 ~,\quad \langle \sigma v \rangle^\tf{EFT}_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}} \approx \frac {243{}^{}{}^{}c^4 \sqrt{r_N^2-1}} {2\pi{}^{}(4\pi)^8{}^{}x{}^{}m_X^2} \bigg(\frac{m^{}_X}{\Lambda}\bigg)^{\hs{-0.13cm}4} |{}^{}{}^{}{\cal I}^{}_\Lambda|^2 ~, \end{eqnarray} where $\,{\cal I}^{}_2 = {\cal I}^{}_2\big(r^{}_N=1,r^{}_S=3\big) \simeq 0.27$ and $\,{\cal I}^{}_\Lambda = {\cal I}^{}_\Lambda\big(r^{}_S = 3\big) \simeq 0.45$ \cite{Ho:2021ojb}. } In fact, the $X$ and $N$ particles can also annihilate into each other via one-loop diagrams involving the $\lambda_{X \hs{-0.03cm} S}$ term and via $Z'$-mediated diagrams involving the dark gauge coupling, as shown in Fig.\,\ref{fig:NNXXZ}.\,\,We will discuss their effects in Sec.\,\ref{sec:5} and Sec.\,\ref{sec:6}, respectively. \begin{figure}[t!] 
\hs{0.1cm} \centering \includegraphics[width=0.477\textwidth]{I1.pdf} \hs{0.2cm} \includegraphics[width=0.47\textwidth]{I2.pdf} \vs{-0.3cm} \caption{The two-loop functions $\,{\cal I}^{}_1$ and $\,{\cal I}^{}_2$ as functions of $r^{}_N$ for different choices of $r^{}_S$ near the resonance.\,\,As indicated, ${\cal I}^{}_{1,2} \sim {\cal O}(0.1)$ in the mass range of interest.} \label{fig:I1I2} \end{figure} \section{Theoretical \& Experimental constraints}\label{sec:4} In this section, we take into account various theoretical and experimental restrictions on the masses and couplings of the new particles in the $r$SIMP model. Theoretically, the quartic, Yukawa, and dark gauge couplings are subject to perturbativity conditions.\,\,We impose \cite{Choi:2021yps,Perez:2021rbo,Allwicher:2021rtd} \begin{eqnarray} \lambda^{}_k < 4{}^{}\pi ~,\quad y^{}_N < \sqrt{8{}^{}\pi} ~,\quad g^{}_\tf{D} < 4{}^{}\pi ~, \end{eqnarray} where $k = \{h,X,S,\phi, h X, h S, h \phi, X \hs{-0.03cm} S, X \hs{-0.03cm} \phi, S \phi\}$.\,\,Besides, the thermally-averaged annihilation cross sections are bounded from above by partial-wave unitarity, which places bounds on the couplings for given masses.\,\,In the non-relativistic limit, we require \cite{Namjoo:2018oyn} \begin{eqnarray} \langle \sigma v^2 \rangle_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}} \leqslant \frac{192 \sqrt{3} {}^{}{}^{} \pi^2 x^2}{m_X^5} ~,\quad \langle \sigma v^2 \rangle_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}} \leqslant \frac{16 {}^{}{}^{} \pi^2 x^2}{m_X^5} \bigg(\hs{-0.05cm} 1 + \frac{2}{r^{}_N}\bigg)^{\hs{-0.15cm}3/2} ~, \end{eqnarray} \vs{-0.5cm} \begin{eqnarray} \langle \sigma v^2 \rangle_{\hs{-0.03cm}X\hs{-0.03cm}N\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{X}} \leqslant \frac{4 {}^{}\pi^2 x^2}{m_X^5} \bigg(\frac{1}{r_N^2} + \frac{2}{r^{}_N}\bigg)^{\hs{-0.15cm}3/2} ~, \end{eqnarray} \vs{-0.5cm} \begin{eqnarray} \langle \sigma v \rangle_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}} \leqslant \frac{4 {}^{} \sqrt{\pi {}^{} x}}{m_X^2 r_N^{3/2}} ~,\quad \langle \sigma v \rangle_{\hs{-0.03cm}X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}} \leqslant \frac{64 {}^{} \sqrt{\pi {}^{} x}}{m_X^2} ~, \end{eqnarray} Here ${}^{}x{}^{}$ will be set at the freeze-out time of DM.\,\,On the other hand, the quartic couplings must satisfy certain relations to stabilize the vacuum at large scalar field values, where the potential energy ${\cal V}$ is bounded from below.\,\,For simplicity, we focus only on the part of the potential involving the $X$ and $S$ fields, and assume that the other quartic couplings are negligible but positive.\,\,Under these considerations, we find \cite{Choi:2016tkj} \begin{eqnarray} \hs{-0.2cm} \lambda^{}_{X,{}^{}{}^{}S} > 0 ~,\quad \lambda^{}_{X\hs{-0.03cm}S} + 2 \sqrt{\lambda^{}_X \lambda^{}_S} {}^{} > 0 ~,\quad |\lambda^{}_3| < \sqrt{ \frac{ \big(12{}^{}\lambda^{}_X \lambda^{}_S + \lambda^{2}_{X\hs{-0.03cm}S}\big)\raisebox{1pt}{$\hs{-0.05cm}^{3/2}$} + 36{}^{}\lambda^{}_X \lambda^{}_S \lambda^{}_{X\hs{-0.03cm}S} - \lambda_{X\hs{-0.03cm}S}^3} {54{}^{}\lambda^{}_S} } ~. \end{eqnarray} In particular, the above conditions reduce to $\lambda^{}_{X,{}^{}{}^{}S} > 0$ and $|\lambda^{}_3| < \big(16{}^{}\lambda^3_X \lambda^{}_S/27 \big)^{\hs{-0.05cm}1/4}$ in the limit $\lambda_{X\hs{-0.03cm}S} \to 0$, which turns out to be a stringent constraint in this model.\,\,Notice that these conditions also ensure that $\langle X \rangle = \langle S \rangle = 0$. 
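The stability bound on $\lambda_3$ is straightforward to evaluate numerically. The sketch below computes it for sample quartic couplings (placeholder values) and checks that it approaches $\big(16\lambda_X^3\lambda_S/27\big)^{1/4}$ as $\lambda_{X\hs{-0.03cm}S} \to 0$.
\begin{verbatim}
# Sketch: vacuum-stability upper bound on |lambda_3| and its lam_XS -> 0 limit.
import numpy as np

def lam3_bound(lamX, lamS, lamXS):
    num = ((12 * lamX * lamS + lamXS**2)**1.5
           + 36 * lamX * lamS * lamXS - lamXS**3)
    return np.sqrt(num / (54 * lamS))

lamX, lamS = 2.0, 1.0    # placeholder quartic couplings
for lamXS in (1.0, 0.1, 1e-4):
    print(f"lambda_XS = {lamXS:6.4f}: "
          f"|lambda_3| < {lam3_bound(lamX, lamS, lamXS):.4f}")

print("limit:", (16 * lamX**3 * lamS / 27)**0.25)  # matches the lam_XS -> 0 value
\end{verbatim}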
\begin{figure}[t!] \hs{0.1cm} \centering \includegraphics[width=0.40\textwidth]{NNXX1loop.pdf} \hs{0.2cm} \includegraphics[width=0.34\textwidth]{NNXXZ.pdf} \caption{The $N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}$ process through the one-loop and $Z'$-mediated tree diagrams.} \label{fig:NNXXZ} \end{figure} Cosmologically, light DM would contribute to the effective number of neutrino species, $N^{}_\tf{eff}$.\,\,Assuming the entropy of the universe is conserved and considering that the DM particles mainly interact with electrons and positrons, the $N^{}_\tf{eff}{}^{}$ at the CMB temperature is estimated as \cite{Boehm:2013jpa} \begin{eqnarray}\label{Neff} \hs{-0.6cm} N^{}_\tf{eff} {}^{} \big(T^{}_\tf{CMB}\big) \,=\, \scalebox{1.2}{\bigg[} 1 + \frac{4}{11} \hs{-0.05cm} \sum_{j{}^{}={}^{}X,N} g^\tf{DM}_{\star s}\big(m^{}_j,T_{\nu\tf{d}}\big) \hs{-0.05cm} \scalebox{1.2}{\bigg]}^{\hs{-0.1cm}-{}^{}4/3} N^\tf{SM}_\tf{eff}\big(T^{}_\tf{CMB}\big) ~, \end{eqnarray} where $N^\tf{SM}_\tf{eff}\big(T^{}_\tf{CMB}\big) = 3.044$ in the SM \cite{Bennett:2020zkv,Akita:2020szl}, and $g^\tf{DM}_{\star s}\big(m^{}_j,T_{\nu\tf{d}}\big)$ counts the DM entropy degrees of freedom at the neutrino decoupling temperature, $T_{\nu \tf{d}} \hs{-0.03cm} \simeq 2\,\text{MeV}$\,\,\cite{Escudero:2018mvt}, which has the form \cite{Lehmann:2020lcv} \begin{eqnarray}\label{gDM} g^\tf{DM}_{\star s}\big(m^{}_j,x\big) \,=\, \frac{15{}^{}g^{}_j}{4\pi^4} \mathop{\mathlarger{\int}_{\hs{-0.03cm}r^{}_j{}^{}x}^{{}^{}\infty}} \hs{-0.05cm} dw \, \frac{\big(4w^2 - r_j^2{}^{}x^2{}^{}\big)\big(w^2 - r_j^2{}^{}x^2{}^{}\big)^{\hs{-0.05cm}1/2}}{e^w \pm 1} \end{eqnarray} with $g^{}_j$ the internal degrees of freedom of particle $j$.\,\,The latest measurement from the Planck satellite gives $N^{}_\tf{eff} = 2.99^{+0.34}_{-0.33}$ (95\% C.L.) \cite{Aghanim:2018eyx}, which provides lower bounds on the DM masses. As we will see in the next section, the masses of the DM particles should be nearly degenerate in this model. Using Eqs.\,\eqref{Neff} and \eqref{gDM} with this property, we find that $m^{}_{X,N} \gtrsim {\cal O}(10)\,\tx{MeV}$.\,\,Another cosmological constraint is the observed relic abundance of DM, which we also discuss in the next section. For the gauge sector, there is a constraint on the kinetic mixing parameter, depending on the mass of the dark gauge boson.\,\,In this model, the $Z'$ mainly decays into invisible particles, $Z' \to X\hs{-0.03cm}\bar{X},N\hs{-0.03cm}\bar{N}$, and $S\hs{-0.01cm}\bar{S}$.\,\,Also, we will concentrate on a $Z'$ with a mass of a few hundred MeV.\,\,In these circumstances, the measurements from the BaBar collaboration constrain $\epsilon \lesssim 10^{-3}$ \cite{BaBar:2017tiz,Fabbrichesi:2020wbt}. 
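For orientation, Eq.\,\eqref{gDM} is easy to integrate numerically. The sketch below estimates $N_\tf{eff}$ at $T_{\nu\tf{d}} \simeq 2$ MeV for a common DM mass $m_X \simeq m_N$, taking $g_X = 2$ (complex scalar) and $g_N = 4$ (Dirac fermion) as our reading of the internal degrees of freedom; the mass grid is illustrative.
\begin{verbatim}
# Sketch: N_eff from Eqs. (Neff)/(gDM) at T ~ 2 MeV for nearly degenerate DM.
import numpy as np
from scipy.integrate import quad

def g_dm(m, T, g, fermion):
    """(15 g / 4 pi^4) * int_{m/T}^inf dw (4w^2-a^2) sqrt(w^2-a^2)/(e^w +- 1)."""
    a = m / T
    sign = 1.0 if fermion else -1.0
    integrand = lambda w: ((4 * w**2 - a**2) * np.sqrt(w**2 - a**2)
                           / (np.exp(w) + sign))
    val, _ = quad(integrand, a, a + 50.0)
    return 15 * g / (4 * np.pi**4) * val

T_nu_dec = 2.0  # MeV
for m in (5.0, 10.0, 20.0):  # common DM mass in MeV (m_X ~ m_N assumed)
    g_sum = g_dm(m, T_nu_dec, 2, False) + g_dm(m, T_nu_dec, 4, True)
    Neff = (1 + 4.0 / 11.0 * g_sum)**(-4.0 / 3.0) * 3.044
    print(f"m = {m:4.1f} MeV: N_eff ~ {Neff:.3f}")
# N_eff drops below the Planck band for light DM,
# pushing m_X,N above O(10) MeV as stated in the text.
\end{verbatim}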
\section{Relic abundance of DM and the reshuffling effect}\label{sec:5} To estimate the present-day density of DM in the $r$SIMP model, one has to numerically solve the coupled Boltzmann equations for the comoving number yields $Y_X$ and $Y_N$.\,\,Assuming there is no asymmetry in DM, namely $Y_{\white{\bar{\black{X}}}} = Y_{\bar{X}}$ and $Y_{\white{\bar{\black{N}}}} = Y_{\bar{N}}$, the Boltzmann equations are given by \cite{Ho:2021ojb} \begin{eqnarray} \frac{{\mathrm{d}} Y^{}_X}{{\mathrm{d}} x} &=& -{}^{}\frac{s(x)^2}{H(x){}^{}x} \Bigg\{ 12 {}^{} \langle \sigma v^2 \rangle_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}} \hs{-0.01cm} \sx{1.1}{\bigg[} Y^3_X - Y_N^2 \frac{(Y^\tf{eq}_X)^3}{(Y^\tf{eq}_N)^2} \sx{1.1}{\bigg]} \hs{-0.05cm} + 2 {}^{} \langle \sigma v^2 \rangle_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}} {}^{}{}^{} Y^{\white{q}}_X Y^{\white{q}}_N \sx{1.1}{\big(} {}^{} Y^{}_X - Y^\tf{eq}_X \sx{1.1}{\big)} \nonumber\\ &&\hs{2.0cm} {-} {}^{}{}^{} \langle \sigma v^2 \rangle_{\hs{-0.03cm}X\hs{-0.03cm}N\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{X}} {}^{}{}^{} Y^{}_X \hs{-0.05cm} \sx{1.1}{\bigg[} Y_N^2 - Y^{}_X \frac{(Y^\tf{eq}_N)^2}{Y^\tf{eq}_X} \sx{1.1}{\bigg]} \hs{-0.05cm} \Bigg\} \nonumber\\ && -{}^{}\frac{s(x)}{H(x){}^{}x} \Bigg\{ 4 {}^{} \langle \sigma v \rangle_{\hs{-0.03cm}X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}} \hs{-0.01cm} \sx{1.1}{\bigg[} Y_X^2 - Y_N^2 \frac{(Y^\tf{eq}_X)^2}{(Y^\tf{eq}_N)^2} \sx{1.1}{\bigg]} \hs{-0.05cm} - \langle \sigma v \rangle_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}} \hs{-0.01cm} \sx{1.1}{\bigg[} Y_N^2 - Y_X^2 \frac{(Y^\tf{eq}_N)^2}{(Y^\tf{eq}_X)^2} \sx{1.1}{\bigg]} \hs{-0.05cm} \Bigg\} ~, \label{dYX} \\[0.2cm] \frac{{\mathrm{d}} Y^{}_N}{{\mathrm{d}} x} &=& -{}^{}\frac{s(x)^2}{H(x){}^{}x} \Bigg\{ 2 {}^{} \langle \sigma v^2 \rangle_{\hs{-0.03cm}X\hs{-0.03cm}N\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{X}} {}^{}{}^{} Y^{}_X \hs{-0.05cm} \sx{1.1}{\bigg[} Y_N^2 - Y^{}_X \frac{(Y^\tf{eq}_N)^2}{Y^\tf{eq}_X} \sx{1.1}{\bigg]} \hs{-0.05cm} - 8 {}^{} \langle \sigma v^2 \rangle_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}} \hs{-0.01cm} \sx{1.1}{\bigg[} Y_X^3 - Y_N^2 \frac{(Y^\tf{eq}_X)^3}{(Y^\tf{eq}_N)^2} \sx{1.1}{\bigg]} \hs{-0.05cm} \Bigg\} \nonumber\\ && -{}^{}\frac{s(x)}{H(x){}^{}x} \Bigg\{ \langle \sigma v \rangle_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}} \hs{-0.01cm} \sx{1.1}{\bigg[} Y_N^2 - Y_X^2 \frac{(Y^\tf{eq}_N)^2}{(Y^\tf{eq}_X)^2} \sx{1.1}{\bigg]} \hs{-0.05cm} - 4 {}^{} \langle \sigma v \rangle_{\hs{-0.03cm}X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}} \hs{-0.01cm} \sx{1.1}{\bigg[} Y_X^2 - Y_N^2 \frac{(Y^\tf{eq}_X)^2}{(Y^\tf{eq}_N)^2} \sx{1.1}{\bigg]} \hs{-0.05cm} \Bigg\} ~, \label{dYN} \end{eqnarray} where $Y^\tf{eq}_j$ is the equilibrium comoving number yield of the species $j$ given by \begin{eqnarray} Y^\tf{eq}_{j} \,=\, \frac{45}{4{}^{}\pi^4} \frac{g^{}_j}{g^{}_{\star s}(x)} \big(r^{}_j {}^{} x\big)^{\hs{-0.05cm}2} K^{}_2 \hs{-0.03cm} \big(r^{}_j {}^{} x\big) \,\simeq\, \frac{45\sqrt{2}}{8{}^{}\pi^{7/2}} \frac{g^{}_j}{g^{}_{\star s}(x)} {}^{} (r^{}_j {}^{} x)^{3/2} e^{- r^{}_j {}^{} x} \end{eqnarray} with $K^{}_2(x)$ the modified Bessel function of the second kind.\,\,Here $s(x)$ and $H(x)$ are the entropy density and the Hubble parameter, respectively, given by \begin{eqnarray} s(x) \,=\, \frac{2{}^{}\pi^2}{45} g^{}_{\star s}(x) {}^{}{}^{} \frac{m_X^3}{x^3} ~,\quad H(x) \,=\, \sqrt{\frac{\pi^2 g^{}_{\star}(x)}{90}} 
\frac{m_X^2}{x^2 m^{}_\tf{Pl}} \end{eqnarray} with $g^{}_{\star}\,(g^{}_{\star s})$ the effective energy (entropy) degrees of freedom of the thermal plasma~\cite{Saikawa:2018rcs}, and $m^{}_\tf{Pl} = 2.4 \times 10^{18}\,\tx{GeV}$ the reduced Planck mass.\,\,Now, with an appropriate initial condition $Y^{}_{X,N}(x_\tf{ini.}\hs{-0.03cm}) = Y^\tf{eq}_{X,N}(x_\tf{ini.}\hs{-0.03cm})$, where typically $10 < x_\tf{ini.} \hs{-0.05cm} < 20$, we can obtain $Y^{}_{X,N}(x)$ and then predict the present density of DM via the relation \cite{Bhattacharya:2019mmy} \begin{eqnarray} \Omega_\tf{DM} \hat{h}^2 \,=\, 2\big({}^{} \Omega^{}_X \hat{h}^2 + \Omega^{}_N {}^{} \hat{h}^2 {}^{}\big) \,\simeq\, 5.49 \times 10^5 {}^{} \sx{0.9}{\bigg(} \frac{m^{}_X}{\text{MeV}} \sx{0.9}{\bigg)} \sx{1.1}{\big(} Y^0_X + r^{} _N {}^{} Y^0_N \sx{1.1}{\big)} ~, \end{eqnarray} where $Y^0_j = Y^{}_j(x \to \infty)$.\,\,Imposing the observed DM abundance, $\Omega^\tf{obs}_\tf{DM} \hat{h}^2 = 0.12 \pm 0.0012$\,\,\cite{Aghanim:2018eyx}, one can fix the values of $\lambda^{}_3$ and $y^{}_N$ for given masses of $X$, $N$ and $S$.\,\,In the following, we first consider the case without the $2 \to 2$ processes, and then turn them on to see their effects. We present in Fig.\,\ref{fig:YNX_wo22} a few examples of the cosmological evolution of the comoving number densities of DM without the $2 \to 2$ processes in the case of $m^{}_N > m^{}_X$, where the colored solid lines satisfy the DM relic abundance.\,\,Note that the parameter inputs in these plots may not satisfy the other constraints mentioned above; the plots shown here are merely for demonstration purposes.\,\,As indicated, one can see that both SIMP particles with non-degenerate masses can contribute a sizable amount to the observed DM density.\,\,In particular, there is an increase of the number density of $N$ right after the chemical freeze-out of DM, most remarkably in Figs.\,\ref{fig:YNX_wo22}(c) and \ref{fig:YNX_wo22}(d).\footnote{The bouncing effect of the DM density after the DM chemical freeze-out was first pointed out in \cite{Katz:2020ywn} and \cite{Shakya:2021pa}.}\,\,To account for this behavior of the DM number density, let us first define the freeze-out temperature $x^{}_\tf{f.o.}$ and freeze-in temperature $x^{}_\tf{f.i.}$ of DM in the following ways: \begin{eqnarray} &&\tx{Freeze-out temp. of $X$}\,:\, Y^{}_X(x^X_\tf{f.o.}\hs{-0.03cm}) - Y^\tf{eq}_X(x^X_\tf{f.o.}\hs{-0.03cm}) \,\simeq\, Y^\tf{eq}_X(x^X_\tf{f.o.}\hs{-0.03cm}) ~, \\[0.1cm] &&\tx{Freeze-out temp. of $N$}\,:\, Y^{}_N(x^N_\tf{f.o.}\hs{-0.03cm}) - Y^\tf{eq}_N(x^N_\tf{f.o.}\hs{-0.03cm}) \,\simeq\, Y^\tf{eq}_N(x^N_\tf{f.o.}\hs{-0.03cm}) ~, \end{eqnarray} and we define the freeze-out temperature of DM as the temperature at which both DM particles start to depart from chemical equilibrium, namely $x^{}_\tf{f.o.} \hs{-0.05cm} \equiv \tx{Max}(x^X_\tf{f.o.}, x^N_\tf{f.o.}\hs{-0.03cm})$\,; \begin{eqnarray} &&\hs{-1cm} \tx{Freeze-in temp. of $X$}\,:\, \tx{Max}\sx{1.2}{\big[}{}^{} 12{}^{}\Gamma_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}}(x^X_\tf{f.i.}\hs{-0.03cm})\,, 2{}^{}\Gamma_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}}(x^X_\tf{f.i.}\hs{-0.03cm}) \sx{1.2}{\big]} \,\simeq\, H(x^X_\tf{f.i.}\hs{-0.03cm}){}^{}{}^{}n^{}_X(x^X_\tf{f.i.}\hs{-0.03cm}) ~, \label{fiX} \\[0.1cm] &&\hs{-1cm} \tx{Freeze-in temp. 
of $N$}\,:\, 8{}^{}\Gamma_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}}(x^N_\tf{f.i.}\hs{-0.03cm}) \,\simeq\, H(x^N_\tf{f.i.}\hs{-0.03cm}){}^{}{}^{}n^{}_N(x^N_\tf{f.i.}\hs{-0.03cm}) ~, \label{fiN} \end{eqnarray} where $\Gamma_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}}(x) = n_X^3(x) \langle \sigma v^2 \rangle_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}}$ and $\Gamma_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}}(x) = n_X^2(x){}^{}n^{}_N(x)\langle \sigma v^2 \rangle_{\hs{-0.03cm}X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}}$ are the $3 \to 2$ annihilation rates per unit volume per unit time with $n_j(x) = s(x){}^{}Y_j(x)$ the number density of DM, and the prefactors are the ones appearing in Eqs.\,\eqref{dYX} and \eqref{dYN}.\,\,Similar to the $x^{}_\tf{f.o.}$, we define the freeze-in temperature of DM as the temperature at which both DM number densities become constant, that is $x^{}_\tf{f.i.} \hs{-0.05cm} \equiv \tx{Max}(x^X_\tf{f.i.},x^N_\tf{f.i.}\hs{-0.03cm})$. \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{YNX_wo22_a.pdf} \hs{0.1cm} \includegraphics[width=0.48\textwidth]{YNX_wo22_b.pdf} \\[0.2cm] \hs{0.02cm} \includegraphics[width=0.48\textwidth]{YNX_wo22_c.pdf} \hs{0.1cm} \includegraphics[width=0.48\textwidth]{YNX_wo22_d.pdf} \vs{-0.3cm} \caption{The cosmological evolution of the comoving number densities of DM in the absence of the $2 \to 2$ processes for $r^{}_N > 1$ in the $r$SIMP model.\,\,In region (i), the DM particles are in chemical equilibrium via the $3 \to 2$ annihilations.\,\,In region (ii), the number densities of DM are out of chemical equilibrium and keep changing (increasing or decreasing) before the freeze-in temperature of DM.\,\,Finally, the DM number densities are frozen until today in region (iii).} \label{fig:YNX_wo22} \end{figure} Now, we take Fig.\,\ref{fig:YNX_wo22}(d) as an example to explain the increase of the DM number density after the DM freeze-out temperature.\,\,At high temperatures, the DM number-changing processes, $X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}$, and $X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}$, as well as their conjugate processes (here we ignore the $X\hs{-0.03cm}N\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{X}$ process since it is $p\,$-wave suppressed) maintain the chemical equilibrium of DM such that the actual DM number densities follow the equilibrium DM number densities, $Y^{}_j(x) \simeq Y^\tf{eq}_j(x)$.\,\,Around the $x^{}_\tf{f.o.}$, the actual DM number densities no longer track the equilibrium DM number densities because the number-changing interactions become inefficient at lower temperatures.\,\,After the $x^N_\tf{f.o.}$, since the $x^N_\tf{f.i.} > x^N_\tf{f.o.}$ and the process $X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}$ produces two vector-like fermions, the number of $N$ is increased.\,\,Notice that the process $X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}$ does not alter the number of $N$ in total.\,\,The reason this increasing phenomenon is remarkable in Figs.\,\ref{fig:YNX_wo22}(c) and \ref{fig:YNX_wo22}(d) is that the $x^N_\tf{f.i.}$ and $r^{}_N$ are much larger in comparison with Figs.\,\ref{fig:YNX_wo22}(a) and \ref{fig:YNX_wo22}(b).\,\,The former prolongs the period over which the number of $N$ increases, and the latter rapidly depletes the number of $N$ before the $x^N_\tf{f.o.}$.\,\,On the other
hand, the number of $X$ is further decreased after the $x^X_\tf{f.o.}$ because the processes $X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}$ and $X\hs{-0.03cm}X\hs{-0.03cm}N \to \bar{X}\hs{-0.03cm}\bar{N}$ both annihilate complex scalars until the $x^X_\tf{f.i.}$.\,\,However, as we will see shortly, this increasing effect of the DM number density would disappear when we switch on the $2 \to 2$ processes. We also show in Fig.\,\ref{fig:YXN_wo22} one example of the cosmological evolution of the comoving number densities of DM without the $2 \to 2$ process in the case of $m^{}_X > m^{}_N$.\,\,We see that in this case there is no increasing phenomenon of the DM density after the chemical freeze-out of DM.\,\,This is because $r^{}_N < 1$ and $g^{}_N = 2{}^{}g^{}_X$, meaning the number density of $N$ is always larger than that of $X$.\,\,Besides, we find that we have to choose large couplings and relatively degenerate masses of DM to satisfy the observed DM density.\,\,Again, the situation would change completely once we turn on the $2 \to 2$ process. \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{YXN_wo22.pdf} \hs{0.1cm} \caption{The cosmological evolution of the comoving number densities of DM without including the two-loop induced $2 \to 2$ processes for $r^{}_N < 1$ in the $r$SIMP model.} \label{fig:YXN_wo22} \end{figure} We present in Fig.\,\ref{fig:YNX_w22} a few benchmark plots of the cosmological evolution of the comoving number densities of DM with both $3 \to 2$ and $2 \to 2$ processes in the case of $m^{}_N > m^{}_X$.\,\,By comparing Figs.\,\ref{fig:YNX_w22}(a-c) with Fig.\,\ref{fig:YNX_w22}(d), we see that the masses of DM must be nearly degenerate to contribute a non-negligible amount to the total DM relic abundance.\,\,Typically, the evolution of the comoving number density is divided into four stages as shown in the color-shaded regions of Figs.\,\ref{fig:YNX_w22}(a) and \ref{fig:YNX_w22}(b).\,\,In region (i), the $3 \to 2$ reaction rates are much larger than the Hubble expansion rate, $\Gamma_{3 \to 2} \gg H$, where the $3 \to 2$ processes deplete the DM number densities until the $x^{}_\tf{f.o.} \hs{-0.1cm} \simeq 20$.\,\,In region (ii), the DM particles deviate from chemical equilibrium because of $\Gamma_{3 \to 2} \lesssim H$.\,\,However, the $2 \to 2$ process seems to be inert for a while after the $x^{}_\tf{f.o.}\hs{-0.1cm}$ even though the reaction rate of the $2 \to 2$ process dominates over that of the $3 \to 2$ process.\,\,This is because the reaction rate of the forward $2 \to 2$ process $N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}$ is partially cancelled by that of the backward $2 \to 2$ process $X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}$, owing to the near degeneracy of the DM masses in the $r$SIMP scenario.\,\,To understand this more clearly, one can look at the last term of Eq.\,\eqref{dYX}, where \begin{eqnarray} \langle \sigma v \rangle_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}} \hs{-0.01cm} \sx{1.1}{\bigg[} Y_N^2 - Y_X^2 \frac{(Y^\tf{eq}_N)^2}{(Y^\tf{eq}_X)^2} \sx{1.1}{\bigg]} \,=\, \langle \sigma v \rangle_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}} \hs{-0.01cm} \sx{1.1}{\Big[} Y_N^2 - 4{}^{}Y_X^2 r^3_N{}^{}e^{-2(r^{}_N-1){}^{}x} \sx{1.1}{\Big]} \end{eqnarray} with the first (second) term in the square bracket being the reaction rate of the forward (backward) $2 \to 2$ process.\,\,At high temperatures with $r^{}_N \sim 1$, we have $ r^3_N{}^{}e^{-2(r^{}_N-1){}^{}x} \sim 1$ and $Y^{}_N \sim 2{}^{}Y^{}_X$ right
after the $x^{}_\tf{f.o.}$.\,\,As a consequence, this term vanishes and gives no physical effect until the reshuffled temperature, $x^{}_\tf{r} \equiv 1/(2|r^{}_N-1|)$, after which the backward reaction is exponentially suppressed.\,\,That is to say, the $X$ particles do not have enough kinetic energy to overcome the mass gap, $m^{}_N - m^{}_X$, to annihilate back into the $N$ particles.\,\,In region (iii), the forward $2 \to 2$ reaction becomes active, and the $N$ particles annihilate into the $X$ particles during this stage.\,\,Note that since the $2 \to 2$ process preserves the total number of DM, it would only redistribute the number densities of DM until the $x^{}_\tf{f.i.}$, which is now defined as \begin{eqnarray} &&\tx{Freeze-in temp. of $X$}\,:\, \Gamma_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}}(x^X_\tf{f.i.}\hs{-0.03cm}) \,\simeq\, H(x^X_\tf{f.i.}\hs{-0.03cm}){}^{}{}^{}n^{}_X(x^X_\tf{f.i.}\hs{-0.03cm}) ~, \\[0.1cm] &&\tx{Freeze-in temp. of $N$}\,:\, \Gamma_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}}(x^N_\tf{f.i.}\hs{-0.03cm}) \,\simeq\, H(x^N_\tf{f.i.}\hs{-0.03cm}){}^{}{}^{}n^{}_N(x^N_\tf{f.i.}\hs{-0.03cm}) ~, \end{eqnarray} where $\Gamma_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}}(x) = n_N^2(x) \langle \sigma v \rangle_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}}$.\,\,In region (iv), the number densities of DM are frozen until the present day.\,\,In Fig.\,\ref{fig:YNX_w22}(c), there is no reshuffled period because the masses of DM are so degenerate ($r^{}_N = 1.00045$) that the $x^{}_\tf{r} > x^{}_\tf{f.i.}$.\,\,Finally, we see that in Fig.\,\ref{fig:YNX_w22}(d) the increasing phenomenon of $N$ is washed out by the $2 \to 2$ process after the $x^{}_\tf{f.o.}$, and the non-degenerate masses of DM lead to almost no abundance of $N$. Likewise, we show in Fig.\,\ref{fig:YXN_w22} two typical plots of the cosmological evolution of the comoving number densities of DM with both $3 \to 2$ and $2 \to 2$ processes in the case of $m^{}_X > m^{}_N$.\,\,By comparing Fig.\,\ref{fig:YXN_w22}(a) with Fig.\,\ref{fig:YXN_wo22}, we find that we can choose relatively small couplings to satisfy the relic abundance of DM.\,\,Similar to Fig.\,\ref{fig:YNX_w22}(c), we again show that there is no reshuffled effect if the masses of DM are extremely degenerate ($r^{}_N =0.9995$).
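As a rough numerical illustration, the sketch below integrates a simplified version of the coupled Boltzmann equations~\eqref{dYX} and \eqref{dYN} with \texttt{scipy}, keeping one representative $3 \to 2$ channel ($X\hs{-0.03cm}X\hs{-0.03cm}X \to \bar{N}\hs{-0.03cm}\bar{N}$) and one $2 \to 2$ channel ($N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}$).\,\,The constant thermally-averaged cross sections and constant $g_\star$ are purely illustrative stand-ins; a production computation would use the full $x$-dependent inputs and all channels.
\begin{verbatim}
# Minimal sketch (not the full analysis): integrate simplified versions of
# Eqs. (dYX)/(dYN).  The cross-section values and constant g_* below are
# toy assumptions chosen only to illustrate the freeze-out/reshuffle shape.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn   # modified Bessel function K_n

m_X, r_N = 20.0, 1.05            # MeV; r_N = m_N / m_X
g_X, g_N = 2.0, 2.0              # internal d.o.f. (illustrative)
g_star = g_star_s = 10.75        # plasma d.o.f., held constant (assumption)
m_pl = 2.4e21                    # reduced Planck mass in MeV

sv32 = 1.0e-9                    # <sigma v^2>_{XXX->NN},  MeV^-5 (toy value)
sv22 = 1.0e-12                   # <sigma v>_{NN->XX},     MeV^-2 (toy value)

def Yeq(x, g, r):                # equilibrium comoving yield Y^eq_j
    return 45.0 / (4.0 * np.pi**4) * g / g_star_s * (r * x)**2 * kn(2, r * x)

def s_ent(x):                    # entropy density s(x)
    return 2.0 * np.pi**2 / 45.0 * g_star_s * m_X**3 / x**3

def hubble(x):                   # Hubble rate H(x)
    return np.sqrt(np.pi**2 * g_star / 90.0) * m_X**2 / (x**2 * m_pl)

def rhs(x, Y):
    YX, YN = Y
    YeqX, YeqN = Yeq(x, g_X, 1.0), Yeq(x, g_N, r_N)
    pre3 = s_ent(x)**2 / (hubble(x) * x)
    pre2 = s_ent(x) / (hubble(x) * x)
    # 3->2: removes three X and creates two N (prefactors 12 and 8)
    t32 = sv32 * (YX**3 - YN**2 * YeqX**3 / YeqN**2)
    # 2->2: reshuffles N into X while conserving the total DM number
    t22 = sv22 * (YN**2 - YX**2 * YeqN**2 / YeqX**2)
    return [-12.0 * pre3 * t32 + pre2 * t22,
             8.0 * pre3 * t32 - pre2 * t22]

x0, x1 = 10.0, 1000.0
sol = solve_ivp(rhs, (x0, x1), [Yeq(x0, g_X, 1.0), Yeq(x0, g_N, r_N)],
                method="Radau", rtol=1e-8, atol=1e-20)
YX0, YN0 = sol.y[0, -1], sol.y[1, -1]
print("Omega h^2 ~", 5.49e5 * m_X * (YX0 + r_N * YN0))  # relic relation
print("x_r ~", 1.0 / (2.0 * abs(r_N - 1.0)))            # reshuffled temp.
\end{verbatim}
For this illustrative $r^{}_N = 1.05$, the reshuffled temperature evaluates to $x^{}_\tf{r} = 1/(2\times 0.05) = 10$; for the highly degenerate benchmark $r^{}_N = 1.00045$ one instead finds $x^{}_\tf{r} \simeq 1111$, which indeed exceeds the $x^{}_\tf{f.i.}$ and explains the absence of a reshuffled period in Fig.\,\ref{fig:YNX_w22}(c).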
\begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{YNX_w22_a.pdf} \hs{0.1cm} \includegraphics[width=0.48\textwidth]{YNX_w22_b.pdf} \\[0.2cm] \centering \hs{0.02cm} \includegraphics[width=0.48\textwidth]{YNX_w22_c.pdf} \hs{0.1cm} \includegraphics[width=0.48\textwidth]{YNX_w22_d.pdf} \vs{-0.3cm} \caption{Cosmological evolution of the comoving number densities in the presence of the $3 \to 2$ and $2 \to 2$ processes for some benchmark points in the $r$SIMP model for $r^{}_N > 1$.} \label{fig:YNX_w22} \end{figure} We briefly summarize the importance of the $2 \to 2$ processes for the cosmological evolution of the comoving DM number densities in the $r$SIMP model.\,\,First, the two-loop induced $2 \to 2$ processes are closely related to the tree-level $3 \to 2$ processes and their reaction rates cannot be omitted.\,\,Second, including these $2 \to 2$ processes alongside the $3 \to 2$ processes can alter not only the fractions of DM particles but also the total DM number densities.\,\,The differences are clearly displayed by comparing the solid lines ($Y_{3 \to 2} + Y_{2 \to 2}$) with the dashed lines ($Y_{3 \to 2}$ only) in Figs.\,\ref{fig:YNX_w22} and \ref{fig:YXN_w22}, where the DM density is overproduced without the $2 \to 2$ processes.\,\,This is easy to understand since the $2 \to 2$ processes strengthen the chemical equilibrium of DM around the DM freeze-out temperature.\,\,It is crucial to include the two-loop induced $2 \to 2$ annihilations in order to get the correct thermal relic abundance of multi-component SIMP DM. \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{YXN_w22_a.pdf} \hs{0.1cm} \includegraphics[width=0.48\textwidth]{YXN_w22_b.pdf} \vs{-0.3cm} \caption{Cosmological evolution of the comoving number densities in the presence of the $3 \to 2$ and $2 \to 2$ processes for some benchmark points in the $r$SIMP model for $r^{}_N < 1$.} \label{fig:YXN_w22} \end{figure} Before closing this section, let us discuss the effect of non-zero $\lambda_{X\hs{-0.03cm}S}$.\,\,As shown in Fig.\,\ref{fig:NNXXZ}, the dominant contributions to the $2 \to 2$ processes may come from the one-loop diagrams.\,\,Thus, with large values of $\lambda_{X\hs{-0.03cm}S}$, we can expect that the reshuffled effect is even stronger than that induced by the two-loop diagrams.\,\,However, since $\lambda_{X\hs{-0.03cm}S}$ has nothing to do with the $3 \to 2$ processes, we can naively turn it off to keep our model within the two-component SIMP DM scenario.\,\,Of course, one can choose a special $\lambda_{X \hs{-0.03cm} S}$ value (which can be positive or negative) such that there is a destructive interference between the one-loop and two-loop diagrams to avoid the reshuffled effect.\,\,But we do not consider this fine-tuned case in our analysis, since it would not be the generic situation.
\section{SIMP conditions : Thermalization \& Annihilation}\label{sec:6} As in the typical SIMP paradigm, the DM particles should maintain kinetic equilibrium with the SM particles until the freeze-out temperature of DM.\,\,Hence, the interactions between the dark and SM sectors are required in the $r$SIMP model.\,\,Since the U$(1)^{}_\tf{D}$ symmetry introduced in the model is gauged, it is natural to have a vector-portal coupling connecting these two sectors.\,\,On the other hand, as we have shown in the previous section, the preferred mass scale of DM in the $r$SIMP scenario is around ${\cal O}(20)\,\tx{MeV}$.\,\,It follows that the freeze-out temperature of DM is $T_f \simeq m^{}_X/20 \simeq {\cal O}(1)\,\tx{MeV}$, so the relativistic species in the thermal plasma that the DM particles mainly interact with are electrons and positrons.\footnote{Neutrinos and photons are also relativistic particles in the thermal plasma; however, they can only interact with the DM particles via one-loop diagrams or the kinetic mixing, which are much suppressed in this model.}\,\,Accordingly, we consider the following Lagrangian based on Eq.\,\eqref{Drho} for the thermalization of the DM and $e^\pm$ as \begin{eqnarray}\label{Zprime} {\cal L}^{}_{Z'} \,=\, - \Big[{}^{} i g^{}_\tf{D} {\cal Q}^{}_X \big( X^\ast \partial^\rho X - X \partial^\rho X^\ast \big) + g^{}_\tf{D} {\cal Q}^{}_N \overline{N} \gamma^\rho N + g^{}_e {}^{}{}^{} c^{}_\tf{W} {}^{} \epsilon \, \overline{e} \gamma^\rho e {}^{} \Big] Z'_\rho ~, \end{eqnarray} where ${\cal Q}^{}_X$ and ${\cal Q}^{}_N$ are the dark charges of the $X$ and $N$ particles, respectively. To determine how large a gauge coupling is sufficient for efficient kinetic equilibrium, one has to compute the energy transfer rate between the DM and SM particles and then impose the thermalization condition.\,\,Assuming the electrons and positrons are massless at the $T_f$, the energy transfer rate between the DM particles and $e^\pm$ is given by \cite{Gondolo:2012vh} \begin{eqnarray}\label{Gammae} \gamma^{}_{e}(T) &=& \sum_{j{}^{}={}^{}X,{}^{}N} \frac{1}{192 {}^{} \pi^3 m^3_j {}^{} T} \hs{-0.05cm} \mathop{\mathlarger{\int}_{\hs{-0.03cm}0}^{{}^{}\infty}} \hs{-0.05cm} {\mathrm{d}} E^{}_e \,\frac{e^{E^{}_e/T}}{\big(e^{E^{}_e/T} + 1\big)\raisebox{0.05pt}{$\hs{-0.03cm}^2$}} \mathop{\mathlarger{\int}^{{}^{}0}_{\hs{-0.05cm}-4 E_e^2}} \hs{-0.05cm} {\mathrm{d}}^{} t_j \,(-{}^{}{}^{}t^{}_j)\,\overline{\big|{\cal M}_{j e \to je}(t^{}_j, E^{}_e)\big|^{\hs{-0.03cm}2}} ~, \end{eqnarray} where $E^{}_e$ is the energy of $e^\pm$, $t^{}_{j} = \big({}^{}{}^{}p^{}_j -p_j'\big)\raisebox{1pt}{$\hs{-0.05cm}^2$}$, and $\overline{|{\cal M}_{j e \to je}|^2}$ is the squared scattering amplitude, where the overline represents the usual sum (average) over final (initial) spins.\,\,Using Eq.\,\eqref{Zprime}, the squared amplitudes of the DM particles scattering off the $e^\pm$ in the $m^{}_e = 0$ limit are calculated as \begin{eqnarray} \overline{\big|{\cal M}_{X\hs{-0.03cm}e \to X\hs{-0.03cm}e}(t^{}_X, E^{}_e)\big|^{\hs{-0.03cm}2}} &=& 4\bigg(\frac{c^{}_{X\hs{-0.03cm}e}}{t^{} _X - m^2_{Z'}}\bigg)^{\hs{-0.13cm}2} \Big[{}^{} s^2_{X\hs{-0.03cm}e} + \big({}^{}t^{}_X-2{}^{}m_X^2\big) s^{}_{X\hs{-0.03cm}e} + m_X^4 \Big] ~, \label{MXeXe} \\[0.1cm] \overline{\big|{\cal M}_{N\hs{-0.03cm}e\to N\hs{-0.03cm}e}(t^{}_N, E^{}_e)\big|^{\hs{-0.03cm}2}} &=& 4 \bigg(\frac{c^{}_{N\hs{-0.03cm}e}}{t^{}_N - m^2_{Z'}} \bigg)^{\hs{-0.13cm}2} \Big[{}^{} s_{N\hs{-0.03cm}e}^2 + \big(t^{}_N - 2{}^{}m_N^2\big)
s^{}_{N\hs{-0.03cm}e} + \tfrac{1}{2}{}^{}t_N^2 + m_N^4 \Big] ~, \label{MNeNe} \end{eqnarray} where $c^{}_{je} \equiv g^{}_\tf{D}{}^{}g^{}_\tf{e}{}^{}c^{}_\tf{W}{}^{}\epsilon{}^{}{\cal Q}^{}_j{}^{}$, and $s^{}_{je} = \big({}^{}{}^{}p^{}_j + p^{}_e{}^{}\big)\raisebox{1pt}{$\hs{-0.05cm}^2$}$.\,\,Since the SIMP DM are non-relativistic and the $e^\pm$ are relativistic particles at the $T_f$, $E_j \simeq m_j \gg T_f \simeq E^{}_e$, thus $s_j \simeq \big(m_j + E^{}_e{}^{}\big)\raisebox{1pt}{$\hs{-0.05cm}^2$}$ in the center of mass (CM) frame of $j$ and $e^\pm$.\,\,Plugging Eqs.\,\eqref{MXeXe} and \eqref{MNeNe} with this approximate form of $s^{}_j$ into Eq.\,\eqref{Gammae} and taking the leading order in $E^{}_e$ for the integrations, for $r^{}_N \sim 1$ we arrive at \begin{eqnarray}\label{gammae} \gamma^{}_{e}(T) \,=\, \frac{31{}^{}\pi^3}{189{}^{}x^6} \frac{m_X^5}{m_{Z'}^4} \sx{1.1}{\big(} c_{X\hs{-0.03cm}e}^2 + c_{N\hs{-0.03cm}e}^2 {}^{}{}^{} \sx{1.1}{\big)} ~. \end{eqnarray} Imposing the thermalization condition of the DM and $e^\pm$, $\gamma^{}_e(x) \gtrsim H(x){}^{}x^2$\,\cite{Choi:2019zeb}, at the time of freeze-out, we then obtain the lower bound of the gauge coupling as \begin{eqnarray}\label{gDlower} g^{}_\tf{D} \,\gtrsim\, \frac{0.2}{\sqrt{{\cal Q}_X^2 + {\cal Q}_N^2}} \bigg(\frac{\epsilon}{10^{-3}}\bigg)^{\hs{-0.17cm}-1} \bigg(\frac{m^{}_{Z'}}{250\,\text{MeV}}\bigg)^{\hs{-0.15cm}2} \bigg(\frac{m^{}_X}{20\,\text{MeV}}\bigg)^{\hs{-0.17cm}-3/2} ~. \end{eqnarray} Here we have set $x^{}_\tf{f.o.} \simeq 20$ and $g^{}_{\star}(x^{}_\tf{f.o.}\hs{-0.03cm}) \simeq 10.75$.\,\,Employing Eq.\,\eqref{gammae}, we can also determine the highest kinetic decoupling temperature $x^{}_\tf{k.d.}$ of the DM particles from the thermal plasma by the conditions, $\gamma^{}_e(x^{}_\tf{k.d.}\hs{-0.03cm}) \simeq 2 H(x^{}_\tf{k.d.}\hs{-0.03cm})$ \cite{Gondolo:2012vh} and $\gamma^{}_e(x^{}_\tf{f.o.}\hs{-0.03cm}) \simeq H(x^{}_\tf{f.o.}\hs{-0.03cm}){}^{}x_\tf{f.o.}^2\hs{-0.03cm}$.\,\,Solving these equations, we find that $x^{}_\tf{k.d.} \hs{-0.03cm} \simeq x_\tf{f.o.}^{3/2}/\sqrt[4]{2} \,\simeq 75 < x^{}_\tf{f.i.}$, which implies that $\Gamma^{}_\tf{el} < \Gamma^{2\tf{-loop}}_{2 \to 2}$.\footnote{We have checked numerically that by using the general formula of $\gamma^{}_e(T)$ in Ref.\,\cite{Gondolo:2012vh} and the squared scattering amplitudes with $m^{}_e \neq 0$, the $x^{}_\tf{k.d.} \simeq 120-140$ for $m^{}_X \simeq 20-30\,\tx{MeV}$, which is still less than the $x^{}_\tf{f.i.}$.}\,\,Notice that since the total number and entropy of the DM particles are conserved after the chemical freeze-out and their masses are near degenerate, the DM temperatures after the kinetic decoupling are $T^{}_{X,{}^{}N} \propto R^{-2}$ with $R = R(x)$ the cosmic scale factor, just like usual DM in WIMP or SIMP scenarios. 
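The sketch below simply evaluates the thermalization lower bound of Eq.\,\eqref{gDlower} and the kinetic-decoupling estimate $x^{}_\tf{k.d.} \hs{-0.03cm}\simeq x_\tf{f.o.}^{3/2}/\sqrt[4]{2}$ numerically.\,\,The charge assignment ${\cal Q}^{}_X = 2$, ${\cal Q}^{}_N = 3$ (so that $3{\cal Q}^{}_X = 2{\cal Q}^{}_N$, the ratio used for the plots in this section) is an illustrative assumption, as are the reference values of $\epsilon$, $m^{}_{Z'}$ and $m^{}_X$.
\begin{verbatim}
# Quick numerical evaluation of the lower bound on g_D, Eq. (gDlower), and
# of the kinetic-decoupling estimate x_kd ~ x_fo^{3/2} / 2^{1/4}.
# Q_X = 2, Q_N = 3 (i.e. 3 Q_X = 2 Q_N) is an illustrative assumption.
Q_X, Q_N = 2.0, 3.0
eps, m_Zp, m_X = 1.0e-3, 250.0, 20.0     # kinetic mixing; masses in MeV

g_D_min = (0.2 / (Q_X**2 + Q_N**2) ** 0.5
           * (eps / 1e-3) ** -1
           * (m_Zp / 250.0) ** 2
           * (m_X / 20.0) ** -1.5)

x_fo = 20.0
x_kd = x_fo ** 1.5 / 2 ** 0.25           # ~75, below x_f.i. as in the text

print(f"g_D  >~ {g_D_min:.3f}")          # ~0.055 for this charge choice
print(f"x_kd ~  {x_kd:.0f}")
\end{verbatim}
For these inputs the bound evaluates to $g^{}_\tf{D} \gtrsim 0.055$ and $x^{}_\tf{k.d.} \simeq 75$, consistent with the estimate quoted above.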
To achieve the SIMP mechanism, one also needs to suppress the WIMP-like $2 \to 2$ annihilations into SM particles.\,\,In the $r$SIMP model, such $2 \to 2$ processes are $X\hs{-0.03cm}\bar{X} \to e^+e^-$ and $N\hs{-0.03cm}\bar{N} \to e^+e^-$ through the $Z'$ exchange diagrams.\,\,Applying the crossing symmetry to Eqs.\,\eqref{MXeXe} and \eqref{MNeNe}, we can easily get the squared annihilation amplitudes of these processes as \cite{Lehmann:2020lcv} \begin{eqnarray} \overline{\big|{\cal M}_{X\hs{-0.03cm}\bar{X} \to e^+e^-} \hs{-0.03cm}\big|^{\hs{-0.03cm}2}} &=& -{}^{}8\bigg(\frac{c^{}_{X\hs{-0.03cm}e}}{s^{} _X - m^2_{Z'}}\bigg)^{\hs{-0.13cm}2} \Big[{}^{} t^2_{X\hs{-0.03cm}e} + \big(s^{}_X-2{}^{}m_X^2\big) t^{}_{X\hs{-0.03cm}e} + m_X^4 \Big] ~, \label{MXXee} \\[0.1cm] \overline{\big|{\cal M}_{N\hs{-0.03cm}\bar{N} \to e^+e^-} \hs{-0.03cm}\big|^{\hs{-0.03cm}2}} &=& 4 \bigg(\frac{c^{}_{N\hs{-0.03cm}e}}{s^{}_N - m^2_{Z'}} \bigg)^{\hs{-0.13cm}2} \Big[{}^{} t_{N\hs{-0.03cm}e}^2 + \big(s^{}_N - 2{}^{}m_N^2\big) t^{}_{N\hs{-0.03cm}e} + \tfrac{1}{2}{}^{}s_N^2 + m_N^4 \Big] ~, \label{MNNee} \end{eqnarray} where $s^{}_j = \big({}^{}{}^{}p^{}_{\white{\bar{\black{j}}}}+ p^{}_{\bar{j}}{}^{}\big)\raisebox{1pt}{$\hs{-0.05cm}^2$}$ and $t^{}_{je} = \big({}^{}{}^{}p^{}_j - p^{}_e\big)\raisebox{1pt}{$\hs{-0.05cm}^2$}$.\,\,The resultant thermally-averaged annihilation cross sections are calculated as \cite{Cheung:2012gi} \begin{eqnarray} \langle \sigma v \rangle_{\hs{-0.03cm}X\hs{-0.03cm}\bar{X} \to e^+e^-} \,=\, \frac{c_{X\hs{-0.03cm}e}^2}{\pi {}^{} x}\frac{m_X^2}{m_{Z'}^4} ~,\quad \langle \sigma v \rangle_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to e^+e^-} \,=\, \frac{c_{N\hs{-0.03cm}e}^2}{\pi}\frac{m_N^2}{m_{Z'}^4} ~, \end{eqnarray} where we have used the fact that $s^{}_j \simeq 4{}^{}m_j^2 \ll m_{Z'}^2$ in the CM frame of the DM pair.\,\,Since the $\langle \sigma v \rangle_{\hs{-0.03cm}X\hs{-0.03cm}\bar{X} \to e^+e^-}$ is dominated by the $p\,$-wave contribution, the reaction rate of the WIMP-like $2 \to 2$ annihilation is then approximated as \begin{eqnarray} \Gamma_\tf{ann}(x) \,=\, \sum_{j{}^{}={}^{}X,{}^{}N} n^{}_j(x) \langle \sigma v \rangle_{\hs{-0.03cm}j{}^{}{}^{}\bar{j} \to e^+e^-} \approx\, n^{}_N(x) \langle \sigma v \rangle_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to e^+e^-} ~, \end{eqnarray} where $n^{}_j(x) = g^{}_j r_j^3 m_X^3 e^{-x} /(2{}^{}\pi x)^{3/2}$.\,\,Now, to make this reaction subdominant in the $r$SIMP scenario, we demand that $\Gamma_\tf{ann}(x^{}_\tf{f.o.}) \ll H(x^{}_\tf{f.o.}) \simeq \Gamma^{}_{3 \to 2}$ at the freeze-out temperature.\,\,With this requirement and $r^{}_N \sim 1$, we obtain the upper bound on the gauge coupling as \begin{eqnarray}\label{gDupper} g^{}_\tf{D} \,\ll\, \frac{3}{|{\cal Q}^{}_N|} \bigg(\frac{\epsilon}{10^{-3}}\bigg)^{\hs{-0.17cm}-1} \bigg(\frac{m^{}_{Z'}}{250\,\text{MeV}}\bigg)^{\hs{-0.15cm}2} \bigg(\frac{m^{}_X}{20\,\text{MeV}}\bigg)^{\hs{-0.17cm}-3/2} ~. \end{eqnarray} Here again, we have chosen $g^{}_{\star}(x^{}_\tf{f.o.}\hs{-0.03cm}) \simeq 10.75$ with $x^{}_\tf{f.o.} \hs{-0.1cm} \simeq 20$.\,\,Therefore, saturating the marginal values of $g^{}_\tf{D}$ given in Eq.\,\eqref{gDlower}, we can have a successful $r$SIMP scenario. \begin{figure}[t!]
\centering \includegraphics[width=0.48\textwidth]{RNNXX.pdf} \hs{0.2cm} \includegraphics[width=0.48\textwidth]{RXXNN.pdf} \vs{-0.3cm} \caption{The ${\cal R}_{N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}}$ and ${\cal R}_{X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}}$ as functions of $x$ with the parameter inputs given in Figs.\,\ref{fig:YNX_w22} and \ref{fig:YXN_w22}.\,\,Here we have fixed the $g^{}_\tf{D} $ to the minimal value of Eq.\,\eqref{gDlower} with $\epsilon = 10^{-3}$ and $m^{}_{Z'} = 250\,\tx{MeV}$, and $3 {\cal Q}^{}_X = 2 {\cal Q}^{}_N$ for making these plots.} \label{fig:RZ} \end{figure} As we already mentioned in Sec.\,\ref{sec:3}, there are also tree-level $Z'$-mediated diagrams for the $2 \to 2$ processes in addition to the two-loop diagrams.\,\,Using Eq.\,\eqref{Zprime} again, the corresponding thermally-averaged cross sections are calculated as \begin{eqnarray} \langle \sigma v \rangle^{Z'}_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}} &=& \frac {c_{X\hs{-0.03cm}N}^2 m_X^2} {4{}^{}\pi{}^{}m_{Z'}^4} \frac{ \sqrt{r_N^2-1}}{r^{}_N} \Bigg(\hs{-0.05cm} r_N^2-1+ \frac {11-2{}^{}r_N^2} {4{}^{}x} \hs{-0.05cm}\Bigg) \label{cNNXXZ} ~, \\[0.15cm] \langle \sigma v \rangle^{Z'}_{\hs{-0.05cm}X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}} &=& \frac {c_{X\hs{-0.03cm}N}^2 m_X^2}{8{}^{}\pi{}^{}x{}^{}m_{Z'}^4} \sqrt{1- r_N^2} \sx{1.1}{\big(} 2 + r_N^2 \sx{1.1}{\big)} \label{cXXNNZ} ~, \end{eqnarray} where $c^{}_{X\hs{-0.03cm}N} \equiv g_\tf{D}^2 {\cal Q}^{}_X {\cal Q}^{}_N$.\,\,In Fig.\,\ref{fig:RZ}, we show the ratios of the cross sections induced by the $Z'$-mediated diagrams to the ones induced by the two-loop diagrams with the parameter inputs referring to Figs.\,\ref{fig:YNX_w22} and \ref{fig:YXN_w22}, where \begin{eqnarray} {\cal R}^{}_{N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}} \,\equiv\, \frac{\langle \sigma v \rangle^{Z'}_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}}} {\langle \sigma v \rangle^{2\tf{-loop}}_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}}} ~,\quad {\cal R}^{}_{X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}} \,\equiv\, \frac{\langle \sigma v \rangle^{Z'}_{\hs{-0.03cm}X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}}} {\langle \sigma v \rangle^{2\tf{-loop}}_{\hs{-0.03cm}X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}}} ~. 
\end{eqnarray} As indicated, the contribution of the $Z'$-mediated diagram to the $2 \to 2$ process is subdominant to that of the two-loop diagram.\,\,Notice that, unlike the $\lambda_{X\hs{-0.03cm}S}$, we cannot switch $g^{}_\tf{D}$ off to evade the reshuffled mechanism.\,\,As we have discussed in this section, a sufficiently large dark gauge coupling is required to maintain the kinetic equilibrium between the DM and SM particles.\,\,There are a couple of factors that make the ${\cal R}_{N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}}$ and ${\cal R}_{X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}}$ much smaller than unity even though the $\langle \sigma v \rangle^{2\tf{-loop}}_{\hs{-0.03cm}N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}}$ and $\langle \sigma v \rangle^{2\tf{-loop}}_{\hs{-0.03cm}X\hs{-0.03cm}\bar{X} \to N\hs{-0.03cm}\bar{N}}$ are suppressed by the two-loop factor $(4\pi)^8$.\,\,Firstly, we have to choose strong couplings, $\lambda^{}_3{}^{}{}^{}y^{}_N \sim {\cal O}(10)$, to satisfy the relic abundance of DM.\,\,Secondly, the SIMP conditions suggest that $c^{}_{X\hs{-0.03cm}N} \simeq 0.2 {\cal Q}^{}_X {\cal Q}^{}_N / ({\cal Q}_X^2 + {\cal Q}_N^2) \sim 0.02$ with $3 {\cal Q}^{}_X = 2 {\cal Q}^{}_N$.\,\,Thirdly, the $Z'$ in the tree-level graphs is heavier than the $S$ in the two-loop diagrams, where $m^{}_{Z'} \sim 4{}^{}m^{}_S$.\,\,As a result, the ${\cal R}^{}_{N\hs{-0.03cm}\bar{N} \to X\hs{-0.03cm}\bar{X}}$, for instance, is roughly equal to $(4\pi)^8(c_{X\hs{-0.03cm}N}^2/\lambda_3^4{}^{}y_N^4)(m_S^4 / m_{Z'}^4 \hs{-0.03cm}) \ll 1$. \section{Observational signature : DM self-interacting cross section}\label{sec:7} In this model, both the $X$ and $N$ particles can have self-interactions via the contact coupling in Eq.\,\eqref{potential} and the Yukawa coupling in Eq.\,\eqref{Yukawa}, respectively, as displayed in Fig.\,\ref{fig:self}.\,\,There are also self-interactions of DM through the $Z'$-mediated diagrams akin to Fig.\,\ref{fig:NNXXZ}.\,\,However, these contributions are subleading due to the small dark gauge coupling and the heavy $Z'$ mass.\,\,In general, there is no well-defined effective self-interacting cross section for two-component DM scenarios.\,\,Given the near degeneracy of the DM masses, we define the effective self-interacting cross section as follows \begin{eqnarray}\label{selfcs} \frac{\sigma^{}_\tf{self}}{m^{}_\tf{DM}} \,=\, {\cal R}_X^2 \frac{\sigma^{}_X}{m^{}_X} + {\cal R}_N^2 \frac{\sigma^{}_N}{m^{}_N} ~, \end{eqnarray} where ${\cal R}^{}_X$ and ${\cal R}^{}_N$ are the fractions of DM particles given by \begin{eqnarray} {\cal R}^{}_X \,=\, \frac{\Omega^{}_X}{\Omega^{}_X + \Omega^{}_N} ~,\quad {\cal R}^{}_N \,=\, \frac{\Omega^{}_N}{\Omega^{}_X + \Omega^{}_N} ~, \end{eqnarray} and the self-interacting cross sections of $X$ and $N$ are computed as \begin{eqnarray} \sigma^{}_X &=& \tfrac{1}{4} \big( \sigma^{}_{X\hs{-0.03cm}X \to X\hs{-0.03cm}X} + \sigma^{}_{X\hs{-0.03cm}\bar{X} \to X\hs{-0.03cm}\bar{X}} + \sigma^{}_{\bar{X}\hs{-0.03cm}\bar{X} \to \bar{X}\hs{-0.03cm}\bar{X}} \big) \,=\, \frac{\lambda_X^2}{8{}^{}\pi{}^{}m_X^2} ~, \\[0.1cm] \sigma^{}_N &=& \tfrac{1}{4} \big( \sigma^{}_{N\hs{-0.03cm}N \to N\hs{-0.03cm}N} + \sigma^{}_{N\hs{-0.03cm}\bar{N} \to N\hs{-0.03cm}\bar{N}} + \sigma^{}_{\bar{N}\hs{-0.03cm}\bar{N} \to \bar{N}\hs{-0.03cm}\bar{N}} \big) \,=\, \frac{y_N^4}{16{}^{}\pi{}^{}m_X^2}\frac{r_N^2}{r_S^4} ~.
\end{eqnarray} Note that the $\sigma^{}_{N\hs{-0.03cm}N \to N\hs{-0.03cm}N}$ and $\sigma^{}_{\bar{N}\hs{-0.03cm}\bar{N} \to \bar{N}\hs{-0.03cm}\bar{N}}$ are velocity-suppressed.\,\,When ${\cal R}^{}_N$ goes to zero, Eq.\,\eqref{selfcs} reduces to the usual definition of the self-interacting cross section for complex scalar DM. \begin{figure}[t!] \hs{0.1cm} \centering \includegraphics[width=0.4\textwidth]{XXXX.pdf} \hs{-2cm} \includegraphics[width=0.4\textwidth]{NNNN.pdf} \vs{-0.3cm} \caption{The dominant Feynman diagrams of DM self-interacting processes for $X$ and $N$, where the other processes can be obtained by rotating these diagrams.} \label{fig:self} \vs{0.3cm} \end{figure} \begin{table}[hbpt!] \begin{center} \def\arraystretch{1.2} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline ~$\lambda^{}_X$~ & ~$\lambda^{}_S$~ & ~$\lambda^{}_3$~ & ~$y^{}_N$~ & ~$\big(m^{}_X,m^{}_N,m^{}_S\big)/\tx{MeV}$~ & ~${\cal R}^{}_X$~ & ~${\cal R}^{}_N$~ & ~$\sigma^{}_\tf{self}/m^{}_\tf{DM}\,(\tx{cm}^2/\tx{g})$~ \\[0.05cm] \hline ~$4.4$~ & ~$10.0$~ & ~$4.7$~ & ~$3.0$~ & ~$(20,20.02,59.6)$~ & ~$0.56$~ & ~$0.44$~ & ~$6.70$~ \\\hline ~$4.2$~ & ~$9.0$~ & ~$4.4$~ & ~$2.5$~ & ~$(22,22.01,67)$~ & ~$0.40$~ & ~$0.60$~ & ~$2.34$~ \\\hline ~$4.5$~ & ~$8.0$~ & ~$4.5$~ & ~$2.0$~ & ~$(25,25.1,76)$~ & ~$0.66$~ & ~$0.34$~ & ~$4.92$~ \\\hline ~$4.0$~ & ~$10.0$~ & ~$4.3$~ & ~$2.5$~ & ~$(25,25.2,77)$~ & ~$0.86$~ & ~$0.14$~ & ~$6.66$~ \\\hline ~$5.0$~ & ~$9.0$~ & ~$5.0$~ & ~$2.2$~ & ~$(30,30.3,92.4)$~ & ~$0.89$~ & ~$0.11$~ & ~$6.31$~ \\\hline \end{tabular} \caption{The benchmark points in the $r$SIMP model for $r^{}_N > 1$.} \label{tab:2} \end{center} \vs{-0.5cm} \end{table} \begin{table}[hbpt!] \begin{center} \def\arraystretch{1.2} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline ~$\lambda^{}_X$~ & ~$\lambda^{}_S$~ & ~$\lambda^{}_3$~ & ~$y^{}_N$~ & ~$\big(m^{}_X,m^{}_N,m^{}_S\big)/\tx{MeV}$~ & ~${\cal R}^{}_X$~ & ~${\cal R}^{}_N$~ & ~$\sigma^{}_\tf{self}/m^{}_\tf{DM}\,(\tx{cm}^2/\tx{g})$~ \\[0.05cm] \hline ~$5.9$~ & ~$6.2$~ & ~$5.2$~ & ~$2.6$~ & ~$(15,14.9,43.5)$~ & ~$0.01$~ & ~$0.99$~ & ~$0.82$~ \\\hline ~$4.0$~ & ~$8.0$~ & ~$4.0$~ & ~$2.0$~ & ~$(20,19.99,63)$~ & ~$0.28$~ & ~$0.72$~ & ~$1.45$~ \\\hline ~$5.0$~ & ~$4.0$~ & ~$3.9$~ & ~$2.0$~ & ~$(20,19.9,61)$~ & ~$0.06$~ & ~$0.94$~ & ~$0.20$~ \\\hline ~$7.5$~ & ~$4.0$~ & ~$5.4$~ & ~$1.8$~ & ~$(25,24.9,76)$~ & ~$0.07$~ & ~$0.93$~ & ~$0.18$~ \\\hline ~$6.5$~ & ~$6.5$~ & ~$5.6$~ & ~$1.3$~ & ~$(28,27.9,85.4)$~ & ~$0.14$~ & ~$0.86$~ & ~$0.32$~ \\\hline \end{tabular} \caption{The benchmark points in the $r$SIMP model for $r^{}_N < 1$.} \label{tab:3} \end{center} \vs{-0.5cm} \end{table} To alleviate the discrepancy between simulations and observations, several analyses have set bounds on the self-interacting cross section of DM.\,\,For instance, there are constraints of $0.1\,\tx{cm}^2/\tx{g} < \sigma^{}_\tf{self}/m^{}_\tf{DM} < 1\,\tx{cm}^2/\tx{g}$ from Milky Way and cluster scales \cite{Tulin:2013teo}.\,\,The Bullet cluster also imposes a similar upper bound, $\sigma^{}_\tf{self}/m^{}_\tf{DM} < 1\,\tx{cm}^2/\tx{g}$ \cite{Markevitch:2003at,Clowe:2003tk}.\,\,Nevertheless, it has been shown in Ref.\,\cite{Kamada:2016euw} that self-interacting DM together with baryons can explain the diverse rotation curves of spiral galaxies if $\sigma^{}_\tf{self}/m^{}_\tf{DM} = 3\,\tx{cm}^2/\tx{g}$.\,\,Therefore, to cover all of these observations, we consider an optimistic bound, $0.1\,\tx{cm}^2/\tx{g} < \sigma^{}_\tf{self}/m^{}_\tf{DM} < 10\,\tx{cm}^2/\tx{g}$ \cite{Chu:2018fzy,Tulin:2013teo}, in our study pending a consensus
on the value of the DM self-interacting cross section. We list in Tabs.\,\ref{tab:2} and \ref{tab:3} a few benchmark points satisfying all the constraints mentioned above with the predictions of the DM self-interacting cross section,\footnote{The unitarity of the $S$-matrix sets a conservative bound on the amplitude of the self-interacting scattering, $|{\cal M}^{}_\tf{self}| < 16{}^{}\pi$~\cite{Biswas:2021dan,Namjoo:2018oyn}, by which the quartic couplings $\lambda^{}_{X,{}^{}S} < 4\pi$.} in the cases of $m^{}_N > m^{}_X$ and $m^{}_X > m^{}_N$, respectively.\,\,As can be seen in Tab.\,\ref{tab:2}, the prediction of $\sigma^{}_\tf{self}/m^{}_\tf{DM}$ is typically larger than $1\,\tx{cm}^2/\tx{g}$ but still well within the bound, $10\,\tx{cm}^2/\tx{g}$.\,\,This is easy to understand since the density of DM is dominated by the $X$ due to the reshuffled effect and we have to choose a sufficiently large $\lambda^{}_X$ to make the vacuum stable.\,\,In principle, one may consider heavier DM masses to suppress the $\sigma^{}_\tf{self}/m^{}_\tf{DM} \propto1/m_X^3$.\,\,However, we then have to enhance the $\lambda^{}_3$ and $\lambda^{}_X$ at the same time to fulfill the DM relic abundance and the vacuum stability, respectively.\,\,Small values of $\sigma^{}_\tf{self}/m^{}_\tf{DM}$ can only be obtained if the DM masses are highly degenerate, in which case the density of DM is dominated by the $N$ (no reshuffling in this case), as displayed in the second row of Tab.\,\ref{tab:2}.\,\,Hence, there is a tension among the constraints in the case of $m^{}_N > m^{}_X$.\,\,On the other hand, the size of $\sigma^{}_\tf{self}/m^{}_\tf{DM}$ can be smaller than or comparable with $1\,\tx{cm}^2/\tx{g}$ in the case of $m^{}_X > m^{}_N$ as indicated in Tab.\,\ref{tab:3}.\,\,There are two reasons for this occurrence.\,\,Firstly, the reshuffled effect reduces the number of the $X$ particles.\,\,Secondly, the self-interacting cross section of $N$ is suppressed by the mass of the mediator $S$.\,\,Therefore, it is much easier in the latter case to adjust the parameters to satisfy the DM self-interacting cross section and other constraints.\,\,Future observations and simulations may pin down the value of $\sigma^{}_\tf{self}/m^{}_\tf{DM}$, which can be used to test the reshuffled effect in this model.
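The benchmark predictions in Tabs.\,\ref{tab:2} and \ref{tab:3} can be reproduced directly from Eq.\,\eqref{selfcs}; the short sketch below does so for the first row of Tab.\,\ref{tab:2}, converting from natural units ($\tx{MeV}^{-3}$) to $\tx{cm}^2/\tx{g}$ with $\hbar c = 197.327\,\tx{MeV}\cdot\tx{fm}$ and $1\,\tx{MeV} = 1.78266\times 10^{-27}\,\tx{g}$.
\begin{verbatim}
# Evaluate sigma_self/m_DM of Eq. (selfcs) for the first benchmark point
# of Tab. 2 and convert MeV^-3 -> cm^2/g.
import math

HBARC_CM = 1.97327e-11                   # 1 MeV^-1 in cm
MEV_IN_G = 1.78266e-27                   # 1 MeV in g
TO_CM2_PER_G = HBARC_CM**2 / MEV_IN_G    # ~2.18e5

lam_X, y_N = 4.4, 3.0                    # Tab. 2, first row
m_X, m_N, m_S = 20.0, 20.02, 59.6        # MeV
R_X, R_N = 0.56, 0.44                    # DM fractions
r_N, r_S = m_N / m_X, m_S / m_X

sigma_X_over_m = lam_X**2 / (8 * math.pi * m_X**2) / m_X
sigma_N_over_m = y_N**4 / (16 * math.pi * m_X**2) * r_N**2 / r_S**4 / m_N

ratio = (R_X**2 * sigma_X_over_m + R_N**2 * sigma_N_over_m) * TO_CM2_PER_G
print(f"sigma_self/m_DM ~ {ratio:.2f} cm^2/g")   # ~6.70, cf. Tab. 2
\end{verbatim}
Running this indeed returns $\sigma^{}_\tf{self}/m^{}_\tf{DM} \simeq 6.70\,\tx{cm}^2/\tx{g}$, matching the tabulated value.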
\section{Discussions \& Conclusion}\label{sec:8} We discuss some future investigations for the $r$SIMP model.\,\,Since the DM masses are about $20$\,MeV, the DM-$e^-$ scattering experiments can be used to test the allowed parameter space in this model~\cite{Hochberg:2021pkt,Blanco:2021hlm,Griffin:2021znd,Liang:2021zkg}.\,\,According to Ref.\,\cite{Hochberg:2021pkt}, the experimental sensitivity to the DM-$e^-$ scattering cross section can reach $\sigma_e\simeq 8.4 \times 10^{-41}$\,cm$^2$ for $m^{}_{X,N} \sim 20$\,MeV, and $m^{}_{Z'} \sim 250$\,MeV.\,\,This can be translated into $g^2_\tf{D} \epsilon^2 \big(4.95 {}^{} {\cal R}_X {\cal Q}^2_X + 6.50 {}^{} {\cal R}_N {\cal Q}^2_N\big) \lesssim 10^{-6}$ in our $r$SIMP model.\,\,On the other hand, the dark boson $Z'$ has a mass of a few hundred MeV and mainly decays to $X\hs{-0.03cm}\bar{X},N\hs{-0.03cm}\bar{N}$, and $S\hs{-0.01cm}\bar{S}$.\,\,The Belle II~\cite{Belle-II:2018jsg}, KLEVER~\cite{KLEVERProject:2019aks}, LDMX@SLAC~\cite{LDMX:2018cma} and LDMX@CERN~\cite{LDMX:2018cma,Raubenheimer:2018mwt} experiments can be employed to search for invisible decays of the $Z'$\,\cite{Fabbrichesi:2020wbt}.\,\,In particular, the LDMX@CERN experiment can constrain $3.0\times 10^{-6}\leq \epsilon\leq 1.4\times 10^{-4}$ for $0.1\,\tx{GeV} \leq m^{}_{Z'}\leq 1\,\tx{GeV}$. Besides the SIMP scenario, the WIMP scenario can also be realized in this model.\footnote{Fermion and scalar two-component DM with the discrete $\mathbb{Z}^{}_4$ symmetry in the WIMP scenario has recently been studied in Ref.~\cite{Yaguna:2021rds}. However, compared with~\cite{Yaguna:2021rds}, the residual $\mathbb{Z}^{}_4$ symmetry in our model is an accidental symmetry after the gauged U$(1)^{}_\tf{D}$ symmetry breaking, and the phenomenology in our model can be quite distinct from theirs.}\,\,Akin to the vector portal~\cite{Holdom:1985ag,Okun:1982xi} and Higgs portal~\cite{Patt:2006fw,Lebedev:2021xey} DM models, the typical DM annihilation channels are $N\hs{-0.03cm}\bar{N} \to Z' \to f\bar{f}$, $X\hs{-0.03cm}\bar{X} \to Z' \to f\bar{f}$, $X\hs{-0.03cm}\bar{X} \to \phi, h \to f\bar{f},VV, \phi\phi, hh$ and the four-point interactions $X\hs{-0.03cm}\bar{X} \to \phi \phi, hh$.\,\,Besides, the secluded WIMP DM scenario~\cite{Pospelov:2007mp} for the processes $N\hs{-0.03cm}\bar{N}, X\hs{-0.03cm}\bar{X} \to Z'Z'$ can also be achieved when $m^{}_{N,X} > m^{}_{Z'}$.\,\,Also, instead of assuming a tiny mass splitting between $S_\tx{R}$ and $S_\tx{I}$ in Eq.\,\eqref{Scalar_mass}, we can set $m^{}_{S_\tx{I}} \sim m^{}_{X,N} \sim m^{}_{S_\tx{R}}/3$ such that $S_\tx{I}$ can be a DM candidate as well, and our model becomes a three-component DM model.\,\,In addition to the typical scalar DM annihilation channels of the Higgs portal, the new DM semi-annihilation channel $N S_\tx{I} \to \bar{N} Z'$ and the DM self-interaction channel $S_\tx{I} X \to X\hs{-0.03cm}X$ can also occur.\,\,Furthermore, in the SIMP scenario, $S_\tx{I}$ can also annihilate via $S_\tx{I} N\hs{-0.03cm}\bar{N} \to \bar{N}\hs{-0.03cm}\bar{N}$, $S_\tx{I} N\hs{-0.03cm}N \to X\hs{-0.03cm}\bar{X}, S_\tx{I} X\hs{-0.03cm}\bar{X} \to \bar{N}\hs{-0.03cm}\bar{N}$, and their conjugate processes.\,\,These details are beyond the scope of this work and we would like to study them in the future.
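As a quick numerical illustration of the DM-$e^-$ scattering condition quoted above, the sketch below evaluates the left-hand side of $g^2_\tf{D} \epsilon^2 \big(4.95 {}^{} {\cal R}_X {\cal Q}^2_X + 6.50 {}^{} {\cal R}_N {\cal Q}^2_N\big) \lesssim 10^{-6}$, using the minimal $g^{}_\tf{D}$ of Eq.\,\eqref{gDlower} and, as illustrative assumptions, the charges ${\cal Q}^{}_X = 2$, ${\cal Q}^{}_N = 3$ and the DM fractions of the first row of Tab.\,\ref{tab:2}.
\begin{verbatim}
# Quick check of g_D^2 eps^2 (4.95 R_X Q_X^2 + 6.50 R_N Q_N^2) <~ 1e-6.
# Q_X = 2, Q_N = 3 (so 3 Q_X = 2 Q_N) and the fractions R_X, R_N of the
# first benchmark in Tab. 2 are illustrative assumptions.
Q_X, Q_N = 2.0, 3.0
R_X, R_N = 0.56, 0.44
eps = 1.0e-3
g_D = 0.2 / (Q_X**2 + Q_N**2) ** 0.5      # minimal value of Eq. (gDlower)

lhs = g_D**2 * eps**2 * (4.95 * R_X * Q_X**2 + 6.50 * R_N * Q_N**2)
print(f"{lhs:.2e} <= 1e-6 : {lhs <= 1e-6}")   # ~1.1e-7, satisfied
\end{verbatim}
For these inputs the bound is satisfied by about an order of magnitude, suggesting that the minimal thermalization coupling remains compatible with current DM-$e^-$ scattering sensitivities.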
In summary, we propose a novel scalar and fermion two-component SIMP DM model with a $\mathbb{Z}^{}_4$ symmetry.\,\,This residual $\mathbb{Z}^{}_4$ symmetry is an accidental symmetry after the gauged U$(1)^{}_\tf{D}$ symmetry breaking instead of a subgroup via the Krauss-Wilczek mechanism.\footnote{For multi-component DM models in the Krauss-Wilczek manner, see \cite{Choi:2021yps} for U$(1)^{}_\tf{D} \to \mathbb{Z}^{}_2 \times \mathbb{Z}^{}_3$\,; and see \cite{Ho:2016aye} for U$(1)^{}_{\tf{B}-\tf{L}} \to \mathbb{Z}^{}_4$.}\,\,With the help of an extra complex scalar $S$ as a mediator between the SIMP particles $X$ and $N$, we can have the $3 \to 2$ number-changing processes shown in Fig.\,\ref{fig:ann}, which determine the DM relic density in this model.\,\,Note that, in contrast to the mediators in other SIMP models, this complex scalar $S$ is also charged under the $\mathbb{Z}^{}_4$ symmetry.\,\,Moreover, the SIMP DM particles can maintain kinetic equilibrium with the thermal bath until the freeze-out temperature of DM via the vector-portal $Z'$ interactions with SM particles.\,\,To satisfy the thermalization condition and suppress the WIMP-like annihilation rate, the lower and upper bounds on the U$(1)^{}_\tf{D}$ gauge coupling $g^{}_\tf{D}$ are estimated in Eqs.\,\eqref{gDlower} and \eqref{gDupper}, which can be tested in future experiments. An appealing feature of the multi-component SIMP DM model is that an unavoidable two-loop induced $2 \to 2$ process tightly connects to the $3 \to 2$ process.\,\,This process would reshuffle the SIMP DM number densities after the chemical freeze-out of DM.\,\,We underline that the $2 \to 2$ process in this kind of model is important and cannot be neglected.\,\,Including the $2 \to 2$ processes alongside the $3 \to 2$ processes in a multi-component SIMP model will change not only the fractions of DM particles but also the total DM number yields.\,\,As a result, the model parameters required to explain the correct relic density can change dramatically compared with those obtained when only the $3 \to 2$ processes are involved.\,\,Finally, the size of the DM self-interacting cross section is another distinguishing feature of this model.\,\,SIMP models usually predict an inevitably large $\sigma^{}_\tf{self}/m^{}_\tf{DM}$.\,\,However, thanks to the redistribution behavior of the SIMP DM number densities, predictions of $\sigma^{}_\tf{self}/m^{}_\tf{DM} < 1\,\tx{cm}^2/\tx{g}$ are still possible in our model.\,\,Therefore, future observations and simulations of DM self-interactions can help to distinguish the $r$SIMP model from the usual SIMP models. \acknowledgments We would like to thank Fagner C. Correia and Chao-Jung Lee for useful discussions.\,\,This work is supported by KIAS Individual Grants under Grant No.\,PG081201 (SYH), No.\,PG075301 (CTL), and No.\,PG021403 (PK), and also in part by National Research Foundation of Korea (NRF) Grant No.\,NRF2019R1A2C3005009 (PK). \newpage
\section{Conclusion} \label{sec:conclusion} Short- and/or long-term time series forecasting of both energy generation and load demand has been one of the key tools to guide optimal decision-making for planning and operation of utility companies without over/underestimating the capabilities of renewable energy infrastructures. Different from the traditional big data scenarios, one of the most challenging issues in time series renewable energy forecasting in the real world is the shortage of historical data. This can render most prevalent machine learning models ineffective. In addition, the performance of machine learning models is sensitive to the choice of their corresponding hyperparameters w.r.t. the characteristics of the underlying forecasting tasks. Bearing these considerations in mind, this paper developed a \texttt{BiLO-Auto-TSF/ML}\ framework that automatically searches for a high-performance few-shot learning pipeline from a bi-level programming perspective. More specifically, the meta-learning routine at the lower level helps mitigate the small data challenge while the hyperparameter optimization at the upper level helps search for the optimal configuration options to achieve the peak few-shot learning performance. Extensive experiments fully demonstrate the effectiveness of our proposed \texttt{BiLO-Auto-TSF/ML}\ framework in significantly boosting the performance of three prevalent machine learning models for time series renewable energy forecasting with extremely limited historical data. \section{Introduction} \label{sec:introduction} Smart grid technology has become the driving force that enables effective management and distribution of renewable energy sources such as solar, wind and hydrogen. It connects a variety of distributed energy resource assets to the power grid. The relationship between the smart grid and renewable energy revolves around gathering data. With flourishing developments of the Internet of things (IoT), utility companies are able to quickly detect and resolve service issues through continuous self-assessments and self-healing by leveraging heterogeneous and time series data collected on the smart grid. In particular, time series predictive modeling, which provides short- and/or long-term forecasting of both energy generation and load demand, has been one of the key tools to guide optimal decision-making for planning and operation of utility companies without over/underestimating the capabilities of renewable energy infrastructures~\cite{AmjadyKZ10}. As discussed in some recent survey papers (e.g.,~\cite{AkhterMMM19,NatarajanK19,LaiCCP20}), there has been a noticeable amount of effort on applying machine learning methods for time series renewable energy forecasting in the past two decades and beyond. For example, multi-layer perceptron neural networks (NNs) were developed to forecast the daily solar radiation in a time series dataset by using a transfer function in the hidden layers~\cite{PaoliVMN10}. Recurrent neural networks (RNNs) such as long short-term memory (LSTM)~\cite{KongDJHXZ19}, which take historical time series data as the input and predict the trajectory over a certain time horizon, have proved to be effective because they consider both the instantaneous interactions within contiguous time steps and the long-term dependencies stored in memory cells. In~\cite{MajidpourQCGP15}, a $K$-nearest neighbor based time weighted dot product dissimilarity measure was proposed to improve the forecasting accuracy and reduce the processing time.
In~\cite{ChenCL04}, support vector regression (SVR) was applied to make a mid-term time series load prediction. In~\cite{KouGG13}, a sparse online warped Gaussian process regression was developed to provide a short-term probabilistic prediction of wind power generation with a non-Gaussian predictive distribution. To promote the further uptake of machine learning in the smart grid industry, we need to address two imperative challenges. \begin{itemize} \item\underline{\textit{Small data challenge}}: Fitting a time series model with an adequate statistical confidence usually requires sufficient data. Unfortunately, this is hardly met in real-life scenarios. For example, when planning an island grid, due to various technical problems such as equipment failures, measurement errors or restrictions on the daily power supply time~\cite{McLarty17}, there may exist substantial gaps in the collected historical power consumption data, i.e., incomplete or missing data. Even worse, some islands may not have any historical data at all due to lagging infrastructure. All these significantly compromise the prediction performance in time series forecasting. How to use the limited historical data of multiple islands to predict the energy of an island without historical data is a big challenge for conventional prediction models. \item\underline{\textit{Hyperparameter tuning challenge}}: The performance of most, if not all, machine learning methods is very sensitive to their hyperparameters such as neural architectures in NNs, kernel functions and regularization methods in SVR. The optimal choice of hyperparameters also varies across tasks and datasets~\cite{ZophL17}. The black-box nature of machine learning models leads to a considerable barrier for domain experts, who are interested in applying machine learning methods yet do not have sufficient time and/or resources to learn them inside out, to choose the optimal hyperparameters to achieve the state-of-the-art performance. \end{itemize} Bearing these challenges in mind, this paper develops \texttt{BiLO-Auto-TSF/ML}, a first-of-its-kind bi-level programming framework to automate time series forecasting in the smart grid with limited historical data. The key contributions are summarized as follows. \begin{itemize} \item To address the small data challenge, we propose to use meta-learning to improve the generalization performance of the base-learner on unseen tasks by only referring to limited historical data. Although there have been some successful applications of meta-learning in various domains such as object detection~\cite{SnellSZ17,Perez-RuaZHX20}, landmark prediction~\cite{GuiWRM18}, and image/video generation~\cite{GordenBBNT19,WangLTLKC19}, it has rarely been explored in the context of the smart grid, to the best of our knowledge. \item Due to the nested structure between base- and meta-learners, there exist complex interactions among their associated hyperparameters. In this paper, we hypothesize that peak performance may not be achievable if the hyperparameters associated with the base- and meta-learners are set independently. Although there have been some previous attempts on hyperparameter optimization for meta-learning, they mainly consider the ones associated with the meta-learner, such as the learning rate in the inner loop~\cite{Vanschoren18}.
In this paper, our ambition is to automate the optimal design of a few-shot learning\footnote{In this paper, we use meta-learning and few-shot learning interchangeably.} pipeline from a bi-level programming perspective. \item In \texttt{BiLO-Auto-TSF/ML}, the upper-level optimization searches for the optimal hyperparameter settings associated with both base- and meta-learners by a Monte Carlo tree search (MCTS)~\cite{BrownePWLCRTPSC12}, while the lower-level optimization implements a model-agnostic gradient-based meta-learning as done in~\cite{Finnal17}. In other words, any off-the-shelf machine learning model can be used in \texttt{BiLO-Auto-TSF/ML}\ in a plug-in manner. \item To validate the effectiveness of our proposed \texttt{BiLO-Auto-TSF/ML}\ framework, we consider some selected real-world energy forecasting tasks for smart grid infrastructure planning in island areas in the English Channel. In particular, three prevalent machine learning methods are used as the base-learners for a proof-of-concept purpose. Extensive experimental results fully demonstrate the effectiveness of our proposed \texttt{BiLO-Auto-TSF/ML}\ framework for time series forecasting with highly limited historical data. \end{itemize} The rest of this paper is organized as follows. \prettyref{sec:related} provides a pragmatic overview of some related works. \prettyref{sec:proposed} delineates the implementation of our proposed \texttt{BiLO-Auto-TSF/ML}\ framework. \prettyref{sec:setup} gives the experimental setup while the empirical results are presented and analyzed in~\prettyref{sec:results}. Finally, \prettyref{sec:conclusion} concludes this paper and sheds some light on potential future directions. \section*{Acknowledgment} K. Li was supported by UKRI Future Leaders Fellowship (MR/S017062/1), Amazon Research Awards, Royal Society International Exchange Scheme (IES/R2/212077), Alan Turing Fellowship, and EPSRC (2404317). \bibliographystyle{IEEEtran} \section{Automated Few-Shot Learning for Time Series Forecasting} \label{sec:proposed} This section delineates the implementation of our proposed \texttt{BiLO-Auto-TSF/ML}\ framework that automates the design of a few-shot learning pipeline for time series forecasting with limited data from a bi-level programming perspective. We start with the overarching bi-level programming problem formulation considered in this paper. Then, we delineate the algorithmic implementation of the optimization routines at both levels respectively. \subsection{Problem Formulation of Bi-level Programming} \label{sec:problem_formulation} \begin{figure*}[t!] \centering \includegraphics [width=\linewidth]{figs/architecture.pdf} \caption{The overall architecture and workflow of \texttt{BiLO-Auto-TSF/ML}.} \label{fig:architecture} \end{figure*} The \texttt{BiLO-Auto-TSF/ML}\ framework involves a sequence of $l\geq 1$ decisions that choose the hyperparameters associated with the baseline machine learning model along with its parameters, the learning rates of both the inner and outer loops of the meta-learning, the optimization algorithms for minimizing the empirical loss function, and the number of shots in meta-learning, respectively. At the $i$-th decision step ($1\leq i\leq l$), a design component $c_i\in\mathcal{C}_i$ is selected, where $\mathcal{C}_i$ is a finite set of possible alternative options.
The space of hyperparameters associated with $c_i$ is denoted as $\Theta(c_i)$ and a complete pipeline structure is an $l$-tuple $\mathbf{c}=(c_1,\cdots,c_l)\in\mathcal{C}=\mathcal{C}_1\times\cdots\times\mathcal{C}_l$ where $\Theta(\mathbf{c})=\Theta(c_1)\times\cdots\times\Theta(c_l)$ is its associated hyperparameter space. The key challenge of the automated design of a few-shot learning pipeline for time series forecasting considered in this paper is that the combinatorial optimization over the pipeline structure and hyperparameters $\mathbf{c}$ is intertwined with the gradient-based optimization associated with the meta-learning. In this paper, we propose to formulate this intertwined optimization problem as the following bi-level programming problem: \begin{equation} \begin{aligned} &\mathrm{minimize}\quad\mathcal{L}^\mathtt{val}(\mathbf{c},\hat{\boldsymbol{\theta}};\mathcal{D}^\mathtt{val})\\ &\mathrm{subject\ to} \quad \hat{\boldsymbol{\theta}}:=\argmin_{\boldsymbol{\theta}}\mathcal{L}_\mathtt{meta}^\mathtt{tr}(\mathbf{c},\boldsymbol{\theta};\mathcal{D}^\mathtt{tr}) \end{aligned}, \label{eq:blp} \end{equation} where $\mathcal{L}^\mathtt{val}$ and $\mathcal{L}^\mathtt{tr}_\mathtt{meta}$ denote the upper- and lower-level objective functions, and $\mathbf{c}\in\mathcal{C}$ and $\hat{\boldsymbol{\theta}}\in\mathbb{R}^m$ denote the upper- and lower-level variables. More specifically, at the upper level, we aim to identify the best few-shot learning pipeline $\mathbf{c}^\ast$ having the optimal hyperparameters along with the optimal parameters $\hat{\boldsymbol{\theta}}^\ast$ associated with the baseline machine learning model that minimizes $\mathcal{L}^\mathtt{val}$ (i.e., the validation empirical risk) at the end of meta-training, where: \begin{equation} \mathcal{L}^\mathtt{val}(\mathbf{c},\hat{\boldsymbol{\theta}};\mathcal{D}^\mathtt{val})\triangleq\frac{1}{|\mathcal{D}^\mathtt{val}|}\sum_{(\mathbf{x}^i,y^i)\in\mathcal{D}^\mathtt{val}}\ell_\mathbf{c}^\mathtt{val}\Big(y^i,f(\mathbf{x}^i;\hat{\boldsymbol{\theta}})\Big), \end{equation} where $\ell_\mathbf{c}^\mathtt{val}\Big(y^i,f(\mathbf{x}^i;\hat{\boldsymbol{\theta}})\Big)$ is the loss function regarding a given few-shot learning pipeline $\mathbf{c}$ on the validation set. Note that this validation empirical risk requires the parameters of the baseline machine learning model to be optimized by a meta-learning process at the lower level. The training empirical risk associated with the lower-level meta-learning is defined as: \begin{equation} \mathcal{L}^\mathtt{tr}_\mathtt{meta}\triangleq\sum_{\mathcal{D}^\mathtt{tr}_k\sim\mathcal{D}^\mathtt{tr}}\mathcal{L}^\mathtt{tr}(\mathbf{c},\boldsymbol{\theta};\mathcal{D}^\mathtt{tr}_k), \end{equation} where \begin{equation} \mathcal{L}^\mathtt{tr}(\mathbf{c},\boldsymbol{\theta};\mathcal{D}^\mathtt{tr}_k)\triangleq\frac{1}{|\mathcal{D}^\mathtt{tr}_k|}\sum_{(\mathbf{x}^i,y^i)\in\mathcal{D}^\mathtt{tr}_k}\ell_\mathbf{c}^\mathtt{tr}\Big(y^i,f(\mathbf{x}^i;\boldsymbol{\theta})\Big), \end{equation} where $\ell_\mathbf{c}^\mathtt{tr}\Big(y^i,f(\mathbf{x}^i;\boldsymbol{\theta})\Big)$ is the loss function regarding $\mathbf{c}$ on the meta-training set. The overall architecture of \texttt{BiLO-Auto-TSF/ML}\ is given in~\prettyref{fig:architecture}. It consists of two nested levels of optimization routines.
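To make the nested structure of~\prettyref{eq:blp} concrete, the following minimal sketch wraps an outer search over pipeline configurations $\mathbf{c}$ around an inner first-order meta-learning loop on a toy regression problem. Random search stands in here for the MCTS described next, the linear model and synthetic ``islands'' are illustrative stand-ins, and the 80\% support/query split follows the setup of the lower-level routine.
\begin{verbatim}
# Schematic sketch of the bi-level structure of Eq. (blp): outer search
# over configurations c (random search as a stand-in for MCTS) wrapped
# around an inner first-order MAML-style loop.  All tasks/models are toys.
import random
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    """Toy 1-D 'island': y = a*x + b with island-specific a, b."""
    a, b = rng.normal(size=2)
    x = rng.uniform(-1, 1, size=20)
    return x, a * x + b

def loss_and_grad(theta, x, y):
    """Squared loss of a linear model f(x; theta) = theta[0]*x + theta[1]."""
    err = theta[0] * x + theta[1] - y
    return (err**2).mean(), np.array([2 * (err * x).mean(), 2 * err.mean()])

def meta_train(c, tasks, iters=200):
    """Lower level: first-order meta-learning under hyperparameters c."""
    theta = np.zeros(2)
    for _ in range(iters):
        batch = random.sample(tasks, c["n_islands"])
        meta_grad = np.zeros_like(theta)
        for x, y in batch:
            k = int(0.8 * len(x))                 # 80% support split
            _, g = loss_and_grad(theta, x[:k], y[:k])
            theta_prime = theta - c["alpha"] * g  # inner step, Eq. (fogt)
            _, gq = loss_and_grad(theta_prime, x[k:], y[k:])
            meta_grad += gq                       # first-order outer grad
        theta -= c["beta"] * meta_grad / len(batch)  # outer step, Eq. (foga)
    return theta

tasks = [make_task() for _ in range(10)]
x_val, y_val = make_task()                        # held-out "new island"

# Upper level: search the (here tiny) configuration space for the c that
# minimizes the validation risk of Eq. (blp).
space = {"alpha": [0.01, 0.05], "beta": [0.01, 0.1], "n_islands": [3, 5]}
best = min(
    ({k: random.choice(v) for k, v in space.items()} for _ in range(8)),
    key=lambda c: loss_and_grad(meta_train(c, tasks), x_val, y_val)[0],
)
print("best configuration:", best)
\end{verbatim}
The inner loop corresponds to the lower-level routine of \prettyref{sec:lower} and the outer loop to a crude version of the upper-level search of \prettyref{sec:upper}; the full framework replaces the random sampler with MCTS and the toy model with the baseline machine learning model.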
Given the discrete nature of the upper-level hyperparameter optimization problem, \texttt{BiLO-Auto-TSF/ML}\ applies a Monte-Carlo tree search (MCTS), as delineated in~\prettyref{sec:upper}, to explore the hyperparameter space, gradually navigating the exploration towards the most promising regions of the search tree. As for the lower-level optimization, we develop a gradient-based meta-learning approach as in~\cite{Finnal17}. As detailed in~\prettyref{sec:lower}, it identifies promising initial parameters for the baseline machine learning model to adapt to the target task(s) with limited data. \subsection{Upper-level Optimization Routine} \label{sec:upper} In view of the discrete nature of the search space of hyperparameters, this paper considers using a tree-based search to implement the hyperparameter optimization at the upper level. As shown in~\prettyref{fig:architecture}, the MCTS algorithm iterates over the following four steps. \begin{itemize} \item\underline{\textit{Selection}}: The MCTS algorithm starts from the root node $n_\mathtt{r}$ and recursively selects internal nodes downward until it reaches a leaf node $n_\mathtt{l}$. In particular, the tree-path from $n_\mathtt{r}$ to an internal node represents a partial solution, i.e., an incomplete pipeline. For each parent (non-leaf) node $n_\mathtt{p}$, its child node $n_\mathtt{c}\in\mathcal{N}_\mathtt{c}$ is selected by maximizing the upper confidence bound for trees~\cite{Auer03}: \begin{equation} n_\mathtt{c}:=\argmax_{n_\mathtt{c}\in\mathcal{N}_\mathtt{c}} \Bigg\{\frac{Q(n_\mathtt{c})}{\mathbb{N}(n_\mathtt{c})}+\alpha\sqrt{\frac{\ln \mathbb{N}(n_\mathtt{p})}{\mathbb{N}(n_\mathtt{c})}}\Bigg\}, \end{equation} where $Q(n_\mathtt{c})$ is the expected reward for $n_\mathtt{c}$, $\mathbb{N}(n_\mathtt{c})$ and $\mathbb{N}(n_\mathtt{p})$ are the numbers of times that $n_\mathtt{c}$ and $n_\mathtt{p}$ have been visited, respectively, and $\alpha$ is a parameter that controls the trade-off between exploration and exploitation. \item\underline{\textit{Expansion}}: Once a leaf node is reached, one or more child nodes will be appended to expand the tree structure. In this paper, we apply the rapid action value estimation method~\cite{GellyS11} to select the new nodes. The number of child nodes associated with its parent node $n_\mathtt{p}$ is selected and constructed according to a progressive widening trick~\cite{AugerCT13}. In particular, a new child node can be appended if and only if $\lceil \mathbb{N}(n_\mathtt{p})^\kappa\rceil$ increases by one, where $0<\kappa<1$ is a constant coefficient. \item\underline{\textit{Simulation}}: If the expansion step finishes but a terminal node is not yet reached (i.e., we come up with an incomplete few-shot learning pipeline), a default policy (we use a random sampling strategy in this paper) is applied to select the remaining options until it reaches a terminal node. Thereafter, given the generated few-shot learning pipeline $\mathbf{c}$, its reward $Q(\mathbf{c})$ is evaluated as the validation empirical risk. In particular, the parameters of the baseline machine learning model are optimized by a meta-learning process at the lower level. \item\underline{\textit{Back-propagation}}: The reward value obtained in the simulation step is back-propagated to update the $\mathbb{N}$ and $Q$ values associated with all nodes along the visited tree-path until the root node.
In particular, for each visited node $n$, the update rule is as follows: \begin{equation} \begin{aligned} \mathbb{N}(n) &:= \mathbb{N}(n)+1, \\ Q(n) &:= Q(n)+Q(\mathbf{c}). \end{aligned} \end{equation} \end{itemize} Note that the above four steps iterate until the computational budget is exhausted. \subsection{Lower-level Optimization Routine} \label{sec:lower} As discussed in Sections~\ref{sec:problem_formulation} and~\ref{sec:upper}, the lower-level optimization aims to train a machine learning model with limited historical data. In the following paragraphs, we will elaborate on the general setup of our meta-learning along with the meta-training process. \begin{itemize} \item\underline{\textit{Meta-learning setup}}: As introduced in~\prettyref{sec:introduction}, when considering the planning of island grids, a key challenge is the shortage of historical data that renders the classic machine learning pipeline largely ineffective. This motivates us to use meta-learning to mitigate this small data challenge. Formally, let us denote the data collected from $T$ islands as $\mathcal{T}:=\{\mathcal{T}_i\}_{i=1}^T$ to constitute the training dataset. In particular, we assume that $\mathcal{T}$ is sampled from a fixed distribution $\mathbb{P}(\mathcal{T})$. For each island, the corresponding training dataset is constituted as $\mathcal{D}^\mathtt{tr}_i:=\mathcal{D}^{\mathtt{tr}_s}_i \bigcup\mathcal{D}^{\mathtt{tr}_q}_i$ where $i\in\{1,\cdots,T\}$, and $\mathcal{D}^{{\mathtt{tr}_s}}_i$ and $\mathcal{D}^{{\mathtt{tr}_q}}_i$ denote the support and query sets respectively. In this paper, we have $|\mathcal{D}^{\mathtt{tr}_s}_i|:=|\mathcal{D}^{\mathtt{tr}}_i|\times 80\%$. \item\underline{\textit{Meta-learning process}}: Inspired by~\cite{Finnal17}, we develop a gradient-based meta-learning process where the parameters $\boldsymbol{\theta}$ of the baseline machine learning model are updated as: \begin{equation} \boldsymbol{\theta}^{\prime}:=\boldsymbol{\theta}-\alpha \nabla_{\boldsymbol{\theta}}\mathcal{L}^\mathtt{tr}(\mathbf{c},\boldsymbol{\theta};\mathcal{D}^\mathtt{tr_s}_k), \label{eq:fogt} \end{equation} where $\alpha$ is the inner-loop learning rate and $\nabla_{\boldsymbol{\theta}}\mathcal{L}^\mathtt{tr}(\mathbf{c},\boldsymbol{\theta};\mathcal{D}^\mathtt{tr_s}_k)$ is the gradient of the loss function regarding the support set $\mathcal{D}^\mathtt{tr_s}_k$, where $k$ indexes the sampled islands. \begin{itemize} \item During the meta-training phase, we randomly pick $N\leq T$ islands, each of which contains $K$ data instances drawn from the corresponding island to constitute $\mathcal{D}_k^\mathtt{tr}$ where $k\in\{1,\cdots,N\}$.
\subsection{Lower-level Optimization Routine}
\label{sec:lower}

As discussed in Sections~\ref{sec:problem_formulation} and~\ref{sec:upper}, the lower-level optimization aims to train a machine learning model with limited historical data. In the following paragraphs, we elaborate on the general setup of our meta-learning approach along with the meta-training process.
\begin{itemize}
    \item\underline{\textit{Meta-learning setup}}: As introduced in~\prettyref{sec:introduction}, when considering the planning of island grids, a key challenge is the shortage of historical data, which renders the classic machine learning pipeline largely ineffective. This motivates us to use meta-learning to mitigate this small-data challenge. Formally, let us denote the data collected from $T$ islands as $\mathcal{T}:=\{\mathcal{T}_i\}_{i=1}^T$, which constitutes the training dataset. In particular, we assume that $\mathcal{T}$ is sampled from a fixed distribution $\mathbb{P}(\mathcal{T})$. For each island, the corresponding training dataset is constituted as $\mathcal{D}^\mathtt{tr}_i:=\mathcal{D}^{\mathtt{tr}_s}_i \bigcup\mathcal{D}^{\mathtt{tr}_q}_i$, where $i\in\{1,\cdots,T\}$, and $\mathcal{D}^{{\mathtt{tr}_s}}_i$ and $\mathcal{D}^{{\mathtt{tr}_q}}_i$ denote the support and query sets, respectively. In this paper, we have $|\mathcal{D}^{\mathtt{tr}_s}_i|:=|\mathcal{D}^{\mathtt{tr}}_i|\times 80\%$.
    \item\underline{\textit{Meta-learning process}}: Inspired by~\cite{Finnal17}, we develop a gradient-based meta-learning process where the parameters $\boldsymbol{\theta}$ of the baseline machine learning model are updated as:
        \begin{equation}
            \boldsymbol{\theta}^{\prime}:=\boldsymbol{\theta}-\alpha \nabla_{\boldsymbol{\theta}}\mathcal{L}^\mathtt{tr}(\mathbf{c},\boldsymbol{\theta};\mathcal{D}^\mathtt{tr_s}_k),
            \label{eq:fogt}
        \end{equation}
        where $\alpha$ is the inner-loop learning rate and $\nabla_{\boldsymbol{\theta}}\mathcal{L}^\mathtt{tr}(\mathbf{c},\boldsymbol{\theta};\mathcal{D}^\mathtt{tr_s}_k)$ is the gradient of the loss function with respect to the support set $\mathcal{D}^\mathtt{tr_s}_k$, where $k$ indexes the sampled islands.
        \begin{itemize}
            \item During the meta-training phase, we randomly sample $N\leq T$ islands, each of which contributes $K$ data instances drawn from the corresponding island to constitute $\mathcal{D}_k^\mathtt{tr}$, where $k\in\{1,\cdots,N\}$. Then, the model parameters are updated by minimizing the loss function associated with $\boldsymbol{\theta}$ across the query sets of the $N$ islands sampled from $\mathbb{P}(\mathcal{T})$, defined as:
                \begin{equation}
                    \mathcal{L}^\mathtt{tr}_\mathtt{meta}(\mathbf{c},\boldsymbol{\theta}^{\prime};\mathcal{D}^\mathtt{tr_q}_k)\triangleq\sum_{\mathcal{D}^\mathtt{tr_q}_k\sim\mathcal{D}^\mathtt{tr_q}}\mathcal{L}^\mathtt{tr}(\mathbf{c},\boldsymbol{\theta}^{\prime};\mathcal{D}^\mathtt{tr_q}_k),
                    \label{eq:fogopt}
                \end{equation}
                where \prettyref{eq:fogopt} is calculated by using the updated parameters $\boldsymbol{\theta}^{\prime}$ on the given query sets $\mathcal{D}^\mathtt{tr_q}$, and the loss function regarding the $k$-th island is defined as:
                \begin{equation}
                    \mathcal{L}^\mathtt{tr}(\mathbf{c},\boldsymbol{\theta}^\prime;\mathcal{D}^\mathtt{tr_q}_k):=\sum_{(\mathbf{x}^{i},y^{i})\in \mathcal{D}^\mathtt{tr_q}_k}||f(\mathbf{x}^{i};\boldsymbol{\theta}^\prime)-y^{i}||^2_2.
                    \label{eq:lossfunc}
                \end{equation}
                Accordingly, the parameters $\hat{\boldsymbol{\theta}}$ of the baseline machine learning model are updated as:
                \begin{equation}
                    \hat{\boldsymbol{\theta}}:=\boldsymbol{\theta}-\beta \nabla_{\boldsymbol{\theta}}\mathcal{L}^\mathtt{tr}_\mathtt{meta}(\mathbf{c},\boldsymbol{\theta}^{\prime};\mathcal{D}^\mathtt{tr_q}_k),
                    \label{eq:foga}
                \end{equation}
                where $\beta$ is the outer-loop learning rate and $\hat{\boldsymbol{\theta}}$ denotes the updated parameters of the baseline machine learning model after a one-step gradient descent.
            \item After the meta-training phase, the updated parameters $\hat{\boldsymbol{\theta}}$ are fine-tuned on the validation sets $\mathcal{D}^\mathtt{val}$ with a few gradient descent steps as:
                \begin{equation}
                    \boldsymbol{\theta}^\ast:=\hat{\boldsymbol{\theta}}-\gamma \nabla_{\hat{\boldsymbol{\theta}}} \mathcal{L}^\mathtt{val}_\mathtt{meta}(\mathbf{c},\hat{\boldsymbol{\theta}};\mathcal{D}^\mathtt{val}),
                    \label{eq:ftu}
                \end{equation}
                where $\gamma$ is the validation learning rate.
        \end{itemize}
\end{itemize}
In the end, $\boldsymbol{\theta}^\ast$ is used as the optimal parameter vector of the baseline machine learning model for predicting the unseen data collected from a new island. The workflow of the lower-level optimization is illustrated in the lower-level optimization part of~\prettyref{fig:architecture}, and the pseudo-code is given in~\prettyref{alg:BML}; a minimal code sketch of the first-order update loop follows.
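The sketch below shows one meta-training iteration in first-order form, assuming a PyTorch regression \texttt{model} and a \texttt{loss\_fn} such as the squared error in~(\ref{eq:lossfunc}); the task batch and the learning-rate values are placeholders, not our exact configuration.

\begin{verbatim}
import torch

def meta_train_step(model, loss_fn, tasks, alpha=0.01, beta=0.001):
    """One outer iteration of first-order meta-learning over N island tasks.

    tasks: list of ((x_support, y_support), (x_query, y_query)) tuples.
    """
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    theta = [p.detach().clone() for p in model.parameters()]

    for (x_s, y_s), (x_q, y_q) in tasks:
        # Inner loop (eq. fogt): one gradient step on the support set.
        support_loss = loss_fn(model(x_s), y_s)
        grads = torch.autograd.grad(support_loss, list(model.parameters()))
        with torch.no_grad():
            for p, g in zip(model.parameters(), grads):
                p -= alpha * g  # theta' = theta - alpha * grad

        # Outer objective (eq. fogopt): query loss at the adapted theta'.
        query_loss = loss_fn(model(x_q), y_q)
        grads_q = torch.autograd.grad(query_loss, list(model.parameters()))
        for mg, g in zip(meta_grads, grads_q):
            mg += g  # first-order: gradients at theta' stand in for theta

        # Restore theta before adapting to the next island.
        with torch.no_grad():
            for p, t in zip(model.parameters(), theta):
                p.copy_(t)

    # Outer update (eq. foga): theta_hat = theta - beta * summed query grads.
    with torch.no_grad():
        for p, mg in zip(model.parameters(), meta_grads):
            p -= beta * mg
\end{verbatim}

The fine-tuning step of~(\ref{eq:ftu}) can reuse the same inner-loop update on $\mathcal{D}^\mathtt{val}$ for a few steps with learning rate $\gamma$.

\begin{algorithm}[t!]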
\caption{Model-agnostic gradient-based meta-learning at the lower level of \texttt{BiLO-Auto-TSF/ML}.}
\label{alg:BML}
\KwIn{Meta-training dataset $\mathcal{D}^\mathtt{tr}=(\mathcal{D}^{\mathtt{tr}_s},\mathcal{D}^{\mathtt{tr}_q})$, meta-validation dataset $\mathcal{D}^\mathtt{val}$, learning rates $\alpha$, $\beta$, $\gamma$ of the base- and meta-learners}
\KwOut{Predicted outputs}
\textbf{Initialize} model parameter $\boldsymbol{\theta}$;\\
\For{$iteration \leftarrow 1, 2, \ldots$}{
    Sample $N$ islands randomly from the $T$ islands;\\
    \For{$k \leftarrow 1$ \KwTo $N$}{
        Sample support sets $\mathcal{D}^{\mathtt{tr}_s}_k$;\\
        Compute the loss $\mathcal{L}^\mathtt{tr}(\mathbf{c},\boldsymbol{\theta};\mathcal{D}^\mathtt{tr_s}_k)$;\\
        Obtain $\boldsymbol{\theta}^{\prime}$ via the gradient descent step in~(\ref{eq:fogt});\\
        Sample query sets $\mathcal{D}^{\mathtt{tr}_q}_k$;\\
        Compute the loss $\mathcal{L}^\mathtt{tr}_\mathtt{meta}(\mathbf{c},\boldsymbol{\theta}^{\prime};\mathcal{D}^\mathtt{tr_q}_k)$;\\
    }
    Obtain $\hat{\boldsymbol{\theta}}$ via gradient descent on $\mathcal{L}^\mathtt{tr}_\mathtt{meta}(\mathbf{c},\boldsymbol{\theta}^{\prime};\mathcal{D}^\mathtt{tr_q}_k)$ with respect to $\boldsymbol{\theta}$;\\
}
Obtain the parameters $\boldsymbol{\theta}^*$ on $\mathcal{D}^\mathtt{val}$ via~(\ref{eq:ftu}) with a few gradient descent steps;\\
Feed the data into the machine learning model to predict the outputs for the new island;\\
\Return Prediction results
\end{algorithm}

\section{Related Works}
\label{sec:related}

This section provides a pragmatic overview of selected developments in both time series forecasting for smart grids and hyperparameter optimization for meta-learning.

\subsection{Time Series Forecasting in Smart Grid}
\label{sec:related_sg}

Forecasting of electric load demand or renewable energy generation is critical to the operation and management of a smart grid. Based on the length of the forecasting horizon, predictive modeling can be divided into short-, mid-, and long-term forecasting~\cite{ZhengXZL17}. In recent years, NNs have been widely applied to extract latent information for building prediction models, e.g., the dynamic choice artificial neural network model~\cite{WangLSZ16}, the generalized regression neural network~\cite{LiW12}, and nonlinear autoregressive neural network models with exogenous input (NARX)~\cite{BuitragoA17}. Nevertheless, NNs are notorious for overfitting and also suffer from local optima during backpropagation~\cite{ArifJANGF20}. To mitigate the overfitting problem by pooling data to increase its diversity and volume, Shi et al.~\cite{ShiXL18} proposed a pooling-based deep recurrent neural network for short-term household load forecasting. Moon et al.~\cite{MoonJRRH20} proposed to combine multiple deep neural network models with multiple hidden layers and to select the model with the best prediction performance. In~\cite{Aly20}, Aly proposed a hybrid of wavelet neural network and Kalman filter for short-term load forecasting problems. Mid-term forecasting is used to coordinate load dispatch and to balance load demand against renewable energy generation~\cite{KhuntiaRM16}. Jiang et al.~\cite{WeiHLHH19} proposed a dynamic Bayesian network for the mid-term forecasting problem of predicting the yearly peak-time load. In~\cite{GrzegorzPS21}, a hybrid deep learning model for mid-term forecasting was proposed that combines exponential smoothing (ETS), multi-layer LSTM, and ensemble learning, and has shown competitive performance against models such as ARIMA and ETS.
Long-term forecasting is used to predict the power consumption and generation over horizons ranging from a few years to a couple of decades for system planning and expansion in a smart grid. In~\cite{AgrawalMT18} and~\cite{ZhengXZL17}, variants of RNN have been proposed for long-term time series load forecasting and have shown better results than other forecasting methods such as NARX and SVR.

\subsection{Hyperparameter Optimization for Meta-Learning}
\label{sec:related_m}

Meta-learning has proven to be effective in multi-task scenarios, where task-agnostic knowledge is extracted from tasks drawn from the same distribution with small datasets and is used to find good starting parameters of the baseline machine learning model for new tasks~\cite{HospedalesAMS20}. It has been widely appreciated that the performance of machine learning is sensitive to the choice of the corresponding hyperparameters. For example, existing gradient-based meta-learning methods usually rely on the choice of an appropriate optimizer to fine-tune the parameters of the meta-learner. Furthermore, there are various hyperparameters associated with the base-learner, such as the neural architecture and the learning rate, whose configuration can influence the predictive performance. In the past decade, many efforts have been devoted to hyperparameter optimization for meta-learning. For example, in order to reduce the sensitivity to the hyperparameters, Li et al.~\cite{LiZCL17} proposed to use a stochastic gradient descent method to update the inner-loop learning rate of the meta-learner. The experimental results have shown the effectiveness of hyperparameter optimization for the meta-learner. In~\cite{AntoniouES18}, Antoniou et al. proposed an improved gradient-based meta-learning method in which the inner-loop learning rate is updated according to the performance of the selected model. In~\cite{RusuRSVPOH18}, Rusu et al. proposed a latent embedding optimization approach to optimize a range of hyperparameters in meta-learning. The experimental results have shown superior performance against several gradient-based hyperparameter adaptation methods. Franceschi et al.~\cite{FranceschiFSGP18} proposed a bi-level programming framework to optimize the parameters of a neural network along with the hyperparameters of a meta-learner in a concurrent manner. In~\cite{BaikCCKL20}, Baik et al. proposed a fast adaptation approach that predicts adaptive hyperparameters by using the current parameters and their gradients. Starting from a random initialization, the proposed method with adaptive hyperparameter learning outperforms other existing algorithms.

\begin{remark}
Most, if not all, existing time series forecasting models in the smart grid literature require a sufficient amount of historical data. Unfortunately, this requirement can hardly be met in real-world applications where the available data are usually scarce, especially for power system design and planning in remote areas such as an isolated island network.
\end{remark}

\begin{remark}
From the above literature review, we find that the existing studies on hyperparameter optimization only take the hyperparameters in the inner loop of meta-learning into consideration. However, it is not difficult to envisage that there exist certain dependencies between the hyperparameters associated with both the base- and meta-learners. Unfortunately, the concurrent optimization w.r.t. these two types of hyperparameters has been largely ignored in the literature.
\end{remark}

\section{Results and Discussions}
\label{sec:results}

Our empirical study is driven by the following five research questions (RQs).
\begin{itemize}
    \item\underline{\textbf{RQ1}}: Does the \texttt{BiLO-Auto-TSF/ML}\ framework work for different types of energy sources?
    \item\underline{\textbf{RQ2}}: Can meta-learning alone help a base-learner handle the small-data challenge?
    \item\underline{\textbf{RQ3}}: What is the added value of involving different types of configuration options in the hyperparameter optimization of a few-shot learning pipeline?
    \item\underline{\textbf{RQ4}}: What are the benefits of the bi-level programming formulation in \texttt{BiLO-Auto-TSF/ML}?
    \item\underline{\textbf{RQ5}}: What is the impact of the computational budget allocated to the upper-level optimization?
\end{itemize}

\subsection{Performance Evaluation of the Effectiveness of the Proposed \texttt{BiLO-Auto-TSF/ML}\ Framework}
\label{sec:rq1}

\subsubsection{Methods}
\label{sec:methods_rq1}

As introduced in~\prettyref{sec:setup}, by using \texttt{NN}, \texttt{SVR}, and \texttt{LSTM} as the base-learners, we come up with three instances of our \texttt{BiLO-Auto-TSF/ML}\ framework, denoted as \texttt{BiLO-Auto-TSF/ML-NN}, \texttt{BiLO-Auto-TSF/ML-SVR}, and \texttt{BiLO-Auto-TSF/ML-LSTM}, respectively. Also as described in~\prettyref{sec:setup}, one week of historical data of wind generation, PV generation, and load demand is used to constitute the training dataset, while the performance of the different forecasting models is validated on the island of Lundy over a period of $24$ hours.

\subsubsection{Results}
\label{sec:results_rq1}

\begin{table}[t!]
  \scriptsize
  \centering
  \caption{Comparison results of the MSE values obtained by different models for the forecasting tasks of wind generation, PV generation, and load demand.}
  \begin{tabular}{|c|c|c|c|}
    \cline{2-4} \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{Wind generation} \\\cline{2-4}
    \multicolumn{1}{c|}{} & $n^{\mathrm{g}}=1$ & $n^{\mathrm{g}}=2$ & $n^{\mathrm{g}}=10$ \\ \hline
    \texttt{NN} & 1.041E-1(7.01E-3)$^{\dagger}$ & 9.453E-2(5.82E-4)$^{\dagger}$ & 7.845E-2(4.42E-4)$^{\dagger}$ \\ \hline
    \texttt{ML-NN} & 5.922E-2(3.56E-3)$^{\dagger}$ & 5.33E-2(5.71E-4)$^{\dagger}$ & 3.925E-2(8.38E-5)$^{\dagger}$ \\ \hline
    \texttt{BiLO$^\ast$-NN} & 5.248E-2(7.32E-5)$^{\dagger}$ & 4.736E-2(2.52E-6)$^{\dagger}$ & 3.624E-2(4.82E-6)$^{\dagger}$ \\ \hline\hline
    \texttt{SVR} & 1.082E-1(1.84E-3)$^{\dagger}$ & 9.845E-2(8.12E-4)$^{\dagger}$ & 7.919E-2(1.12E-4)$^{\dagger}$ \\ \hline
    \texttt{ML-SVR} & 6.017E-2(1.32E-4)$^{\dagger}$ & 5.761E-2(4.74E-4)$^{\dagger}$ & 4.233E-2(5.71E-5)$^{\dagger}$ \\ \hline
    \texttt{BiLO$^\ast$-SVR} & 5.217E-2(6.69E-5)$^{\dagger}$ & 4.754E-2(2.62E-6)$^{\dagger}$ & 3.546E-2(3.59E-6)$^{\dagger}$ \\ \hline\hline
    \texttt{LSTM} & 7.684E-2(6.82E-3)$^{\dagger}$ & 6.892E-2(9.72E-4)$^{\dagger}$ & 5.622E-2(3.73E-4)$^{\dagger}$ \\ \hline
    \texttt{ML-LSTM} & 5.247E-2(8.25E-4)$^{\dagger}$ & 4.818E-2(7.78E-4)$^{\dagger}$ & 3.232E-2(3.58E-5)$^{\dagger}$ \\ \hline
    \texttt{BiLO$^\ast$-LSTM} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{4.125E-2(9.55E-5)}} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{3.844E-2(6.87E-6)}} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{2.476E-2(8.55E-6)}} \\ \hline
    \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{PV generation} \\\cline{2-4}
    \multicolumn{1}{c|}{} & $n^{\mathrm{g}}=1$ & $n^{\mathrm{g}}=2$ & $n^{\mathrm{g}}=10$ \\ \hline
    \texttt{NN} & 1.313E-1(2.42E-2)$^{\dagger}$ & 1.157E-1(1.92E-2)$^{\dagger}$ & 1.049E-1(1.15E-2)$^{\dagger}$ \\ \hline
    \texttt{ML-NN} & 3.621E-2(1.25E-3)$^{\dagger}$ & 2.972E-2(4.82E-4)$^{\dagger}$ & 2.238E-2(7.48E-5)$^{\dagger}$ \\ \hline
    \texttt{BiLO$^\ast$-NN} & 2.592E-2(3.72E-5)$^{\dagger}$ & 2.276E-2(1.26E-6)$^{\dagger}$ & 1.785E-2(5.34E-6)$^{\dagger}$ \\ \hline\hline
    \texttt{SVR} & 1.261E-1(3.42E-2)$^{\dagger}$ & 1.077E-1(5.67E-2)$^{\dagger}$ & 9.854E-2(4.39E-2)$^{\dagger}$ \\ \hline
    \texttt{ML-SVR} & 3.443E-2(3.11E-3)$^{\dagger}$ & 2.823E-2(7.25E-4)$^{\dagger}$ & 2.107E-2(4.65E-5)$^{\dagger}$ \\ \hline
    \texttt{BiLO$^\ast$-SVR} & 2.318E-2(1.72E-5)$^{\dagger}$ & 2.179E-2(3.85E-5)$^{\dagger}$ & 1.762E-2(5.36E-6)$^{\dagger}$ \\ \hline\hline
    \texttt{LSTM} & 5.482E-2(1.01E-3)$^{\dagger}$ & 4.743E-2(7.72E-3)$^{\dagger}$ & 3.934E-2(8.25E-3)$^{\dagger}$ \\ \hline
    \texttt{ML-LSTM} & 3.122E-2(6.23E-3)$^{\dagger}$ & 2.173E-2(8.45E-4)$^{\dagger}$ & 1.775E-2(5.58E-5)$^{\dagger}$ \\ \hline
    \texttt{BiLO$^\ast$-LSTM} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{2.215E-2(6.54E-6)}} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{1.943E-2(7.25E-6)}} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{1.372E-2(8.66E-6)}} \\ \hline
    \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{Load demand} \\ \cline{2-4}
    \multicolumn{1}{c|}{} & $n^{\mathrm{g}}=1$ & $n^{\mathrm{g}}=2$ & $n^{\mathrm{g}}=10$ \\ \hline
    \texttt{NN} & 7.518E-2(3.52E-3)$^{\dagger}$ & 7.214E-2(2.67E-3)$^{\dagger}$ & 6.278E-2(1.78E-4)$^{\dagger}$ \\ \hline
    \texttt{ML-NN} & 6.338E-2(3.12E-3)$^{\dagger}$ & 5.682E-2(6.45E-4)$^{\dagger}$ & 5.017E-2(8.62E-5)$^{\dagger}$ \\ \hline
    \texttt{BiLO$^\ast$-NN} & 5.231E-2(6.54E-5)$^{\dagger}$ & 4.776E-2(9.53E-5)$^{\dagger}$ & 3.890E-2(3.42E-6)$^{\dagger}$ \\ \hline\hline
    \texttt{SVR} & 6.945E-2(4.01E-3)$^{\dagger}$ & 6.522E-2(6.84E-4)$^{\dagger}$ & 6.116E-2(4.12E-4)$^{\dagger}$ \\ \hline
    \texttt{ML-SVR} & 6.229E-2(2.23E-3)$^{\dagger}$ & 5.943E-2(3.67E-4)$^{\dagger}$ & 5.334E-2(5.66E-5)$^{\dagger}$ \\ \hline
    \texttt{BiLO$^\ast$-SVR} & 5.317E-2(2.42E-5)$^{\dagger}$ & 4.882E-2(6.76E-6)$^{\dagger}$ & 3.631E-2(7.66E-6)$^{\dagger}$ \\ \hline\hline
    \texttt{LSTM} & 6.853E-2(1.71E-3)$^{\dagger}$ & 6.659E-2(4.58E-4)$^{\dagger}$ & 4.547E-2(5.87E-4)$^{\dagger}$ \\ \hline
    \texttt{ML-LSTM} & 4.828E-2(1.04E-3)$^{\dagger}$ & 4.113E-2(5.75E-4)$^{\dagger}$ & 3.282E-2(3.42E-6)$^{\dagger}$ \\ \hline
    \texttt{BiLO$^\ast$-LSTM} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{4.254E-2(3.65E-6)}} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{3.123E-2(5.72E-6)}} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{2.524E-2(6.88E-6)}} \\ \hline
  \end{tabular}
  \begin{tablenotes}
    \footnotesize
    \item[1] $^{\dagger}$ indicates that the best result (highlighted in bold face with a shaded background) is significantly better than the corresponding result according to the Wilcoxon rank-sum test at the $5\%$ significance level.
  \end{tablenotes}
  \label{tab:overall_performance}%
\end{table}%

\begin{figure*}[t!]
  \centering
  \includegraphics[width=\linewidth]{figs/comparison_three_models.pdf}
  \caption{Comparison of the forecasting performance of different models for wind generation, PV generation, and load demand, respectively. Note that BiLO$^\ast$ is short for \texttt{BiLO-Auto-TSF/ML}.}
  \label{fig:comparison_three_models}
\end{figure*}

From the comparison results shown in~\prettyref{tab:overall_performance}, we can see that the performance of all three base-learners has been significantly improved by embedding them into our proposed \texttt{BiLO-Auto-TSF/ML}\ framework.
To have a better visual interpretation of the performance comparison, we plot the $24$-hour forecasting results for wind generation, PV generation, and load demand in~\prettyref{fig:comparison_three_models}. From these trajectories, it is clear that the vanilla \texttt{NN} and \texttt{SVR} fail to produce meaningful forecasts in all cases. This is expected, as the training data at hand are extremely scarce, so neither \texttt{NN} nor \texttt{SVR} can be trained properly. Although the vanilla \texttt{LSTM} can capture the variation of the time series data as shown in~\prettyref{fig:comparison_three_models}, its predictions exhibit a large offset from the ground truth. In contrast, when equipped with our proposed \texttt{BiLO-Auto-TSF/ML}, the performance of all base-learners is significantly improved. In particular, after applying few-shot learning and hyperparameter optimization, the performance of \texttt{BiLO-Auto-TSF/ML-NN} and \texttt{BiLO-Auto-TSF/ML-SVR} is lifted to a level comparable with \texttt{BiLO-Auto-TSF/ML-LSTM}.

\vspace{0.5em}
\noindent
\framebox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{
\textbf{\underline{Response to RQ1:}} \textit{From the observations in this experiment, it is clear that our proposed \texttt{BiLO-Auto-TSF/ML}\ framework is able to significantly improve the forecasting performance of a base-learner for different types of energy sources. In particular, the adaptation to extremely scarce historical data can be attributed to the few-shot learning in \texttt{BiLO-Auto-TSF/ML}.}
}}

\subsection{Investigation of the Effectiveness of Meta-Learning}
\label{sec:rq2}

\subsubsection{Methods}
\label{sec:methods_rq2}

The results in~\prettyref{sec:rq1} have shown the overall superiority of our proposed \texttt{BiLO-Auto-TSF/ML}\ framework for improving a base-learner for time series forecasting with extremely limited historical data. To address \textbf{RQ2}, we directly apply the meta-learning approach introduced in~\prettyref{sec:lower} to each of the \texttt{NN}, \texttt{SVR}, and \texttt{LSTM} models. The resulting forecasting models are denoted as \texttt{ML-NN}, \texttt{ML-SVR}, and \texttt{ML-LSTM}, respectively. Note that the hyperparameters associated with the base- and meta-learners are fixed a priori (e.g., the number of neurons in the \texttt{NN} is set to $512$, the kernel of the \texttt{SVR} is set to rbf, the number of cells in the \texttt{LSTM} is set to $640$, the three learning rates are set to $0.01$, $0.001$, and $0.05$, respectively, and SGD is selected as the optimizer). In addition, we investigate the influence of the number of gradient descent steps (denoted as $n^{\mathrm{g}}$) for fine-tuning the updated parameters $\hat{\boldsymbol{\theta}}$ during the meta-training phase. Specifically, $n^{\mathrm{g}}$ is set to $1$, $2$, and $10$ in this experiment.

\subsubsection{Results}
\label{sec:results_rq2}

From the comparison results shown in~\prettyref{tab:overall_performance}, we find that the performance of the vanilla \texttt{NN}, \texttt{SVR}, and \texttt{LSTM} can be improved by directly applying meta-learning. In particular, it is worth noting that the corresponding hyperparameters of \texttt{ML-NN}, \texttt{ML-SVR}, and \texttt{ML-LSTM} are fixed a priori, whereas our proposed \texttt{BiLO-Auto-TSF/ML}\ framework is able to automatically search for an optimal few-shot learning pipeline for the underlying base-learner.
As shown by the comparison results in~\prettyref{tab:overall_performance}, we find that the performance of \texttt{ML-NN}, \texttt{ML-SVR}, and \texttt{ML-LSTM} can be further improved under our proposed \texttt{BiLO-Auto-TSF/ML}\ framework. To better investigate the performance difference between the \texttt{BiLO-Auto-TSF/ML}\ framework, the vanilla base-learners, and the conventional meta-learning, we show the statistical results of $A_{12}$ in~\prettyref{fig:a12_comp_ours}. From these results, it is clear that the better performance achieved by the \texttt{BiLO-Auto-TSF/ML}\ framework is always classified as having a large effect size. This observation supports the importance of choosing appropriate hyperparameters in time series forecasting for a given dataset. Furthermore, let us look into the influence of the number of gradient descent steps. From the results shown in~\prettyref{tab:overall_performance}, we can see that the forecasting performance can be further improved by increasing the number of gradient descent steps during the meta-training phase. This is not difficult to understand, as more gradient descent steps lead to a better fine-tuned result. To give a visual illustration of the impact of $n^{\mathrm{g}}$, we plot the forecasting results of \texttt{LSTM}, \texttt{ML-LSTM}, and \texttt{BiLO-Auto-TSF/ML-LSTM} for the different energy sources in Figs.~\ref{fig:lstm_wind} to~\ref{fig:lstm_load}. From these trajectories, it is clear that the number of gradient descent steps has no visible impact on the performance of the vanilla \texttt{LSTM}, whereas it does fine-tune the performance of the meta-learning-based models.

\begin{figure}[t!]
  \centering
  \includegraphics[width=.5\linewidth]{figs/a12_comp_ours.pdf}
  \caption{Percentages of the large, medium, small, and equal $A_{12}$ effect sizes, respectively, when comparing our proposed \texttt{BiLO-Auto-TSF/ML} framework based models against their base-learners and the ones only using meta-learning.}
  \label{fig:a12_comp_ours}
\end{figure}

\begin{figure*}[t!]
  \centering
  \includegraphics[width=\linewidth]{figs/lstm_wind.pdf}
  \caption{Comparison of the forecasting performance of \texttt{LSTM}, \texttt{ML-LSTM}, and \texttt{BiLO-Auto-TSF/ML-LSTM} with different numbers of gradient descent steps for the wind generation task.}
  \label{fig:lstm_wind}
\end{figure*}

\begin{figure*}[t!]
  \centering
  \includegraphics[width=\linewidth]{figs/lstm_pv.pdf}
  \caption{Comparison of the forecasting performance of \texttt{LSTM}, \texttt{ML-LSTM}, and \texttt{BiLO-Auto-TSF/ML-LSTM} with different numbers of gradient descent steps for the PV generation task.}
  \label{fig:lstm_pv}
\end{figure*}

\begin{figure*}[t!]
  \centering
  \includegraphics[width=\linewidth]{figs/lstm_load.pdf}
  \caption{Comparison of the forecasting performance of \texttt{LSTM}, \texttt{ML-LSTM}, and \texttt{BiLO-Auto-TSF/ML-LSTM} with different numbers of gradient descent steps for the load demand task.}
  \label{fig:lstm_load}
\end{figure*}

\begin{figure}[t!]
\centering
\includegraphics[width=.7\linewidth]{figs/boxplot_diff_num.pdf}
\caption{Comparison of the forecasting performance of \texttt{BiLO-Auto-TSF/ML-LSTM} against the other variants of \texttt{BiLO-Auto-TSF/ML}\ that involve different numbers of hyperparameters in the optimization, represented by the index $1\leq i<5$ on the $x$-axis.}
\label{fig:boxplot_diff_num}
\end{figure}

\noindent
\framebox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{
\textbf{\underline{Response to RQ2:}} \textit{There are three takeaways from this experiment. First, meta-learning enables a vanilla machine learning model to carry out time series forecasting with extremely limited historical data. Second, by using our proposed \texttt{BiLO-Auto-TSF/ML}\ framework, the performance of the corresponding forecasting model can be further improved. This can be attributed to the hyperparameter optimization that helps identify the most competitive few-shot learning pipeline. Last but not least, using more gradient descent steps can be beneficial for fine-tuning during the meta-training phase.}
}}

\begin{figure}[htbp]
  \centering
  \includegraphics[width=.3\linewidth]{figs/a12_comp_sk.pdf}
  \caption{Percentages of the large, medium, small, and equal $A_{12}$ effect sizes, respectively, when comparing our proposed \texttt{BiLO-Auto-TSF/ML} framework based models against the Auto-Sklearn framework.}
  \label{fig:a12_comp_sk}
\end{figure}

\subsection{Investigation of the Hyperparameter Optimization for Meta-Learning}
\label{sec:rq3}

\subsubsection{Methods}
\label{sec:methods_rq3}

As introduced in~\prettyref{sec:setup}, more than one type of hyperparameter is considered in our few-shot learning pipeline. A natural question is whether we need to optimize all of those hyperparameters, or whether comparable performance can be achieved by optimizing only some of them. To address \textbf{RQ3}, we come up with $\sum_{i=1}^4 {{5}\choose{i}}$ different variants by considering $1\leq i<5$ hyperparameters in \texttt{BiLO-Auto-TSF/ML}; a sketch of how these variants can be enumerated is given after this paragraph. Given the outstanding performance observed in~\prettyref{sec:results_rq1}, we only consider \texttt{LSTM} as the base-learner in this experiment, without loss of generality. To address \textbf{RQ4}, we compare the performance of \texttt{BiLO-Auto-TSF/ML}\ with \texttt{Auto-Sklearn}~\cite{FeurerEFLH20}, one of the most popular tools in the automated machine learning literature. Again, we only consider \texttt{LSTM} as the base-learner, without loss of generality. Note that \texttt{Auto-Sklearn} does not apply a bilevel programming paradigm for hyperparameter optimization.
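The following Python sketch enumerates all $\sum_{i=1}^{4}\binom{5}{i}=30$ proper subsets of the five hyperparameter types; the option names are hypothetical, and the assumption that the remaining hyperparameters stay fixed at their a-priori values in each variant reflects our reading of the variant construction rather than a definitive specification.

\begin{verbatim}
from itertools import combinations

# The five hyperparameter types in the few-shot learning pipeline
# (illustrative names; see the Experimental Setup section).
HYPERPARAMETER_TYPES = [
    "base_learner_structure",  # #neurons, kernel, or #LSTM units
    "inner_loop_lr_alpha",
    "outer_loop_lr_beta",
    "validation_lr_gamma",
    "optimizer",               # SGD, Adam, RMSprop, Adadelta, Adagrad
]

def enumerate_variants(types=HYPERPARAMETER_TYPES):
    """Yield every subset of 1 <= i < 5 hyperparameter types; each
    subset defines one variant where only those types are optimized."""
    for i in range(1, len(types)):
        for subset in combinations(types, i):
            yield subset

variants = list(enumerate_variants())
assert len(variants) == 30  # sum_{i=1}^{4} C(5, i) = 5+10+10+5
\end{verbatim}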
\subsubsection{Results}
\label{sec:results_rq3}

\begin{table}[t!]
  \centering
  \caption{Comparison results of the MSE values obtained by \texttt{Auto-Sklearn} and \texttt{BiLO-Auto-TSF/ML-LSTM}.}
  \begin{tabular}{ccc}
    \cline{2-3} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{MSE} \\ \hline
    \multicolumn{1}{|c|}{\texttt{Data}} & \multicolumn{1}{c|}{\texttt{Auto-Sklearn}} & \multicolumn{1}{c|}{\texttt{BiLO$^\ast$-LSTM}} \\ \hline
    \multicolumn{1}{|c|}{Wind generation} & \multicolumn{1}{c|}{5.143E-2(4.76E-6)} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{2.476E-2(8.55E-6)}} \\ \hline
    \multicolumn{1}{|c|}{PV generation} & \multicolumn{1}{c|}{1.745E-2(6.42E-6)} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{1.372E-2(8.66E-6)}} \\ \hline
    \multicolumn{1}{|c|}{Load demand} & \multicolumn{1}{c|}{3.628E-2(2.93E-6)} & \multicolumn{1}{>{\columncolor{mycyan}}c|}{\textbf{2.524E-2(6.88E-6)}} \\ \hline
  \end{tabular}%
  \label{tab:comp_sk}%
\end{table}%

From the box plots of the MSE values obtained by the different variants of \texttt{BiLO-Auto-TSF/ML}\ considering $1\leq i<5$ hyperparameters, shown in~\prettyref{fig:boxplot_diff_num}, it is clear that our proposed \texttt{BiLO-Auto-TSF/ML}\ consistently outperforms the other variants. It is interesting to note that the predictive accuracy improves as more types of hyperparameters are involved in the optimization. In other words, we can envisage a further improvement of \texttt{BiLO-Auto-TSF/ML}\ by involving more configuration options in the few-shot learning pipeline. According to the comparison results shown in~\prettyref{tab:comp_sk}, we can see that the performance of \texttt{BiLO-Auto-TSF/ML}\ is consistently better than that of \texttt{Auto-Sklearn} on all three types of forecasting tasks. Furthermore, as the $A_{12}$ results in~\prettyref{fig:a12_comp_sk} show, the better results achieved by our proposed \texttt{BiLO-Auto-TSF/ML}\ are always classified as having a large effect size. It is worth noting that one of the key differences between \texttt{BiLO-Auto-TSF/ML}\ and \texttt{Auto-Sklearn} lies in the bi-level programming perspective, which coordinates the meta-learning and the hyperparameter optimization in an intertwined manner.

\noindent
\framebox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{
\textbf{\underline{Response to RQ3:}} \textit{From the observations in this experiment, we can see that the performance of a few-shot learning pipeline can be improved by involving more configuration options in the hyperparameter optimization.}
}}

\vspace{0.2em}
\noindent
\framebox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{
\textbf{\underline{Response to RQ4:}} \textit{From the comparison results w.r.t. \texttt{Auto-Sklearn}, we confirm the effectiveness of the bi-level programming paradigm for handling meta-learning and hyperparameter optimization in a concurrent manner.}
}}

\subsection{Impact of the Computational Budget at the Upper Level}
\label{sec:rq4}

\subsubsection{Methods}
\label{sec:methods_rq4}

In practice, it is not uncommon that the computational resources for time series forecasting, in particular the time budget, are limited. Given the iterative nature of MCTS, searching for the optimal few-shot learning pipeline at the upper level can be time-consuming. In this subsection, we are interested in investigating how the number of iterations in MCTS (i.e., the computational budget allocated to the upper-level optimization) relates to the forecasting performance.
To this end, for each of the three instances of our proposed \texttt{BiLO-Auto-TSF/ML}\ framework, we record the variation of the MSE across $720$ MCTS iterations for the forecasting tasks of wind generation, PV generation, and load demand, respectively.

\subsubsection{Results}
\label{sec:results_rq4}

\begin{figure*}[t!]
  \centering
  \includegraphics[width=\linewidth]{figs/mcts.pdf}
  \caption{The trajectories of the MSE across $720$ iterations in the MCTS of \texttt{BiLO-Auto-TSF/ML}\texttt{-NN}, \texttt{BiLO-Auto-TSF/ML}\texttt{-SVR}, and \texttt{BiLO-Auto-TSF/ML}\texttt{-LSTM}.}
  \label{fig:mcts_traj}
\end{figure*}

\begin{figure*}[t!]
  \centering
  \includegraphics[width=\linewidth]{figs/time.pdf}
  \caption{The trajectories of the CPU wall-clock time across $720$ iterations in the MCTS of \texttt{BiLO-Auto-TSF/ML}\texttt{-NN}, \texttt{BiLO-Auto-TSF/ML}\texttt{-SVR}, and \texttt{BiLO-Auto-TSF/ML}\texttt{-LSTM}.}
  \label{fig:mcts_time}
\end{figure*}

From the trajectories shown in~\prettyref{fig:mcts_traj}, we can see that the overall MSE keeps decreasing as the number of MCTS iterations increases. However, for all three \texttt{BiLO-Auto-TSF/ML}\ instances, the MSE trajectories do not change significantly after around the $240$-th iteration. This suggests that we can reduce the number of MCTS iterations without significantly deteriorating the performance of the identified few-shot learning pipeline. In addition, as shown by the CPU wall-clock time trajectories in~\prettyref{fig:mcts_time}, the computational time can be reduced significantly by terminating the upper-level optimization routine early.

\vspace{0.5em}
\noindent
\framebox{\parbox{\dimexpr\linewidth-2\fboxsep-2\fboxrule}{
\textbf{\underline{Response to RQ5:}} \textit{From the experiment in this subsection, we find that the computational budget allocated to the upper-level hyperparameter optimization of the few-shot learning pipeline can be reduced without significantly compromising the forecasting performance.}
}}

\section{Experimental Setup}
\label{sec:setup}

This section introduces the setup of our empirical study, including the dataset, the parameter settings, the performance metric, and the statistical tests~\cite{ChenLY18,ZouJYZZL19,LiZZL09,BillingsleyLMMG19,LiZLZL09,Li19,LiK14,LiFK11,LiKWTM13,CaoKWL12,CaoKWL14,LiDZZ17,LiKD15,LiKWCR12,LiWKC13,CaoKWLLK15,LiDY18,WuKZLWL15,LiKCLZS12,LiDAY17,LiDZ15,LiXT19,GaoNL19,LiuLC19,LiZ19,KumarBCLB18,CaoWKL11,LiX0WT20,LiuLC20,LiXCT20,WangYLK21,ShanL21,LaiL021,LiLLM21,WuKJLZ17,LiCSY19,LiLDMY20,WuLKZ20,PruvostDLL020}.
\begin{itemize}
    \item\underline{\textit{Dataset}}: Our empirical study considers the energy forecasting tasks for smart grid infrastructure planning on islands in the English Channel~\cite{MatthewFCWTHMAYH18}. We consider three different energy sources: wind generation, photovoltaic (PV) generation, and load demand. The training set consists of data collected from four islands (Ushant, Molene, Sein, and the Isles of Scilly), while the data obtained from the island of Lundy constitute the validation set. As discussed in~\prettyref{sec:introduction}, the key challenge here is the lack of sufficient historical data. In particular, for each energy source, there is only one week of time series data per island.
    \item\underline{\textit{Parameter settings}}: There are $\ell=5$ hyperparameters considered in our automated few-shot learning pipeline.
    \begin{itemize}
        \item Note that our proposed \texttt{BiLO-Auto-TSF/ML}\ is model-agnostic: any off-the-shelf machine learning method can be used as the base-learner. In our experiments, we consider three widely used machine learning models, including a two-layer \texttt{NN}, an \texttt{SVR}, and an \texttt{LSTM}, for proof-of-concept purposes.
        \item Hyperparameters associated with the base-learner: the number of hidden neurons of the \texttt{NN} (from $128$ to $1,024$), the kernel used in the \texttt{SVR} (linear, poly, rbf, sigmoid, or precomputed), and the number of units in the \texttt{LSTM} (from $128$ to $1,024$).
        \item Three different learning rates $\alpha\in[0.0001,0.5]$, $\beta\in[0.0001,0.5]$, and $\gamma\in[0.0001,0.5]$.
        \item Optimization methods for the loss function: SGD~\cite{Amari93}, Adam~\cite{KingmaB17}, RMSprop~\cite{SaadnaBA21}, Adadelta~\cite{Zeiler12}, and Adagrad~\cite{TraoreP21}.
    \end{itemize}
    \item\underline{\textit{Performance metric}}: We use the widely used mean squared error (MSE) as the metric to evaluate the predictive accuracy of a forecasting model:
    \begin{equation}
        \mathrm{MSE}=\frac{1}{N}\sum_{i=1}^{N}(y_i-\hat{y_i})^2,
        \label{eq:mse}
    \end{equation}
    where $N$ is the number of instances in the testing set, and $y_i$ and $\hat{y_i}$ are the ground truth and the predicted value, respectively.
    \item\underline{\textit{Statistical tests}}: For a statistical interpretation of the significance of the comparison results, we apply the following two statistical methods in our empirical study; a minimal computation sketch of both is given after this list.
    \begin{itemize}
        \item\underline{Wilcoxon signed-rank test}~\cite{Wilcoxon1992}: This is a non-parametric statistical test that makes no assumption about the underlying distribution of the data. In particular, the significance level is set to $p=0.05$ in our experiments.
        \item\underline{$A_{12}$ effect size}~\cite{LiC21}: To ensure that the observed differences are not generated by a trivial effect, we apply $A_{12}$ as the effect-size measure to evaluate the probability that one algorithm is better than another. Specifically, given a pair of peer algorithms, $A_{12}=0.5$ means that they are \textit{equivalent}, while $A_{12}>0.5$ means that one algorithm is better than the other for more than $50\%$ of the time. $0.56\leq A_{12}<0.64$ indicates a \textit{small} effect size, while $0.64 \leq A_{12} < 0.71$ and $A_{12} \geq 0.71$ indicate a \textit{medium} and a \textit{large} effect size, respectively.
    \end{itemize}
\end{itemize}
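For reference, the following is a minimal sketch of how the two statistical measures can be computed for the per-run MSE values of two competing pipelines; the example data, the use of \texttt{scipy.stats.wilcoxon}, and the \texttt{effect\_size\_label} helper are illustrative assumptions rather than our exact evaluation scripts.

\begin{verbatim}
import numpy as np
from scipy.stats import wilcoxon

def a12(x, y):
    """Vargha-Delaney A12: probability that a value drawn from x
    exceeds a value drawn from y (ties count as 0.5)."""
    x, y = np.asarray(x), np.asarray(y)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    return (greater + 0.5 * ties) / (x.size * y.size)

def effect_size_label(a):
    """Map an A12 value to the magnitude labels used in the paper."""
    a = max(a, 1.0 - a)  # direction-free magnitude
    if a >= 0.71:
        return "large"
    if a >= 0.64:
        return "medium"
    if a >= 0.56:
        return "small"
    return "equal"

# Hypothetical per-run MSE values of two competing pipelines.
mse_ours = [0.0248, 0.0251, 0.0246, 0.0249, 0.0250]
mse_base = [0.0322, 0.0330, 0.0325, 0.0328, 0.0331]

stat, p = wilcoxon(mse_ours, mse_base)  # paired signed-rank test
effect = a12(mse_base, mse_ours)
print(f"p = {p:.4f}, A12 = {effect:.2f} ({effect_size_label(effect)})")
\end{verbatim}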